Time-Based Filter QoS Policy

The Time-Based Filter policy enforces a minimum time separation between samples delivered to a DataReader. Samples arriving faster than the minimum separation are silently discarded on the reader side.

Purpose

Time-Based Filter enables reader-side downsampling:

  • High-frequency publishers can send data at full rate
  • Slow consumers receive only a subset of samples at their desired rate
  • No writer changes needed: filtering is purely a reader-side decision

Configuration

use hdds::{Participant, QoS, TransportMode};

let participant = Participant::builder("filter_app")
    .domain_id(0)
    .with_transport(TransportMode::UdpMulticast)
    .build()?;

// Writer publishes at full rate (no filter)
let writer = participant
    .topic::<SensorData>("sensors/imu")?
    .writer()
    .qos(QoS::reliable())
    .build()?;

// Reader A: receives all samples (no filter)
let reader_all = participant
    .topic::<SensorData>("sensors/imu")?
    .reader()
    .qos(QoS::reliable())
    .build()?;

// Reader B: receives at most one sample every 500ms
let reader_filtered = participant
    .topic::<SensorData>("sensors/imu")?
    .reader()
    .qos(QoS::best_effort().time_based_filter_millis(500))
    .build()?;
The equivalent configuration through the C API:

#include <hdds.h>

/* Reader with no filter (receives all samples) */
struct HddsQoS* qos_all = hdds_qos_best_effort();
struct HddsDataReader* reader_all = hdds_reader_create_with_qos(
    participant, "sensors/imu", qos_all);
hdds_qos_destroy(qos_all);

/* Reader with a 500ms time-based filter */
struct HddsQoS* qos_filtered = hdds_qos_best_effort();
hdds_qos_set_time_based_filter_ns(qos_filtered, 500000000ULL); /* 500ms */

struct HddsDataReader* reader_filtered = hdds_reader_create_with_qos(
    participant, "sensors/imu", qos_filtered);
hdds_qos_destroy(qos_filtered);

Default Value

The default is zero (no filtering; all samples are delivered):

let qos = QoS::best_effort();
// time_based_filter.minimum_separation = Duration::ZERO (disabled)

Fluent Builder Methods

Method                            Description
.time_based_filter_millis(n)      Set minimum separation in milliseconds
.time_based_filter_secs(n)        Set minimum separation in seconds

How Filtering Works

Publisher sends at 100ms intervals (20 messages over ~2s):

[1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]
 t=0ms                                                          t=1900ms

Reader A (no filter): receives all 20 messages

Reader B (filter=500ms): receives 4 messages
[1]----skip----[6]----skip----[11]----skip----[16]----skip----
 |<-- 500ms -->|<-- 500ms -->|<---- 500ms --->|

Sample 20 arrives only 400ms after sample 16 was accepted, so it is dropped as well.

The filtering logic:

  1. The first sample is always accepted
  2. After accepting a sample, a timer starts
  3. Subsequent samples arriving before minimum_separation has elapsed are dropped
  4. The next sample arriving after the separation period is accepted
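The four steps above can be sketched as a standalone filter. This is a minimal model of the described behavior, not the hdds implementation; replaying the 100ms-publisher / 500ms-filter scenario from the diagram reproduces which samples are delivered:

```rust
use std::time::Duration;

/// Minimal sketch of reader-side time-based filtering: a sample is
/// accepted only when at least `min_separation` has elapsed since the
/// last accepted sample. The first sample is always accepted.
struct TimeBasedFilter {
    min_separation: Duration,
    last_accepted: Option<Duration>,
}

impl TimeBasedFilter {
    fn new(min_separation: Duration) -> Self {
        Self { min_separation, last_accepted: None }
    }

    /// Returns true if the sample arriving at `arrival` is delivered.
    fn accept(&mut self, arrival: Duration) -> bool {
        match self.last_accepted {
            // Too soon after the last accepted sample: drop it.
            Some(last) if arrival - last < self.min_separation => false,
            // First sample, or the separation period has elapsed: accept.
            _ => {
                self.last_accepted = Some(arrival);
                true
            }
        }
    }
}

/// Replay `count` samples sent every `interval_ms`, returning the
/// 1-based indices of the samples that pass a `min_sep_ms` filter.
fn accepted_indices(min_sep_ms: u64, count: u32, interval_ms: u64) -> Vec<u32> {
    let mut filter = TimeBasedFilter::new(Duration::from_millis(min_sep_ms));
    (1..=count)
        .filter(|&n| filter.accept(Duration::from_millis((n as u64 - 1) * interval_ms)))
        .collect()
}

fn main() {
    // 20 samples at 100ms through a 500ms filter: samples 1, 6, 11, 16 pass.
    println!("{:?}", accepted_indices(500, 20, 100)); // [1, 6, 11, 16]
}
```

Sample 20 arrives at t=1900ms, only 400ms after sample 16 was accepted at t=1500ms, so it is dropped.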

Compatibility Rules

Time-Based Filter must be consistent with the Deadline policy:

Rule: time_based_filter.minimum_separation <= deadline.period

If the filter separation is longer than the deadline, every deadline period would be missed because the filter suppresses samples.

attention

Setting time_based_filter_millis(1000) with deadline(Duration::from_millis(500)) would cause constant deadline violations since the filter prevents samples from arriving within the 500ms deadline window.
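The rule can be checked mechanically. This hypothetical helper (illustrative only, not part of the hdds API) mirrors the consistency constraint:

```rust
use std::time::Duration;

/// Hypothetical check mirroring the compatibility rule:
/// time_based_filter.minimum_separation <= deadline.period
/// (illustrative only; not the hdds API).
fn filter_deadline_consistent(min_separation: Duration, deadline_period: Duration) -> bool {
    min_separation <= deadline_period
}

fn main() {
    // 100ms filter with a 500ms deadline: valid.
    assert!(filter_deadline_consistent(
        Duration::from_millis(100),
        Duration::from_millis(500)
    ));
    // 1000ms filter with a 500ms deadline: every deadline would be missed.
    assert!(!filter_deadline_consistent(
        Duration::from_millis(1000),
        Duration::from_millis(500)
    ));
    println!("consistency checks passed");
}
```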

Use Cases

Downsample High-Frequency Sensors

use hdds::{Participant, QoS, TransportMode};

let participant = Participant::builder("display_app")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;

// 1000Hz IMU sensor downsampled to 10Hz for UI display
let reader = participant
.topic::<ImuData>("sensors/imu")?
.reader()
.qos(QoS::best_effort().time_based_filter_millis(100))
.build()?;

Reduce CPU Load

use hdds::{Participant, QoS, TransportMode};

let participant = Participant::builder("slow_consumer")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;

// Slow consumer processes one sample per second
let reader = participant
.topic::<SensorData>("sensors/temperature")?
.reader()
.qos(QoS::best_effort().time_based_filter_secs(1))
.build()?;

Bandwidth-Limited Subscribers

use hdds::{Participant, QoS, TransportMode};

let participant = Participant::builder("remote_monitor")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;

// Remote monitoring over a slow link: accept 1 sample per 2 seconds
let reader = participant
.topic::<TelemetryData>("vehicle/telemetry")?
.reader()
.qos(QoS::best_effort().time_based_filter_secs(2))
.build()?;

UI Refresh Rate Limiting

use hdds::{Participant, QoS, TransportMode};

let participant = Participant::builder("dashboard")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;

// No need to redraw faster than 30 FPS (~33ms)
let reader = participant
.topic::<DisplayData>("ui/position")?
.reader()
.qos(QoS::best_effort().time_based_filter_millis(33))
.build()?;

Interaction with Other Policies

Time-Based Filter + Deadline

Filter   Deadline   Valid?   Behavior
100ms    500ms      Yes      Filter allows at most 10/s; deadline checks at 2/s
500ms    100ms      No       Filter suppresses too many samples
0ms      100ms      Yes      No filter; deadline monitored normally

Time-Based Filter + Reliability

Filter   Reliability     Behavior
Set      best_effort()   Filtered samples are simply not delivered
Set      reliable()      Filtered samples are acknowledged but not delivered to the application

Time-Based Filter + History

The filter operates before the history cache. Only accepted samples enter the history buffer:

  • keep_last(N) stores the last N accepted (post-filter) samples
  • keep_all() stores all accepted samples
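Because the filter runs first, a keep_last(2) cache fed by the 100ms-publisher / 500ms-filter scenario ends up holding the last two accepted samples, not the last two published ones. A minimal model of this pipeline order (assumed semantics per the text above, not hdds internals):

```rust
use std::collections::VecDeque;

/// Model: samples pass the time-based filter first, then enter a
/// keep_last(depth) history cache (assumed pipeline order, not hdds
/// internals). Returns the cache contents (arrival times in ms).
fn cache_after(samples_ms: &[u64], min_sep_ms: u64, depth: usize) -> Vec<u64> {
    let mut last: Option<u64> = None;
    let mut cache: VecDeque<u64> = VecDeque::new();
    for &t in samples_ms {
        let pass = match last {
            Some(l) => t - l >= min_sep_ms,
            None => true, // first sample is always accepted
        };
        if pass {
            last = Some(t);
            if cache.len() == depth {
                cache.pop_front(); // keep_last(N): evict the oldest accepted sample
            }
            cache.push_back(t);
        }
    }
    cache.into_iter().collect()
}

fn main() {
    // 20 samples at 100ms intervals; the 500ms filter accepts t = 0, 500, 1000, 1500.
    let times: Vec<u64> = (0..20).map(|i| i * 100).collect();
    // keep_last(2) therefore holds the two most recent *accepted* samples.
    println!("{:?}", cache_after(&times, 500, 2)); // [1000, 1500]
}
```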

Common Pitfalls

  1. Filter too aggressive: Setting a very long separation may cause you to miss important state changes. Consider whether the last sample in each period is sufficient.

  2. Combining with Deadline: Ensure filter_separation <= deadline_period to avoid constant deadline violations.

  3. Not a content filter: Time-Based Filter only filters by time, not by data values. For content-based filtering, use content filter expressions.

  4. Per-reader, not per-instance: The filter tracks time globally for the reader. With keyed topics, one instance receiving data resets the timer, potentially suppressing samples from other instances.
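Pitfall 4 is easy to reproduce: with two instances interleaved on one filtered reader, the shared timer can starve one instance entirely. A sketch under the same assumed filter semantics as above (a model, not hdds internals):

```rust
/// Model of a per-reader (not per-instance) filter: one shared
/// "last accepted" timestamp covers all key values (assumed semantics,
/// matching pitfall 4 above).
fn delivered<'a>(samples: &[(&'a str, u64)], min_sep_ms: u64) -> Vec<&'a str> {
    let mut last: Option<u64> = None;
    let mut out = Vec::new();
    for &(key, t) in samples {
        let pass = match last {
            Some(l) => t - l >= min_sep_ms,
            None => true,
        };
        if pass {
            last = Some(t); // timer is shared across all instances
            out.push(key);
        }
    }
    out
}

fn main() {
    // Instances A and B alternate every 250ms on the same keyed topic.
    let samples = [
        ("A", 0), ("B", 250), ("A", 500), ("B", 750),
        ("A", 1000), ("B", 1250), ("A", 1500), ("B", 1750),
    ];
    // With a 500ms per-reader filter, every sample from B arrives 250ms
    // after an accepted A sample, so B is never delivered.
    println!("{:?}", delivered(&samples, 500)); // ["A", "A", "A", "A"]
}
```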

Performance Notes

  • Filter checking adds negligible overhead (single timestamp comparison)
  • Filtered samples consume no reader cache memory
  • Reduces CPU load proportionally to the filter ratio
  • Network bandwidth is still consumed on the writer side
