Partition QoS Policy
The Partition policy provides logical isolation of data flows within a DDS domain. Writers and readers only communicate when their partition lists have at least one common element.
Purpose
Partitions create logical namespaces:
- Isolate data flows without creating separate topics
- Group entities by function, region, or tenant
- Reconfigure data flows at runtime without recreating entities (see the sketch below)
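The last point is worth a sketch. The snippet below only illustrates the idea of moving a live writer between partitions; the set_qos call is a hypothetical name used for illustration and is not part of the builder API documented on this page.
use hdds::{Participant, QoS, TransportMode};
let participant = Participant::builder("dynamic_partition_app")
    .domain_id(0)
    .with_transport(TransportMode::UdpMulticast)
    .build()?;
let writer = participant
    .topic::<SensorData>("data/readings")?
    .writer()
    .qos(QoS::reliable().partition_single("staging"))
    .build()?;
// Hypothetical: repartition the live writer from "staging" to "production"
// without recreating it. `set_qos` is an assumed method name; check the hdds
// API reference for the actual runtime QoS update entry point.
writer.set_qos(QoS::reliable().partition_single("production"))?;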
Configuration
Single Partition
use hdds::{Participant, QoS, TransportMode};
let participant = Participant::builder("partitioned_app")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;
// Writer publishes to partition "sensors"
let writer = participant
.topic::<SensorData>("data/readings")?
.writer()
.qos(QoS::reliable().partition_single("sensors"))
.build()?;
// Reader in same partition - will receive data
let reader = participant
.topic::<SensorData>("data/readings")?
.reader()
.qos(QoS::reliable().partition_single("sensors"))
.build()?;
Multiple Partitions
use hdds::{Participant, QoS, TransportMode};
use hdds::qos::partition::Partition;
let participant = Participant::builder("multi_partition_app")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;
// Writer publishes to both "sensors" and "actuators" partitions
let writer = participant
.topic::<DeviceData>("data/devices")?
.writer()
.qos(QoS::reliable().partition(Partition::new(
vec!["sensors".to_string(), "actuators".to_string()]
)))
.build()?;
// Reader in "actuators" partition - will match (intersection exists)
let reader = participant
.topic::<DeviceData>("data/devices")?
.reader()
.qos(QoS::reliable().partition_single("actuators"))
.build()?;
The equivalent configuration through the C API:
#include <hdds.h>
/* Writer in partition "sensors" */
struct HddsQoS* qos_writer = hdds_qos_reliable();
hdds_qos_add_partition(qos_writer, "sensors");
struct HddsDataWriter* writer = hdds_writer_create_with_qos(
participant, "data/readings", qos_writer);
hdds_qos_destroy(qos_writer);
/* Reader in same partition */
struct HddsQoS* qos_reader = hdds_qos_reliable();
hdds_qos_add_partition(qos_reader, "sensors");
struct HddsDataReader* reader = hdds_reader_create_with_qos(
participant, "data/readings", qos_reader);
hdds_qos_destroy(qos_reader);
Default Value
The default is an empty partition list, known as the default partition. Entities in the default partition only match other entities in the default partition.
let qos = QoS::reliable();
// partition = [] (default partition)
Fluent Builder Methods
| Method | Description |
|---|---|
| .partition_single(name) | Set a single partition name |
| .partition(Partition) | Set a custom Partition with multiple names |
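A minimal sketch showing both methods side by side, using only the builder calls demonstrated elsewhere on this page:
use hdds::QoS;
use hdds::qos::partition::Partition;
// Convenience method: a single partition name.
let single = QoS::reliable().partition_single("sensors");
// Explicit Partition value: several names at once.
let multi = QoS::reliable().partition(Partition::new(vec![
    "sensors".to_string(),
    "actuators".to_string(),
]));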
How Partitions Work
         Same Topic: "DataTopic"

Partition "A":        Partition "B":
+--------------+      +--------------+
|  Writer [A]  |      |  Writer [B]  |
|      |       |      |      |       |
|      v       |      |      v       |
|  Reader [A]  |      |  Reader [B]  |
+--------------+      +--------------+
      x  No cross-communication  x
Partition matching follows set intersection:
- Both entities must have at least one common partition name
- If both are in the default partition (empty list), they match
- If one is default and the other has named partitions, they do not match
Compatibility Rules
| Writer | Reader | Match? |
|---|---|---|
["sensor"] | ["sensor"] | Yes |
["sensor"] | ["actuator"] | No |
[] (default) | [] (default) | Yes |
["sensor"] | [] (default) | No |
[] (default) | ["sensor"] | No |
["sensor", "actuator"] | ["actuator"] | Yes (intersection) |
["sensor", "actuator"] | ["camera", "lidar"] | No (no intersection) |
["sensor", "camera"] | ["camera", "lidar"] | Yes ("camera" in common) |
Partition names are case-sensitive. "Sensor" and "sensor" are different partitions and will not match.
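These rules can be captured in a small predicate. The following is an illustrative sketch of the set-intersection logic only, not the hdds implementation:
/// Illustrative only: true when two partition lists would match under the
/// rules above. An empty list denotes the default partition.
fn partitions_match(writer: &[&str], reader: &[&str]) -> bool {
    match (writer.is_empty(), reader.is_empty()) {
        (true, true) => true,                    // both in the default partition
        (true, false) | (false, true) => false,  // default vs named: no match
        (false, false) => writer.iter().any(|w| reader.contains(w)), // intersection
    }
}

assert!(partitions_match(&["sensor"], &["sensor"]));
assert!(!partitions_match(&["sensor"], &[]));
assert!(partitions_match(&["sensor", "camera"], &["camera", "lidar"]));
assert!(!partitions_match(&["Sensor"], &["sensor"])); // names are case-sensitive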
Use Cases
Multi-Robot Systems
use hdds::{Participant, QoS, TransportMode};
let participant = Participant::builder("robot_system")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;
// Robot 1 publishes to its own partition
let robot1_writer = participant
.topic::<StatusData>("robots/status")?
.writer()
.qos(QoS::reliable().partition_single("robot_001"))
.build()?;
// Robot 2 publishes to its own partition
let robot2_writer = participant
.topic::<StatusData>("robots/status")?
.writer()
.qos(QoS::reliable().partition_single("robot_002"))
.build()?;
// Ground station reads from robot_001 only
let gs_reader = participant
.topic::<StatusData>("robots/status")?
.reader()
.qos(QoS::reliable().partition_single("robot_001"))
.build()?;
Multi-Tenant Data Isolation
use hdds::{Participant, QoS, TransportMode};
let participant = Participant::builder("multi_tenant")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;
// Tenant A data
let tenant_a_writer = participant
.topic::<AppData>("app/data")?
.writer()
.qos(QoS::reliable().partition_single("tenant_a"))
.build()?;
// Tenant B data - completely isolated from Tenant A
let tenant_b_writer = participant
.topic::<AppData>("app/data")?
.writer()
.qos(QoS::reliable().partition_single("tenant_b"))
.build()?;
// Admin reader can read from both tenants
let admin_reader = participant
.topic::<AppData>("app/data")?
.reader()
.qos(QoS::reliable().partition(hdds::qos::partition::Partition::new(
vec!["tenant_a".to_string(), "tenant_b".to_string()]
)))
.build()?;
Environment Separation
use hdds::{Participant, QoS, TransportMode};
let participant = Participant::builder("env_app")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;
// Production data isolated from development
let prod_writer = participant
.topic::<SensorData>("sensors/data")?
.writer()
.qos(QoS::reliable().partition_single("production"))
.build()?;
let dev_writer = participant
.topic::<SensorData>("sensors/data")?
.writer()
.qos(QoS::reliable().partition_single("development"))
.build()?;
// Production reader only sees production data
let prod_reader = participant
.topic::<SensorData>("sensors/data")?
.reader()
.qos(QoS::reliable().partition_single("production"))
.build()?;
Geographic Regions
use hdds::{Participant, QoS, TransportMode};
use hdds::qos::partition::Partition;
let participant = Participant::builder("regional_app")
.domain_id(0)
.with_transport(TransportMode::UdpMulticast)
.build()?;
// Regional data writers
let us_writer = participant
.topic::<WeatherData>("weather/forecast")?
.writer()
.qos(QoS::reliable().partition_single("region/us"))
.build()?;
let eu_writer = participant
.topic::<WeatherData>("weather/forecast")?
.writer()
.qos(QoS::reliable().partition_single("region/eu"))
.build()?;
// Global reader subscribes to all regions
let global_reader = participant
.topic::<WeatherData>("weather/forecast")?
.reader()
.qos(QoS::reliable().partition(Partition::new(
vec!["region/us".to_string(), "region/eu".to_string()]
)))
.build()?;
Interaction with Other Policies
Partition + Reliability
Partitions affect matching, not delivery semantics. Once a writer and reader match (partitions intersect), the reliability policy governs delivery:
| Partitions | Reliability | Behavior |
|---|---|---|
| Match | reliable() | Guaranteed delivery within partition |
| Match | best_effort() | Best-effort within partition |
| No match | Any | No communication at all |
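For example, a best-effort pair in a shared partition still matches; dropped samples are simply not retransmitted. A brief sketch, assuming QoS::best_effort() composes with partition_single the same way QoS::reliable() does:
// Same partition, best-effort delivery: the endpoints match, but lost
// samples are not retransmitted.
let be_writer = participant
    .topic::<SensorData>("data/readings")?
    .writer()
    .qos(QoS::best_effort().partition_single("sensors"))
    .build()?;
let be_reader = participant
    .topic::<SensorData>("data/readings")?
    .reader()
    .qos(QoS::best_effort().partition_single("sensors"))
    .build()?;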
Partition + Durability
With transient_local(), late-joining readers only receive cached data from writers in matching partitions.
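A brief sketch of a late joiner, assuming transient_local() chains onto the fluent builder as shown on the Durability page:
// Writer caches recent samples for late joiners, but only readers whose
// partition lists intersect "sensors" receive that history.
let writer = participant
    .topic::<SensorData>("data/readings")?
    .writer()
    .qos(QoS::reliable().transient_local().partition_single("sensors"))
    .build()?;
// Late-joining reader in the matching partition receives the cached samples.
let late_reader = participant
    .topic::<SensorData>("data/readings")?
    .reader()
    .qos(QoS::reliable().transient_local().partition_single("sensors"))
    .build()?;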
Partition + Ownership
Ownership arbitration occurs within the matching set. With EXCLUSIVE ownership and partitions:
- Only writers in matching partitions participate in strength arbitration
- A high-strength writer in a different partition has no effect
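A sketch of the second point. The ownership_exclusive(strength) builder call below is a hypothetical name used for illustration; see the Ownership page for the actual API:
// Two exclusive-ownership writers in different partitions. The reader below
// only shares a partition with writer_a, so writer_b's higher strength never
// enters arbitration for it.
// `ownership_exclusive(strength)` is an assumed method name for illustration.
let writer_a = participant
    .topic::<SensorData>("sensors/temp")?
    .writer()
    .qos(QoS::reliable().ownership_exclusive(10).partition_single("plant_a"))
    .build()?;
let writer_b = participant
    .topic::<SensorData>("sensors/temp")?
    .writer()
    .qos(QoS::reliable().ownership_exclusive(100).partition_single("plant_b"))
    .build()?;
// The reader must also request exclusive ownership (builder call omitted here;
// see the Ownership page) and is only in partition "plant_a".
let reader = participant
    .topic::<SensorData>("sensors/temp")?
    .reader()
    .qos(QoS::reliable().partition_single("plant_a"))
    .build()?;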
Partition vs Domain ID
| Feature | Domain ID | Partition |
|---|---|---|
| Scope | Network-level isolation | Logical isolation within domain |
| Discovery | Separate SPDP | Shared discovery, filtered matching |
| Overhead | Full protocol stack per domain | No additional overhead |
| Runtime change | Requires entity recreation | Can change dynamically |
| Use case | Hard isolation (security) | Soft isolation (organization) |
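In code, the difference is where the isolation is declared: the domain ID on the participant, the partition on individual endpoints.
use hdds::{Participant, QoS, TransportMode};
// Hard isolation: a separate domain. This participant never discovers
// participants in domain 0, regardless of partition names.
let isolated = Participant::builder("hard_isolated_app")
    .domain_id(1)
    .with_transport(TransportMode::UdpMulticast)
    .build()?;
// Soft isolation: same domain, shared discovery, but this writer only matches
// readers whose partition lists intersect "tenant_a".
// (`participant` is a domain-0 participant as in the earlier examples.)
let soft_writer = participant
    .topic::<AppData>("app/data")?
    .writer()
    .qos(QoS::reliable().partition_single("tenant_a"))
    .build()?;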
Common Pitfalls
- Default vs named partition mismatch: A writer in the default partition (empty) does not match a reader in a named partition, and vice versa. Both must be in the default partition or share a named partition.
- Case sensitivity: "Sensors" and "sensors" are different partitions. Use consistent naming conventions.
- Forgetting intersection semantics: A writer in ["A", "B"] matches a reader in ["B", "C"] because they share "B". This is often desired but can cause unexpected data flow.
- Order does not matter for matching: ["A", "B"] matches ["B", "A"]. However, for equality comparison, order does matter.
- Empty partition list is special: It represents the default partition, not "no partition". Entities in the default partition form their own group.
Performance Notes
- Partition matching is performed during discovery (not per-sample)
- Matching cost is O(N*M) where N and M are the partition list sizes
- Partitions add zero per-sample overhead once matched
- Small partition lists are recommended for faster discovery matching
Next Steps
- Ownership - Writer arbitration
- Reliability - Delivery guarantees
- Overview - All QoS policies