OR Parameter Hub for Multimodal Sync and Data Aggregation
What an OR Parameter Hub is (and what it is not)
An OR Parameter Hub is a surgical-room aggregation layer that aligns multi-device waveforms, numerics, and events onto one coherent timeline. Using isolated I/O, timestamp normalization, trigger routing, and buffering, it turns fragmented device data into a unified view for OR display, replay, and recorder handoff—without changing each device’s native measurement function.
In an operating room, the hardest part is rarely “getting signals”; it is proving causal order and timing across devices when decisions depend on seconds, and adverse events require audit-ready reconstruction. This chapter sets the boundary so the hub is evaluated as a sync-and-aggregation component, not confused with monitors, timing infrastructure, or network gateways.
- Fragmented time bases: each device timestamps differently (or not at all), making cross-device replay ambiguous; the hub outputs one normalized timeline with explicit alignment behavior.
- Events without context: alarms, mode changes, and operator actions live in separate logs; the hub binds events to neighboring waveforms/numerics for investigation and training.
- Integration chaos: ad-hoc cabling and mixed interfaces increase risk; the hub standardizes I/O categories, isolation domains, and routing rules so expansion stays predictable.
- Not a patient monitor replacement: it does not replace device-native measurement, alarm logic, or clinical parameter computation; it aligns and aggregates outputs.
- Not a hospital-wide timing system: it does not define enterprise time governance; it consumes/provides a local clock/trigger spine for the OR boundary.
- Not a full network gateway: it does not expand into hospital network architecture; it offers defined handoff outputs for display/recorder/IT integration.
- Not the entire safety/EMC playbook: it defines isolation partitioning and interface constraints, while detailed compliance engineering belongs to dedicated isolation/EMC pages.
| Category | Typical payload | Timing requirement | Hub responsibility |
|---|---|---|---|
| Waveforms | Continuous streams (e.g., ECG/pressure/flow equivalents) | Stable sampling timestamps; reorder tolerance via buffer window | Timestamp normalization, gap marking, alignment buffer, source tagging |
| Numerics | Low-rate values (trendable parameters) | Event-time association; consistent update cadence | Unit/label normalization, provenance, timeline anchoring |
| Events | Alarms, mode changes, start/stop, operator markers | Monotonic ordering; sequence IDs for audit | Debounce, ordering, priority, attach to time windows |
| Trigger / Control | Footswitch/TTL/isolated contacts; gating signals (room-local) | Deterministic routing; bounded rate to prevent storms | Routing rules, gating, rate limiting, safe defaults on fault |
Practical takeaway: evaluate the hub by alignment clarity, routing determinism, and traceable outputs—not by how much it “re-implements” device functions.
OR use-cases & workflows (why sync matters in surgery)
OR teams need synchronized views because clinical actions (operator steps, device mode changes, alarms) and physiologic responses must be interpreted as a single cause-and-effect chain. Without alignment, replay becomes a collage of disconnected charts; with alignment, it becomes a defensible timeline that supports decision review, training, and incident analysis.
Post-event review
- Need: align incision / device activation / medication events with subsequent waveform shifts.
- Hub output: event anchors plus synchronized waveforms within a defined alignment window.
- Why it matters: turns subjective recollection into a time-stamped narrative suitable for review.
Alarm triage
- Need: determine whether an alarm correlates with changes in other streams or is isolated to one device.
- Hub output: alarm events linked to surrounding waveform/numeric windows and source labels.
- Why it matters: reduces “alarm fatigue” by improving interpretability under pressure.
Device and mode transitions
- Need: correlate ventilation mode changes, pump start/stop, and workstation states with patient response.
- Hub output: ordered event stream (with sequence IDs) over the same timeline as waveforms and trends.
- Why it matters: supports safe transitions by making timing relationships visible and reviewable.
Synchronized export
- Need: export synchronized segments with a clearly defined time reference and source attribution.
- Hub output: unified timeline segments with gap markers and minimal ambiguity about ordering.
- Why it matters: makes downstream storage/review systems far more reliable and interpretable.
- Ingest: waveforms, numerics, events, and triggers arrive via isolated interface ports.
- Normalize: each stream is tagged with a consistent timestamp meaning (sample-time vs arrival-time) plus source identity.
- Align: a bounded reorder buffer forms an alignment window to absorb jitter, reordering, and minor drift.
- Aggregate: streams are merged into a single timeline with event anchors and provenance labels for every value.
- Deliver: the timeline feeds OR display and recorder handoff outputs with defined latency and loss signaling.
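The five-stage flow above can be sketched concept-level in Python. This is a hedged illustration, not a product API: the `Tagged` record, the fixed-offset time mapping, and the stage names are assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class Tagged:
    source: str      # provenance label attached at the port
    t: float         # timestamp: device-local at ingest, hub time after normalize
    kind: str        # "waveform" | "numeric" | "event" | "trigger"
    value: object
    ts_meaning: str  # "sample-time" | "arrival-time" | "event-time"

def ingest(raw: dict, source: str, ts_meaning: str) -> Tagged:
    """Accept a frame and tag it with source identity and timestamp meaning."""
    return Tagged(source, raw["t"], raw["kind"], raw["value"], ts_meaning)

def normalize(s: Tagged, offset_s: float) -> Tagged:
    """Map device-local time into hub reference time (simple offset model)."""
    return Tagged(s.source, s.t + offset_s, s.kind, s.value, s.ts_meaning)

def aggregate(*streams) -> list:
    """Merge normalized streams onto one timeline; provenance stays on every value."""
    merged = [s for stream in streams for s in stream]
    return sorted(merged, key=lambda s: s.t)

# Two sources with different local clocks end up on one ordered timeline.
mon = [normalize(ingest({"t": t, "kind": "waveform", "value": 0.0}, "monA", "sample-time"), 0.10)
       for t in (0.0, 0.5)]
pump = [normalize(ingest({"t": 0.4, "kind": "event", "value": "start"}, "pump", "event-time"), -0.20)]
timeline = aggregate(mon, pump)
```

The fixed offset stands in for real timebase mapping; drift and resync behavior belong to the time-alignment model described later in the original text.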
| Metric | What it answers | How it is evidenced |
|---|---|---|
| Alignment error | Are events and waveforms aligned within the intended window? | Inject known event markers; report distribution (e.g., P95/P99) across streams. |
| End-to-end latency | How long from capture to display/record output? | Time-stamp checkpoints: ingest → align → render → handoff; budget per stage. |
| Loss signaling | When data is missing, is it obvious and traceable? | Gap markers on timeline + event log entries; no “silent interpolation” in critical windows. |
| Resync recovery | After dropouts, how quickly does stable alignment return? | Defined recovery state transitions and time-to-stability under controlled disconnect/reconnect tests. |
Signal & event inventory (modalities, rates, and semantics)
A parameter hub succeeds or fails on inventory discipline. Before choosing ports, buffers, and acceptance tests, every input must be classified by data semantics (what the value means), rate (how often it changes), and timestamp meaning (sample-time vs arrival-time). This prevents “invisible gaps” and makes multi-device replay defensible.
Waveforms
- Typical rate: tens to hundreds of samples per second (or higher), depending on the source stream.
- Timestamp priority: sample-time is preferred; if only arrival-time exists, jitter and reordering must be absorbed by a bounded buffer window.
- Loss policy: gaps must be explicitly marked; silent “filling” creates false clinical narratives.
- Acceptance hint: replay should show consistent spacing and obvious gap markers under dropouts.
Numerics
- Typical rate: 1–5 Hz updates (sometimes slower), driven by the source device’s update cadence.
- Timestamp meaning: arrival-time can be acceptable if provenance (source identity and units) is retained.
- Loss policy: short gaps may be tolerated, but must remain visible for audit and correlation with events.
- Acceptance hint: value changes should align with the surrounding event window without “teleporting” across time.
Events
- Typical rate: low, but high impact (alarms, mode changes, start/stop, operator markers).
- Ordering rule: must be monotonic; a sequence ID prevents ambiguous replays after reconnection.
- Loss policy: missing key events breaks causality; retries/confirmation should be visible in logs (concept-level).
- Acceptance hint: two events must never swap order in the unified timeline once committed.
Triggers
- Typical rate: sparse edges, but can burst; treat as “hard anchors” for clip capture and workflow markers.
- Timestamp meaning: edge capture must map directly into hub reference time (rise/fall defined).
- Loss policy: missed edges shift clip boundaries and corrupt audit; prioritize capture and apply rate limiting to avoid storms.
- Acceptance hint: repeated edge injections yield stable event timing distribution (no drift across minutes).
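One way to enforce this inventory discipline is to keep the classification itself as data and refuse unclassified sources at ingest. A minimal sketch, with hypothetical stream IDs and field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StreamSpec:
    kind: str             # "waveform" | "numeric" | "event" | "trigger"
    typ_rate_hz: tuple    # (low, high) typical rate range
    ts_meaning: str       # preferred timestamp meaning
    loss_policy: str      # what must happen on missing data

# Illustrative entries mirroring the inventory table above.
INVENTORY = {
    "monitor.waveform": StreamSpec("waveform", (50, 500), "sample-time", "gap-marker"),
    "vent.trends":      StreamSpec("numeric",  (1, 2),    "arrival-time", "drop-with-flag"),
    "pump.events":      StreamSpec("event",    (0, 10),   "event-time",   "no-silent-loss"),
    "footswitch.edge":  StreamSpec("trigger",  (0, 50),   "hub-edge",     "no-miss-rate-cap"),
}

def require_classified(source_id: str) -> StreamSpec:
    """Refuse to ingest any source that has no inventory entry."""
    spec = INVENTORY.get(source_id)
    if spec is None:
        raise KeyError(f"unclassified source: {source_id}")
    return spec
```

Keeping the table as data makes “no invisible gaps” checkable: a port that cannot name its spec simply does not get a timeline slot.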
| Source (example) | Type | Rate (typ.) | Timestamp meaning | Time precision need | Priority | Allowed latency | Loss policy |
|---|---|---|---|---|---|---|---|
| Monitor waveform stream (representative) | Waveform | 50–500+ samples/s | Sample-time preferred | ms-level | High | Budgeted (stable) | Gap marker required |
| Ventilator trend values (representative) | Numeric | 1–2 Hz | Arrival-time acceptable | 10–100 ms | Med | Higher OK | Drop allowed with flag |
| Pump start/stop and state changes | Event | Sporadic | Event-time + sequence ID | ms-level | High | Low | No silent loss |
| Alarm assertions/clears (representative) | Event | Sporadic bursts | Event-time preferred | ms-level | High | Low | Logged; ordered |
| Footswitch rising edge / gate input | Trigger | Edge events | Hub edge capture | ms-level | Top | Very low | No miss; rate cap |
Tip: keep ranges broad in the public page; lock exact limits in internal specs and acceptance test plans.
Time alignment model (timestamps, epochs, and drift handling)
“Sync” is not a slogan; it is a contract about timestamp meaning, timebase mapping, and bounded ambiguity. A robust OR hub makes every ordering decision explicit: what time a value represents, how it is mapped into a shared reference, how out-of-order packets are handled, and how drift is detected without breaking continuity.
- Device local time: each source’s own clock and timestamp conventions (may drift or restart independently).
- Hub reference time: the hub’s internal monotonic reference used to align streams and triggers consistently.
- Display/record timeline: the user-facing timeline exported to display and recorder outputs, including explicit gap markers and ordering.
Key rule: these layers must remain distinguishable so failures are traceable rather than silently “smoothed away.”
- Normalize timestamp meaning: tag each stream as sample-time, arrival-time, or event-time so the replay engine never guesses.
- Bound re-ordering: hold data inside a finite alignment window (reorder buffer) to absorb jitter and minor reordering without unbounded latency.
- Detect drift: estimate drift slope between device local time and hub reference time; raise warnings when slope exceeds limits.
- Resync without rewinding: re-map timebases while keeping the unified timeline monotonic; if continuity cannot be preserved, insert explicit gaps and log the transition.
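Drift detection and forward-only resync can be illustrated concept-level. The least-squares slope model and the clamp-plus-gap policy below are simplifications of what a real hub would do, shown only to make the two rules concrete:

```python
def drift_slope(pairs):
    """Least-squares slope of (device_time, hub_time) pairs, minus 1.0.
    A healthy link has slope ~ 0 (the device ticks at the hub rate)."""
    n = len(pairs)
    mx = sum(d for d, _ in pairs) / n
    my = sum(h for _, h in pairs) / n
    num = sum((d - mx) * (h - my) for d, h in pairs)
    den = sum((d - mx) ** 2 for d, _ in pairs)
    return num / den - 1.0

class Resync:
    """Re-map a device timebase while keeping the unified timeline monotonic.
    If a new mapping would rewind committed time, insert an explicit gap."""
    def __init__(self):
        self.last_committed = float("-inf")
        self.gaps = []  # (would-be time, clamped time) pairs, for the audit log

    def map_time(self, device_t: float, offset_s: float) -> float:
        hub_t = device_t + offset_s
        if hub_t < self.last_committed:
            # Never rewind: record an explicit discontinuity instead.
            self.gaps.append((hub_t, self.last_committed))
            hub_t = self.last_committed
        self.last_committed = hub_t
        return hub_t
```

The key property matches the table above: time never goes backward, and every discontinuity leaves a visible trace rather than being smoothed away.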
| Criterion | Target behavior | Failure signal | What must be visible |
|---|---|---|---|
| Alignment error budget | Event anchors align within the intended window across streams | Error distribution widens or becomes bimodal | Measured distribution (e.g., P95/P99) and test method |
| Reorder window limits | Window absorbs jitter without excessive latency | Either frequent mis-ordering or unacceptable delay | Configured window and observed jitter envelope |
| Drift limit | Drift slope stays within limits or triggers controlled resync | Unexplained time skew growth over minutes | Warnings and slope traces (concept-level) |
| Resync continuity | Unified timeline never rewinds; discontinuities are explicit | Time goes backward or events reorder after commit | Gap markers + logged resync boundaries |
A hub should prefer honest visibility over “pretty” timelines: explicit gaps and clear resync boundaries beat hidden smoothing.
Clock/trigger tree architecture (routing without chaos)
Trigger routing in an OR must be treated as a governed system, not a bundle of wires. The hub’s job is to turn edges and device events into auditable anchors that remain stable under bounce, reconnection, and bursty workflows. This section focuses on routing policy and storm prevention from the hub perspective.
Practical rule: every source must declare whether it is an edge (physical pulse) or an event (logical state change). The hub should never guess timestamp meaning or ordering.
- Debounce: collapse mechanical bounce or duplicate edges into one event anchor; avoid “multi-fire” artifacts.
- Gate (arming conditions): only propagate triggers when the OR workflow state is valid (armed/mode/operator confirm).
- Priority: reserve bandwidth for high-criticality anchors (e.g., footswitch or emergency-related markers) under bursts.
- Rate limit: cap propagation to prevent floods; when throttling happens, record the dropped/throttled condition explicitly.
- Sequence ID: assign monotonically increasing IDs so replays remain deterministic even after link recovery.
Non-negotiable: once a trigger is committed to the unified timeline, it must not reorder later. If continuity breaks, insert an explicit gap and log the boundary.
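The five policies above can be combined into one small state machine per trigger source. A hedged sketch: the debounce and rate-cap values are placeholders, not recommendations, and a real hub would drive them from the routing table.

```python
class TriggerRouter:
    """Debounce, gate, rate-limit, and sequence-stamp incoming edges.
    Every suppressed edge is logged, never silently dropped."""

    def __init__(self, debounce_s: float = 0.005, min_interval_s: float = 0.05):
        self.debounce_s = debounce_s
        self.min_interval_s = min_interval_s
        self.last_edge = float("-inf")
        self.last_propagated = float("-inf")
        self.seq = 0
        self.log = []  # (disposition, time) pairs for audit

    def edge(self, t: float, armed: bool):
        if t - self.last_edge < self.debounce_s:
            self.log.append(("debounced", t))        # collapse bounce
            return None
        self.last_edge = t
        if not armed:
            self.log.append(("blocked-unarmed", t))  # fail-closed, but visible
            return None
        if t - self.last_propagated < self.min_interval_s:
            self.log.append(("throttled", t))        # rate cap, recorded
            return None
        self.last_propagated = t
        self.seq += 1                                # monotonic sequence ID
        self.log.append(("propagated", t))
        return {"seq": self.seq, "t": t}
```

Note that the sequence ID only advances on committed anchors, which is what keeps replay deterministic after link recovery.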
| Source | Edge/Event | Destinations | Condition (gate) | Max rate policy | Transform | Fail-safe default |
|---|---|---|---|---|---|---|
| Footswitch | Edge | Timeline marker, clip capture, isolated trigger ports | Armed + operator enabled | Rate cap + burst allowance | Edge → event anchor | Block propagation; log |
| Energy device event | Event | Timeline marker, display overlay | Procedure state matches | Dedup + rate cap | Attach seq + provenance | Log-only on mismatch |
| Anesthesia event | Event | Unified timeline + recorder | Always log; propagate when armed | Ordering + throttling | Normalize timestamp meaning | Log-only if uncertain |
| External imaging trigger | Edge/Event | Isolated trigger ports + marker | Mode matches + interlock OK | Strict cap (avoid storms) | Edge → gated routing | Fail-closed; log |
| Operator marker | Event | Timeline marker + clip capture | Always allowed (logged) | Soft cap | Sequence ID only | Log-only if overload |
The routing table should be treated as configuration data. A change should be reviewable and testable, not an ad-hoc wiring decision.
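Treating the routing table as configuration data implies it can be machine-validated before deployment. A minimal sketch; the field names and allowed values are illustrative, not a defined schema:

```python
# A routing table as reviewable configuration data (entries are illustrative).
ROUTING = [
    {"source": "footswitch", "kind": "edge", "dest": ["marker", "clip"],
     "gate": "armed", "max_rate_hz": 20, "fail_safe": "block"},
    {"source": "operator_marker", "kind": "event", "dest": ["marker"],
     "gate": "always", "max_rate_hz": 5, "fail_safe": "log-only"},
]

REQUIRED = {"source", "kind", "dest", "gate", "max_rate_hz", "fail_safe"}
ALLOWED_FAIL_SAFE = {"block", "log-only", "fail-closed"}

def validate(table):
    """Reject incomplete entries or unreviewable fail-safe defaults.
    Returns a list of (row index, problem) pairs; empty means pass."""
    problems = []
    for i, row in enumerate(table):
        missing = REQUIRED - row.keys()
        if missing:
            problems.append((i, f"missing fields: {sorted(missing)}"))
        if row.get("fail_safe") not in ALLOWED_FAIL_SAFE:
            problems.append((i, f"bad fail-safe: {row.get('fail_safe')}"))
        if row.get("kind") not in {"edge", "event"}:
            problems.append((i, f"bad kind: {row.get('kind')}"))
    return problems
```

Running a validator like this in review tooling is what turns a wiring decision into a testable configuration change.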
Isolated I/O partitioning (patient-side safety meets interoperability)
Isolation is not “add it everywhere.” A clean OR hub uses isolation to create responsibility boundaries: faults should be contained to a domain, cross-domain flows should follow minimal rules, and unsafe propagation should be fail-closed while still remaining visible in logs.
- Isolated digital I/O: discrete lines for triggers, interlocks, and status.
- Isolated serial bridging: serial endpoints bridged into hub core with strict direction and logging.
- Isolated Ethernet-style bridging: bridged links treated as policy-controlled crossings (not an open switch).
- Isolated contact inputs: footswitch or relay contacts captured as edge events and anchored to hub reference time.
- Prefer one-way export: data and markers flow outward from hub core to external systems by default.
- Gate inward control: any inbound influence must be explicitly gated (mode, arming state, interlocks).
- Never amplify faults: storms, retries, or malformed bursts must not propagate across domains.
- Make crossings auditable: every cross-domain event should carry provenance and sequence continuity when applicable.
The “data diode” idea can be applied as a policy: outward flows are easy; inward flows require stronger gating and visibility.
- Fail-closed for triggers: if an isolation crossing is unhealthy, trigger propagation is blocked while events remain logged.
- Fail-visible for continuity: if a stream cannot be aligned honestly, insert explicit gaps and log the boundary.
- No cross-domain amplification: retries, reconnection bursts, or duplicated events are throttled at the boundary.
- Safe defaults are deterministic: default actions must be reviewable (block, gate, or log-only), never “best effort” guessing.
| From → To | Allowed types | Direction | Default state | Audit requirement |
|---|---|---|---|---|
| Patient-contact → Hub core | Waveforms, numerics, events | One-way preferred | Allow (logged) | Provenance + gap visibility |
| Device domain → Hub core | Waveforms, numerics, events, triggers | One-way + gated control | Allow data; gate triggers | Sequence IDs for key events |
| Hub core → Device domain | Triggers, control markers | Two-way gated | Fail-closed | Log every propagate/block |
| Hub core → External network | Unified timeline, exports | One-way preferred | Allow (logged) | Export boundaries visible |
| External network → Hub core | Commands / configs (if any) | Inbound gated | Block by default | Audit + reviewable changes |
The matrix prevents accidental “open bridging.” It makes every crossing explicit: what flows, which way, and what happens on faults.
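The matrix itself can live as data behind a fail-closed lookup, so an undeclared crossing can never be accidentally open. A sketch under those assumptions; the domain and payload names below are illustrative:

```python
# Cross-domain policy matrix as data, mirroring the table above.
MATRIX = {
    ("patient", "core"):   {"types": {"waveform", "numeric", "event"},
                            "default": "allow-logged"},
    ("device", "core"):    {"types": {"waveform", "numeric", "event", "trigger"},
                            "default": "gate-triggers"},
    ("core", "device"):    {"types": {"trigger", "control"},
                            "default": "fail-closed"},
    ("core", "ext"):       {"types": {"timeline", "export"},
                            "default": "allow-logged"},
    ("ext", "core"):       {"types": {"command", "config"},
                            "default": "block"},
}

def crossing_decision(src: str, dst: str, payload_type: str) -> str:
    """Fail-closed by default: any undeclared crossing or payload type is blocked."""
    rule = MATRIX.get((src, dst))
    if rule is None or payload_type not in rule["types"]:
        return "block"
    return rule["default"]
```

Because the lookup blocks everything not explicitly declared, adding a new crossing forces a reviewable edit to the matrix rather than a quiet bypass.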
Data transport & buffering (throughput, loss, and determinism)
In an OR, “fast” is not enough. A parameter hub must stay deterministic under bursts and short outages: the unified timeline should remain replayable, gaps must be visible, and overload behavior must be controlled rather than random. This section describes a hub-centric transport pipeline that separates real-time ingestion from rendering and recording.
- Ingest: accept packets/frames and attach source ID + timestamp meaning (sample-time vs arrival-time).
- Parse & normalize: validate sequence counters, normalize fields/units, and extract event anchors.
- Align & reorder: use a bounded reorder window; commit only what can be ordered honestly.
- Render & export: decouple UI refresh from recording/export so display jitter cannot corrupt the timeline.
Key design rule: alignment and recording are timeline-centric; rendering is a consumer that can drop fidelity (refresh rate) without changing what the timeline says happened.
- Missing segments are explicit: mark gaps on the unified timeline and in recorded metadata.
- No-interpolation zone: do not bridge waveform gaps with line drawing or interpolation near the boundary.
- Late packets are classified: accept if still inside reorder bounds; otherwise log as late-discard.
- Audit-first behavior: every drop/throttle action is counted with stage + duration + affected streams.
Determinism requirement: replay should reproduce the same gaps and anchors, not a “best effort” reconstruction.
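The commit, late-discard, and gap-marking behavior can be sketched as one bounded reorder committer. The window and gap thresholds are placeholders; a real hub would tie them to per-stream inventory specs.

```python
import heapq

class ReorderCommitter:
    """Bounded reorder window with explicit late-discard and gap marking.
    Committed history is never reordered; anomalies are logged, not hidden."""

    def __init__(self, window_s: float, expected_dt_s: float):
        self.window_s = window_s
        self.expected_dt_s = expected_dt_s
        self.heap, self.newest = [], float("-inf")
        self.committed, self.late, self.gaps = [], [], []
        self.last_t = None

    def push(self, t: float):
        if self.committed and t <= self.committed[-1]:
            self.late.append(t)          # beyond bounds: log as late-discard
            return
        self.newest = max(self.newest, t)
        heapq.heappush(self.heap, t)
        # Commit only what can be ordered honestly (older than newest - window).
        while self.heap and self.heap[0] <= self.newest - self.window_s:
            t0 = heapq.heappop(self.heap)
            if self.last_t is not None and t0 - self.last_t > 2 * self.expected_dt_s:
                self.gaps.append((self.last_t, t0))   # explicit gap marker
            self.committed.append(t0)
            self.last_t = t0
```

Replaying the same input sequence reproduces the same commits, gaps, and late-discards, which is exactly the determinism requirement above.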
| Stage | Target behavior | Over-budget action | What stays deterministic | Audit signals |
|---|---|---|---|---|
| Ingest | Never stall; accept and tag source + timestamp meaning | Throttle low-priority inputs; keep event anchors | Sequence continuity for events | rx rate, drop counters, burst duration |
| Parse/normalize | Validate counters; normalize fields/units; reject malformed frames | Skip optional transforms; log parse faults | Known-good frames only | parse fault count, invalid frames, CPU time |
| Align/reorder | Bounded reorder window; commit ordered segments to timeline | Late-discard beyond bounds; insert explicit gap markers | Unified timeline ordering | reorder depth, late packets, gap duration |
| Display render | Smooth enough UI; timeline-driven overlays | Reduce refresh / simplify visuals; do not alter timeline | Timeline content and markers | render FPS, UI queue depth, frame drops |
| Recorder/export | Write timeline + metadata; preserve provenance | Pause non-critical exports; keep minimal audit stream | Audit + gaps + anchors | write backlog, commit latency, export throttles |
Aggregation & fusion (from raw streams to a coherent OR view)
Aggregation is not about showing “more signals.” It is about producing a coherent OR view: one timeline, consistent semantics, and traceable sources. When multiple sources publish the same parameter, the hub should select deterministically, flag conflicts instead of hiding them, and tag every value with provenance.
- Semantic normalization: align naming, units, and scaling before comparing values.
- Selection, not re-computation: choose among candidates; avoid clinical algorithm claims.
- Conflict visibility: if candidates disagree beyond tolerance, show conflict and keep provenance.
- Stable switching: avoid flapping via hold-off rules when changing the selected source.
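The health-gate, fixed-priority, hold-off policy above can be pinned down as a small selector. The source names, hold-off value, and health signal are assumptions for illustration only:

```python
class SourceSelector:
    """Deterministic selection among candidate sources for one parameter:
    health gate -> fixed priority -> hold-off to prevent flapping."""

    def __init__(self, priority, holdoff_s: float = 5.0):
        self.priority = priority          # e.g. ["A", "B"], highest priority first
        self.holdoff_s = holdoff_s
        self.selected = None
        self.last_switch = float("-inf")
        self.switches = []                # (time, from, to) markers for the timeline

    def select(self, t: float, healthy: set):
        # Best healthy candidate by fixed priority order.
        best = next((s for s in self.priority if s in healthy), None)
        if best == self.selected or best is None:
            return self.selected
        # Switch immediately if the current source is unhealthy;
        # otherwise only after the hold-off has elapsed (no flapping).
        if self.selected not in healthy or t - self.last_switch >= self.holdoff_s:
            self.switches.append((t, self.selected, best))
            self.selected, self.last_switch = best, t
        return self.selected
```

Every switch emits a marker, matching the rule that a source change must be visible on the timeline rather than silent.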
| Parameter group | Candidate sources | Selector policy | Conflict rule | Provenance output | Audit fields |
|---|---|---|---|---|---|
| HR | Source A / Source B / Source C | Health gate → fixed priority → hold-off on switch | Flag conflict; show both candidates | Selected source tag + switch marker | switch count, conflict count, stale count |
| SpO₂ | Source A / Source B | Prefer freshest continuous source; deterministic tie-break | Conflict flag + provenance retention | Value + source label | gap duration, throttle time |
| Pressure | Source A / Source B | Unit normalization → select per health + priority | Conflict flag; do not average | Source label + unit tag | unit mismatch count, conflict count |
| Resp rate / CO₂ | Source A / Source B | Health gate + hold-off; keep last-known-good only if explicitly marked | Flag conflict + show staleness | Source label + freshness indicator | stale duration, switch count |
The goal is not to hide disagreement. The goal is to make the OR view coherent while keeping provenance and conflict visibility.
Display, overlays & operator UX (usable under OR pressure)
In the OR, a usable hub display must answer three questions in seconds: what happened, when it happened, and where the data came from. The UI should stay timeline-driven under stress, make gaps and conflicts visible, and keep operator actions short and safe (gloves, distractions, and multiple staff).
- Event markers: operator markers, device state changes, and trigger edges appear as vertical anchors.
- Source switches: any change of selected source is shown as a timeline marker with a short reason tag.
- Conflict flags: when candidate values disagree, the UI flags conflict instead of “averaging away” reality.
- Gap visibility: missing segments remain visible in both waveform and event tracks.
- Freshness indicators: every numeric card shows fresh/stale/gap state without requiring a submenu.
Display rule: a clean screen is achieved by cue design (icons + short tags), not by hiding important truth.
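The fresh/stale/gap state for numeric cards is simple enough to pin down as a pure function. The thresholds here are illustrative placeholders, not clinical recommendations:

```python
def freshness(now_s: float, last_update_s, fresh_s: float = 2.0, stale_s: float = 10.0) -> str:
    """Classify a numeric card's state without a submenu:
    'fresh' (recent), 'stale' (aging, still shown), or 'gap' (no trustworthy value)."""
    if last_update_s is None:
        return "gap"                # never updated, or source lost entirely
    age = now_s - last_update_s
    if age <= fresh_s:
        return "fresh"
    if age <= stale_s:
        return "stale"
    return "gap"
```

Driving the card's badge from a single function like this keeps the cue deterministic and testable, rather than an ad-hoc UI decision.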
- Mis-touch protection: destructive or high-impact actions (clear, source override, export cancel) require confirmation or a long-press.
- Colorblind-safe cues: status uses icons/shape (gap break, conflict badge) in addition to color.
- Alarm-storm display governance: cluster and rate-limit visual notifications; keep anchors visible instead of scrolling floods.
Reliability, fail-safe & observability (24/7 behavior inside the OR)
A hub is trusted only if it degrades safely. When sync is lost, inputs disappear, or resources hit watermarks, the system should show a minimal truthful view, keep provenance and gaps visible, and log every mode change with enough detail for rapid on-site troubleshooting.
| Mode | Typical triggers | User-visible changes | Guaranteed | Not guaranteed | Audit events |
|---|---|---|---|---|---|
| Normal | healthy ports, stable sync, within budgets | full overlays and aligned view | timeline, gaps, provenance, markers | — | periodic health snapshots |
| Degraded: no-sync | sync lost, drift exceeded, reorder overflow | banner + disable “strictly aligned overlays” | markers, gaps, provenance labels | tight alignment across sources | sync-lost, drift-exceeded, recovery attempts |
| Degraded: partial inputs | input missing, gap burst, port unhealthy | affected sources greyed; focus set reduced | truthful gaps; source tags; audit continuity | complete parameter coverage | port-down, gap-burst, throttle actions |
| Safe display | conflict risk, ordering uncertain, sustained overload | minimal truthful view + strong status banner | no fake alignment; explicit limits shown | advanced overlays and mixed-source fusion | enter-safe, exit-safe, reason codes |
| Recovering | ports healthy + stable sync window | gradual restore; hold-off prevents flapping | audit + provenance + consistent markers | instant full-fidelity restore | recover-start, hold-off, recover-success/fail |
Symptom: data drops or missing segments
- Check: which stage reports drops (ingest vs align)
- Check: port health and gap burst counters
- Action: throttle non-critical flows; keep markers/events
Symptom: sync loss or unexplained safe-display entry
- Check: reason code (drift exceeded vs. sync lost)
- Check: reorder overflow / timestamp anomalies
- Action: stay in safe display until stable window passes
Symptom: sluggish display or growing recorder backlog
- Check: render FPS and UI queue depth
- Check: recorder backlog (should remain bounded)
- Action: reduce refresh; simplify overlays; preserve timeline
All actions are hub-centric and avoid clinical logic. The goal is a truthful minimum view with auditable reasons.
Validation & commissioning checklist (prove alignment, prove safety, prove usability)
Commissioning an OR parameter hub is successful only when results are measured (not “looks aligned”), failures are truthful (gaps and conflicts remain visible), and operator tasks are repeatable under OR constraints. This checklist groups acceptance into three buckets: time alignment, stress & disconnection, and on-site usability.
- Inject known anchors: clean edges (1–10 Hz) and encoded events (sequence ID or pulse-width code) into multiple hub ports.
- Use a single truth source: split from one generator through matched paths; keep cable length comparable across ports.
- Compute distribution: report p50 / p95 / p99 and max alignment error; log drift slope and resync events.
- Do not “smooth” missing data: alignment stats must treat gaps as gaps; no hidden interpolation.
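Distribution reporting can be done with a plain nearest-rank percentile over the measured errors, with gaps excluded up front rather than interpolated. A sketch; a real test plan would fix the percentile method explicitly:

```python
def percentiles(errors, ps=(50, 95, 99)):
    """Nearest-rank percentiles of absolute alignment errors (seconds).
    None entries represent gaps and are excluded, never filled in."""
    xs = sorted(abs(e) for e in errors if e is not None)
    if not xs:
        return {}
    out = {}
    for p in ps:
        # Nearest-rank index, clamped to the valid range.
        k = max(0, min(len(xs) - 1, round(p / 100 * len(xs)) - 1))
        out[f"p{p}"] = xs[k]
    out["max"] = xs[-1]
    return out
```

Reporting p50/p95/p99 plus max (instead of a single mean) is what exposes the bimodal or widening distributions named as failure signals earlier.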
Suggested acceptance thresholds (template values; lock exact limits in internal specs):
- Event anchors (Mark/Trigger): p95 ≤ 2 ms, p99 ≤ 5 ms, max ≤ 10 ms
- Waveform alignment (across sources): p95 ≤ 5 ms, p99 ≤ 10 ms
- Resync behavior: no timeline rollback; recovery must be logged with reason code + duration
Stress & disconnection tests
- Congestion & jitter: apply bandwidth limits, latency jitter, burst loss, and duplication; verify backpressure actions are logged.
- Out-of-order & bursts: inject reorder patterns; verify reorder buffers do not overflow silently.
- Drop & reconnect: link-down / link-up cycles; verify the hub enters truthful degraded modes and recovers only after a stable window.
- Truth under stress: UI must show gaps, source tags, and mode banners—never “fake” a clean aligned view.
Observability artifacts to collect
- Per-stage counters: ingest → parse → align buffer → render → record
- Gap statistics: gap count + duration distribution
- Mode transitions: Normal / Degraded / Safe display / Recovering with reason codes
- Exported clip metadata: provenance, gaps, source switches, mode state
| Test item | Method | Pass threshold (template) | Evidence |
|---|---|---|---|
| Event alignment | Inject shared edge + sequence ID to multiple ports; compute error vs truth anchor | p95 ≤ 2 ms; p99 ≤ 5 ms; max ≤ 10 ms | Histogram, p50/p95/p99 report, raw timestamp log |
| Waveform alignment | Inject periodic markers onto waveform streams; verify cross-source overlay error | p95 ≤ 5 ms; p99 ≤ 10 ms | Overlay screenshots + alignment stats |
| Drift detection | Long run (e.g., 30–60 min); measure drift slope and resync actions | Drift slope bounded; resync logged; no rollback | Trend plots + mode-change log |
| Congestion tolerance | Bandwidth limit + jitter + burst loss; validate backpressure + truthful UI | No silent overflow; gaps visible; events auditable | Stage counters, gap stats, UI evidence |
| Reorder handling | Inject out-of-order sequences; validate reorder buffer does not hide anomalies | Ordering preserved or explicitly flagged; no fake alignment | Reorder counters + reason codes |
| Drop & reconnect | Link down/up cycles; verify degrade → safe display → recovering behavior | Truthful mode transitions; stable window required for recovery | Mode log + screenshots + exported clip metadata |
| Usability tasks | T1 Mark, T2 Freeze/Review, T3 Export closure with gloves | Success ≥ 95%; mis-touch rate documented; export closure verified | Timing sheet, video evidence, export package verification |
| Audit completeness | Verify reason codes + duration + affected ports are logged for every transition | No missing entries; consistent IDs across UI/log/export | Audit log bundle + checksum/signing if used |
Representative bench equipment and parts for commissioning (examples, not endorsements):
- Pulse / waveform generator: Keysight 33600A series, Tektronix AFG31000 series
- Oscilloscope (edge arrival comparisons): Keysight InfiniiVision 4000 X series, Tektronix MSO series
- Time interval counter (precise edge timing): Keysight 53230A
- Network impairment (loss/jitter/reorder): Linux tc/netem (software) + traffic generator (e.g., TRex)
- Pulse shaping (fixed-width edges): TI SN74LVC1G123
- Edge cleanup comparator: TI LMV7219
- LVDS trigger distribution: TI SN65LVDS31 / SN65LVDS32
- Fan-out / buffering: TI SN74LVC244A
- Digital isolation for injected triggers: ADI ADuM1100 / ADuM1401, TI ISO7741
- Optocoupler for isolated contact input: Vishay VO615A (or equivalent)
- Time-to-Digital Converter (alignment error statistics): TI TDC7200 / TDC7201
- Microcontroller for sequence ID encoding: ST STM32G0 family or RP2040
- (Optional) HW timestamp NIC for time validation: Intel I210