
UWB Ranging Node: Precise ToF with Timestamps and Secure Ranging


A UWB ranging node delivers reliable distance estimates by running a controlled timestamp loop (typically DS-TWR), reducing error through clock and antenna-delay management, and reporting CIR-based quality flags so upper layers can degrade safely under NLOS or power events.

Chapter H2-1

What This Page Solves: Definition, Boundary, Success Metrics

A UWB ranging node is defined by a closed engineering loop: transmit/receive UWB pulses, capture reliable timestamps (ToF), compute distance with DS-TWR, attach quality evidence (CIR + flags), and enforce secure ranging (auth/STS). System-level RTLS engines and anchor deployment are explicitly out of scope.

Definition (as a closed loop, not a glossary)

UWB ranging node = UWB transceiver (RF/BB + CIR + timestamp registers) + host MCU/SoC (TWR sequencing, scheduling, logging) + clocking (XTAL/TCXO/PLL + drift/CFO handling) + security (auth + secure ranging primitives such as STS) + diagnostics (CIR/first-path confidence, NLOS indicators, error codes) + optional IMU hooks (event trigger and time-alignment only).

  • Primary objective: stable distance under real multipath/NLOS conditions, not just a “mean LOS number”.
  • Primary artifact: distance + quality flags + logs that allow field debugging and production control.

Boundary (what must NOT be expanded here)


Allowed: ToF (SS/DS-TWR), timestamp chain, clock drift/CFO, CIR interpretation, antenna delay calibration, secure ranging/auth, NLOS handling, node-local logs and verification.

Banned: RTLS location engines (AoA/AoD/TDoA), anchor placement, multi-anchor network sync, gateway aggregation, and unrelated wireless stacks.

Recommended links only (no deep-dive here): RTLS / Geofencing at Edge · Edge Timing & Sync · Asset Tracking Tag

Success metrics (measurable, comparable, and decision-driving)

  • Accuracy vs tail: mean LOS error is not enough; track p95/p99 (or worst-case bins) and failure rate.
  • Update rate & duty cycle: the chosen Hz sets airtime, collision probability, and peak current stress.
  • Power reality: average current must be paired with TX/RX burst peak and brownout evidence.
  • Robustness: multipath/NLOS requires quality flags; otherwise “pretty numbers” hide wrong distance.
  • Security level: link encryption alone is not secure ranging; relay/replay resistance must be designed explicitly.
Typical targets and their engineering meaning (what must be controlled):

  • 10 cm @ LOS: requires antenna delay calibration and stable timestamp detection; otherwise fixed bias dominates the number.
  • p95 < 30 cm: requires CIR/first-path confidence and NLOS flags; tail behavior is driven by multipath and threshold instability.
  • 1 Hz vs 10 Hz: higher Hz increases airtime and peak current exposure; scheduling and retries must be bounded and logged (avoid “silent failure”).
  • Stable over temperature: requires a clock drift/CFO strategy (XTAL vs TCXO choice plus runtime correction) and temperature-indexed calibration points.
  • Secure ranging enabled: requires an STS/auth workflow and anti-replay/anti-relay design; plan for latency/power overhead and versioned key policies.
  • Peak current bounded: requires verified supply headroom during TX bursts; store brownout/reset causes and timestamp anomalies as diagnostic evidence.
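The tail-oriented KPI set above can be summarized node-side with a small helper. This is a minimal sketch, assuming per-exchange error samples and an attempt count are available from logs; the nearest-rank percentile convention and field names are illustrative choices, not from any specific SDK:

```python
def range_kpis(errors_cm, n_attempts):
    """Summarize ranging errors: mean, p95/p99 tails, and failure rate.

    errors_cm: absolute distance errors of *successful* exchanges (cm).
    n_attempts: total exchanges attempted (successes + timeouts/invalid).
    """
    s = sorted(errors_cm)
    n = len(s)

    def pct(p):  # nearest-rank percentile (illustrative convention)
        return s[min(n - 1, int(p * n))]

    return {
        "mean_cm": sum(s) / n,
        "p95_cm": pct(0.95),
        "p99_cm": pct(0.99),
        # Timeouts / no-first-path / invalid timestamps count as failures.
        "failure_rate": 1.0 - n / n_attempts,
    }
```

A profile with a good mean but a high failure rate or heavy p99 would fail this KPI set even though a single "accuracy" number looks fine.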
Figure F1 — UWB ranging node scope: what is inside the node (and what is out of scope)

Block-diagram style; minimal labels; key artifacts: timestamps, CIR, quality flags, secure ranging, logs.

(Diagram: in scope — UWB transceiver (RF/BB · CIR · timestamp registers), antenna + matching (antenna/group delay), clocking (XTAL/TCXO · PLL · CFO/drift), host MCU/SoC (DS-TWR sequencing · scheduling), security (auth · STS · anti-replay), IMU hook (trigger · time-align). Out of scope — RTLS engine (AoA/AoD/TDoA fusion), anchor placement/coverage, network sync (TDoA timing · PTP). Artifacts: distance + quality flags + logs.)

Chapter H2-2

Requirements Decomposition: Use-Case → KPIs → Error-Budget Entry

Engineering requirements must translate into an error budget. The goal is not to list scenarios, but to map each scenario to KPI combinations and then to the dominant error buckets: timestamp chain jitter, clock drift/CFO, multipath/NLOS, antenna delay, and digital thresholds.

Step 1 — Bucket scenarios by “ranging behavior” (not by product category)

Scenarios often look different on the surface, but share the same ranging pain points. Group them by the stresses they create:

  • Short-range high certainty: close interaction and confirmation distances. Dominant risk: fixed bias and detection thresholds.
  • Medium-range low power: periodic ranging with strict battery budget. Dominant risk: duty-cycle trade-offs and missed first-path.
  • Dynamic motion / fast updates: moving nodes or rapid approach. Dominant risk: scheduling, collisions, and unstable CIR under motion.
  • Industrial multipath / NLOS: metal reflections and human blocking. Dominant risk: tail behavior (p95/p99) and false confidence.

Step 2 — Split KPIs into mean, tail, and failure rate (the “hidden” spec)

A single accuracy number rarely predicts field behavior. The KPI set should include:

  • Mean accuracy (LOS): useful for baseline, but easy to “win” while still failing in real environments.
  • Tail stability (p95/p99): where multipath/NLOS dominates; this is what drives user experience and safety margins.
  • Failure rate: timeout/no first-path/invalid timestamps. A system with “good mean” but high failure rate is not robust.
  • Update rate & duty cycle: drives airtime, retries, and peak current exposure; needs a bounded scheduler and retry policy.
  • Security requirement: secure ranging adds overhead; treat it as a KPI constraint, not an afterthought.
Common traps (and what they hide):
  • “Accuracy = mean” → hides tail spikes; p95/p99 and failure rate must be tracked.
  • “Encrypted link = secure ranging” → does not stop relay/mafia fraud; secure ranging primitives must be designed.
  • “Distance jumps = algorithm issue” → often threshold/CIR/antenna delay/clock drift evidence is missing.

Step 3 — Build an error-budget entry model (five buckets + observable evidence)

Once KPIs are set, the shortest path to stability is to map symptoms to the dominant bucket and collect evidence that confirms it. The five-bucket model below is designed to remain node-local and testable.

  • Bucket A — Timestamp chain jitter: SFD/STS detection stability, timestamp register variance, first-path confidence behavior.
  • Bucket B — Clock drift/CFO: temperature-correlated distance drift, residual CFO estimates, PLL/oscillator sensitivity.
  • Bucket C — Multipath/NLOS: CIR shape changes, peak/first-path ratio, path ambiguity flags and tail spikes.
  • Bucket D — Antenna delay & assembly: fixed bias, device-to-device offset, enclosure/hand effects shifting delays.
  • Bucket E — Digital thresholds & policies: sudden jumps after config changes, threshold-induced first-path switching, retry explosions.
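As an illustration only, the symptom-to-bucket mapping above can be expressed as a tiny triage helper. Every evidence key here is a hypothetical name, not a field of any particular driver, and the thresholds are placeholders:

```python
def dominant_bucket(ev):
    """Map node-local evidence (hypothetical keys) to the likely error bucket."""
    if ev.get("temp_correlated_drift"):
        return "B: clock drift/CFO"
    if ev.get("first_path_confidence", 1.0) < 0.5 or ev.get("tail_spikes"):
        return "C: multipath/NLOS"
    if ev.get("fixed_bias_cm", 0) > 5:  # placeholder threshold
        return "D: antenna delay/assembly"
    if ev.get("jump_after_config_change"):
        return "E: thresholds/policies"
    if ev.get("timestamp_variance_high"):
        return "A: timestamp chain jitter"
    return "unclassified"
```

The point is not the specific rules but the discipline: each bucket is entered only on logged evidence, never on a hunch.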

The next chapters expand each bucket: timestamp math and DS-TWR (H2-3), error budget mechanics (H2-4), clocking (H2-5), RF/antenna delay (H2-6), parameter choices (H2-7), and validation/diagnostics (H2-11).

Figure F2 — From scenario to KPI to error buckets (entry model)

Single-node viewpoint: map requirements into five measurable error buckets and chapter pointers.

(Diagram: scenario buckets (short-range certainty · medium-range low power · dynamic motion / fast Hz · industrial multipath/NLOS) → KPI set (accuracy mean + p95/p99 + failure · update rate/duty cycle/retries · power avg + peak + brownout logs · security auth/STS/anti-relay) → error buckets A–E (timestamp jitter · clock drift/CFO · multipath/NLOS · antenna delay · threshold policies). Next expansions: H2-3 / H2-4 / H2-5 / H2-6 / H2-7 / H2-11.)
Chapter H2-3

Ranging That Actually Ships: ToF and SS/DS Two-Way Ranging

This chapter keeps ranging theory at the point where a node can implement it: packets on air, hardware timestamps, and how DS-TWR reduces clock-bias sensitivity compared with SS-TWR. Multi-anchor TDoA and network synchronization remain out of scope.

ToF in practice: a distance estimate driven by detection events

Time-of-Flight (ToF) converts propagation time into distance. In a real node, ToF is not measured as a continuous waveform delay; it is built from discrete detection events (SFD or STS) that produce hardware timestamps. When the detection point is unstable, distance becomes unstable even if the link “looks fine”.

  • What matters: where the timestamp is latched (event definition) and how repeatable that event is.
  • What does NOT matter: MCU read latency (reading registers later does not change the latched time).

SS-TWR vs DS-TWR: a decision view (bias sensitivity vs overhead)

SS-TWR uses fewer messages, but is more exposed to uncompensated clock effects. DS-TWR adds one message and forms a combination of timestamps that better cancels clock bias terms.

  • SS-TWR (2 messages): lower airtime and lower peak exposure, but higher sensitivity to clock drift and processing asymmetry.
  • DS-TWR (3 messages): higher airtime, but improved robustness against clock bias and better repeatability under temperature variation.
  • Engineering trade: DS-TWR typically wins for tail stability (p95/p99), while SS-TWR can fit ultra-tight duty-cycle cases.
Boundary

TDoA requires network-level synchronization and multi-anchor timing control. Only node-local TWR is covered here.

Where timestamps are taken: TX/RX edges and the register path

Hardware timestamps are typically latched at well-defined PHY events: TX timestamp at a transmit boundary and RX timestamp at a stable detection point (SFD or STS correlation peak). These timestamps enter the ranging calculation; the MCU reads and logs them for verification and debugging.

  • Stable detection: SFD/STS stability dominates ranging repeatability.
  • Evidence to log: per-exchange timestamps, fail codes, and at least a compact CIR quality summary.
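As a concrete sketch, the commonly used asymmetric DS-TWR combination of the six latched timestamps (t1..t6, in device ticks) looks as follows. The tick-to-seconds constant is device-specific and the value used in the usage note below is an assumption, not a vendor number:

```python
C_M_PER_S = 299_702_547.0  # approximate speed of light in air, m/s

def dstwr_tof_ticks(t1, t2, t3, t4, t5, t6):
    """Asymmetric DS-TWR: cancels first-order clock offset on both sides.

    Initiator latches t1 (POLL TX), t4 (RESPONSE RX), t5 (FINAL TX);
    responder latches t2 (POLL RX), t3 (RESPONSE TX), t6 (FINAL RX).
    """
    t_round1 = t4 - t1  # initiator round trip
    t_reply1 = t3 - t2  # responder turnaround
    t_round2 = t6 - t3  # responder round trip
    t_reply2 = t5 - t4  # initiator turnaround
    return (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2
    )

def distance_m(tof_ticks, tick_seconds):
    # tick_seconds must match the transceiver's timestamp resolution
    # (typically tens of picoseconds per tick on current UWB silicon).
    return tof_ticks * tick_seconds * C_M_PER_S
```

Note that the formula consumes only hardware-latched timestamps; reading the registers late from the MCU changes nothing, which is why detection/latch stability, not bus latency, dominates repeatability.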
Figure F3 — DS-TWR packet timing and which timestamps feed distance

Block-timeline style; minimal labels; hardware-latched TX/RX timestamps (SFD/STS) are the inputs.

(Diagram: initiator sends POLL (t1 TX) → responder receives (t2 RX) and sends RESPONSE (t3 TX) → initiator receives (t4 RX) and sends FINAL (t5 TX) → responder receives (t6 RX). TX/RX timestamps are latched at SFD/STS events in hardware, then read and logged by the MCU; the distance computation combines t1..t6.)

Chapter H2-4

Timestamp Chain & Error Budget: Where “Certainty” Gets Lost

When a link is up but distance drifts or jumps, the root cause is usually not the ToF formula. The cause is an end-to-end timestamp chain plus stacked error sources: clock terms, channel terms, and implementation terms.

The timestamp chain (RX front-end → register latch)

A ranging node turns an RF waveform into a timestamp through a deterministic chain: RF front-end → correlator / matched filter → SFD/STS detection → digital counter latch → timestamp register → MCU log. The distance result inherits the stability of the detection and latch points.

  • Stable latch: repeatable SFD/STS detection produces repeatable timestamps.
  • Unstable latch: first-path ambiguity and threshold switching produce “distance jumps”.

Error buckets (clock / channel / implementation)

Field behavior becomes explainable when errors are grouped into buckets with observable evidence. Each bucket suggests what to log and what to tighten first.

  • Clock-related: ppm/temperature drift, CFO residuals, PLL phase noise → shows as temperature-correlated drift and update-rate sensitivity.
  • Channel-related: multipath/NLOS, pulse distortion → shows as tail spikes, first-path confidence collapse, and CIR shape changes.
  • Implementation-related: antenna delay, calibration, detection thresholds, temperature effects → shows as fixed bias, device-to-device spread, or sudden jumps after config changes.

Practical error-budget template (measure → log evidence → reduce)

An error budget must be actionable. The template below ties each error source to symptoms, log fields, measurement methods, and reduction levers. This keeps optimization focused and prevents “blind tuning”.

  • Clock drift / CFO. Field symptom: slow distance drift vs temperature. Evidence to log: temperature, CFO estimate/residual, timestamp variance. How to measure: temperature sweep at a fixed distance; observe drift. How to reduce: TCXO or a correction strategy, calibration points, bounded updates.
  • Multipath / NLOS. Field symptom: tail spikes, “good mean but bad p99”. Evidence to log: CIR summary, first-path confidence, peak/first-path ratio. How to measure: LOS vs NLOS A/B tests, metal/human blocking trials. How to reduce: quality flags, threshold tuning policy, NLOS downgrade behavior.
  • Antenna delay. Field symptom: fixed offset, enclosure/hand effect shift. Evidence to log: per-unit offset, antenna version, mechanical configuration. How to measure: golden reference, swapped enclosure conditions, offset comparison. How to reduce: calibrate antenna delay, control assembly, lock RF BOM variants.
  • Detection thresholds. Field symptom: sudden “distance jumps” after config changes. Evidence to log: threshold settings, first-path index, retries/fail codes. How to measure: parameter sweeps, controlled SNR tests, replayed logs. How to reduce: versioned configs, bounded adaptive policies, stable detection points.
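A minimal sketch of a per-exchange evidence record that would back this template. Every field name is an assumption about what a given transceiver and firmware expose, and the escalation rule is an illustrative placeholder:

```python
from dataclasses import dataclass

@dataclass
class RangingEvidence:
    """Per-exchange log record backing the error-budget template (illustrative)."""
    distance_cm: float
    temperature_c: float           # clock bucket: drift correlation
    cfo_residual_ppm: float        # clock bucket: correction health
    first_path_confidence: float   # channel bucket, 0..1
    peak_to_first_ratio: float     # channel bucket: multipath dominance proxy
    first_path_index: int          # implementation bucket: threshold behavior
    antenna_version: str           # implementation bucket: fixed-bias tracking
    config_version: str            # implementation bucket: jump attribution
    fail_code: int                 # 0 = success

    def suspicious(self) -> bool:
        # Placeholder gate: escalate before trusting this distance downstream.
        return self.fail_code != 0 or self.first_path_confidence < 0.5
```

With a record like this, each row of the template maps to concrete columns, so “blind tuning” can be replaced by correlation against logged evidence.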

DS-TWR reduces clock-bias terms, but does not eliminate channel ambiguity (multipath/NLOS) or implementation bias (antenna delay). Those must be handled with evidence, calibration, and stable detection policy.

Figure F4 — Error budget: how dominant contributors shift by condition

Stacked view; minimal labels; shows Clock / Channel / Implementation share under LOS, low-SNR, and NLOS.

(Diagram: stacked error share (Clock / Channel / Implementation) per condition. LOS (short): antenna delay is the primary focus. Low-SNR (long): clock + detection dominate. NLOS (industrial): CIR + flags dominate. Chapter pointers: H2-5 Clocking · H2-6 RF/Delay · H2-7 Parameters · H2-11 Validation.)
Chapter H2-5

Clocking (Node-Local): XTAL/TCXO/PLL, CFO Correction, Drift & Aging

Accurate ranging depends on a clean, observable, and correctable timebase. DS-TWR reduces some bias terms, but timestamp stability still degrades when oscillator drift, CFO residuals, and PLL noise increase. Only node-local clocking is covered (no PTP/SyncE or network synchronization).

Why a “correctable clock” matters more than a “perfect clock”

A ranging node converts RF detection events into hardware timestamps. The distance result inherits short-term jitter (timestamp noise) and long-term drift (temperature and aging). A clock strategy is strong when it provides stable behavior and a clear correction loop (evidence → estimate → small adjustment).

  • Short-term: PLL phase noise and timestamp latch repeatability drive p95/p99 distance stability.
  • Long-term: temperature drift and aging drive slow distance drift and device-to-device spread.
  • Key point: MCU read latency does not change the latched timestamp; detection/latch stability does.

XTAL vs TCXO: choose by constraints (accuracy / boot time / power)

XTAL and TCXO are not “better vs worse”; they are different constraint optimizers. The selection should be driven by the target ranging stability, warm-up/lock time, and power budget for repeated measurements.

  • XTAL path: lowest cost/power, but drift and warm-up behavior must be managed by calibration and bounded correction.
  • TCXO path: improved temperature stability and repeatability, typically better tails, with added cost and startup considerations.
  • PLL role: a cleaner / more stable derived clock can reduce timestamp jitter, but lock and spurs must be verified.

How CFO and drift still impact DS-TWR timestamps

DS-TWR combines multiple TX/RX timestamps to reduce clock-bias sensitivity, but residual CFO/drift still affects the effective timing of detection events and the stability of the timestamp counter. In field data this often shows up as: stable mean distance but degraded tail stability (p95/p99) and temperature-correlated drift.

  • CFO residual: small frequency error that remains after estimation/correction; appears as slow drift or update-rate sensitivity.
  • Temperature drift: distance slowly walks with temperature even when geometry is fixed.
  • Aging: long-term shift that accumulates over months; shows as calibration “going stale”.

Node-local correction strategy (boot → temperature points → runtime trim)

A practical correction plan is layered, with explicit evidence and bounded adjustments. The goal is to improve repeatability without chasing noise.

  • Boot calibration: establish a usable baseline after power-up; log the calibration version and environmental conditions.
  • Temperature-point calibration: build a temperature-to-correction map; verify hysteresis and warm-up behavior.
  • Runtime micro-trim: small, bounded updates using CFO evidence or statistical residuals; avoid large steps that amplify tails.
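The layered plan above might look like the following sketch: a temperature-indexed correction map plus a clamped runtime trim. The table values, step bound, and units are placeholders, not vendor calibration data:

```python
# Placeholder temperature-point calibration map: (temp_C, correction_ppm).
TEMP_CAL_PPM = [(-20.0, 3.1), (25.0, 0.0), (60.0, -2.4)]

def temp_correction_ppm(temp_c):
    """Piecewise-linear lookup in the temperature-point calibration map."""
    pts = sorted(TEMP_CAL_PPM)
    if temp_c <= pts[0][0]:
        return pts[0][1]
    for (t0, p0), (t1, p1) in zip(pts, pts[1:]):
        if temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)
    return pts[-1][1]

def bounded_trim(current_ppm, estimate_ppm, max_step_ppm=0.05):
    """Runtime micro-trim: move toward the estimate, one bounded step at a time.

    Clamping the step avoids chasing noise and keeps tails from amplifying
    when a single CFO estimate is an outlier.
    """
    step = estimate_ppm - current_ppm
    step = max(-max_step_ppm, min(max_step_ppm, step))
    return current_ppm + step
```

Each applied trim should be logged with the calibration version and temperature, so drift that reappears can be attributed to a stale map rather than to the channel.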
Boundary

Only node-local timing is covered. Multi-node synchronization (PTP/SyncE) and network timing belong to the Edge Timing & Sync page.

Debug table: field symptom → clock evidence → first action

Clock issues often masquerade as channel issues. The table below ties visible symptoms to evidence that should be logged, and a first corrective action that is safe and bounded.

  • Distance drifts with temperature. Clock evidence to check: temperature vs distance correlation, CFO residual trend, timestamp variance. Likely interpretation: temperature drift dominates; the correction map is missing or stale. First action (bounded): add temperature-point calibration; verify the warm-up window; log the calibration version.
  • p95/p99 worsens at higher update rate. Clock evidence to check: PLL lock status (if available), timestamp jitter proxy, CFO residual at duty-cycle edges. Likely interpretation: short-term jitter / lock behavior affects detection repeatability. First action (bounded): verify lock/warm-up; bound runtime trim; avoid aggressive re-lock toggling.
  • Large device-to-device spread. Clock evidence to check: per-unit offsets, calibration status, temperature behavior vs unit ID. Likely interpretation: a mix of clock drift and implementation bias; antenna delay may dominate. First action (bounded): separate clock vs antenna delay with controlled tests; freeze RF BOM variants (see H2-6).
  • Distance jumps after a firmware/config change. Clock evidence to check: correction parameters, thresholds, CFO settings, calibration version. Likely interpretation: the correction step is too large or inconsistent between modes. First action (bounded): version parameters; clamp step size; roll back to the last known-good config for A/B.

Minimum “clock pack” to log per ranging exchange: temperature, correction version, CFO estimate/residual (if available), and timestamp variance proxy.

Figure F5 — Clock choice + node-local correction loops (evidence → estimate → bounded trim)

Flow + control-loop blocks; minimal labels; text ≥ 18px; no network-level sync.

(Diagram: constraints (accuracy target · boot/lock time · power budget) select a recommended path (XTAL + bounded calibration vs TCXO + lighter calibration). Correction loop: oscillator → PLL/divider → timestamp counter → DS-TWR compute → CFO/residual estimate → bounded trim. Calibration layers: boot calibration · temperature points · runtime micro-trim. Logs to keep: temperature · correction version · CFO residual · timestamp variance proxy.)

Chapter H2-6

RF Front-End & Antenna: Matching, Group Delay, Antenna Delay, Assembly Consistency

When the same UWB IC shows different distance bias after antenna or enclosure changes, the root cause is usually a mix of antenna delay, pulse distortion (group delay ripple), and chassis/ground effects that reshape the CIR. This chapter stays UWB-specific and focuses on evidence and repeatable build rules.

RF impact point: first-path detectability and CIR stability

UWB ranging is not “signal strength measurement”. It is dominated by whether the first path can be detected consistently. Any RF change that reshapes the pulse or shifts the detection threshold can move the detected first path, creating distance jumps or heavier tails.

  • Evidence: first-path confidence, peak/first-path ratio, first-path index stability, retry/fail codes.
  • Field signature: “mean looks OK” while p95/p99 breaks under NLOS/metal/human proximity.

Matching + bandwidth: why group delay ripple matters for pulses

A matching network can improve return loss, but the time-domain pulse can still be distorted when group delay ripple is large. Distortion broadens or shifts correlation peaks, making the detection point more sensitive to thresholds and multipath.

  • Return loss: affects effective energy and SNR margin.
  • Group delay ripple: reshapes pulses, changes correlation peak geometry, and affects first-path repeatability.

Antenna delay: why “fixed distance bias” appears after changes

Antenna delay is a hardware timing offset contributed by the antenna, feed structure, and the local reference environment. It often manifests as a nearly constant distance offset across the operating range. Changing antenna type, placement, enclosure spacing, or ground reference can shift this delay and therefore shift measured distance.

Key takeaway

If distance bias moves as a whole after an antenna/enclosure change, treat antenna delay and assembly consistency as first suspects. Use a golden reference and controlled A/B swaps to separate delay bias from channel effects.
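A hedged sketch of golden-reference antenna-delay estimation: measure a known LOS distance, attribute the residual bias to antenna delay, and store it per unit. Attributing the whole bias to antenna delay, and splitting it evenly between TX and RX paths, are simplifying assumptions that hold only after clock and threshold effects have been controlled:

```python
C_M_PER_S = 299_702_547.0  # approximate speed of light in air, m/s

def antenna_delay_s(measured_distances_m, true_distance_m):
    """Estimate the total (TX + RX) antenna delay against a golden reference.

    measured_distances_m: repeated LOS measurements at a surveyed distance.
    Returns seconds of combined delay to subtract from future ToF results.
    """
    mean_measured = sum(measured_distances_m) / len(measured_distances_m)
    bias_m = mean_measured - true_distance_m
    return bias_m / C_M_PER_S

def split_symmetric(total_delay_s):
    # Simplifying assumption: split the delay evenly between TX and RX.
    return total_delay_s / 2.0, total_delay_s / 2.0
```

Repeating this per unit, per antenna version, and per enclosure condition is what turns a "mystery offset" into a controlled, versioned calibration value.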

Enclosure, routing, and ground reference: how CIR gets reshaped

Metal proximity, cable routing, and ground reference changes can create additional reflections and modify the apparent first-path peak. The effect is visible in CIR signatures and first-path confidence collapse. RF co-existence issues that matter for ranging are the ones that move detection behavior or increase jitter, not generic EMC topics.

  • Chassis proximity: can raise multipath components and shift first-path detectability.
  • Digital noise coupling: can increase timestamp jitter proxies and reduce confidence.
  • UWB-focused coexistence: look for neighbor activity correlated with confidence drops or retries.

Build rules: freeze variables that change delay and CIR

Repeatability requires assembly-level consistency. Freeze the variables that shift antenna delay or reshape the near-field environment.

  • Antenna geometry: type, placement, keep-out, and reference ground layout.
  • Feed & routing: length, bends, and proximity to noisy digital rails.
  • Enclosure spacing: plastic thickness, metal clearance, and mechanical tolerance stack-up.
  • Version control: lock RF BOM variants and record build identifiers in logs.
Figure F6 — Matching / antenna delay / chassis reference: how CIR and first path shift

Three mini-panels: group delay ripple, antenna delay bias, chassis/ground multipath; minimal labels; text ≥ 18px.

(Diagram, three mini-panels: 1) matching group delay ripple — S11 vs group delay reshaping CIR peaks and the first path; 2) antenna delay bias — antenna/enclosure position A vs B shifting the distance offset and first-path index; 3) chassis/ground multipath — metal proximity adding CIR components and lowering first-path confidence. Evidence to log: first-path confidence · peak/first ratio · first-path index · retries.)
Boundary

This chapter stays UWB-specific. Generic EMC compliance strategies belong to the EMC / Surge for IoT page.

Chapter H2-7

PHY Parameters: Preamble, PRF, Data Rate, SFD/STS — Tune for Measurability

PHY parameters are not “faster is better”. They should serve two goals: detection reliability (stable timestamps and CIR capture) and diagnostic visibility (node-side evidence that remains comparable across builds). This chapter stays strictly on node-local measurability and production profiles.

Measurability first: the hidden cost of “aggressive” settings

An aggressive configuration can look great in average throughput but fail in the field because retries, missed detections, and tail instability dominate power and latency. A robust profile maximizes successful exchanges and keeps quality metrics interpretable.

Measurability checklist
stable timestamps · CIR available · quality flags · comparable logs

Preamble length: reliability margin vs duty-cycle efficiency

Preamble is a practical knob for detection margin. Longer preambles usually improve robustness under weak links, body blockage, or metal-rich multipath, while shorter preambles reduce air-time but tighten the detection margin.

  • Longer preamble: more correlation energy → fewer misses → better p95/p99 stability under stress.
  • Shorter preamble: higher potential update rate, but missed detections and retries can erase gains.
  • Logging cue: track retry rate and first-path confidence when changing preamble.

PRF: multipath separability and first-path behavior (not just “resolution”)

PRF changes the timing structure that the receiver uses to detect and correlate signals. In practice it influences how multipath components appear in CIR and how stable the first-path decision becomes across environments.

  • Multipath-rich scenes: PRF interacts with correlation behavior and can change peak/first-path ratios.
  • Threshold sensitivity: a “marginal” setting can turn small CIR changes into large first-path jumps.
  • Do not tune alone: validate PRF together with preamble and SFD/STS settings to keep metrics comparable.

Data rate: update rate is only real when detection success stays high

Higher data rate reduces packet time, but also tightens the margin for reliable detection and CIR capture. When the channel degrades, aggressive data rate often shifts the failure mode from “slightly noisy” to “missed or repeated”.

  • High data rate: less air-time per exchange, but tails often worsen in weak links and NLOS.
  • Moderate data rate: better measurability; logs remain stable and comparable across deployments.
  • Proof point: compare p95/p99 distance and retry rates, not only mean update rate.

SFD and STS: stable timestamp points + a bridge to secure ranging

SFD is where practical timestamp capture becomes stable. STS (Scrambled Timestamp Sequence) reinforces measurability by supporting trustworthy timestamp sequences and enabling secure-ranging hooks. This chapter only covers the node-visible effects and logging requirements—security protocol details belong to later chapters.

  • SFD role: a consistent detect event supports repeatable timestamp latching.
  • STS role: ties to secure ranging; changing STS behavior should be versioned and logged.
  • Logging cue: record STS enable state, error codes, and quality flags for each exchange.

Node-side CIR quality metrics (only what the node can compute)

Choose a small set of CIR-derived indicators that remain stable across builds. These metrics should explain failures and correlate with field symptoms without requiring a full RTLS engine.

Recommended node metrics
  • Peak amplitude (overall margin indicator)
  • First-path confidence (detectability / stability)
  • Peak-to-first ratio (multipath dominance proxy)
  • Retry / fail codes (operational health)
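These indicators can be sketched from a magnitude CIR window as follows. The threshold factor and the confidence formula are illustrative choices, not a vendor algorithm; real silicon exposes its own leading-edge detection:

```python
def cir_metrics(cir_mag, noise_floor, first_path_factor=6.0):
    """Node-side quality metrics from a CIR magnitude window (illustrative).

    first_path_factor: multiple of the noise floor used as the detection
    threshold (placeholder value).
    """
    peak = max(cir_mag)
    thresh = first_path_factor * noise_floor
    first_idx = next((i for i, v in enumerate(cir_mag) if v >= thresh), None)
    if first_idx is None:
        return {"valid": False}  # no first path: log as a failure, not a distance
    first_amp = cir_mag[first_idx]
    return {
        "valid": True,
        "peak_amplitude": peak,            # overall margin indicator
        "first_path_index": first_idx,     # detectability / stability
        "peak_to_first_ratio": peak / first_amp,  # multipath dominance proxy
        # Crude confidence proxy: how far the first path sits above threshold.
        "first_path_confidence": min(1.0, first_amp / (2.0 * thresh)),
    }
```

The exact formulas matter less than freezing them: once the metric definitions are stable across builds, field logs stay comparable and tail regressions become attributable.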

Production consistency: define and freeze a default profile

A production profile should keep quality metrics comparable and reduce “mystery variability”. Freeze what changes measurability, guard what can be tuned, and leave only bounded adaptivity.

  • Freeze (factory): core preamble/PRF/data-rate set, SFD/STS behavior, quality-metric definitions. Why: keeps measurability stable and logs comparable across devices and time. Required logging: profile ID, build ID, firmware version.
  • Guard (field, with limits): bounded preamble adjustments, bounded data-rate changes under poor links. Why: allows controlled robustness tuning without breaking diagnostic comparability. Required logging: reason code, before/after profile, time window.
  • Adaptive (node, bounded): small step changes driven by quality flags, with hard ceilings. Why: prevents runaway complexity while responding to real channel stress. Required logging: step size, trigger flag, upper/lower bounds.
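A minimal sketch of the Freeze/Guard layering as code; the profile fields, bounds, and log tuples are all placeholders chosen for illustration:

```python
# Factory-frozen profile (placeholder values) and the only guard-adjustable keys.
FROZEN = {"preamble": 128, "prf_mhz": 64, "data_rate_kbps": 850, "sts": True}
GUARD_BOUNDS = {"preamble": (64, 1024)}  # hard limits for field adjustment

def apply_guard_change(profile, key, value, log):
    """Apply a bounded field adjustment; frozen keys are rejected, all logged."""
    if key not in GUARD_BOUNDS:
        log.append(("rejected_frozen", key, value))
        return profile  # frozen layer: unchanged
    lo, hi = GUARD_BOUNDS[key]
    bounded = max(lo, min(hi, value))
    log.append(("guard_change", key, profile.get(key), bounded))
    return dict(profile, **{key: bounded})
```

The design choice this encodes: a field technician can nudge robustness knobs within hard ceilings, but can never redefine the metrics or disable STS, so logs stay comparable.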
Parameter selection triangle

Every profile is a trade among range, update rate, and power, with NLOS robustness typically increasing cost on at least one axis. Use quality metrics to keep the trade measurable.

Figure F7 — PHY parameters → measurability → CIR metrics → diagnostic visibility (profile-driven)

Causal chain blocks; minimal labels; text ≥ 18px; no RTLS engines.

(Diagram: PHY knobs (preamble · PRF · data rate · SFD/STS) → measurability (detection reliability · timestamp repeatability · CIR capture · comparable logs with profile ID + flags) → node metrics (peak amplitude · first-path confidence · peak/first ratio · retry codes) → quality flags. Profile layers: Freeze · Guard · Adaptive (bounded).)
Boundary

This chapter does not cover RTLS engines (TDoA/AoA/AoD), anchor deployment, or network time sync. It only covers node-local measurability.


Chapter H2-8

IMU Hooks (No Positioning Engine): Triggers, Time Alignment, Minimal Fusion Interface

IMU integration at the node should not “explode system complexity”. The IMU’s job is to provide triggering, motion state, and time-alignment confidence so upper layers can consume better-quality ranging data. This chapter explicitly excludes AoA/AoD localization engines.

Why add IMU at all: three node-local roles

A ranging node benefits from IMU information even without any localization engine. The value is practical: reduce wasted exchanges, improve interpretability, and gate unstable conditions.

  • Trigger: motion or shock interrupts can start short ranging bursts instead of always-on updates.
  • State: a simple motion-state machine informs duty-cycle and “trust level” tags.
  • Hint: provide flags that help upper layers filter out events likely degraded by rapid motion.

Trigger strategy: bounded bursts instead of constant ranging

Use IMU interrupts and thresholds to gate ranging. The policy should be predictable, bounded, and versioned. A safe design uses a small set of trigger reasons and a strict ceiling for maximum burst rate.

Guardrails
  • Ceiling: cap the burst window and max update rate.
  • Versioning: log the trigger reason and policy version.
  • Separation: IMU changes duty-cycle and flags, not core PHY profile definitions.

Node-local time alignment: IMU interrupt timestamp vs UWB timestamp

IMU events are timestamped in the MCU domain, while UWB timestamps are captured in the UWB timebase. The goal is not global absolute time—it is a bounded, observable alignment error that upper layers can trust.

  • Record offsets: measure and record a stable mapping offset between MCU-time and UWB-time domains.
  • Expose confidence: publish an alignment_quality flag so consumers know if alignment is tight or coarse.
  • Avoid overreach: do not introduce network timing or multi-node sync here.
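A sketch of the MCU-to-UWB mapping with an exposed confidence label. The linear mapping, residual thresholds, and tick units are assumptions; the only claim from the text is that alignment error should be bounded and observable, not globally synchronized:

```python
def make_mapper(offset_ticks, ratio, residual_ticks):
    """Build an MCU→UWB time mapper plus a coarse alignment-quality label.

    offset_ticks / ratio come from pairing MCU and UWB timestamps of the
    same events; residual_ticks is the fit residual. Units and thresholds
    are illustrative placeholders.
    """
    if residual_ticks < 10:
        quality = "tight"
    elif residual_ticks < 100:
        quality = "coarse"
    else:
        quality = "untrusted"

    def to_uwb(mcu_ticks):
        # Linear domain mapping; re-fit periodically as the offset drifts.
        return offset_ticks + ratio * mcu_ticks

    return to_uwb, quality
```

Publishing the quality label alongside mapped timestamps lets consumers decide whether an IMU-tagged ranging burst is usable, without any network-level timing.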

Minimal consumable interface: what upper layers should receive

Keep the interface small and stable. The node should output a minimal set that upper layers can consume without binding to a specific localization engine.

Minimal output set
motion_state · imu_event_flags · alignment_quality · uwb_quality_flags

“Coordinates” and localization solver outputs are out of scope. The interface carries only state and confidence signals.

Complexity control: three rules that prevent “system blow-up”

IMU hooks are valuable only when they remain predictable and bounded. These rules keep integration stable across firmware releases.

  • Rule 1: IMU changes duty-cycle and flags, not the definition of the PHY profile itself.
  • Rule 2: every policy has hard bounds (max burst time, max update rate, max power).
  • Rule 3: every state transition is logged (reason + version) for post-mortem analysis.
Figure F8 — Node output interface: UWB quality + IMU state (hooks only)

Single-node blocks: triggers, alignment mapping, minimal outputs; text ≥ 18px; no AoA/AoD engine.

(Diagram: within the ranging node, IMU interrupts and UWB timestamps/CIR feed a trigger policy, motion-state machine, alignment mapper, and quality summary; outputs: motion_state · imu_flags · align_quality · uwb_flags. Boundary: no AoA/AoD engine, no localization solver.)
Boundary

This chapter defines IMU hooks and minimal outputs only. AoA/AoD fusion and positioning engines are intentionally excluded.

Chapter H2-9

Crypto/Auth for Secure Ranging: Replay, Relay, Session Keys, and STS in Practice

Encrypted communication is not the same as secure ranging. Secure ranging must protect freshness, time-of-flight integrity, and configuration integrity against attackers who can still pass “normal crypto” checks. This chapter focuses on node-local, ranging-related mechanisms and evidence.

Why “encrypted packets” can still produce fake distance

Encryption protects confidentiality and integrity of messages. Distance fraud attacks can still succeed if an attacker can replay old valid exchanges, relay challenges to a real device, or force a downgrade to weaker ranging settings. Secure ranging requires binding the ranging-critical fields to freshness and authenticated state.

Secure ranging goals
freshness · identity binding · STS integrity · anti-downgrade

Threat model (node-centric): what can happen even with crypto

Secure ranging is about the attacker’s ability to affect the time-of-flight decision. The most common classes below should be treated as distinct engineering problems, each with different evidence signals.

  • Replay: old valid messages are injected again. Symptoms: repeated counters/nonces, “valid decode” but stale session.
  • Relay (mafia fraud): challenges are forwarded to a real device. Symptoms: timing consistency anomalies and unusual processing gaps.
  • Impersonation: a fake node claims a valid identity. Symptoms: auth failures, key-version mismatch, STS validation errors.
  • Downgrade: the link is forced into weaker settings (e.g., STS disabled). Symptoms: config negotiation changes and “compat mode” counters.

Engineering landing zone: STS + challenge/response (freshness binding)

Practical secure ranging rests on two pillars: freshness (nonce/counter) and binding (ranging-critical fields are authenticated under the session state). STS supplies trustworthy timestamp sequences and reduces the chance that a distance is accepted without a valid, session-bound signal structure.

Bind these (minimal)

Bind freshness (nonce/counter), session_id, peer_id, key_version, STS mode/config summary, and a compact digest of ranging-critical fields so distance cannot be accepted under stale or downgraded state.
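A sketch of such a binding using Python's standard hmac module; the field layout, separator, and key handling are illustrative assumptions rather than any standardized frame format:

```python
import hmac
import hashlib

def bind_exchange(session_key, nonce, session_id, peer_id, key_version, config_digest):
    """Authenticate the ranging-critical state under the session key.

    A stale nonce or a downgraded config summary changes the MAC, so a
    distance cannot be accepted under replayed or weakened state.
    """
    msg = b"|".join([
        nonce.to_bytes(8, "big"),
        session_id.encode(),
        peer_id.encode(),
        key_version.to_bytes(2, "big"),
        config_digest,
    ])
    return hmac.new(session_key, msg, hashlib.sha256).digest()

key = b"\x01" * 16                                        # illustrative session key
tag_fresh = bind_exchange(key, 41, "s1", "tag-7", 3, b"cfg-a")
tag_replay = bind_exchange(key, 40, "s1", "tag-7", 3, b"cfg-a")  # old nonce
```

The receiver recomputes the MAC over its own view of the state, so a replayed nonce or a downgraded config digest fails the comparison even though the packet decrypts correctly.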

Identity and keys (minimal set, ranging-related only)

Keep the node implementation minimal but robust. The goal is not a full enterprise PKI stack; it is a small, versioned set of identity and key state that can prevent replay and downgrade at ranging time.

  • device_id / peer_id: stable identifiers used to bind sessions and logs.
  • session_id: short-lived session context; every ranging exchange references it.
  • key_version: supports rotation; reject stale versions and log mismatches.
  • anti-rollback: prevent firmware/config from reverting to weaker security levels; keep monotonic version evidence.
Required node evidence fields
security_level · sts_mode · session_id · nonce/counter · key_version · auth_fail_code · downgrade_flag
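The key_version and anti-rollback rules reduce to monotonic comparisons; the reject codes below are placeholders:

```python
def check_versions(stored_key_version, offered_key_version,
                   stored_fw_version, offered_fw_version):
    """Reject stale key versions and firmware rollback; return a reject code or None.

    Both counters are monotonic: the node records the highest version it has
    seen and refuses anything older, logging the mismatch.
    """
    if offered_key_version < stored_key_version:
        return "KEY_VERSION_STALE"
    if offered_fw_version < stored_fw_version:
        return "ROLLBACK_REJECTED"
    return None  # versions acceptable; update stored values elsewhere

code_stale = check_versions(3, 2, 10, 10)
code_rollback = check_versions(3, 3, 10, 9)
code_ok = check_versions(3, 4, 10, 10)
```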

Performance impact: latency, power, and configuration complexity

Security typically adds steps and compute. The right engineering move is to treat security level as a profile dimension: measure the cost in extra air-time, CPU cycles, and state transitions, then cap the complexity with a frozen default and bounded policies.

  • Latency: extra handshake steps reduce achievable update rate under tight duty-cycle limits.
  • Power: compute + air-time increase; the average can rise sharply in burst-heavy modes.
  • Complexity: more configurations require versioning and logs to keep diagnosis possible.

Decision table: Threat → Protection → Cost (plus evidence)

Use a simple engineering table to decide what to enable by default and what to guard. The evidence column is critical: without logs and flags, secure ranging failures look like “random link issues”.

  • Replay → Protection: freshness (nonce/counter) + session binding; reject stale state. Cost: small state storage; strict counter handling. Evidence to log: session_id, nonce/counter, replay_reject_code.
  • Relay → Protection: challenge/response with tight time consistency; STS mode enforced. Cost: extra steps, tighter timing constraints, more diagnostics. Evidence to log: timing_anomaly_flag, auth_fail_code, sts_fail_code.
  • Impersonation → Protection: peer identity binding + session-key derivation; reject key_version mismatch. Cost: identity state management; rotation handling. Evidence to log: peer_id, key_version, auth_fail_code.
  • Downgrade → Protection: security_level lock + anti-rollback; reject weaker configs unless explicitly guarded. Cost: config policy + version monotonicity. Evidence to log: downgrade_flag, policy_version, config_digest.
Figure F9 — Secure ranging envelope: threats → defenses → evidence & cost

Boundary

This chapter covers secure ranging mechanisms and node evidence only. It intentionally excludes PKI/TLS/cloud provisioning, anchors, and RTLS engines.


Chapter H2-10

Low Power and Duty-Cycling (Node View): Wake Policies, Burst Scheduling, Peak Current, and Brownout Evidence

UWB peak current can be high. Battery life becomes controllable when duty-cycling is treated as a policy with hard bounds, and when power integrity issues are separated from “link issues” using node-side evidence (brownout flags, reset causes, and ranging-quality logs). This chapter does not dive into PMIC topologies.

Make battery life a policy: three knobs that stay measurable

A practical node design controls lifetime via three measurable knobs: interval (how often ranging runs), burst window (how long high-rate updates last), and event triggers (when to wake for ranging).

Duty-cycle knobs
interval · burst · triggers · ceilings

Duty-cycling modes: interval, burst, and event-driven scheduling

Use a small set of modes that can be reasoned about and logged. The most reliable field behavior comes from bounded policies rather than ad-hoc adjustments.

  • Interval mode: predictable updates; easiest to budget.
  • Burst mode: short high-rate window for interaction; must have a strict ceiling.
  • Event mode: wake on IMU/button/external interrupt; best for low average power.
Hard bounds

Always cap maximum burst duration, maximum update rate, and minimum cool-down time. Log the trigger reason and the policy version.

Energy budgeting that works in practice: “energy per exchange”

The most transferable budgeting method is to treat each ranging exchange (or burst) as a repeatable energy event. Once measured, lifetime becomes a controllable parameter: reduce exchange energy, reduce exchange count, or both.

  • Measure: record active window duration and a representative current profile for the exchange.
  • Count: log how many exchanges happen per hour/day, including retries.
  • Control: tie policy to quality flags so retries do not silently dominate energy.
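The arithmetic is simple enough to sketch directly. All numbers here are illustrative assumptions (a 230 mAh cell, a 120 mA / 3 ms exchange profile, a 2 µA sleep floor):

```python
def lifetime_days(battery_mah, exchange_ma, exchange_ms, exchanges_per_day, sleep_ua):
    """Estimate battery life from 'energy per exchange' plus the sleep floor."""
    # Charge per exchange in mAh: current (mA) * duration (ms -> hours)
    per_exchange_mah = exchange_ma * (exchange_ms / 3.6e6)
    active_mah_per_day = per_exchange_mah * exchanges_per_day
    sleep_mah_per_day = (sleep_ua / 1000.0) * 24.0
    return battery_mah / (active_mah_per_day + sleep_mah_per_day)

# One exchange every 10 s (8640/day), including retries in the count.
days = lifetime_days(230, 120, 3, 8640, 2)
```

With these numbers the estimate lands around 250 days; halving either the exchange count or the exchange energy roughly doubles it, which is exactly the control loop the bullets describe.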

Peak current and power integrity: how droop turns into timestamp anomalies

Voltage droop during high-current TX/RX or compute windows can destabilize detection and timestamp capture. In the field, this often presents as “random distance jumps” or “unexplained retries” unless power evidence is logged.

  • Droop → detect instability: SFD/STS decisions become threshold-sensitive; first-path becomes less stable.
  • Droop → digital upset: timebase or state machines can reset/timeout, causing missing or inconsistent timestamps.
  • Key separation: power issues must be proven via brownout/reset evidence, not guessed from link behavior.

Evidence logging: the minimum set to distinguish power vs channel vs policy

A small, consistent evidence set prevents “mystery bugs”. These fields should be logged alongside ranging quality flags and policy context, so failures can be bucketed correctly.

Minimum evidence set
reset_cause · brownout_count · vbat_min · policy_version · trigger_reason · profile_id · security_level · retry_count · fail_code · quality_flags

Minimal state machine (node-only): predictable scheduling and traceability

A minimal state machine makes behavior stable across releases and enables traceability. Each transition should be logged with a reason code, so the duty-cycle policy remains inspectable in real deployments.

  • Sleep → Wake: by trigger_reason (IMU/button/interrupt) or interval timer.
  • Wake → Burst: bounded by ceilings; exits on time or quality decline.
  • Burst → Cooldown → Sleep: enforce minimum cool-down to avoid thermal/power stress.
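That state machine can be sketched as a small transition table with logged reasons; state and event names are illustrative:

```python
TRANSITIONS = {
    ("SLEEP", "wake"): "WAKE",
    ("WAKE", "burst_granted"): "BURST",
    ("WAKE", "no_work"): "SLEEP",
    ("BURST", "ceiling_hit"): "COOLDOWN",
    ("BURST", "quality_decline"): "COOLDOWN",
    ("COOLDOWN", "cooldown_done"): "SLEEP",
}

log = []

def step(state, event, policy_version="sm-1"):
    """Advance the duty-cycle state machine; every transition is logged."""
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        log.append({"from": state, "event": event, "to": state, "err": "illegal"})
        return state                      # illegal transitions are logged, not taken
    log.append({"from": state, "event": event, "to": nxt, "ver": policy_version})
    return nxt

s = "SLEEP"
for ev in ["wake", "burst_granted", "ceiling_hit", "cooldown_done"]:
    s = step(s, ev)
```

Because every entry carries the event (reason) and policy version, a field log replay can reconstruct exactly why the node burst when it did.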
Figure F10 — Duty-cycle timeline + peak current windows + evidence taps

Boundary

This chapter focuses on node-level duty-cycling, peak-current symptoms, and evidence logging. PMIC/harvesting design details are intentionally excluded.

Chapter H2-11

Verification & Debug: Reading CIR, Handling NLOS, Calibrating Offsets, and Controlling Production Consistency

The goal is field-proof evidence, not paper-perfect models. This chapter turns “range jitter” into repeatable buckets using CIR/diagnostics, then prevents fake confidence via detect→degrade→flag, and finally locks consistency with calibration records + fixture thresholds + a stable log schema.

Scope Guard

Allowed: CIR/first-path metrics, thresholds, NLOS detect→degrade→flag, antenna-delay & temperature buckets, fixture/golden unit, production thresholds, log schema, example MPNs for validation.
Banned: RTLS engines (TDoA/AoA/AoD), anchor deployment, network sync, cloud provisioning/PKI deep-dive, PMIC topology/energy-harvesting deep-dive.

CIR quick-read checklist · NLOS policy table · Calibration minimal loop · Production thresholds + log schema

CIR/Diagnostics: the 5 metrics that carry the most evidence

The CIR is useful only when the node collects the same metrics under a frozen profile (preamble/PRF/SFD/STS/threshold settings). Start with a short, high-signal checklist that maps directly to failure buckets.

  • first_path_conf (FP confidence): how stable/credible the detected first path is. Common bucket: NLOS, heavy multipath, threshold drift. Next step: freeze thresholds, compare against a LOS reference, enable NLOS flagging.
  • peak_amp: peak strength (use together with noise_floor). Common bucket: distance/attenuation, antenna mismatch. Next step: check matching/ground/assembly; compare against the golden unit.
  • noise_floor: a raised baseline often signals interference or coupling. Common bucket: interference, digital coupling, or a threshold set too low. Next step: raise the threshold cautiously; correlate with retries and power events.
  • peak_to_first ratio: a large ratio suggests the first path is not the strongest (typical NLOS). Common bucket: multipath/NLOS. Next step: detect→degrade→flag; do not promise cm-level accuracy.
  • threshold_id + hit rate: too high misses detections; too low triggers falsely. Common bucket: config-induced jitter, “random” drops. Next step: lock a default profile; version the threshold set.
Field rule

Always log a short window under a frozen profile (e.g., N exchanges) before changing any threshold or PRF/preamble setting. Without freeze-and-sample, comparisons become invalid.

NLOS policy: Detect → Degrade → Flag (never output fake precision)

NLOS is not solved by “better math” inside a tag. The node must detect suspicion, reduce the promise (output contract), and expose quality flags so upper layers can behave safely.

  • Detect: low first_path_conf + high peak_to_first. Degrade: lower update rate, widen the averaging window, cap retries. Flag: nlos_flag=1, coarse confidence tier. Log: first_path_conf, peak_to_first, retry_count, profile_id.
  • Detect: unstable FP index (jumps) + rising noise_floor. Degrade: output “invalid / low confidence” instead of a cm-level number. Flag: quality_flags: interference_suspect. Log: noise_floor, threshold_id, fail_code.
  • Detect: high retries + timing anomalies (with clean power evidence). Degrade: switch to a safer profile; limit burst duration. Flag: quality_flags: unstable_link. Log: retry_count, fail_code, policy_version.
Minimal quality schema
quality_flags · nlos_flag · confidence_tier · profile_id · policy_version
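The rows above collapse into a small classifier. The thresholds here (conf_min, ratio_max, noise_max) are placeholders and must come from frozen-profile fixture data, not from this sketch:

```python
def classify(first_path_conf, peak_to_first, noise_floor,
             conf_min=0.6, ratio_max=3.0, noise_max=-95.0):
    """Map CIR metrics to (nlos_flag, confidence_tier, quality_flags).

    Thresholds are illustrative; freeze and version them per profile.
    """
    flags = []
    nlos = first_path_conf < conf_min or peak_to_first > ratio_max
    if noise_floor > noise_max:              # raised baseline: interference suspect
        flags.append("interference_suspect")
    tier = "coarse" if nlos or flags else "fine"
    return {"nlos_flag": int(nlos), "confidence_tier": tier, "quality_flags": flags}

los = classify(first_path_conf=0.9, peak_to_first=1.2, noise_floor=-101.0)
nlos = classify(first_path_conf=0.3, peak_to_first=5.0, noise_floor=-101.0)
```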

Calibration: separating fixed bias from environment-driven variance

Calibration should produce versioned artifacts that can be audited in the field. The key is to remove repeatable offsets (antenna delay / assembly bias) and avoid silently changing thresholds without traceability.

  • Antenna delay (Tx/Rx or equivalent bias): directly becomes a fixed distance offset when antenna/ground/assembly changes. Artifact: antenna_delay_ver + bias value(s). Production control: spot-check per lot; reject outliers vs the golden unit.
  • Temperature buckets (coarse points): thresholds/paths drift with temperature; keeping “policy + threshold version” prevents chaos. Artifact: temp_bucket + threshold_set_ver. Production control: verify across min/room/max; store the bucket ID in logs.
  • Assembly consistency sampling: housing/ground/placement can shift CIR shape and affect first-path stability. Artifact: lot_id + stats (mean/σ) + fail_code. Production control: AQL sampling; tighten mechanical tolerances when σ grows.
Calibration evidence fields
antenna_delay_ver · threshold_set_ver · temp_bucket · lot_id · golden_unit_id
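Applying the stored artifacts might look like the sketch below; the correction form (one fixed bias plus a coarse per-bucket term) and all numeric values are assumptions for illustration:

```python
CALIB = {  # versioned artifact; values in metres, invented for this sketch
    "antenna_delay_ver": "ad-v3",
    "bias_m": 0.42,
    "temp_bias_m": {"cold": 0.03, "room": 0.0, "hot": -0.04},
}

def corrected_range(raw_m, temp_bucket):
    """Remove the fixed antenna-delay bias plus the coarse temperature term,
    returning the calibration versions so every range stays auditable."""
    out = raw_m - CALIB["bias_m"] - CALIB["temp_bias_m"][temp_bucket]
    return {"range_m": round(out, 3),
            "antenna_delay_ver": CALIB["antenna_delay_ver"],
            "temp_bucket": temp_bucket}

r = corrected_range(5.47, "room")
```

Because the version tags ride along with every corrected range, a field log can always answer “which calibration produced this number?”.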

Production consistency: fixture, golden unit, thresholds, and log schema

Production control is a repeatability problem. The fixture fixes geometry; the golden unit fixes expectations; thresholds fix pass/fail; and the log schema makes failures searchable across lots.

  • Fixture: repeatable geometry + optional “controlled obstruction” coupon for NLOS sanity checks.
  • Golden unit: a known-good node run on every line shift; its logs define normal bounds.
  • Thresholds: set acceptance windows for first_path_conf, noise_floor, peak_to_first, retry_count.
  • Log schema: fixed fields enable grep-style triage and trend charts later.
Minimal log schema (recommended)

profile_id, sts_mode, security_level, session_id, antenna_delay_ver, threshold_set_ver, temp_bucket, first_path_conf, peak_amp, noise_floor, peak_to_first, retry_count, fail_code, nlos_flag, quality_flags, reset_cause
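A line-side check against that schema can be sketched as below; the acceptance windows are invented for illustration and would come from golden-unit logs:

```python
SCHEMA = ["profile_id", "first_path_conf", "noise_floor", "peak_to_first",
          "retry_count", "fail_code"]          # subset of the recommended fields

WINDOWS = {  # acceptance windows derived from golden-unit logs (illustrative)
    "first_path_conf": (0.7, 1.0),
    "noise_floor": (-110.0, -95.0),
    "peak_to_first": (1.0, 2.5),
    "retry_count": (0, 2),
}

def line_check(record):
    """Return the list of failing fields; an empty list means pass."""
    missing = [k for k in SCHEMA if k not in record]
    if missing:
        return ["schema:" + k for k in missing]   # unsearchable logs fail first
    return [k for k, (lo, hi) in WINDOWS.items()
            if not lo <= record[k] <= hi]

good = {"profile_id": "p1", "first_path_conf": 0.85, "noise_floor": -103.0,
        "peak_to_first": 1.4, "retry_count": 0, "fail_code": 0}
```

Failing the schema before failing the windows enforces the point above: a unit whose logs cannot be searched is a line failure even if its RF numbers look fine.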

Reference materials (MPN examples) for verification rigs and repeatable logs

The list below provides concrete, commonly used reference parts for building a repeatable verification loop (golden unit, profile freeze, known-good RF path, and consistent host control). Use these as examples; availability and package suffixes should be verified per region.

  • UWB transceiver IC: Qorvo DW3110, DW3210. Why it helps: stable reference for CIR/first-path diagnostics and secure-ranging modes. Typical usage: golden design baseline; compare profiles/threshold sets.
  • Integrated UWB module: Qorvo DWM3001C (module; includes nRF52833 + antenna). Why it helps: the RF path is validated/calibrated, reducing “board RF uncertainty” during debug. Typical usage: golden unit; quick LOS/NLOS fixture baselines.
  • Dev kit: Qorvo DWM3001CDK. Why it helps: repeatable host control + logs; accelerates threshold and schema freeze. Typical usage: line-side golden log generator; regression tests.
  • UWB IC (IoT family): NXP Trimension SR040, SR150. Why it helps: alternative reference platform for interop baselines and profile sanity checks. Typical usage: cross-vendor profile validation; evidence-field alignment.
  • Clock parts (examples): 38.4 MHz crystals such as NDK NX3225SA-38.4M and Abracon ABM8-38.400MHZ; TCXO example: Epson TG-5035CJ (frequency per design). Why it helps: keeps profile/threshold experiments comparable; reduces “clock drift” confounding. Typical usage: golden-unit stability; temperature-bucket validation.
  • Debug/programming (examples): SWD cable Tag-Connect TC2030-IDC; RF test connector Hirose U.FL-R-SMT-1. Why it helps: improves repeatability of firmware/profile loading and RF sampling during debug. Typical usage: line-friendly flashing; consistent RF probing points.
Practical note

For H2-11, the “best” reference material is the one that produces stable, searchable logs under a frozen profile. Prioritize repeatability (golden unit + fixture + schema) over theoretical perfection.

Figure F11 — CIR quick-look: LOS vs NLOS (what to read in 10 seconds)

Read order: noise_floor → first_path_conf → peak_to_first → decide nlos_flag. LOS typically shows high first_path_conf and a low peak_to_first ratio (nlos_flag=0); NLOS shows the reverse (nlos_flag=1). If NLOS is suspected, degrade the output and flag quality instead of reporting fake precision.


FAQs (x12): Debuggable ranging, not vague “it works”

Each answer stays inside node scope: timestamps, PHY params, CIR evidence, antenna delay calibration, secure ranging basics, IMU hooks, duty-cycling evidence, and production consistency.

Allowed: SS/DS-TWR, node timestamps, CFO/clock drift (node), PHY params, CIR metrics, antenna delay & temp buckets, STS secure ranging, IMU hooks, duty-cycle evidence, production thresholds/logs.
Banned: RTLS engines (TDoA/AoA/AoD), anchor deployment, network sync (PTP/SyncE), cloud PKI deep-dive, PMIC topology deep-dive.
1. Why is DS-TWR usually less sensitive to temperature drift than SS-TWR?

Maps to: H2-3 (TWR sequences) + H2-5 (clock drift & temperature buckets)
DS-TWR uses two round trips so clock offset and much of the clock-rate error cancel across the message exchanges. Temperature-driven ppm drift still exists, but it contributes less to the final range than in SS-TWR, especially when reply delays are stable. Remaining error often comes from timestamp detection consistency and RF/antenna delay terms.
Evidence to watch: temp_bucket vs range bias/scatter, plus stable reply_delay configuration.
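The cancellation can be made concrete with the widely used asymmetric DS-TWR estimate. The sketch below injects a 20 ppm responder clock error and shows the residual is negligible; variable names are the conventional round/reply intervals, not a specific chip API:

```python
def ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric double-sided TWR time-of-flight estimate.

    First-order clock-offset terms cancel because each side's round and reply
    intervals are measured in its own timebase.
    """
    return ((t_round1 * t_round2 - t_reply1 * t_reply2)
            / (t_round1 + t_round2 + t_reply1 + t_reply2))

# True ToF 10 ns, reply delays 200 µs / 300 µs, responder clock fast by 20 ppm.
tof, d1, d2 = 10e-9, 200e-6, 300e-6
ppm = 20e-6
t_round1 = 2 * tof + d1                # measured by initiator (reference clock)
t_reply1 = d1 * (1 + ppm)              # measured by responder (fast clock)
t_round2 = (2 * tof + d2) * (1 + ppm)  # measured by responder (fast clock)
t_reply2 = d2                          # measured by initiator (reference clock)
est = ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2)
```

The residual error scales with the ToF itself (roughly tof × ppm/2 here), not with the much larger reply delays, which is why DS-TWR tolerates drift far better than SS-TWR.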
2. The link works, but range is always off by a fixed value—what is the first suspicion?

Maps to: H2-6 (antenna/ground delay) + H2-11 (calibration & production evidence)
A fixed offset most often points to antenna delay (including matching/ground reference changes) or an untracked calibration constant. Enclosure, layout revisions, or assembly variation can shift delay and become a “constant meters” bias even when packets are clean. Verify the stored calibration version, compare against a golden reference build, and check whether the offset is symmetric across devices.
Evidence to watch: antenna_delay_ver, lot-to-lot mean shift, golden-unit comparison logs.
3. Range “jumps” when update rate increases—what are the three most common root-cause buckets?

Maps to: H2-2 (requirements→budget) + H2-7 (PHY params) + H2-10 (duty-cycle & power evidence)
The common buckets are: (1) detectability margin shrinks (shorter preambles, tighter thresholds, heavier multipath), (2) scheduling/processing pressure increases (retries, overlapping exchanges, stale timestamps), and (3) power integrity events appear (peak current causes droop → retries/resets → discontinuous timestamps). Separate these buckets using CIR metrics, retry counters, and reset evidence under a frozen profile.
Evidence to watch: first_path_conf, retry_count, reset_cause vs burst timing.
4. How does CFO/clock drift show up in ranging, and how is it measured and mitigated?

Maps to: H2-4 (timestamp chain & budget) + H2-5 (node-internal correction)
CFO and drift manifest as a bias that correlates with timing between messages (reply delay) and changes with temperature/aging. Measure by logging the device’s CFO/drift estimates and checking whether range bias tracks those estimates across temperature buckets. Mitigation is typically a combination of stable clock source, node-side CFO correction loops, shorter/controlled reply delays, and versioned temperature-bucket compensation.
Evidence to watch: drift/CFO estimate trend vs temp_bucket and range bias.
5. In CIR, what does “unstable first path” usually mean?

Maps to: H2-7 (detectability & thresholds) + H2-11 (LOS/NLOS readout & flags)
Unstable first-path detection usually indicates multipath/NLOS, a threshold/profile mismatch, or interference raising the noise floor. When the strongest peak is not the earliest arrival, the first path can “wander” between candidates, creating jitter even though packets succeed. Use a combined readout (first-path confidence + peak-to-first + noise-floor) to decide when to degrade output and set an NLOS flag.
Evidence to watch: first_path_conf, peak_to_first, noise_floor, threshold_id.
6. Same hardware, but enclosure or hand-grip changes the range—how to isolate antenna vs ground-reference issues?

Maps to: H2-6 (RF front-end, antenna delay, enclosure/ground coupling)
Enclosure and hand-grip can detune impedance and shift the effective antenna delay by changing the ground reference and near-field coupling. Isolate by running a controlled fixture test across poses and comparing CIR shape changes: antenna detuning often alters peak amplitude and delay bias, while ground-reference problems frequently raise noise floor and distort early-path detection. Keep the profile frozen and log before/after deltas.
Evidence to watch: pose-dependent shifts in peak_amp, noise_floor, and mean bias.
7. With STS enabled, does range become more stable or does packet loss increase—and why?

Maps to: H2-7 (PHY parameters & detectability) + H2-9 (secure ranging basics & cost)
STS can improve resistance to spoofing, but it may increase airtime and processing overhead. If configuration reduces detectability margin (thresholds too tight, preamble too short, or key/session issues), retries and drops can rise—making range appear less stable due to missing samples. The practical approach is to tune PHY settings for reliable detection first, then enable security while monitoring retry rate and CIR confidence.
Evidence to watch: retry_count and first_path_conf before/after STS enable.
8. Why can an encrypted link still be vulnerable to relay attacks, and what must secure ranging add?

Maps to: H2-9 (threats, STS + challenge/response, anti-downgrade)
Encryption protects message content, but a relay attacker can forward encrypted packets fast enough to fake proximity. The missing property is time-of-flight binding: secure ranging must enforce distance-bounding behavior using STS with challenge/response under tight timing constraints, plus anti-replay freshness and anti-downgrade rules. Security must be evaluated as a ranging property, not only a transport property.
Evidence to watch: security_level, sts_mode, policy_version, and session freshness markers.
9. What is the minimal IMU integration, and what “fusion” crosses scope into an RTLS engine?

Maps to: H2-8 (IMU hooks: triggers, time alignment, minimal outputs)
Minimal IMU use is a hook: wake triggers, motion-state classification (still/moving), and time-aligned interrupt timestamps that accompany ranging quality flags. This helps scheduling, filtering out obvious outliers, and annotating confidence without building a location engine. Crossing scope starts when the node outputs fused position/trajectory, runs AoA/TDoA logic, or performs map-based optimization—those belong to RTLS engine pages.
Evidence to watch: motion_state + time-aligned interrupt timestamp + quality_flags.
10. What symptoms do peak-current brownouts/resets create in ranging, and how to preserve evidence?

Maps to: H2-10 (duty-cycle & power evidence) + H2-11 (logs, fixtures, consistency)
Peak-current events can cause sudden retry spikes, intermittent packet loss, discontinuous timestamps, and “bursts” of invalid ranges—often clustered at the start of a measurement burst. If a reset occurs, the ranging stream may restart with default thresholds or lost session context, amplifying jitter. Preserve evidence by logging reset cause, brownout flags, retry counters, and a short CIR summary for each exchange under a frozen profile.
Evidence to watch: reset_cause, brownout flag, retry_count, fail codes vs burst timing.
11. What quality flags should NLOS detection output so upper layers do not misuse the range?

Maps to: H2-11 (Detect→Degrade→Flag, minimal schema)
Output must separate “a number exists” from “the number is safe to use.” A minimal set is: nlos_flag, confidence_tier, and quality_flags (e.g., interference_suspect, unstable_first_path). Include the profile/policy version so the consumer knows what the flags mean. Upper layers should treat NLOS-suspect ranges as degraded inputs (or invalid) rather than cm-accurate values.
Evidence to watch: nlos_flag, confidence_tier, quality_flags, policy_version.
12. In production, which parameters must be locked, and which can be self-calibrated?

Maps to: H2-7 (profile freeze) + H2-11 (calibration artifacts & consistency)
Lock the items that define detectability and timestamp behavior: PHY profile (preamble/PRF/SFD/STS mode), thresholds, reply delays, and logging schema. Allow self-calibration only for versioned, auditable offsets such as antenna delay and coarse temperature buckets—never as silent “magic.” Production should validate with a fixture and a golden unit, and every shipped unit should log the profile/calibration versions used to generate range.
Evidence to watch: profile_id, threshold_set_ver, antenna_delay_ver, temp_bucket.