
Indoor Positioning & Lighting: UWB/BLE AoA, Sync, Drivers


Indoor Positioning / Lighting ties UWB or BLE AoA location evidence to stable lighting actions: measure the full timestamp chain, manage error/drift with calibration, and prevent dimming/driver noise from corrupting RF and time sync. The goal is predictable “follow-me/zone/occupancy” behavior with quantified accuracy, latency, and false-trigger rate that stays consistent across rooms, people density, and dimming states.

H2-1. Engineering Scope & System Targets

Lock the boundary up front so this page stays focused on its vertical: positioning engine + luminaire actuation + time alignment + edge MCU evidence.

In scope: UWB / BLE AoA engines • Time sync & timestamps • LED driver & dimming interface • Edge MCU control + logs
Out of scope: Cloud/platform architecture • Matter setup tutorials

This topic page covers an indoor positioning-to-lighting control path where a position/zone/angle event is converted into a measurable lighting action (brightness/CCT/scene) with bounded latency and low false triggers. The focus is on the hardware-facing engineering chain: RF/baseband evidence, timestamp integrity, clock discipline, edge MCU control, and the luminaire driver interface.

In-scope outputs (what the system must reliably deliver):

  • Occupancy / zone / “follow-me” lighting events driven by UWB ranging or BLE AoA angle/quality indicators.
  • Actuation commands to a dimming interface (PWM / 0–10V / DALI/serial control) and LED driver update timing that can be audited.
  • Evidence-ready logs that tie “wireless measurement → solve → actuation” into one traceable timeline.

Out-of-scope (link out, do not expand here):

  • Home hub/gateway platform design, cloud dashboards, automation-rule UI, user onboarding/commissioning walkthroughs.
  • Deep router/mesh/network tuning, app-level UX tutorials, or full smart luminaire product teardown beyond driver/EMC touchpoints.

Three measurable target groups (write them as acceptance criteria):

  • Positioning accuracy: define by the deployment goal (cm-level, ~0.5 m, or room-level). Use a testable metric such as P95 error (distance), angular error (AoA), or room classification accuracy.
  • Trigger latency: measure as a distribution (not only mean). At minimum report P50/P95/P99 of end-to-end latency, and split into: Rx→Solve and Solve→Actuate.
  • Reliability: quantify false-trigger rate (per hour/day), packet/measurement drop rate, and drift (error growth vs time/temperature).
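The three target groups can be reduced to acceptance numbers with a short script. A minimal Python sketch, where the record layout, the nearest-rank percentile method, and the example inputs are assumptions of this illustration rather than part of any spec:

```python
def percentile(values, p):
    """Nearest-rank percentile (p in 0..100) of a non-empty list."""
    ranked = sorted(values)
    k = max(0, min(len(ranked) - 1, round(p / 100 * len(ranked)) - 1))
    return ranked[k]

def kpi_report(errors_m, latencies_ms, false_triggers, hours):
    """Summarize accuracy, latency, and reliability as acceptance criteria:
    P95 position error, P50/P95/P99 end-to-end latency, false triggers/hour."""
    return {
        "p95_error_m": percentile(errors_m, 95),
        "latency_ms": {p: percentile(latencies_ms, p) for p in (50, 95, 99)},
        "false_triggers_per_hour": false_triggers / hours,
    }
```

A report like this, recomputed per room and per dimming state, is what makes the "stays consistent across rooms" claim testable.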

Evidence rule for all later chapters: every claim must trace back to one of these evidence chains: RF/measurement quality (CIR/ToF quality, IQ/phase stability, CFO), time alignment (offset/drift/holdover), driver interference (dimming edges, rail ripple, ground bounce), or system logs (Tx/Rx/Solve/Actuate timestamps).

Figure F1. System scope boundary and measurable targets: positioning evidence + time alignment + luminaire actuation.
Cite this figure: ICNavigator — “Scope Boundary for Indoor Positioning & Lighting (F1)”.

H2-2. End-to-End Event Chain: From Ranging/Angle to Light Actuation

Convert “it feels slow / jittery / unstable” into a measurable chain with timestamps, jitter sources, and first-proof measurements.

A positioning-driven lighting system is only debuggable when the full pipeline is split into auditable stages. The recommended chain is: Tag/Phone → Anchor/Array → IQ + Hardware Timestamp → Solver → Policy Window → Dimming Interface → LED Driver Update. Every stage must expose at least one time reference and one quality indicator.

Canonical timestamps (define them precisely so logs are consistent):

  • Tx — the physical transmit time of a packet/pulse (or the closest available reference). If Tx is not hardware-stamped, record the best proxy and mark it.
  • Rx — the receive time captured by hardware timestamping (preferred). Software “time of arrival” is often too noisy for tight latency/accuracy claims.
  • Solve — when a position/zone result becomes available and the estimated event time used by the solver (store both if they differ).
  • Actuate — when the lighting command is committed (policy output) and when the driver actually updates (two stamps if possible).

For robust KPI reporting, latency must be treated as a distribution. The most common field failures are not caused by average delay, but by P95/P99 tail latency that creates missed triggers, delayed “follow-me” behavior, or visible lighting hesitation.


Where jitter and “latency explosions” usually come from:

  • Wireless retries & backoff — creates long-tail delays; correlates with channel occupancy, dimming state EMI, or human blockage.
  • Measurement windowing — scan/slot scheduling can impose a deterministic floor on response time (visible as periodic latency spikes).
  • Solver batching — batching improves throughput but increases P99 and makes actuation feel “bursty”.
  • Policy debounce/hysteresis — required to avoid false triggers, but must be designed with evidence-driven time windows.
  • Dimming update cadence — PWM/0–10V/DALI update rates can quantize the actuation response, producing step-like behavior.

Rule of thumb: always split latency into Rx→Solve and Solve→Actuate. If the tail lives in Rx→Solve, fix measurement scheduling, timestamp integrity, retries, or solver load. If the tail lives in Solve→Actuate, fix policy windows, queueing, and dimming/driver update cadence.
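The rule of thumb above can be sketched directly. A minimal Python example, assuming log records with `rx_ts` / `solve_ts` / `actuate_ts` fields in milliseconds (the field names and units are this sketch's assumptions):

```python
def split_latency(events):
    """Split each event's end-to-end latency into Rx→Solve and Solve→Actuate."""
    rx_solve = [e["solve_ts"] - e["rx_ts"] for e in events]
    solve_act = [e["actuate_ts"] - e["solve_ts"] for e in events]
    return rx_solve, solve_act

def p99(values):
    """Crude P99: value at the 99th-percentile index of the sorted list."""
    ranked = sorted(values)
    return ranked[min(len(ranked) - 1, int(0.99 * len(ranked)))]

def tail_owner(events):
    """Name the stage that owns the latency tail, per the rule of thumb."""
    rx_solve, solve_act = split_latency(events)
    return "Rx→Solve" if p99(rx_solve) >= p99(solve_act) else "Solve→Actuate"
```

Running `tail_owner` over a field log tells you whether to start with measurement scheduling or with policy/dimming cadence.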

Stage | Typical contribution | Worst-case drivers | What to log / measure first
Anchor/Array capture | Window-limited | Scan slot length, missed captures, RF interference | Rx hardware timestamp; capture success; IQ quality / CIR/ToF quality
Retries/backoff | Low | Collision, low SNR, EMI correlation with dimming edges | Retry counters; channel busy time; correlation with dimming state
Solver | CPU-bound | Batching, queueing, thermal throttling, memory pressure | Solve timestamp; queue depth; CPU load; solver cycle time distribution
Policy window | Intentional | Debounce too long, hysteresis mis-tuned | Policy decision time; window parameters; false-trigger counters
Dimming interface | Cadence-limited | Update rate, bus contention, command quantization | Actuate(commit) timestamp; bus latency; update cadence
LED driver update | Driver-limited | Soft-start/settle, current loop response, protection events | Actuate(driver) timestamp; rail ripple; fault flags (OCP/OTP)
Figure F2. End-to-end chain and canonical timestamps (Tx/Rx/Solve/Actuate) with the main jitter hotspots.
Cite this figure: ICNavigator — “Event Chain with Timestamps for Indoor Positioning & Lighting (F2)”.

H2-3. UWB vs BLE AoA: Engineering Trade-Offs & Decision Matrix

This section is not a primer. It maps measurable targets (accuracy, tail latency, reliability) to the engineering costs: synchronization, calibration, deployment density, power, and EMI sensitivity.

Decision axes: Accuracy • P95/P99 latency • Deployment density • Power budget • Calibration effort • EMI sensitivity

A positioning-driven lighting system should start from the required outcome, not from the radio label. Three common requirement tiers are: room/zone-level (coarse presence & region triggers), sub-meter (smooth follow-me lighting), and cm-level (tight asset point location). Each tier implies different acceptance metrics: P95 error, P99 latency, and false-trigger rate.

UWB (ToF / TWR / TDoA)

Strength: distance precision, with better multipath handling via quality evidence (e.g., CIR/first-path behavior).

Engineering cost: synchronization and infrastructure (especially TDoA), higher peak current, slot/window scheduling.

Most sensitive to: clock drift/holdover and timestamp jitter; tail latency from ranging windows/retries.

First evidence: ToF/CIR quality stats, anchor-to-anchor offset/drift, P99 ranging latency.

BLE AoA (Angle + RSSI / Fusion)

Strength: cost and deployment friendliness; strong fit for zone/direction triggers and follow-me with acceptable precision.

Engineering cost: antenna array calibration and channel phase consistency; co-existence (Wi-Fi + dimming EMI).

Most sensitive to: phase noise/CFO estimation stability and EMI-induced phase jumps; reflection-driven angle bias.

First evidence: inter-channel phase consistency, CFO variance over time, angle variance vs dimming state.

Decision rule: focus on what breaks the user experience. If the issue is tail latency, audit Rx→Solve first (windows, retries, compute load). If the issue is angle/distance instability, audit phase/CFO/time alignment and driver coupling evidence.

Decision axis | UWB (ToF/TWR/TDoA) | BLE AoA
Accuracy fit | Best for sub-meter to cm-level when quality evidence can be captured and anchors can be placed well. | Strong for room/zone to sub-meter when array calibration and reflection bias are controlled.
Latency behavior | Window/slot scheduling sets a floor; retries create P99 tails. Needs disciplined ranging cadence. | Angle updates can be frequent, but phase/CFO stability and co-existence determine jitter/tails.
Deployment density | May require more anchor planning for coverage; TDoA adds sync infrastructure complexity. | Arrays need line-of-sight coverage and stable geometry; density depends on room layout and reflections.
Power budget | Higher peaks during ranging; duty-cycling must be designed around time slots and required refresh. | Generally friendly for low duty cycles; still sensitive to scan/processing frequency for AoA updates.
BOM & complexity | RF + clocking/sync components can dominate; test/validation overhead often higher. | Arrays add antenna/channel count and calibration fixtures; cost often lower than UWB at scale.
Calibration | Anchor timing alignment (for TDoA) and hardware timestamp discipline are critical. | Array geometry + channel phase/gain calibration are mandatory; drift management matters long-term.
EMI sensitivity | Timestamp jitter can be impacted by conducted/radiated noise; strong evidence path via ToF/CIR quality. | Phase chain is sensitive: dimming edges and ground bounce can induce angle jumps and variance growth.
Figure F3. Decision map: pick UWB or BLE AoA using measurable targets and the primary engineering costs.
Cite this figure: ICNavigator — “UWB vs BLE AoA Decision Map for Indoor Positioning & Lighting (F3)”.

H2-4. Error Budget: Why Accuracy Collapses (Multipath, Phase, Sync, Dimming EMI)

Turn “it drifts / it jumps / it fails in one room” into a measurable error budget with evidence hooks and minimum-fix levers.

Accuracy and stability are dominated by an error budget across four categories: wireless physics (multipath/NLOS), RF/baseband integrity (phase/IQ/CFO), time alignment (offset/drift/timestamp jitter), and system coupling (dimming-driver noise coupling into RF ground/baseband). Each category must map to a signature, a first evidence, and a minimum-fix lever.


A) Wireless physics (multipath / NLOS / blockage)

Signature: variance grows at the same spot; bias changes near glass/metal; accuracy worsens with people density.

First evidence: UWB CIR/first-path stats; AoA angle spread and reflection-driven bias patterns.

Minimum fix: anchor placement/height/visibility; use quality thresholds and “reject measurement” rules before actuation.
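The "reject measurement" rule is a small gate in front of actuation. A hedged Python sketch, where the quality fields (`first_path_ratio`, `angle_spread_deg`) and the thresholds are illustrative assumptions, not vendor APIs:

```python
def accept_measurement(meas, min_first_path_ratio=0.4, max_angle_spread_deg=15.0):
    """Gate a UWB/AoA measurement on quality evidence before it may
    drive a lighting action; return (accepted, reason)."""
    if meas.get("first_path_ratio", 0.0) < min_first_path_ratio:
        return False, "weak first path (multipath/NLOS suspected)"
    if meas.get("angle_spread_deg", 0.0) > max_angle_spread_deg:
        return False, "angle spread too wide (reflection bias suspected)"
    return True, "ok"
```

Logging the rejection reason keeps the gate auditable, so quality thresholds can later be tuned from field evidence rather than guesses.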

B) RF/baseband (phase noise / IQ / array error / CFO)

Signature: AoA angle jumps; stable but biased angles; temperature-dependent slow drift.

First evidence: inter-channel phase consistency; CFO estimation variance; IQ imbalance indicators.

Minimum fix: array phase/gain calibration; stabilize reference ground and sensitive supplies; reduce phase-chain noise coupling.

C) Time alignment (clock drift / timestamp resolution)

Signature: good at start then slowly degrades; reboot causes temporary instability; anchor-to-anchor bias grows over time.

First evidence: offset/drift/holdover curves; timestamp jitter distribution (RMS + peak-to-peak).

Minimum fix: disciplined sync refresh and holdover budget; prioritize hardware timestamping and log sync state.

D) Dimming EMI coupling (driver edges → RF/baseband)

Signature: accuracy drops at specific dim levels; packet drops at dimming edges; noise peaks at PWM harmonics.

First evidence: rail ripple/ground bounce correlated with dim edges; RSSI/IQ noise peaks at PWM frequency.

Minimum fix: return-path control, partitioned supplies, targeted filtering; include dimming state in logs for correlation.

Error-budget discipline prevents scope creep: only evidence that connects to position output and lighting actuation belongs here. Platform/cloud topics remain out-of-scope.

Error source | Primary signature | Evidence to capture | Minimum-fix lever
Multipath / NLOS | bias + variance growth; room-dependent failure | UWB CIR/first-path vs total energy; AoA angle spread vs location | anchor geometry; quality gating; reject-before-actuate
Blockage (human) | temporary spikes; correlated with movement/crowd | quality metric dips; increased retries; latency tail growth | increase diversity; smoothing window tuned by evidence
Phase chain drift | AoA jumps or stable bias | inter-channel phase delta stability; CFO variance | calibration + drift tracking; improve reference/ground
Timestamp jitter | P99 latency spikes; range/angle instability | Rx timestamp jitter stats; sync state logs | hardware timestamp; reduce noise coupling; sync discipline
Dimming EMI coupling | only fails at certain dim levels | rail ripple vs dim edges; IQ/RSSI noise peaks at PWM | return-path control; partition supplies; targeted filters
Figure F4. Error budget map: four measurable error classes feeding position instability and trigger errors.
Cite this figure: ICNavigator — “Error Budget Map for Indoor Positioning & Lighting (F4)”.

H2-5. Time Sync & Timestamp Discipline: Aligning Time for TWR/TDoA/AoA

This section focuses on implementable engineering: what must be synchronized, how timestamps are produced and transported, and how to verify offset/drift/holdover without drifting into academic algorithms.

Sync targets: anchor-to-anchor • array channels • MCU timebase. Verify: offset / drift / holdover. Rule: event-time vs solve-time.

A positioning-driven lighting system fails quietly when time is not aligned. The goal is to keep all measurements and actions anchored to a traceable time model: hardware capture time (RF/baseband), system time (MCU discipline), and action time (dimming/driver update). Without this, accuracy collapses, tail latency becomes unexplainable, and field logs cannot prove causality.

What must be aligned

Anchors: required for distributed reception (TDoA) and multi-anchor fusion. Misalignment becomes slow drift and room-dependent bias.

Array channels: required for AoA phase differences. Misalignment becomes angle jumps or stable bias that worsens with temperature.

MCU timebase: required for consistent logs and end-to-end latency split (Rx→Solve, Solve→Actuate).

Sync stack (from hard to soft)

1) Hardware timestamps: captured as close to RF/baseband as possible; software time-of-arrival is typically too noisy.

2) MCU discipline: local clock is continuously corrected (frequency/offset) and carries sync state.

3) Domain crossing: every timestamp transported into logs must include domain ID and quality flags.

Practical rule: do not “fix” positioning by tuning filters until timestamp integrity is proven. Many “multipath problems” are actually timestamp jitter or resync jumps disguised as RF issues.


Verification metrics (engineer-facing acceptance language):

  • Offset: instantaneous time difference between domains/devices. Track as a time series and histogram.
  • Drift: slope of offset vs time (ppm or ns/s). Correlate with temperature and supply state.
  • Holdover: time-to-fail after losing reference sync while staying within an offset bound.
  • Resync jump: offset discontinuity at re-alignment. Large jumps create visible “position jump / light jump”.
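Offset, drift, and resync jumps fall out of a simple analysis over periodic offset samples. A minimal Python sketch, assuming offsets are sampled in microseconds against elapsed seconds (sampling layout and the jump threshold are this example's assumptions):

```python
def drift_ppm(times_s, offsets_us):
    """Least-squares slope of offset vs time; 1 us/s of slope equals 1 ppm."""
    n = len(times_s)
    mt = sum(times_s) / n
    mo = sum(offsets_us) / n
    num = sum((t - mt) * (o - mo) for t, o in zip(times_s, offsets_us))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den  # us per second == ppm

def resync_jumps(offsets_us, threshold_us=50.0):
    """Count offset discontinuities larger than the threshold (resync jumps)."""
    return sum(1 for a, b in zip(offsets_us, offsets_us[1:])
               if abs(b - a) > threshold_us)
```

Correlating the fitted slope with temperature logs is what turns "it drifts" into an actionable holdover budget.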
Scope | Metric | How to measure | Fail signature → first fix lever
Anchors | offset / drift | periodic offset sampling + temperature; store drift slope and variance | slow accuracy collapse → tighten sync refresh, improve clock discipline, reduce timestamp jitter
Array channels | phase consistency | measure inter-channel phase delta stability; track vs temperature and dimming state | angle jumps/bias → calibrate channels, stabilize references, reduce phase-chain noise coupling
MCU timebase | holdover / jump | disable reference for a window; measure time-to-fail and resync jump distribution | post-resync instability → rate-limit corrections, record sync state, avoid step changes in action time

Lighting coupling constraint: event-time vs solve-time

  • Event time: physical occurrence time used to order events and keep motion continuity across latency jitter.
  • Solve time: when a result becomes available; used to schedule real actions (cannot act before solve time).
  • Anti-jitter policy: order by event time, execute by solve time, and update drivers at a bounded cadence to avoid visible flicker.
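The anti-jitter policy above ("order by event time, execute by solve time, bounded cadence") can be sketched as a small buffer. The class and parameter names here are illustrative assumptions:

```python
import heapq

class EventOrderer:
    """Buffer results in event-time order; release them only once their
    solve time has passed, at a bounded commit cadence."""

    def __init__(self, commit_interval_ms=100):
        self.heap = []                        # (event_ts, solve_ts, action)
        self.commit_interval = commit_interval_ms
        self.last_commit = None

    def push(self, event_ts, solve_ts, action):
        heapq.heappush(self.heap, (event_ts, solve_ts, action))

    def pop_ready(self, now_ms):
        """Return the oldest-by-event-time action whose result is available
        (solve_ts <= now), or None if the cadence bound suppresses it."""
        if self.last_commit is not None and now_ms - self.last_commit < self.commit_interval:
            return None                       # bounded cadence: merge/hold
        if self.heap and self.heap[0][1] <= now_ms:
            self.last_commit = now_ms
            return heapq.heappop(self.heap)[2]
        return None
```

Note the cadence bound lives at the commit point, not in the solver: solver jitter is absorbed here instead of reaching the driver.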
Field | Meaning | Domain | Quality flags | Used for
rx_ts | hardware capture time (preferred) | RF/baseband | ts_valid, jitter_est | Rx→Solve latency split, fusion ordering
event_ts | estimated physical event time used by solver | solver | sync_state, conf | event ordering, motion continuity
solve_ts | time when result becomes available | MCU/system | queue_depth | tail latency analysis, compute bottleneck
actuate_ts | commit time + driver update time (if available) | MCU/driver | driver_fault | Solve→Actuate split, visible delay diagnosis
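As one possible concrete shape for such a log line, a hedged Python sketch (the types, defaults, and helper methods are assumptions of this example, not a fixed schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PositionLogRecord:
    """One log record tying a measurement to its actuation."""
    rx_ts: float                        # hardware capture time (RF/baseband)
    event_ts: float                     # estimated physical event time (solver)
    solve_ts: float                     # when the result became available
    actuate_ts: Optional[float] = None  # driver commit/update time, if known
    flags: dict = field(default_factory=dict)  # ts_valid, sync_state, ...

    def rx_to_solve(self) -> float:
        return self.solve_ts - self.rx_ts

    def solve_to_actuate(self) -> Optional[float]:
        if self.actuate_ts is None:
            return None
        return self.actuate_ts - self.solve_ts
```

Keeping the latency splits as methods on the record makes it hard for later analysis scripts to mix up the timestamp semantics.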
Figure F5. Time sync stack and timestamp semantics: what must be aligned and what must be verified for stable triggers.
Cite this figure: ICNavigator — “Time Sync Stack & Timestamp Semantics (F5)”.

H2-6. Edge MCU ↔ Positioning Engine Interface: Data, Control, Power

Define the edge MCU role without drifting into gateway/cloud topics: an implementable contract across data plane, control plane, and power plane, with a checklist-style interface table.

The edge MCU is responsible for maintaining an evidence-ready closed loop: it ingests timestamps/measurements, applies control knobs (windows, calibration, power), and emits lighting actions with bounded latency and traceable logs. A good interface contract prevents false diagnoses such as “RF is unstable” when the real cause is buffering, DMA overruns, or control scheduling.

Data plane (evidence + results)

Inputs: IQ/CIR snippets (or summaries), hardware timestamps, range/angle + quality.

Constraints: buffering depth and backpressure must be explicit to avoid silent drops.

First evidence: queue depth, drop counters, DMA late/overrun correlated with P99 latency tails.

Control plane (knobs + side effects)

Knobs: scan windows/slots, TX power, array selection, calibration update, sync refresh.

Side effects: changes can shift latency floor, EMI footprint, and false-trigger rate.

Rule: every knob change must be logged with versioning for field correlation.

Power plane (latency vs energy)

Modes: always-on, duty-cycled, event-wakeup.

Trade: lower power often increases P99 latency and requires stronger hysteresis to avoid jitter.

Requirement: log mode transitions and wake sources to explain behavior changes.

Lighting hook (actuation contract)

Outputs: dimming command + commit timestamp; driver update time when available.

Cadence: bounded update rate prevents visible flicker and isolates solver jitter.

Fault link: driver OCP/OTP flags must join the same log timeline.

Category | Message / item | Transport | Must include | Why it matters
Data | Measurement report | SPI/UART | rx_ts, seq, quality flags | enables ordering and Rx→Solve latency split; prevents “invisible drops”
Data | IQ/CIR snippet (optional) | SPI (DMA) | snippet ID, capture window, jitter estimate | proves phase/CFO/multipath signatures without streaming full raw data
Control | Scan window / slot config | SPI/UART | window length, cadence, version | sets latency floor; explains periodic P99 spikes
Control | Calibration update | SPI/UART | cal ID, timestamp, rollback tag | prevents untraceable angle bias and drift after maintenance
Power | Mode transition | GPIO/IRQ + log | wake source, mode ID, time | links energy saving decisions to latency/reliability changes
Lighting | Actuation commit | PWM/0–10V/bus | actuate_ts, update cadence | makes Solve→Actuate measurable; isolates driver cadence from solver jitter
Logs | Error counters | internal + report | drop/overrun/late/retry | distinguishes RF loss from buffering/DMA timing problems

Implementation sanity check: if positioning “randomly” becomes unstable, verify DMA/queue overruns and mode transitions before retuning RF or solver filters.
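This sanity check can be expressed as a fixed triage order. A sketch under stated assumptions: every counter name below (`dma_overrun`, `queue_drop`, `mode_transitions`, `retries`, and their budgets) is hypothetical, invented for illustration:

```python
def triage_instability(counters):
    """Return the first subsystem to investigate for 'random' instability,
    checking buffering/DMA and power-mode policy before blaming RF."""
    if counters.get("dma_overrun", 0) or counters.get("queue_drop", 0):
        return "buffering/DMA"
    if counters.get("mode_transitions", 0) > counters.get("expected_transitions", 0):
        return "power-mode policy"
    if counters.get("retries", 0) > counters.get("retry_budget", 0):
        return "RF link"
    return "solver/policy"
```

The point of the fixed order is cultural as much as technical: it stops teams from retuning solver filters while a silent DMA overrun is dropping measurements.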

Figure F6. Edge MCU interface contract split into data, control, and power planes with an auditable lighting hook.
Cite this figure: ICNavigator — “Edge MCU Interface Split for Indoor Positioning & Lighting (F6)”.

H2-7. Luminaire Drivers × Positioning Coexistence: Dimming Cadence, Flicker, Action Jitter

Unique to positioning-driven lighting: a noisy event stream must be shaped into stable driver updates, while dimming edges and harmonics must not degrade RF/baseband integrity.

Chain: PWM/Analog/Bus → Driver → Light. Risk: flicker & step jitter. Rule: rate / slew / hysteresis. Evidence: RF noise ↔ dim state.

A positioning engine produces event-rate variability: confidence rises and falls, zones may chatter near boundaries, and track updates arrive with jitter. Directly mapping every event to a dimming update creates visible artifacts: step-change oscillation, low-frequency envelope flicker, and inconsistent “follow-me” response. A robust system treats dimming as an actuation pipeline with explicit cadence limits.

Dimming chain (short and measurable)

Command: PWM / analog dim / digital bus update

Driver update: internal modulation & current-loop response

Optical output: perceived brightness and visible flicker risk

Typical failures

Chatter: brightness toggles between two levels near a zone boundary

Flicker: irregular update timing forms a low-frequency envelope

P99 delay: inconsistent reaction time is perceived as “not following”


Action shaping (implementable constraints)

  • Rate limit: cap update frequency (bounded driver update cadence; merge updates inside the window).
  • Slew limit: cap Δbrightness per update (reduce step-change visibility).
  • Hysteresis / hold: enforce temporal/spatial hysteresis to prevent zone chatter.
  • Ordering: sort by event time, execute by solve time; keep a fixed actuation commit schedule.
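The first three shaping constraints combine naturally into one small filter. A minimal Python sketch; the class name, parameter values, and brightness units are illustrative assumptions:

```python
class ActionShaper:
    """Shape a noisy brightness-target stream into bounded driver updates."""

    def __init__(self, min_interval_ms=200, max_step=10, deadband=5):
        self.min_interval = min_interval_ms  # rate limit: min time between commits
        self.max_step = max_step             # slew limit: max Δbrightness per commit
        self.deadband = deadband             # hysteresis: ignore small wiggles
        self.last_out = 0
        self.last_ts = None

    def update(self, target, now_ms):
        """Return the shaped brightness command, or None if suppressed."""
        if self.last_ts is not None and now_ms - self.last_ts < self.min_interval:
            return None                      # rate limit: merge into next window
        if abs(target - self.last_out) < self.deadband:
            return None                      # hysteresis: no visible chatter
        step = max(-self.max_step, min(self.max_step, target - self.last_out))
        self.last_out += step                # slew limit: bounded step change
        self.last_ts = now_ms
        return self.last_out
```

Logging every `None` (suppressed update) alongside every commit keeps the shaper itself auditable when diagnosing "not following" complaints.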
Symptom | Metric to watch | Where to log/measure | Fail signature → first fix lever
Step jitter | Δbrightness per update distribution | brightness_cmd + update interval | large Δ with irregular cadence → enable slew limit + merge updates
Visible flicker | update interval variance (P95/P99) | actuate_ts spacing + dim state | periodic envelope appears → cap update rate + fixed commit schedule
Chatter at boundary | zone flip rate / confidence oscillation | zone ID + confidence vs time | rapid flipping → add spatial/temporal hysteresis + hold time
Packet loss increases | loss vs duty/brightness state | drop counters + dimming duty | loss tracks dim state → suspect coupling; proceed to H2-8 evidence

Coexistence requires both domains to be visible in the same timeline: actuation cadence and dimming state must be logged alongside RF quality, phase jitter, and timestamp jitter.

Figure F7. Event-to-actuation pipeline: rate/slew/hysteresis shape positioning updates into stable dimming commands.
Cite this figure: ICNavigator — “Event → Actuation Pipeline (F7)”.

H2-8. EMC Coupling & Ground Bounce: Why Dimming Can Degrade Positioning

Rugged coexistence, scoped to this page: switching current creates ground/reference disturbance that contaminates RF/baseband phase and timestamp integrity. The goal is a provable evidence chain and minimal local fixes.

Path: switching → ground bounce → jitter. Measure: ripple at dim edges; phase/TS jitter vs dim state. Fix: partition / return / local filter.

When lights turn on or dimming changes, the driver’s switching current generates di/dt and dv/dt that disturb power and ground references. If that disturbance reaches the positioning engine’s RF reference, baseband sampling, or timestamp capture, the system sees phase jitter, timestamp jitter, and packet loss that look like “wireless instability”. This section confines mitigation to local, positioning-relevant measures and links out for broader EMC/ESD topics.

Coupling path (scoped)

1) LED switching current creates voltage drop across shared impedance.

2) Ground bounce / rail ripple disturbs sensitive reference nodes.

3) RF/BB integrity degrades: IQ phase noise, CFO variance, timestamp jitter → position jitter/loss.

Two priority measurements

Measure #1: RF/BB rail ripple and reference noise at dimming edges (edge-aligned capture).

Measure #2: phase jitter / timestamp jitter statistics vs dimming state (duty/brightness/update moments).
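Measure #2 reduces to bucketing jitter samples by dimming state and flagging the loud buckets. A hedged Python sketch; the sample layout (duty percentage, jitter in ns) and the ratio threshold are assumptions of this example:

```python
from collections import defaultdict

def jitter_by_dim_state(samples):
    """samples: iterable of (dim_duty_pct, jitter_ns) pairs
    → mean jitter per duty level."""
    buckets = defaultdict(list)
    for duty, jitter in samples:
        buckets[duty].append(jitter)
    return {duty: sum(v) / len(v) for duty, v in buckets.items()}

def suspect_duties(samples, ratio=2.0):
    """Duty levels whose mean jitter exceeds `ratio` × the quietest level,
    i.e., candidate dim states for coupling investigation."""
    means = jitter_by_dim_state(samples)
    floor = min(means.values())
    return sorted(d for d, m in means.items() if m > ratio * floor)
```

A non-empty `suspect_duties` result, reproduced after changing exactly one mitigation lever, is the causality proof this section asks for.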

Observation | Measurement | Correlation target | Interpretation → first local fix
RF noise peak appears | RF/BB noise spectrum during dimming | PWM freq / harmonics | conducted/radiated coupling → adjust cadence, improve return path, add local filtering at RF entry
Packet loss rises | drop counters + duty/brightness | duty level / update moments | noise synchronized with dim state → power partition + routing of high-current loop
AoA angle jitter rises | inter-channel phase delta variance | dimming edges | reference disturbance → isolate analog/RF reference, tighten grounding near array front-end
UWB range jitter rises | timestamp jitter / ToF quality | dimming cadence | TS capture contamination → local decoupling, reduce shared impedance, schedule updates away from capture windows

Local (page-scoped) mitigation levers

  • Power partition: separate RF/MCU supplies from driver power where feasible; avoid shared high-current impedance.
  • Return-path control: keep LED switching loop compact; prevent high di/dt return from crossing RF reference ground.
  • Local filtering points: filter at RF/BB supply entry and sensitive reference nodes; validate improvement via jitter stats.
  • Timing isolation: avoid dimming updates near RF capture windows; keep actuation commit cadence bounded.

For broader EMC/ESD/Surge practices and compliance workflows, link to: EMC, Safety & Compliance (master page). Keep this page focused on coupling proof and minimal fixes.

Figure F8. EMC coupling path scoped to this page: switching edges disturb references, contaminating phase and timestamps.
Cite this figure: ICNavigator — “Coupling Path: Dimming Edges → Ground Bounce → Jitter (F8)”.

H2-9. Calibration Governance: AoA Array, Phase Consistency, and Drift Control

Long-term stability is a governance problem: define what is calibrated, detect drift with evidence, trigger re-calibration safely, and keep a ledger for traceability and rollback.

Objects: geometry / phase / timebase. Artifact: factory baseline package. Field: trigger + gate + rollback. Evidence: widening / drift / variance.

What must be calibrated (scoped)

Geometry: array spacing/orientation and mounting bias.

Phase chain: per-channel phase/group-delay bias and IQ residual mismatch.

Timebase alignment: sampling/channel alignment and timestamp-domain consistency.

Factory baseline deliverable

array_geom + version + assembly context

ch_phase_bias[i] / group-delay delta

temp_coeff (key points / segments)

cal_quality (residual / consistency)

Calibration must be treated as a versioned parameter package, not a one-time step. Each package binds context (temperature points, RF configuration, firmware build, reference fixture version) so that field drift can be interpreted and corrected instead of guessed.


In-field calibration: trigger → gate → execute → validate → rollback

  • Triggers: temperature delta, cold start/reset, service events (shock/remount), firmware updates, drift alarms.
  • Gate: only recalibrate when health evidence degrades; avoid unnecessary recal loops.
  • Validate: require residual/consistency to return near baseline; otherwise revert to last-known-good.
  • Isolate: during recalibration, keep lighting actuation cadence bounded to avoid contaminating measurements.
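
The trigger → gate → execute → validate → rollback flow above can be reduced to a small decision function. A Python sketch under stated assumptions: the trigger names and gate-evidence keys are illustrative labels, and the 10% residual tolerance for activation is a placeholder, not a recommended value.

```python
def recal_decision(trigger, evidence, baseline_quality, post_quality=None):
    """Evidence-gated recalibration sketch: act only when the gate metric
    has worsened, and keep the result only when quality returns near
    baseline. Trigger/evidence names and thresholds are illustrative."""
    GATE = {  # trigger -> evidence flag that must have worsened
        "delta_t_exceeded": "phase_variance_up",
        "cold_start": "consistency_out_of_window",
        "service_remount": "mean_shift_detected",
        "drift_alarm": "repeat_variance_up",
    }
    key = GATE.get(trigger)
    if key is None or not evidence.get(key, False):
        return "skip"                      # gate closed: no recal loop
    if post_quality is None:
        return "execute"                   # run the (local/full) recal
    # Validate: residual must return near baseline, else roll back.
    if post_quality <= baseline_quality * 1.1:
        return "activate"
    return "rollback_to_last_known_good"
```

The point of the gate dictionary is that every trigger is paired with exactly one piece of evidence that must have worsened, which prevents timer-driven "over-calibration" loops.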
Ledger field | Meaning | Why it matters | Typical failure signature
cal_version | active parameter package version | reproducibility + rollback | unknown version → cannot compare “before/after” drift
temp_profile | temperature points used/observed | explains thermal drift | variance grows with ΔT → temp model insufficient
params_hash | hash of calibration parameters | detects silent mismatch | hash changes without record → inconsistent results
cal_quality | residual/consistency score | gate for safe activation | quality degrades → widening angle-distribution tail
pass_fail | activation decision | prevents bad updates | field recal fails repeatedly → investigate coupling/sync
rollback_from | previous version reference | safe recovery | no rollback path → unstable long-term behavior
Trigger | Gate evidence (must worsen) | Action | Pass criteria | Rollback rule
ΔT exceeded | phase-delta variance ↑ or angle tail ↑ | phase-chain local recal | residual returns near baseline | revert to last-known-good
Cold start/reset | offset/phase consistency not within window | quick consistency check + apply baseline | health metrics stable for N windows | lock baseline, disable updates
Service/remount | systematic bias detected (mean shift) | geometry bias update | bias reduced without tail growth | restore previous geometry
Drift alarm | repeat variance ↑ at fixed reference points | full field recal sequence | CDF tail improves (P95/P99) | revert + raise diagnostic flag

Drift evidence should be computed from fixed-reference repeats: widening angle distribution, slow offset drift, and repeat variance growth provide stable alarm signals.
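
A minimal sketch of those alarm signals in Python, assuming each entry in `repeats` holds angle samples taken at the same fixed reference point; the 1.5× variance ratio and the per-window slope threshold are illustrative placeholders, not recommended values.

```python
from statistics import mean, pstdev

def drift_alarms(repeats, var_ratio=1.5, offset_slope=0.05):
    """Drift evidence from fixed-reference repeat batches: a widening
    sample distribution and/or a slow mean-offset drift across windows."""
    stds = [pstdev(window) for window in repeats]
    means = [mean(window) for window in repeats]
    slope = (means[-1] - means[0]) / max(len(repeats) - 1, 1)
    return {
        "variance_growth": stds[-1] > var_ratio * stds[0],  # tail widening
        "slow_offset_drift": abs(slope) > offset_slope,     # per-window drift
    }
```

Separating the two signals matters: variance growth with a stable mean points at the phase chain or environment, while a drifting mean with stable variance points at geometry or timebase bias.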

Figure sketch (F9) — Calibration Lifecycle & Drift Governance: factory baseline package (array_geom + ch_phase_bias, temp_coeff + cal_quality) → runtime health monitor (phase consistency, offset drift, repeat variance/tail) → trigger + gate (ΔT, cold start, service/update; gated by evidence) → field recalibration (execute local/full sequence; validate residual + tail improvement; rollback on fail) → calibration ledger (cal_version + params_hash, temp/context/quality for field traceability). Treat calibration as a controlled update: evidence-gated, validated, and reversible.
Figure F9. Calibration governance lifecycle: baseline package, health monitoring, gated triggers, field recalibration, ledger, and rollback.
Cite this figure: ICNavigator — “Calibration Lifecycle & Drift Governance (F9)”.

H2-10. Validation Matrix: Quantify Accuracy, Latency, Robustness, and Stability

A practical, copy-and-run test matrix: scenarios × metrics × tools/truth × log fields. Focus on distributions (CDF, P95/P99), not single-point averages.

SCENES: LOS/NLOS, crowd, materials · LOADS: dim states, Wi-Fi occupancy · METRICS: CDF, P99 latency, false triggers · TOOLS: markers/rail, scope, spectrum, logs

Scenario axes (repeatable)

  • Propagation: LOS / NLOS (controlled occlusion)
  • Dynamics: none / sparse / crowded movement
  • Materials: glass / metal / tile / mixed
  • Lighting: fixed / sweep / event-chatter updates
  • Coexistence: low / high Wi-Fi occupancy
  • Thermal: low / nominal / high + ramp

Metric groups (distribution-first)

  • Accuracy: error CDF (P50/P90/P95/P99)
  • Latency: end-to-end distribution + split (Rx→Solve, Solve→Actuate)
  • Robustness: loss, false-trigger rate, follow-me failure rate
  • Stability: drift rate, repeat variance, resync/recal jump

Define scenario IDs as switchable combinations of axes and run each scenario long enough to populate tails (P95/P99). Every run must include the same timestamp fields and state annotations (lighting state, Wi-Fi occupancy level, temperature, sync/calibration version) to keep correlation valid.
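
The latency split (Rx→Solve, Solve→Actuate) can be computed directly from tagged log rows. A Python sketch using nearest-rank percentiles; the field names rx_ts / solve_ts / act_ts follow the log-field list on this page, and a shared clock and unit across fields is assumed.

```python
def pctl(sorted_vals, p):
    """Nearest-rank percentile on a pre-sorted list (tail-friendly:
    never interpolates away a real outlier sample)."""
    rank = max(1, round(p / 100 * len(sorted_vals)))
    return sorted_vals[min(rank, len(sorted_vals)) - 1]

def latency_report(rows):
    """rows: dicts carrying rx_ts, solve_ts, act_ts on one clock.
    Returns P50/P95/P99 for each split and for end-to-end latency."""
    splits = {
        "rx_to_solve": sorted(r["solve_ts"] - r["rx_ts"] for r in rows),
        "solve_to_act": sorted(r["act_ts"] - r["solve_ts"] for r in rows),
        "end_to_end":  sorted(r["act_ts"] - r["rx_ts"] for r in rows),
    }
    return {name: {f"P{p}": pctl(vals, p) for p in (50, 95, 99)}
            for name, vals in splits.items()}
```

Reporting the two splits separately is what lets a P99 blow-up be attributed to the solver window versus the actuation queue, instead of guessing from the end-to-end number alone.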

Scenario ID | Axes (LOS/NLOS, crowd, material, lighting, Wi-Fi, temp) | Metrics to report | Tools / truth | Log fields (minimum)
S01 baseline LOS | LOS, none, mixed, fixed, low, nominal | error CDF + latency P95/P99 | marker grid / reference points | event_ts, rx_ts, solve_ts, act_ts, dim_state, cal_version
S02 NLOS occlusion | NLOS, none, mixed, fixed, low, nominal | CDF tail + false triggers + loss | marker grid + repeat holds | + quality flags (NLOS/phase_ok/ts_ok), drop_cnt
S03 crowd dynamics | mixed, crowded, mixed, fixed, low, nominal | track stability + variance growth | reference points + repeated passes | track_id, confidence, zone_id flip rate
S04 dim sweep | LOS, none, mixed, sweep, low, nominal | loss vs duty + phase/TS jitter stats | scope + (optional) spectrum | dim_duty, update_moment, phase_jitter, ts_jitter
S05 high Wi-Fi | LOS, none, mixed, fixed, high, nominal | latency tail + loss + retry behavior | logs + coexistence load injection | wifi_occupancy_level, retry_cnt, queue_depth
S06 thermal ramp | LOS, none, mixed, fixed, low, ramp | drift rate + recal trigger behavior | temp chamber (or controlled ramp) | temp, cal_quality, recal_events, cal_version
Field | Source | Unit | Used for | Common anomaly meaning
event_ts | positioning event time | ticks/ns | ordering, trajectory continuity | missing/unsorted → pipeline ordering not enforced
rx_ts | RF/BB capture | ticks/ns | Rx→Solve latency, TS integrity | jitter spikes at dim edges → coupling suspected
solve_ts | solver output ready | ticks/ms | solver windowing and tail | tail grows → window/retry/CPU contention
act_ts | driver update commit | ticks/ms | end-to-end latency distribution | irregular spacing → missing rate limit / fixed cadence
dim_state | lighting subsystem | enum | correlation vs EMI and jitter | loss/peaks track duty → dim coupling path
cal_version | calibration ledger | id | traceability and rollback | variance change after update → calibration quality

Keep this page scoped: use the validation matrix to prove positioning–lighting coexistence under dimming and coexistence load. Link broader EMC/ESD compliance procedures to the EMC, Safety & Compliance master page.

Figure sketch (F10) — Validation Matrix Map & Test Loop: scenario toggles (LOS/NLOS; crowd none/sparse/crowded; materials glass/metal/tile; lighting fixed/sweep/chatter; Wi-Fi occupancy low/high; temperature nominal/ramp) → distribution metrics (error CDF P50–P99, latency tail P95/P99, loss/false triggers, drift/repeat variance, sync + cal versions) → tools & logs (marker grid/rail, scope at dim edges, optional spectrum; event_ts/rx_ts/act_ts, dim_state/occupancy/temp) → closed test loop (configure scenario → run + capture → analyze tails → change one lever → re-test), repeated until tails are stable.
Figure F10. Validation matrix map: scenario toggles feed distribution-based metrics with tool-backed truth and standardized logs.
Cite this figure: ICNavigator — “Validation Matrix Map & Test Loop (F10)”.

H2-11. Field Debug Playbook: Symptom → Evidence → Isolate → First Fix

Use a strict, repeatable flow to turn “it drifts” into measurable evidence. Each playbook entry below follows the same four-part template: Symptom, First 2 evidence picks, Discriminator, First fix.

Always tag every log sample with: dim_state, update_rate, wifi_busy, temp, cal_version, sync_state. Prefer distribution metrics (P95/P99) over single averages.
Playbook categories: Coupling / Ground Bounce · Timebase / Sync Drift · AoA Phase Chain · Multipath / NLOS · Queue / Retry Tail Latency · Calibration Governance
PB-01: Only-worse-during-dimming (coupling likely)
Symptom
Position / angle gets worse only during dimming sweeps or brightness step changes.
First 2 evidence
  • Scope: RF/BB rail ripple + local ground reference noise aligned to dimming edges.
  • Logs: packet loss / phase-jitter / timestamp-jitter vs dim_state and duty cycle.
Discriminator
If RF noise peaks and jitter rise at specific dimming frequencies/duty, coupling dominates. If metrics are flat vs dim state, prioritize multipath or calibration instead.
First fix
Rate-limit dimming updates and avoid measurement windows (time-domain separation). Reduce high di/dt return path overlap with RF reference. If the driver supports it, enable spread-spectrum / soft-edge modes (example LED driver/controller families: TPS92520-Q1, LT3795).
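
The time-domain-separation lever can be sketched as a small scheduling helper, assuming the measurement window is periodic and its phase is known to the lighting controller; the window length, period, and guard values below are illustrative.

```python
def next_commit_slot(now, meas_start, meas_len, meas_period, guard=0.0005):
    """Defer a dimming commit so its switching edge never lands inside,
    or within `guard` seconds of, a periodic measurement window.
    All times are in seconds on the controller's local clock."""
    phase = (now - meas_start) % meas_period
    if phase <= meas_len + guard:
        # Inside (or just after) a capture window: push past it + guard.
        return now + (meas_len + guard - phase)
    if phase >= meas_period - guard:
        # Too close to the NEXT window: wait until that window has passed.
        return now + (meas_period - phase) + meas_len + guard
    return now  # already in the quiet zone: commit immediately
```

Routing every driver update through next_commit_slot keeps dimming edges out of RF/BB capture windows, which by itself is often enough to confirm or rule out the coupling hypothesis in the discriminator step above.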
PB-02: Crowd-induced drift/jumps (multipath / NLOS)
Symptom
Angle/position becomes unstable only when people move (crowded room, walking path).
First 2 evidence
  • Logs: error CDF tail expands (P95/P99) while median stays similar.
  • Quality indicators: UWB CIR/first-path quality or AoA angle variance vs time (before/after crowd entry).
Discriminator
If tail-only degradation correlates with human motion, NLOS/multipath dominates. If mean bias shifts in one direction, suspect geometry/installation changes.
First fix
Increase anchor/array angular diversity (height/angle/coverage), add a “reject/hold” policy when quality is low, and widen hysteresis at zone boundaries. For UWB ranging engines, common IC examples include DW3110/DW3120 or SR150 (infrastructure-grade).
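
The “reject/hold + hysteresis” policy can be sketched as a tiny stateful filter. A Python sketch with illustrative thresholds: the quality gate and hysteresis margin are tunable assumptions, and boundary_dist stands in for whatever distance-into-zone metric the solver exposes.

```python
class ZoneFilter:
    """Reject low-quality samples (hold last stable zone) and require a
    hysteresis margin past the boundary before switching zones."""

    def __init__(self, q_min=0.6, enter_margin=0.3):
        self.q_min = q_min                # quality gate (illustrative)
        self.enter_margin = enter_margin  # distance past boundary to switch
        self.zone = None                  # last stable zone

    def update(self, zone, boundary_dist, quality):
        if quality < self.q_min:
            return self.zone              # reject: hold last stable zone
        if self.zone is None:
            self.zone = zone              # first confident fix wins
        elif zone != self.zone and boundary_dist >= self.enter_margin:
            self.zone = zone              # switch only past the hysteresis band
        return self.zone
```

This is the same three-part guard (quality gate + hysteresis + hold) that the false-trigger FAQ later in this page recommends, applied at the zone-decision layer rather than at actuation.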
PB-03: One-room / one-side bias (environment or geometry)
Symptom
Only one room or one side of a room shows consistent bias or worse accuracy.
First 2 evidence
  • A/B compare: same tag path, two layouts (anchor/array moved 0.5–1 m or rotated).
  • Logs: bias stays constant across time (mean shifts), not just variance.
Discriminator
If bias persists and follows the room materials (metal/glass), reflection geometry dominates. If bias follows the hardware unit, calibration/phase chain dominates.
First fix
Add angular coverage or change anchor line-of-sight. Re-run array geometry/phase baseline checks (AoA) or first-path validation (UWB). AoA-capable SoC examples: nRF52833, EFR32BG24.
PB-04: Latency P99 spikes (queue / retry)
Symptom
“Follow-me” lighting feels laggy; latency suddenly increases (especially P99) while median stays acceptable.
First 2 evidence
  • Timestamp chain: rx_ts → solve_ts → act_ts histogram (P50/P95/P99).
  • Retry/queue counters: radio retransmits, solver window backlog, actuation update backlog.
Discriminator
If tail latency spikes align with retries/backlog, it is scheduling/queue. If spikes align with dim edges, coupling is the driver.
First fix
Separate “event-time” ordering from “actuation-time” cadence; enforce max update rate; batch small changes into stable commit ticks. Keep solve windows bounded; cap retries under high interference.
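
A minimal Python sketch of that separation: requests arrive in event time and overwrite each other, while commits happen only on actuation-time ticks under a max update rate. The 100 ms tick is an illustrative value, not a recommendation.

```python
class ActuationCommitter:
    """Coalesce small brightness changes into fixed commit ticks with a
    max update rate, separating event-time ordering from actuation-time
    cadence. Later requests merge over (overwrite) earlier ones."""

    def __init__(self, tick_s=0.1):
        self.tick_s = tick_s       # minimum spacing between commits
        self.pending = None        # latest requested level, if any
        self.last_commit = None    # time of the last committed update

    def request(self, level):
        self.pending = level       # merge: only the newest target survives

    def on_tick(self, now):
        if self.pending is None:
            return None
        if self.last_commit is not None and now - self.last_commit < self.tick_s:
            return None            # enforce the max update rate
        level, self.pending = self.pending, None
        self.last_commit = now
        return level               # exactly one merged update per commit
```

Because commits happen on a stable cadence, the act_ts spacing in the logs becomes regular, which is precisely the “irregular spacing → missing rate limit” anomaly the log-field table flags.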
PB-05: After firmware update (version regression)
Symptom
Accuracy/latency changes immediately after firmware update or configuration change.
First 2 evidence
  • Compare cal_version, radio settings, dimming update policy, solver window size before/after.
  • Run the same validation scenario ID and compare CDF + latency distributions.
Discriminator
If the bias shifts with identical environment, it is calibration/timebase/settings. If only tail gets worse, it is scheduling/queue.
First fix
Roll back one change at a time: actuation cadence, capture window alignment, calibration parameters. Gate any new build with scenario-matrix deltas (CDF + P99).
PB-06: Slow drift over hours/days (sync/thermal)
Symptom
Works at startup, then gradually drifts over hours/days; often temperature-correlated.
First 2 evidence
  • Drift plot: anchor-to-anchor offset / timebase error vs time and temp.
  • AoA: phase-consistency metric vs temperature ramp; UWB: first-path quality vs temperature.
Discriminator
If offset/drift grows monotonically, timebase/holdover is the root. If phase-consistency worsens with temp, channel drift dominates.
First fix
Tighten sync refresh/discipline policy and record sync state in logs. Add temperature-gated re-calibration only when health metrics exceed thresholds (avoid “over-calibration”). UWB infrastructure IC examples: SR150; UWB transceiver examples: DW3110/DW3120.
PB-07: Angle jumps while RSSI stable (AoA phase chain)
Symptom
BLE AoA angle intermittently jumps while RSSI looks stable; direction flips or becomes noisy.
First 2 evidence
  • IQ/phase: per-antenna phase delta stability across identical packets.
  • CFO/clock: CFO estimate variance and its correlation with dimming state / temperature.
Discriminator
If phase deltas drift across channels while RSSI stays stable, array/channel drift dominates. If phase noise spikes only at dim edges, coupling dominates.
First fix
Re-check array calibration baseline; validate antenna switching timing; reduce shared impedance between RF reference and driver returns. AoA-capable SoC examples: nRF52833, EFR32BG24.
PB-08: UWB range outliers (NLOS / first-path)
Symptom
UWB distance shows spikes/outliers in specific spots (near metal, corners, glass), even at steady lighting.
First 2 evidence
  • CIR/first-path quality vs measured range; mark NLOS flags if available.
  • Compare two anchor geometries (small relocation) and re-run the same path.
Discriminator
If first-path quality degrades where spikes occur, multipath/NLOS dominates. If spikes align with actuation edges, coupling dominates.
First fix
Add anchor diversity or change height/angles to improve first-path visibility. Reject measurements when quality is low and hold the last stable position for lighting actions. UWB transceiver IC examples: DW3110, DW3120 (variants within the same Qorvo family).
PB-09: Loss vs brightness/duty (conducted/ground coupling)
Symptom
Packet loss increases at certain brightness levels or duty cycles; positioning becomes intermittent.
First 2 evidence
  • Correlate drop_cnt with duty and update cadence.
  • Scope: compare RF rail noise at “good” vs “bad” brightness settings.
Discriminator
If loss peaks at repeatable brightness points (not random), it is a spectral/edge interaction or control-loop operating mode change.
First fix
Change dimming frequency / spread the spectrum where supported; reduce edge steepness; isolate RF rail with local filtering; keep driver switching return localized. Example driver/controller MPNs commonly seen in dimmable systems: TPS92520-Q1, LT3795.
PB-10: Frequent recal triggers (governance issue)
Symptom
System requests recalibration too often, or recalibration passes once but quickly regresses.
First 2 evidence
  • Calibration ledger: cal_version changes vs time/temp; compare quality/residual metrics.
  • Health metrics: angle variance / offset drift / timestamp jitter before and after recal.
Discriminator
If recal “improves briefly then fails” under dimming, coupling is still present. If recal never stabilizes even at steady lighting, phase chain or geometry baseline is unstable.
First fix
Add gate conditions (recal only when health metrics exceed thresholds), require pass/fail criteria, and enforce automatic rollback to last-known-good baseline. For BLE AoA engines, common SoC examples include nRF52833 and EFR32BG24.
Figure F11 — Field Debug Decision Ladder (Positioning × Lighting). Flow: (1) symptom bucket (only when dimming: drift/loss/angle jumps; only with crowds: tail error grows at P95/P99; one room biased: mean shifts, not just noise; latency P99 spikes: follow-me feels laggy) → (2) first two evidence picks (scope at dim edges: RF rail ripple / reference noise, compare good vs bad duty; log correlation: phase/TS jitter vs dim_state, drop_cnt vs duty/update_rate; timestamp chain: rx_ts → solve_ts → act_ts as P50/P95/P99 distributions) → (3) discriminator (coupling: noise peaks align with dim, jitter rises at duty points; multipath/NLOS: tail error grows with people, first-path quality drops; phase-chain drift: AoA phase deltas unstable while RSSI looks stable; queue/retries: P99 spikes with backlog, retry counters rise). Rule: change ONE lever → re-run the same scenario ID → compare CDF + P99. Keep logs tagged by dim_state / temp / cal_version / sync_state.
The ladder connects “only-when” symptoms to two evidence picks (scope + logs), then isolates coupling vs multipath vs phase drift vs queue tails. Keep fixes minimal and reversible: change one lever, re-test the same scenario ID, compare distributions.


H2-12. FAQs ×12 (Evidence-Backed, No Scope Creep)

Each answer uses a fixed pattern: two checks (one waveform/spectrum + one log/stat), a discriminator (what proves the root cause), and a first fix lever (one change, then re-test). All questions map back to this page’s chapters.

Key log fields in the answers below: dim_state · phase_jitter · ts_jitter · offset/drift · error CDF (P95/P99) · drop_cnt · queue_depth
Q1. “Works in daylight, drifts at night when lights turn on” — rail ripple or AoA phase consistency?

First capture RF/BB rail ripple and reference-ground noise aligned to the dimming edge, then correlate phase_jitter/ts_jitter vs dim_state. If jitter spikes only at specific brightness steps, coupling dominates; if drift persists without dim correlation, AoA channel bias is more likely. First fix: rate-limit dim updates and avoid capture windows; add local filtering/return-path control at RF/BB entry points.

Maps to: H2-8 / H2-9. Example MPNs: Qorvo DW3120, SiLabs EFR32BG24.
Q2. “Accuracy changes a lot after moving to another room” — multipath/NLOS or anchor sync drift?

Compare error CDF tail (P95/P99) and (if available) first-path quality/CIR stats between rooms, then check anchor offset/drift logs. If only the tail worsens and follows materials/geometry, NLOS dominates; if offset/drift evolves similarly across rooms, the timebase is unstable. First fix: do an A/B anchor placement tweak (height/orientation/diversity) or tighten resync/holdover thresholds before changing algorithms.

Maps to: H2-4 / H2-5 / H2-10. Example MPNs: NXP Trimension SR150, Qorvo DW3110.
Q3. “Latency swings wildly” — solver window or dimming refresh / event queue?

Log the full timeline rx_ts → solve_ts → act_ts and plot P50/P95/P99, then log queue_depth + update_rate of the lighting command path. A P99-only blow-up with backlog indicates queue/retry pressure; a periodic latency pattern aligned to refresh cadence indicates a timing collision. First fix: enforce a fixed commit cadence, merge updates, and schedule actuation away from capture/solve windows.

Maps to: H2-2 / H2-7. Example MPNs: NXP i.MX RT1060 (edge MCU class), STM32WB09.
Q4. “AoA angle occasionally jumps” — array calibration or CFO / phase-noise issue?

Check per-channel phase delta consistency on repeated packets, and track CFO estimate variance vs temperature and dim_state. If channel deltas slowly widen or bias with temperature, calibration/thermal drift dominates; if jumps coincide with CFO noise bursts or rail disturbance, RF/reference stability dominates. First fix: gate re-calibration on a health metric (not a timer), and stabilize the RF/BB reference path before increasing filtering in software.

Maps to: H2-4 / H2-9. Example MPNs: Nordic nRF52833, SiLabs EFR32BG24.
Q5. “Packets drop only at one brightness level” — how to prove dimming-harmonic coupling?

Sweep brightness and record drop_cnt vs duty, then measure rail ripple or RF noise spectrum to find a repeatable spur at that level. A deterministic “brightness notch” that reproduces across runs indicates harmonic/edge coupling rather than random interference. First fix: shift the dimming frequency or edge rate, apply update-throttling, and add localized decoupling/return-path control near RF/BB and driver hot loops.

Maps to: H2-7 / H2-8. Example MPNs: TI TPS92520-Q1, ADI LT3796.
Q6. “Same spot, repeat measurements show larger variance over time” — thermal drift or sync holdover?

Track anchor offset/drift slope during holdover and correlate variance with temperature ramp and re-sync events. A monotonic drift slope points to insufficient discipline/holdover; a strong temperature correlation points to phase/geometry thermal sensitivity. First fix: tighten resync thresholds (limit holdover time) and enable temperature-gated re-calibration with rollback if the health metric worsens.

Maps to: H2-5 / H2-9. Example MPNs: NXP SR150, Qorvo DW3120.
Q7. “Position is accurate but lighting follow-me flickers” — action granularity or interface quantization?

Compare actuation step size distribution and update interval jitter against the driver/bus minimum step and refresh rate. If many small, irregular steps occur, flicker is an output-shaping problem, not a localization problem. First fix: add hysteresis + minimum hold time, rate-limit/slew-limit brightness changes, and commit commands on a stable cadence matched to the dimming interface.

Maps to: H2-7 / H2-2. Example MPNs: ADI LT3796, TI TPS92520-Q1.
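
The rate/slew-limit part of that fix is worth stating precisely. A Python sketch, where max_step is an assumed, tunable value standing for the largest per-commit brightness change the luminaire can absorb without visible stepping:

```python
def slew_limit(current, target, max_step):
    """Clamp each commit to at most max_step toward the target, so
    follow-me transitions cannot flicker across coarse driver steps."""
    delta = target - current
    if abs(delta) <= max_step:
        return target                      # close enough: land on target
    return current + (max_step if delta > 0 else -max_step)
```

Applied on a fixed commit cadence, this turns a noisy stream of small position-driven brightness requests into a monotone ramp between stable levels.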
Q8. “Drifts when the room is crowded” — NLOS blockage or RF interference?

Use the validation matrix: compare error CDF tail under crowding (P95/P99) and compare drop_cnt vs Wi-Fi busy / channel occupancy. Tail-only degradation with stable median is typical NLOS; rising drop_cnt correlated with channel occupancy indicates interference/retries. First fix: for NLOS, increase geometry diversity and gate on quality metrics; for interference, reduce retries and align capture/actuation scheduling to avoid collisions.

Maps to: H2-4 / H2-10. Example MPNs: Qorvo DW3110, Nordic nRF52833.
Q9. “After reboot, the first 10 minutes are inaccurate” — calibration not converged or sync not locked?

Log sync_state transitions (lock/holdover/resync) and a calibration health metric (phase consistency / residual). If accuracy improves immediately after sync lock, the timebase is the bottleneck; if sync is locked but health improves slowly, calibration/thermal settling dominates. First fix: gate lighting automation to “sync_locked AND health_ok”, and keep a safe default behavior until both gates pass.

Maps to: H2-5 / H2-9. Example MPNs: NXP SR150, SiLabs EFR32BG24.
Q10. “Phone works, tags do not” — duty-cycling power or the ranging link?

Compare samples-per-second and window length for tags vs phone, then compare link-quality metrics (angle/range quality, drop_cnt). If tags run a much lower duty cycle, the system is under-sampling and cannot stabilize position; if quality is low even at high duty, antenna orientation or RF path is limiting. First fix: add burst mode on motion/occupancy events, and keep command cadence stable to avoid creating flicker while sampling increases.

Maps to: H2-6 / H2-3. Example MPNs: Nordic nRF52833 (tag-class), Qorvo DW3120.
Q11. “Different luminaire models behave inconsistently” — driver noise or interface timing?

First check refresh period + minimum step differences across luminaires, then check whether drop_cnt/phase_jitter changes only with specific models and brightness points. If inconsistency is mainly response step/latency, it is interface quantization; if it is localization stability, it is coupling from the driver power stage. First fix: normalize command cadence/steps per luminaire capability and apply window avoidance plus localized filtering for high-noise models.

Maps to: H2-7 / H2-8. Example MPNs: TI TPS92520-Q1, ADI LT3796.
Q12. “False triggers are frequent” — how to push down the false-trigger rate with gates and timing?

Measure false-trigger rate against threshold + hysteresis + time window using the validation matrix, and verify whether low-quality samples can directly trigger actions. If triggers cluster near zone boundaries, hysteresis/hold is insufficient; if triggers correlate with low-quality metrics, a quality gate is missing. First fix: add a three-part guard (quality gate + hysteresis + minimum hold time), then re-quantify the reduction using the same scenarios and metrics.

Maps to: H2-2 / H2-10. Example parts: platform-agnostic (edge MCU + direction-finding radio).

Figure F12 — FAQ Evidence Chain Map (Positioning ↔ Lighting)

Minimal debug map: where each symptom typically lands in the positioning + lighting coexistence evidence chain.

Figure sketch (F12) — Evidence Chain (FAQ → Chapters); two checks → discriminator → one lever → re-test. Typical symptoms (drifts only when dimming; latency P99 spikes; room-to-room variance; AoA angle jumps; false triggers) map onto the evidence-chain blocks: luminaire driver / dimming (PWM, bus refresh, step size); power + ground coupling (rail ripple, ground bounce); UWB / BLE AoA engine (IQ/phase, CIR, drop_cnt); time sync / timestamps (offset, drift, holdover); edge MCU interface (queue_depth, cadence); calibration + drift (phase bias, temp model). Validation re-test: error CDF (P95/P99) · latency (P50/P99) · drop rate · false-trigger rate. Rule: change one lever → re-test with the same matrix; avoid multi-change guessing.
Use this map to keep every FAQ answer grounded: one waveform/spectrum + one log/stat, then a single-lever fix and a matrix re-test.