
Fiber-Optic Fence Perimeter Sensing (Interferometry/Rayleigh)


Core idea: Fiber-optic perimeter sensing turns tiny fence vibrations into reliable, zone-tagged security events by controlling the full chain—optical power budget, photonic AFE noise/linearity, sampling + DSP, and robust Ethernet uplink. The goal is not “more sensitivity,” but repeatable detection with low false alarms, traceable zone mapping, and survivability in outdoor surge/EMI and network outages.

H2-1 • Definition & Boundary

One-sentence boundary: this page covers the end-to-end engineering chain of fiber-optic (FO) perimeter sensing—from optical path to photonic AFE to ADC/TDC sampling to edge event generation and Ethernet uplink. Everything is written to be verifiable by field evidence (event logs, counters, and observable metrics).

What FO fence sensing solves (engineering outcomes):

  • Long distance coverage with minimal active electronics along the fence (central interrogator + passive fiber).
  • Distributed sensing: convert disturbances into zone-resolved events (distance bins → zones).
  • EMI immunity along the sensing line (fiber as the sensing medium), while keeping electronics measurable and protectable at the edge.
  • Operational traceability: each alarm can carry the “why” in metrics (SNR, confidence, health status) rather than a black-box trigger.

Not in scope (kept out on purpose):

  • Camera recognition/analytics, video pipeline tuning, NVR/VMS platform architecture, recording compliance.
  • Access control reader/panel design, building-wide intercom topology, door/relay linkage logic.
  • Network-wide timing architecture (PTP grandmaster, TSN system design) beyond local timestamp fields needed for events.

Evidence chain (everything ultimately lands here): FO perimeter sensing must produce events and health telemetry that are easy to audit. A practical design starts by defining the minimum event schema below—so validation and field debug have concrete “first checks”.

Field | Meaning | Typical "first check" use
zone_id | Zone index mapped from distance/time bins (the operator-facing location). | "Alarm is in wrong place" → verify zone map version and bin alignment.
start_ts / end_ts | Event time window; enables de-duplication and correlation with other systems. | "Late/early alarms" → check sampling window, edge queue delay, uplink retries.
snr_db | Signal-to-noise estimate at the decision point (not raw optical power). | "Detection drops with distance" → compare SNR trend vs optical power vs AFE noise.
confidence | Decision confidence based on features/thresholds; should be reproducible. | "False alarms" → see confidence distribution and which feature dominated.
link_status | Ethernet link up/down + error counters snapshot at event time. | "Missing events" → distinguish sensing failure from uplink loss/buffer overflow.
optical_power_dbm (optional) | Monitored optical Tx/Rx level or backscatter baseline proxy (implementation-dependent). | "Sudden sensitivity change" → detect connector/splice loss or reflection changes.
pipeline_version / zone_map_version | Traceability for firmware + calibration/mapping used to generate the event. | "Behavior changed after service" → confirm versions before chasing hardware.

Note: this page does not prescribe a specific protocol or cloud backend. The focus is defining measurable fields and designing the chain that makes them stable.
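The minimum event schema above can be sketched as a typed record with a first-check validation hook. A hedged Python sketch — field names follow the table, while the validation rules and version strings are illustrative assumptions, not a prescribed protocol:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerimeterEvent:
    """Minimum event schema from the table above (names per the table)."""
    zone_id: int
    start_ts: float          # epoch seconds at event start
    end_ts: float            # epoch seconds at event end
    snr_db: float            # SNR at the decision point, not raw optical power
    confidence: float        # 0.0-1.0 decision confidence
    link_status: str         # link state; error-counter snapshot kept alongside
    pipeline_version: str
    zone_map_version: str
    optical_power_dbm: Optional[float] = None  # optional monitor field

    def validate(self) -> list:
        """Return a list of first-check complaints; empty means auditable."""
        problems = []
        if self.end_ts < self.start_ts:
            problems.append("end_ts before start_ts: check sampling window / queue delay")
        if not (0.0 <= self.confidence <= 1.0):
            problems.append("confidence out of range: check decision stage")
        return problems

evt = PerimeterEvent(zone_id=7, start_ts=1700000000.0, end_ts=1700000001.2,
                     snr_db=14.5, confidence=0.82, link_status="up",
                     pipeline_version="1.4.0", zone_map_version="3")
```

Keeping validation next to the schema means a malformed event points at a pipeline stage instead of vanishing silently.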

[Figure F1: FO Fence Perimeter Sensing — End-to-End Chain (diagram)]
Figure F1. FO fence perimeter sensing is best described as a measurable chain: optical path → photonic AFE → sampling/ToF → edge event generation → Ethernet uplink. Keeping a minimum event schema makes validation and field debug repeatable.
H2-2 • Sensing Principles You Actually Use

This section is not a physics lecture. It captures the two implementation routes an engineer actually chooses between, and shows how each route shifts the dominant bottleneck and the first evidence fields to check in validation and field debug.

Route A — Interferometry (phase-sensitive): highly sensitive to tiny strain/vibration; the system is often phase-noise limited. Engineering focus becomes phase stability, AFE linearity, and drift control.

  • Dominant risk: phase noise / drift masquerading as intrusion, or eroding SNR over time.
  • First evidence checks: snr_db trend, decision stability, AFE saturation/AGC state, baseline drift markers.

Route B — Rayleigh distributed (DAS/DVS concept): segmentation by time windows / distance bins; localization is tied to sampling strategy and ToF/bin alignment. Engineering focus becomes windowing, timing, and zone mapping robustness.

  • Dominant risk: bin/zone misalignment, reflections/splices shifting backscatter baseline, aliasing from poor sampling.
  • First evidence checks: window SNR consistency, hotspot stability by distance bin, zone map version and calibration stamps.

Decision table (engineering-focused): compare the routes using parameters that directly map to hardware constraints and measurable evidence.

Dimension | Interferometry (Phase-sensitive) | Rayleigh Distributed (Window/Distance-bin)
Localization method | Event inferred from phase changes; localization often depends on system architecture and sensing layout. | Localization by time window → distance bin → zone mapping (ToF/sampling aligned).
Primary bottleneck | Phase noise / drift + AFE linearity; stability becomes the limiter. | Sampling/window alignment + reflection baseline stability; timing strategy becomes the limiter.
AFE pressure point | Low-noise, wideband, low-distortion photonic AFE; saturation/AGC artifacts are critical. | Photonic AFE still matters, but bin consistency and dynamic baseline handling dominate field behavior.
Sampling strategy | Continuous sampling supports phase demod and feature extraction. | Windowed sampling drives range resolution and zone granularity; aliasing must be controlled.
Common false-alarm sources | Thermal/mechanical drift, phase instability, environmental coupling that resembles intrusion signatures. | Connector/splice reflections, micro-bend loss changes, bin/zone mis-mapping, windowing artifacts.
"First two fields" to check | snr_db + AFE state (saturation/AGC) at event time. | Window/bin consistency + zone_map_version (plus optical baseline proxy if available).

Practical rule: choose the route by what must stay stable in the field. If stability is dominated by phase drift, design for phase-noise control and AFE linearity. If stability is dominated by range/bin alignment, design for sampling/window integrity and robust zone mapping.
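For Route A, the phase demodulation step can be illustrated with a minimal I/Q sketch. Real interrogators use heterodyne or other demod schemes, so treat this only as a conceptual example of why slow phase drift must be separated from intrusion-band vibration before thresholding:

```python
import numpy as np

def demod_phase(i_samples, q_samples):
    """Minimal I/Q phase demodulation with unwrapping (Route A sketch)."""
    return np.unwrap(np.arctan2(q_samples, i_samples))

def drift_rate(phase, fs_hz):
    """Linear phase-drift estimate in rad/s. A large, steady slope is a
    drift marker, not an intrusion; intrusion energy lives in the
    vibration band on top of it."""
    t = np.arange(len(phase)) / fs_hz
    return np.polyfit(t, phase, 1)[0]
```

Logging the drift estimate alongside snr_db is one way to make "phase drift masquerading as intrusion" a checkable claim.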

[Figure F2: Two Practical Routes (Engineering View) — interferometry vs Rayleigh distributed, same chain blocks, different bottlenecks (diagram)]
Figure F2. Two routes share the same “blocks” but fail differently in the field. Interferometry is commonly phase-noise limited; Rayleigh distributed sensing is commonly window/bin limited. The decision should be based on which stability axis can be controlled and verified.
H2-3 • Optical Front-End Blocks

Goal: break the optical module into purchasable, tunable, and measurable blocks. A stable optical front-end is not “nice to have”: it defines the backscatter baseline and determines whether the downstream AFE and DSP can ever deliver consistent event SNR and false-alarm immunity.

Block A — Laser & Modulation (engineering meaning):

  • Direct modulation favors simplicity, but couples driver noise into intensity/frequency behavior that may appear as event-like features downstream.
  • External modulation (EOM/AOM) separates “how to drive” from “how to sense”, enabling better control of modulation depth and spectral behavior, at the cost of insertion loss and added control points.
  • Tunable knobs to keep explicit: Tx power setpoint, modulation rate/duty, mod depth/bias (if external), warm-up/thermal stabilization.

Engineering check: modulation should improve distance-profile visibility without raising the noise floor or creating periodic artifacts in the event spectrum.

Block B — Coupling / Circulator / Split (return-path control):

  • Return-path separation is the main purpose: prevent direct leakage and strong reflections from dominating the receiver.
  • Isolation limits set how often the receiver sees “fixed hotspots” (strong, distance-locked peaks) that can saturate AFE stages and inflate false alarms.
  • Reflection sensitivity increases with connectors and splices; their position and reflection strength must be treated as measurable state, not a mystery.

Engineering check: a healthy return path shows a predictable baseline vs distance, without dominant fixed peaks that persist across environments.

Block C — Optical power budget (why distance fails first):

  • Budget components: Tx power, fiber attenuation over length, component insertion loss (couplers/circulators/connectors), and the usable backscatter baseline.
  • Design target: keep enough return margin so that the photonic AFE noise floor does not dominate event SNR at the far end.
  • Field reality: micro-bends, wet connectors, or a single splice issue can create step-loss or new hotspots that shift the entire SNR distribution.

Engineering check: compare baseline distance profile and optical monitor readings against commissioning baselines (trend matters more than absolute numbers).
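The budget arithmetic in Block C can be sketched as a one-line margin calculation. All component figures below are illustrative assumptions, and the backscatter conversion figure in particular is implementation-dependent:

```python
def return_margin_db(tx_dbm, fiber_km, atten_db_per_km, insertion_losses_db,
                     backscatter_db, afe_noise_floor_dbm):
    """Far-end return margin sketch: Tx minus round-trip fiber loss,
    component insertion losses, and a backscatter conversion figure,
    compared against the photonic AFE noise floor."""
    round_trip_loss = 2 * fiber_km * atten_db_per_km        # out and back
    rx_dbm = tx_dbm - round_trip_loss - sum(insertion_losses_db) - backscatter_db
    return rx_dbm - afe_noise_floor_dbm                      # margin in dB

# Illustrative example: 10 dBm Tx, 5 km of 0.35 dB/km fiber, circulator
# plus two connectors, 45 dB backscatter figure, -70 dBm AFE noise floor.
margin = return_margin_db(10.0, 5.0, 0.35, [0.8, 0.3, 0.3], 45.0, -70.0)
```

Tracking this margin per commissioning (rather than absolute Rx power) is what makes "far zones degraded" diagnosable as a budget problem.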

Component-to-symptom map (purchasable → measurable → diagnosable):

Optical block | Spec focus (what matters) | Field symptom | First evidence to check
Laser Tx / Driver | Output stability, warm-up behavior, driver noise coupling | SNR drifts over time; periodic artifacts in spectrum | optical_power_dbm trend (if available), baseline stability vs time
Modulator (EOM/AOM) | Insertion loss, bias stability, modulation depth consistency | Distance profile becomes "flat" or overly ripple-like | Distance profile visibility; modulation artifact markers
Coupler / Splitter | Insertion loss, balance, port repeatability | Overall sensitivity loss; inconsistent return amplitude | link_loss_delta vs commissioning; baseline downshift
Optical circulator | Isolation (leakage), insertion loss | Fixed strong hotspots; AFE saturation increases | Hotspot position stability; pd_saturation_cnt spikes
Connector / Patch point | Reflection and contamination sensitivity | New peak at fixed distance; false alarms cluster | Hotspot location; baseline discontinuity near that distance
Splice | Step loss, reflection (should be minimal) | Distance-profile step-down; far zones degrade | Baseline step location; far-zone snr_db distribution shift
Micro-bend / Tight routing | Loss under stress/temperature | Intermittent sensitivity; weather-dependent behavior | Baseline wobble vs environment; event rate changes vs baseline loss

Practical field order: (1) optical monitor trend → (2) distance profile baseline → (3) fixed hotspot positions → (4) loss delta vs commissioning.
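Steps (2)–(4) of this field order can be automated against the commissioning baseline. A sketch with illustrative thresholds (hotspot and step limits must be tuned against real commissioning data):

```python
import numpy as np

def profile_anomalies(baseline_db, current_db, hotspot_db=6.0,
                      step_db=1.5, step_len=20):
    """Compare a distance profile against its commissioning baseline.
    Flags new hotspots (local positive spikes in the delta) and step-loss
    (a sustained negative delta from some bin onward)."""
    delta = np.asarray(current_db) - np.asarray(baseline_db)
    hotspots = np.flatnonzero(delta > hotspot_db).tolist()
    step_at = None
    for i in range(len(delta) - step_len):
        if np.all(delta[i:i + step_len] < -step_db):
            step_at = i   # first bin of a sustained downshift
            break
    return hotspots, step_at
```

A new hotspot points at a connector/reflection, while a step index points at a splice or damage location; both translate directly into the table's "first evidence" columns.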

[Figure F3: Optical Link & Return Path — reflection points, splice loss steps, and micro-bend loss reshaping the distance profile (diagram)]
Figure F3. The optical chain must be treated as a measurable state machine: connectors create reflection peaks, splices can create step-loss, and micro-bends can raise attenuation. These effects reshape the distance profile and propagate into event SNR and false alarms.
H2-4 • Photonic AFE (PD/TIA/Balanced Receiver, Noise & Dynamic Range)

Why this is a depth core: many “cannot detect” and “false alarm” issues are not algorithmic. They are caused by noise floor, nonlinearity, or gain-control behavior inside the photonic AFE. A practical AFE write-up must always connect mechanisms to measurable evidence: noise floor, saturation count, AGC state, and spectrum baseline.

A. Photodiode & Balanced Detection (engineering payoff):

  • Balanced receiver helps reject common-mode intensity noise and improves stability of the usable baseline.
  • Failure pattern: strong fixed reflections can unbalance the receiver and push one path into saturation, creating event-like artifacts.
  • Evidence: rising pd_saturation_cnt, noisy snr_db distribution, and distance-locked hotspots correlated with false alarms.

Design requirement: preserve linearity for strong returns while keeping enough sensitivity for far zones.

B. TIA focus (what decides “detectable”):

  • Input capacitance & stability affect usable bandwidth and noise behavior at the most sensitive node.
  • Bandwidth choice is a trade: too low distorts features; too high admits unnecessary noise and EMI coupling.
  • Supply isolation matters: rail ripple can appear as a raised spectrum floor and directly reduce event SNR.
  • Evidence: measured output noise (RMS or spectral), clipping markers, and correlation between rail noise and spectrum baseline.

Engineering rule: far-zone performance is usually limited by the TIA noise floor, not the DSP model.
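A first-order noise budget makes this rule concrete. The sketch below includes only feedback-resistor thermal noise and photocurrent shot noise; amplifier voltage/current noise and excess-noise terms are deliberately omitted, so a real budget will be worse:

```python
import math

def tia_noise_floor(r_f_ohm, bw_hz, photocurrent_a, temp_k=300.0):
    """Input-referred current-noise sketch for a transimpedance stage:
    feedback-resistor thermal noise (4kT/Rf) plus shot noise (2qI),
    integrated over the bandwidth. Returns A rms, input-referred."""
    k = 1.380649e-23      # Boltzmann constant, J/K
    q = 1.602176634e-19   # electron charge, C
    i_thermal_sq = 4 * k * temp_k / r_f_ohm * bw_hz
    i_shot_sq = 2 * q * photocurrent_a * bw_hz
    return math.sqrt(i_thermal_sq + i_shot_sq)

# Illustrative point: 100 kOhm feedback, 10 MHz bandwidth, 1 uA photocurrent.
i_n = tia_noise_floor(100e3, 10e6, 1e-6)
```

Comparing this floor against the far-zone return current (from the optical budget) shows whether "cannot detect at distance" is an AFE limit before any DSP tuning is attempted.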

C. VGA / AGC (when it helps, when it breaks repeatability):

  • When needed: large return dynamic range across distance bins, or baseline shift after installation changes.
  • Hidden risk: AGC hysteresis and gain hunting can make the same disturbance look different over time, breaking threshold-based immunity.
  • Evidence: agc_state oscillation, event latency drift, and a widened confidence distribution without a matching SNR improvement.

Practical mitigation: distance-aware gain profiles and cooldown logic often outperform aggressive global AGC.
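The distance-aware gain profile with cooldown can be sketched as a small state machine. Gain codes, step size, and cooldown counts below are illustrative:

```python
class DistanceGain:
    """Distance-aware gain schedule with rate limiting and cooldown,
    as an alternative to aggressive global AGC (sketch)."""
    def __init__(self, gain_by_bin, step_db=3.0, cooldown_updates=10):
        self.gain_by_bin = dict(gain_by_bin)  # bin index -> gain (dB)
        self.step_db = step_db                # max change per adjustment
        self.cooldown = cooldown_updates      # updates to hold after a change
        self._hold = {}

    def update(self, bin_idx, target_db):
        """Move at most step_db toward target, then hold for the cooldown,
        so the same disturbance looks the same across repeats."""
        if self._hold.get(bin_idx, 0) > 0:
            self._hold[bin_idx] -= 1
            return self.gain_by_bin[bin_idx]
        cur = self.gain_by_bin[bin_idx]
        delta = max(-self.step_db, min(self.step_db, target_db - cur))
        if delta != 0.0:
            self.gain_by_bin[bin_idx] = cur + delta
            self._hold[bin_idx] = self.cooldown
        return self.gain_by_bin[bin_idx]
```

Logging the gain timeline per bin (the agc_state field above) then directly exposes gain hunting when false alarms cluster.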

Evidence checklist (what to log so issues are diagnosable):

  • noise_floor (spectrum baseline proxy), snr_db at decision point, and far-zone SNR distribution over time
  • pd_saturation_cnt / clipping markers near strong reflections or hotspots
  • agc_state or gain code timeline to correlate with false-alarm clusters
  • Optional: rail ripple snapshot and AFE temperature when drift is suspected
[Figure F4: Photonic AFE Chain — PD → TIA → VGA/AGC → anti-alias → ADC, with noise-floor, saturation, and AGC-hysteresis markers (diagram)]
Figure F4. Photonic AFE behavior explains most field failures: a raised noise floor kills far-zone detectability, saturation creates false features, and AGC hysteresis breaks repeatability. Logging noise_floor, saturation_cnt, and agc_state makes root-cause diagnosis fast.
H2-5 • TDC/ADC & Sampling Strategy

Why sampling decides everything: zoning accuracy and the maximum detectable vibration bandwidth are set by the sampling strategy, and many false alarms are sampling artifacts. A correct strategy ties time reference, windowing, binning, and filters into a measurable chain: sample_rate, timestamp_jitter, window_alignment_error, and aliasing_marker.

A. TDC for ToF windowing (Rayleigh / distributed routes):

  • Role: establish a stable time reference so distance bins map consistently to physical zones.
  • Failure pattern: time jitter becomes bin-boundary jitter → zone flicker near boundaries.
  • Evidence: rising timestamp_jitter, increased window_alignment_error, and boundary zones showing unstable event clustering.

Engineering requirement: keep window alignment stable enough that a stationary reflector does not “move” across bins.
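The ToF-to-bin mapping that this requirement protects is simple arithmetic. The sketch below assumes a group index of about 1.468 for standard single-mode fiber (an illustrative value; use the cable datasheet figure):

```python
def tof_to_bin(t_seconds, bin_width_m, group_index=1.468):
    """Map a round-trip time-of-flight to a distance bin. Light travels
    at c/n in the fiber and covers the distance twice (out and back),
    hence the divide-by-two."""
    c = 299_792_458.0  # m/s in vacuum
    distance_m = (c / group_index) * t_seconds / 2.0
    return int(distance_m // bin_width_m)

# Illustrative example: a reflector 1005 m down the fiber, 10 m bins.
t = 2 * 1005.0 * 1.468 / 299_792_458.0   # expected round-trip ToF (~9.8 us)
```

Because the bin index is a floor of a distance, any timestamp jitter near a bin boundary flips the index: that is exactly the "stationary reflector must not move across bins" requirement in measurable form.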

B. ADC for continuous sampling (interferometric routes):

  • Role: continuous waveform capture to support phase demod and spectral features.
  • Sampling reality: the usable feature bandwidth is bounded by the effective sampling rate and anti-aliasing chain.
  • Evidence: sample_rate (effective), adc_clip_cnt, and a stable spectral baseline (noise floor and peak behavior over time).

Engineering requirement: preserve far-zone micro-features without being dominated by near-zone hotspots and clipping.

C. Anti-aliasing is an event-quality requirement:

  • Chain: analog AAF limits out-of-band energy → digital filtering selects the band of interest → sampling rate sets the boundary.
  • False-alarm mechanism: out-of-band energy folds into the decision band when aliasing is present.
  • Evidence: aliasing_marker (mirror-peak signatures or folded-energy score), filter profile ID/version, and event-rate sensitivity to sampling changes.

Engineering requirement: any “new false alarm burst” must be explainable by a measurable aliasing or alignment marker.
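One such measurable marker can be produced at design time: an attenuation check at the first frequency that folds back into the band of interest. The sketch uses the asymptotic rolloff of an n-pole low-pass AAF, which is an approximation, not the real filter response:

```python
import math

def folded_attenuation_db(f_corner_hz, fs_hz, f_band_hz, poles=4):
    """Aliasing design check sketch: estimate AAF attenuation at
    fs - f_band (the first frequency that folds onto the band edge),
    using the asymptotic 20*poles dB/decade rolloff."""
    f_fold = fs_hz - f_band_hz
    if f_fold <= f_corner_hz:
        return 0.0   # the AAF offers no protection at the fold frequency
    return 20.0 * poles * math.log10(f_fold / f_corner_hz)

# Illustrative example: 2 kHz band, 4-pole AAF at 2.5 kHz, 20 kHz sampling.
att = folded_attenuation_db(2500.0, 20000.0, 2000.0, poles=4)
```

If a proposed Fs reduction drops this number below the system's spur budget, the "new false alarm burst" risk is predictable before deployment rather than discovered in the field.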

Sampling decision matrix (goal → what to tune → what it costs → what to verify):

Goal | What to tune | Trade-off / cost | Evidence to verify
Finer zone accuracy | Window width, bin size, zone map granularity, TDC stability | More bins → more compute/memory; tighter windows → more alignment sensitivity | window_alignment_error ↓, boundary-zone flicker ↓, stable hotspot bin index
Higher detectable frequency | Effective sample_rate, AAF corner, digital band selection | Higher Fs → compute/storage/thermal; insufficient AAF → aliasing risk | Band-limited spectrum stable; aliasing_marker remains low
Lower false alarms | AAF + digital filtering; cooldown/dedup inputs (in H2-6) | Tighter filtering may suppress weak real events; latency may increase | aliasing_marker ↓, event-rate vs Fs becomes predictable
Lower power / cost | Reduce Fs, reduce resolution, reduce bins, simplify filters | Less bandwidth and less dynamic range; risk of missed far-zone events | Far-zone snr_db distribution remains acceptable; miss-rate does not spike
[Figure F5: Windowing — sampling timeline → distance bins → zones; alignment defines bin boundaries and zone stability (diagram)]
Figure F5. Sampling and time reference define window alignment; windowing defines bin boundaries; binning defines zone stability. Zone “jumping” near boundaries is often a measurable window_alignment_error + timestamp_jitter problem.
H2-6 • Phase Demod & Edge DSP Pipeline (Raw Waveform → Event)

Intent: describe the pipeline as engineering blocks, not academic papers. The output must be an auditable event packet so field failures can be traced to versions, filters, and evidence fields.

Pipeline summary (what each stage must guarantee):

  • Preprocess: remove DC / drift so thresholds remain stable across environment and installation changes.
  • Filter: isolate the band of interest; prevent out-of-band energy from dominating features.
  • Feature: extract interpretable metrics (energy, spectral peak, duration, phase jump) that can be logged and audited.
  • Decision: apply thresholds/classifiers plus cooldown/dedup so one physical disturbance does not become an event storm.
  • Packet: produce a stable schema with IDs and versioning for traceability.

Engineering rule: every field-debug conversation should end at “which stage changed the evidence” rather than “the model feels wrong”.

Input → processing → output fields (engineering table):

Stage | Input | Processing blocks | Output artifact | Evidence fields (log/packet)
Preprocess | Raw samples (continuous or windowed), timestamps | DC removal, drift removal, normalization | Baseline-corrected waveform | baseline_level, drift_rate, sample_rate
Filter | Baseline-corrected waveform | Bandpass / notch, optional decimation | Band-limited waveform | filter_profile_id, aliasing_marker, noise_floor
Feature | Band-limited waveform per bin/zone | Energy, spectral peak, duration, phase jump | Feature vector | peak_metric, snr_db, duration_ms
Decision | Feature vector + zone context | Threshold/classifier, cooldown, dedup | Event decision + score | confidence, classifier_id, cooldown_ms
Packet | Decision + metadata | Schema packing + integrity fields | Event packet (uplink) | zone_id, start_ts, end_ts, pipeline_version, zone_map_version, link_status

Event packet fields (minimum schema for traceability):

  • zone_id, start_ts, end_ts
  • peak_metric, snr_db, confidence
  • classifier_id, pipeline_version, filter_profile_id
  • zone_map_version, optional link_status snapshot

This schema keeps field debugging bounded: any anomaly must map to a stage and a versioned configuration.
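The Decision stage's cooldown/dedup rule ("merge repeated triggers in the same zone within cooldown_ms") can be sketched directly; trigger representation below is illustrative:

```python
def dedup_events(triggers, cooldown_ms):
    """Merge repeated triggers in the same zone that fall within
    cooldown_ms of each other. Each trigger is (zone_id, ts_ms);
    output is one (zone_id, start_ts, end_ts) event per burst."""
    events = []
    open_ev = {}  # zone_id -> [start_ts, last_ts]
    for zone_id, ts in sorted(triggers, key=lambda t: t[1]):
        ev = open_ev.get(zone_id)
        if ev is not None and ts - ev[1] <= cooldown_ms:
            ev[1] = ts                       # extend the open event
        else:
            if ev is not None:
                events.append((zone_id, ev[0], ev[1]))
            open_ev[zone_id] = [ts, ts]
    for zone_id, ev in open_ev.items():      # flush still-open events
        events.append((zone_id, ev[0], ev[1]))
    return events
```

One physical disturbance thus becomes one event with a start/end window, which is what makes start_ts/end_ts usable for cross-system correlation.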

[Figure F6: DSP Pipeline — raw → preprocess → filter → feature → decision → event packet, with cooldown/dedup and versioned fields (diagram)]
Figure F6. A practical pipeline is a set of verifiable blocks. Stable preprocessing and filtering protect threshold consistency; interpretable features enable audits; cooldown/dedup prevents event storms. A versioned event packet makes field debugging bounded and repeatable.
H2-7 • Zone Mapping & Calibration (Distance Bins → Fence Segments)

Intent: turn distance-domain bins into a real-world fence map (posts, corners, gates, and segment chainage), and keep the mapping traceable. A practical system treats the zone table as a protected asset: it must be versioned, validated, and rollback-capable.

A. Commissioning calibration (build the first zone table):

  • Inputs: known physical markers (post IDs / chainage), plus a controllable excitation point (tap/pull/shaker).
  • Procedure: locate the excitation peak on the distance axis → choose a stable bin range → assign zone_id → bind to physical_marker.
  • Acceptance: repeated excitations at the same marker must land on the same bin range and the same zone_id.

Practical rule: zone boundaries should avoid “boundary-jitter” regions where window alignment error can move bins across a fence segment boundary.

B. Drift monitoring (detect before alarms explode):

  • Causes: temperature, tension, support deformation, cable state changes.
  • Monitors: track stable reference peaks/hotspots and baseline statistics per zone; compute a drift_score.
  • Action: when drift exceeds threshold, raise a drift alert and require a targeted verification or recalibration.

Drift is not a feeling. It must be a number that correlates with zone stability and far-zone SNR changes.
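As a minimal example of "drift as a number", a drift_score can be the mean absolute bin shift of the tracked reference peaks versus their commissioned positions (the scoring and any alert threshold are illustrative assumptions):

```python
import numpy as np

def drift_score(ref_bins_commissioned, ref_bins_now):
    """Mean absolute shift (in bins) of stable reference peaks/hotspots
    relative to their commissioned positions. Thresholding this number
    drives the drift alert described above."""
    a = np.asarray(ref_bins_commissioned, dtype=float)
    b = np.asarray(ref_bins_now, dtype=float)
    return float(np.mean(np.abs(b - a)))

# Three tracked reference peaks; one shifted by 1 bin, one by 2.
score = drift_score([120, 310, 455], [121, 312, 455])
```

A score trending upward with temperature or tension changes is the evidence that triggers targeted verification before alarms explode.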

C. Recalibration & self-check (minimum effort, controlled change):

  • Quick verify: test only key markers (corners/gates) and confirm the bin-to-zone mapping still holds.
  • Full recalibration: rebuild the table when hardware/connector changes, step-loss events, or wide drift is observed.
  • Evidence: keep counts of recalibration attempts and whether each attempt improved drift and boundary stability.

A healthy deployment can prove “mapping confidence” without redoing the entire perimeter every time.

D. Configuration vs calibration boundary (protect the mapping asset):

  • Config (field adjustable): thresholds, cooldown/dedup, filter profiles (all versioned).
  • Calibration (protected): zone table, bin boundaries, physical markers, reference points.
  • Protection: calibration writes require validation (crc/hash), controlled mode, and rollback to last_known_good_version.

Most “mysterious location errors” come from accidental table edits, not sensing physics.

Zone-table evidence fields (minimum set for traceability):

zone_table_version, zone_table_crc, calibration_ts, drift_score, drift_alert_cnt, recalibration_cnt, last_known_good_version

The uplink event should reference zone_id only; the table provides the auditable mapping to physical markers and segment chainage.
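The validate-then-commit-with-rollback discipline for the zone table can be sketched with a CRC over a canonical encoding. Storage layout and field names below are illustrative:

```python
import json, zlib

def table_crc(zone_table):
    """CRC over a canonical (sorted-key) JSON encoding of the table."""
    return zlib.crc32(json.dumps(zone_table, sort_keys=True).encode())

def commit_zone_table(store, new_table, expected_crc):
    """Protected calibration write: validate the CRC before committing,
    and keep the previous table as last_known_good for rollback."""
    if table_crc(new_table) != expected_crc:
        return False                         # reject silent corruption
    store["last_known_good"] = store.get("active")
    store["active"] = new_table
    return True

table = {"zones": [{"zone_id": 7, "bins": [120, 135], "marker": "P-07"}],
         "zone_table_version": "3"}
store = {}
ok = commit_zone_table(store, table, table_crc(table))
```

Because a failed CRC check leaves the active table untouched, "mysterious location errors" from accidental edits become rejected writes with an audit trail instead.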

[Figure F7: Zone Table — distance bins → zone ID → physical marker (post/corner/gate + chainage), with version and CRC/hash protection (diagram)]
Figure F7. A practical deployment needs a protected mapping asset: distance bins map to zone IDs, which map to physical markers (posts/corners/gates). Versioning and CRC/hash prevent silent table corruption and enable rollback.
H2-8 • False Alarm Sources & Immunity (Evidence-First Triage)

Intent: replace “mystery alarms” with a measurable taxonomy. Most false alarms fall into one of three buckets: Environment, Optical path changes, or Electronics coupling. Each bucket has distinct evidence signals and a different first fix.

A. Environment (wind, rain, traffic, construction, animals):

  • Typical evidence: stable time-of-day correlation, broader spectral spread, longer duration, or cross-zone consistency patterns.
  • Discriminator: multi-zone simultaneous triggers and long occupancy often point to environment rather than localized intrusion.
  • First fix: tighten band selection, add cooldown/dedup rules, and apply zone-consistency gates (do not change zone table first).

Preferred evidence fields: spectrum signature, duration, cross-zone consistency score.

B. Optical path changes (connectors, reflections, micro-bends):

  • Typical evidence: optical power delta, new fixed-position hotspot peak, step-like baseline loss that hurts far zones.
  • Discriminator: repeated alarms anchored to the same distance, or a global far-zone SNR drop.
  • First fix: clean/reseat connectors, verify splices, inspect bends/strain points; then re-verify mapping.

Preferred evidence fields: optical power delta, hotspot position, baseline step location.

C. Electronics coupling (power noise, EMI, ground bounce):

  • Typical evidence: raised noise floor, bursty event rate correlated with equipment switching, link error counters spiking.
  • Discriminator: alarms that correlate with power events or EMI activity, often accompanied by link-layer anomalies.
  • First fix: improve power isolation/grounding, add filtering and shielding, verify surge/ESD bonding.

Preferred evidence fields: noise floor, link error counters, event-rate bursts.

False-alarm triage table (symptom → first 2 evidence → discriminator → first fix):

Symptom | First 2 evidence to check | Discriminator (what proves the bucket) | First fix (minimum action)
Alarms increase only during rain/wind | spectrum_signature, duration_ms | Broadband + long occupancy + recurring weather correlation | Adjust band selection; add cooldown/dedup; environment gating by duration
Alarms repeat at the same distance/zone | hotspot_position, optical_power_delta | Fixed-position peak or reflection change anchored in distance axis | Reseat/clean connector; inspect splice; verify micro-bend/strain points
Far zones suddenly "go noisy / go blind" | baseline_step_location, snr_db (far) | Step-loss or attenuation change shifts baseline and reduces far-zone margin | Inspect for cable damage/bend; repair; then run quick mapping verification
Event storms when nearby equipment switches | noise_floor, link_error_counters | Noise floor rise + link counters spike at the same time | Improve grounding; add power filtering/isolation; check bonding and shielding
Multiple zones trigger simultaneously | zone_consistency_score, duration_ms | Cross-zone consistency pattern indicates environment or shared coupling | Apply cross-zone gating; verify optical baseline; check EMI correlation
"Location jumps" near zone boundaries | window_alignment_error, timestamp_jitter | Bin boundary shift explains zone flicker rather than true movement | Stabilize timing; adjust zone boundaries away from jitter regions; re-validate

Immunity ladder (from identification to suppression):

  • Level 1 — Identify: log stable evidence fields (spectrum, SNR, link counters, power delta, versions).
  • Level 2 — Suppress: filter profiles + cooldown/dedup + zone-consistency gating.
  • Level 3 — Isolate: optical path maintenance, grounding/power isolation, EMI hardening.
  • Level 4 — Govern: protect calibration writes (zone table) and keep rollback anchors to prevent accidental mis-mapping.

Engineering rule: do not “fix false alarms” by editing the zone table unless evidence proves a mapping drift/corruption.
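The triage table can be encoded as an evidence-first bucket picker. Every threshold and field name below is an illustrative assumption to be tuned per site; the point is that the decision consumes logged fields, not intuition:

```python
def triage_bucket(evidence):
    """Pick a false-alarm bucket from logged evidence fields, following
    the triage table's discriminators (thresholds illustrative)."""
    if (evidence.get("noise_floor_rise_db", 0) > 3
            and evidence.get("link_error_delta", 0) > 0):
        return "electronics"   # noise floor + link counters spike together
    if (evidence.get("hotspot_fixed_distance")
            or abs(evidence.get("optical_power_delta_db", 0)) > 1):
        return "optical_path"  # distance-anchored peak or power delta
    if (evidence.get("zone_consistency_score", 0) > 0.7
            and evidence.get("duration_ms", 0) > 2000):
        return "environment"   # broad, long, cross-zone pattern
    return "inconclusive"      # log more evidence before acting
```

An "inconclusive" result is itself useful: it says the minimum action is more logging, not zone-table edits.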

[Figure F8: False Alarms — classification → evidence → first fix; three measurable buckets (environment, optical path, electronics) with distinct evidence signals (diagram)]
Figure F8. A measurable classification tree: Environment, Optical path, and Electronics coupling produce distinct evidence signals. Selecting the right evidence path prevents “random tuning” and reduces false alarms with minimum changes.
Cite this figure: False Alarm Classification Tree (F8). Suggested caption: “Evidence-first taxonomy for false alarms with bucket-specific indicators and first-fix actions.”
H2-10 • Power, Protection & Outdoor Robustness

Power, Protection & Outdoor Robustness (Surge/ESD/Grounding Checklist)

Intent: outdoor security nodes fail in repeatable ways: surge, ESD, lightning bypass currents, wet cables, and power-domain coupling. This chapter turns “random resets and false alarms” into a measurable power/protection checklist tied to evidence fields.

A. Power tree & domain isolation (analog vs digital):

  • Domains: photonic AFE analog rails, ADC/DSP digital rails, Ethernet PHY rail, outdoor I/O rail.
  • Goal: prevent digital/PHY transients from raising AFE noise floor or triggering saturation.
  • Evidence: correlate noise_floor and false-alarm bursts with rail dips or reset causes.

If the analog domain cannot be proven quiet, “algorithm tuning” will not stabilize false alarms.
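One way to make “proven quiet” concrete is to correlate noise-floor samples with rail-dip/reset timestamps. A minimal sketch; the floor, rise, and window values are assumed placeholders to be set per design:

```python
def correlated_bursts(noise_samples, rail_events, window_ms=200,
                      floor_db=-90.0, rise_db=6.0):
    """Flag noise-floor rises that coincide with rail dips or reset causes.

    noise_samples: list of (ts_ms, noise_floor_db) samples.
    rail_events:   list of ts_ms for rail dips / reset_reason entries.
    Returns timestamps of noise rises explained by a nearby rail event.
    """
    hits = []
    for ts, nf in noise_samples:
        risen = nf >= floor_db + rise_db
        if risen and any(abs(ts - r) <= window_ms for r in rail_events):
            hits.append(ts)
    return hits
```

If the hit list is empty while false alarms persist, the analog domain is plausibly quiet and attention can move to environment or optics buckets.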

B. Protection by interface (do not treat all ports the same):

  • Power input: TVS + inrush/limiting strategy + reverse protection (as applicable).
  • Ethernet: ESD arrays + common-mode control + isolation boundary awareness.
  • Outdoor I/O: TVS + series impedance/isolators depending on cable length and exposure.

Protection must match the cable exposure and the fault energy path, otherwise it just moves the failure mode.

C. Grounding & lightning bypass reality (common-mode paths):

  • Mechanism: lightning bypass and ESD inject common-mode currents that find unintended return paths.
  • Symptoms: random resets, link drops, sudden noise-floor jumps, repeated alarms across zones.
  • Evidence: link_error_counters + reset_reason spikes correlated with site events.

Most “unexplainable” outdoor issues become explainable once the common-mode path is identified.

D. Health monitoring & trend logs (prove degradation early):

  • Monitor: vin, key rails (UVLO), temp, (optional) optical_power, and reset causes.
  • Trend: store short rolling trends and counters; report via heartbeat.
  • Goal: distinguish “site disturbance” from “device aging / cable damage / marginal power.”

A node that cannot report trends will be serviced only after it fails hard.
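The rolling-trend idea can be sketched with bounded per-channel buffers summarized at each heartbeat. `HealthTrend` and its channel names are illustrative, not a defined API:

```python
from collections import deque

class HealthTrend:
    """Short rolling trend store reported via heartbeat (sketch).

    Keeps the last `depth` samples per channel so a heartbeat can
    carry last/min/max trends without unbounded memory.
    """
    def __init__(self, depth=64):
        self.depth = depth
        self.series = {}   # channel -> deque of (ts, value)

    def sample(self, channel, ts, value):
        buf = self.series.setdefault(channel, deque(maxlen=self.depth))
        buf.append((ts, value))

    def heartbeat(self):
        # summarize each channel over its window: last, min, max
        return {ch: {"last": buf[-1][1],
                     "min": min(v for _, v in buf),
                     "max": max(v for _, v in buf)}
                for ch, buf in self.series.items()}
```

A window of this form is enough to distinguish a one-off site disturbance (short excursion) from a drifting rail or rising temperature (monotonic trend across heartbeats).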

Outdoor robustness triage (threat → where it hits → evidence → first fix):

Threat | Where it hits | Evidence to check | First fix (minimum action)
Surge / lightning bypass | Power input + chassis paths | reset_reason, uvlo_event_cnt, time correlation with site events | Verify bonding/ground path; check TVS/limiter; inspect cable routing and shields
ESD on Ethernet / long cables | Ethernet port / PHY | link_error_counters, link_down_time, CRC/PHY errors around events | Check ESD arrays/CM path; verify isolation boundary and shield termination
Brownout / unstable supply | All rails (esp. PHY + DSP) | vin_trend, uvlo_event_cnt, reset_reason | Improve input hold-up; adjust UVLO margins; inspect supply cabling and connectors
Power-domain coupling | Analog AFE rail | noise_floor rise correlated with PHY activity or I/O switching | Increase analog isolation; improve filtering; review ground returns near AFE
Thermal stress / enclosure heating | AFE + DSP + power | temp_c_trend, drift or alarm-rate increase with temperature | Improve thermal path; apply derating; verify that calibration/drift alarms trigger

Power/protection evidence fields (minimum set):

uvlo_event_cnt, reset_reason, vin_trend, temp_c_trend, link_error_counters, noise_floor, optical_power_delta

Even if some fields are approximated (no direct sensor), the system must expose consistent counters and timestamps.

F10. Outdoor Interfaces → Protection → Domains → AFE (with Health Monitors). Separate energy paths and keep analog quiet; expose UVLO/reset/link/noise evidence. Interfaces and matched protection: Power In (24V / PoE PD: TVS • limit • reverse), Ethernet (outdoor cable: ESD • CM • isolation), Outdoor I/O (relay / DI / aux: TVS • series • isolation). A domain-isolated power tree feeds the analog rail (photonic AFE: noise floor / saturation), the DSP/ADC rail (events + logs), and the PHY rail (link errors). Health monitors reported via heartbeat: VIN / UVLO, temperature trend, reset reason, link errors, optical power (optional).
Figure F10. Outdoor interfaces must be treated as distinct energy paths: each port needs matching protection, domain-isolated rails keep the photonic AFE quiet, and health monitors expose UVLO/reset/link/noise evidence for field debugging.
Cite this figure: Interface Protection Overview (F10). Suggested caption: “Outdoor protection and domain isolation map with minimal health monitors for auditability.”

H2-11 • Validation Plan

This chapter defines a repeatable test matrix that proves the FO perimeter chain works end-to-end (optics → photonic AFE → sampling → DSP → event packet → Ethernet uplink) with measurable PD/FAR/latency, stable zone mapping, drift visibility, and offline survival—without expanding into VMS/NVR platform design.

11.0 What “Pass” Means (freeze acceptance metrics first)

Validation is only comparable across sites and firmware versions when acceptance metrics are frozen and computed from the same event log contract. Use the targets below as placeholders and set final thresholds per fence type, mounting method, and risk class.

Acceptance metrics: PD (Probability of Detection), FAR (False Alarm Rate), Zone Accuracy, Boundary Confusion, Latency p50/p95, Drift (zone/time), Offline Survival, Replay Consistency.

  • Zone accuracy & boundary stability: correct-zone rate at center points; boundary error rate to adjacent zones; zone jitter at repeated hits.
  • PD/FAR under real disturbances: PD vs distance/strength; FAR split by environment class (wind/rain/traffic/works) and by zone clustering.
  • Latency & ordering: latency distribution (p50/p95) from stimulus to event timestamp; monotonically increasing packet_seq.
  • Drift & re-calibration effectiveness: zone drift across temperature/tension change; re-calibration reduces drift below threshold; version traceability.
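Once the log contract is frozen, the headline metrics reduce to a few lines over exported records. A sketch assuming each event has already been matched to a stimulus (`matched_stimulus` is an assumed helper field holding the stimulus index, or None for a false alarm):

```python
def percentile(values, p):
    """Nearest-rank percentile on a sorted copy (sufficient for p50/p95 reporting)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def acceptance_metrics(stimuli, events, window_hours):
    """Compute PD, FAR/hour, and latency p50/p95 from frozen log fields.

    stimuli: list of dicts with 'stimulus_ts' (ms) and 'ground_truth_zone'.
    events:  list of dicts with 'event_ts', 'zone_id', 'matched_stimulus'.
    """
    detected = [e for e in events if e["matched_stimulus"] is not None]
    false_alarms = [e for e in events if e["matched_stimulus"] is None]
    latencies = [e["event_ts"] - stimuli[e["matched_stimulus"]]["stimulus_ts"]
                 for e in detected]
    return {
        "pd": len({e["matched_stimulus"] for e in detected}) / len(stimuli),
        "far_per_hour": len(false_alarms) / window_hours,
        "latency_p50_ms": percentile(latencies, 50) if latencies else None,
        "latency_p95_ms": percentile(latencies, 95) if latencies else None,
    }
```

Because the same function runs on every export, numbers stay comparable across sites and firmware builds; only the thresholds change per fence type and risk class.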

11.1 Validation Data Contract (event log schema required for every test)

Every test case must export the same minimal fields so PD/FAR/latency and drift are computed consistently. Store them in a local ring buffer (with power-loss protection if needed), and export as CSV/JSON on demand.

Field | Type / Example | Why it must exist (evidence)
zone_id, distance_bin | int / “Z12”, bin=382 | Primary localization evidence; enables zone accuracy and clustering diagnostics.
start_ts, end_ts, event_ts | µs or ms epoch | Latency computation, de-dup, cooldown validation, and time alignment vs ground truth.
snr_db, peak | float | Separates “not detected” vs “detected but low quality”; supports threshold tuning without guesswork.
confidence, feature_id | 0..1, int | Explains why an event is accepted/rejected; supports immunity tuning by class signature.
algo_version, fw_version | string | Non-negotiable traceability for audit and regression comparisons.
zone_table_version, zone_table_crc, cal_id | int/hex/string | Proves which mapping/calibration produced the decision; prevents “mystery changes” in the field.
packet_seq, boot_id | uint32 | Ordering + non-repudiation basics; enables replay consistency after link/power loss.
retry_count, buffer_fill_pct, drop_cnt | int | Uplink health evidence; proves offline survival window and explains missing events.
link_state, link_up_ms, link_down_ms | enum + counters | Correlates FAR spikes with link flaps and cable/PoE issues; supports installation acceptance.
vin_mv, temp_c, reset_reason | int/float/enum | Outdoor robustness evidence; separates real intrusion from power/EMI-induced artifacts.
stimulus_type, stimulus_strength, ground_truth_zone | string/int/int | Required for PD and zone-accuracy computation; makes test runs reproducible across teams.

Tip: if storage is constrained, log full fields for “accepted events” and compact fields for “rejected candidates,” but keep zone/time/SNR/confidence/version.
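One way to freeze the contract is a typed record exported as JSON for the metrics scripts. The `EventRecord` shape below is a trimmed, illustrative sketch of the table above, not a mandated schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EventRecord:
    # trimmed subset of the validation-contract fields (sketch)
    zone_id: str
    distance_bin: int
    start_ts: int          # µs epoch
    end_ts: int
    event_ts: int
    snr_db: float
    confidence: float
    algo_version: str
    fw_version: str
    zone_table_version: int
    zone_table_crc: str
    packet_seq: int
    boot_id: int

def export_json(records):
    """Export a run's records as one JSON document for the metrics scripts."""
    return json.dumps([asdict(r) for r in records], sort_keys=True)
```

A fixed record type makes schema drift visible at review time: adding or renaming a field becomes a code change tied to `fw_version`, not a silent log-format change.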

11.2 Test Matrix Overview (repeatable, comparable, audit-friendly)

The matrix below is intentionally structured as stimulus → exported evidence → computed metrics → pass/fail. Keep each test case short and repeatable; increase coverage by sweeping distance, temperature, and mounting variants.

Test ID | Stimulus (ground truth) | Coverage | Metrics (computed) | Evidence fields (must export)
T1 Zone accuracy | Known strike/pull points per zone (center + boundary) | All zones + boundary points | zone_accuracy, boundary_error, zone_jitter | zone_id, ground_truth_zone, confidence, snr_db, zone_table_version
T2 PD vs distance | Standardized stimulus strength at near/mid/far distances | 10%/50%/90% fiber length | PD(distance,strength), snr(distance), latency p95 | stimulus_strength, stimulus_distance, snr_db, start_ts/event_ts
T3 Dynamic range | Strength sweep (min detectable → max without saturation) | Near + far points | min_detectable, saturation_rate, distortion markers | peak, snr_db, sat_flag (or clipping counter), gain_state
T4 FAR by environment | No intrusion; wind/rain/traffic/works exposure windows | Multiple hours per class | FAR/hour, FAR/km-hour, clustering index | zone_id, feature_id, confidence, snr_db, temp_c, vin_mv
T5 Link loss survival | Controlled Ethernet drop while continuing stimuli | Short + long outages | replay_consistency, dup_rate, drop_explained | packet_seq, retry_count, buffer_fill_pct, drop_cnt, link_state
T6 Power interruptions | Brownout/PoE reset during activity | Multiple cycles | event_loss_rate, reboot_to_ready_time | reset_reason, boot_id, vin_mv, packet_seq continuity notes
T7 Temperature drift | Low/room/high temp with repeated known-point stimuli | Zones + boundaries | zone_drift, PD_degradation, re-cal effectiveness | temp_c, zone_id, ground_truth_zone, zone_table_version, cal_id
T8 Soak & aging watch | 24–168 h continuous run + daily spot checks | All zones + optics health | trend alarms, FAR drift, link flap rate | optics health (if available), link counters, FAR by class, versions

11.3 Step-by-step Procedures (what to do, what to export, what to fix first)

Procedure P1 — Zone Accuracy & Boundary Stability

  • Pick 2 points per zone: a center marker and a boundary-near marker.
  • Apply a standardized strike/pull stimulus (same tool + same strength level), repeat N times per point.
  • Export: zone_id, ground_truth_zone, confidence, snr_db, zone_table_version/CRC.
  • Compute: zone_accuracy and boundary_confusion. If boundary errors dominate, verify zone-table edges and cooldown/de-dup rules before tuning thresholds.
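The P1 computation can be sketched directly from the exported hits. `adjacent_zones` is an assumed test-plan field listing the zones bordering each ground-truth point:

```python
def zone_metrics(hits):
    """Compute zone_accuracy and boundary_confusion from repeated known-point hits.

    hits: list of dicts with 'zone_id', 'ground_truth_zone', 'adjacent_zones'.
    Boundary confusion counts misses that landed in a bordering zone.
    """
    correct = sum(1 for h in hits if h["zone_id"] == h["ground_truth_zone"])
    boundary = sum(1 for h in hits
                   if h["zone_id"] != h["ground_truth_zone"]
                   and h["zone_id"] in h["adjacent_zones"])
    n = len(hits)
    return {"zone_accuracy": correct / n, "boundary_confusion": boundary / n}
```

If boundary confusion dominates the error budget, the failure is localized to zone-table edges rather than detection quality, which is exactly the case where thresholds should not be touched.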

Procedure P2 — PD vs Distance + Latency Distribution

  • Select near/mid/far points (e.g., 10%/50%/90% of fiber length).
  • For each point, sweep stimulus strength levels (light/medium/heavy) and repeat N times.
  • Export: stimulus_distance, stimulus_strength, start_ts/event_ts, snr_db, peak, confidence.
  • Compute: PD(distance,strength) and latency p50/p95. If far-end PD drops, check optical budget indicators and SNR slope before algorithm changes.

Procedure P3 — FAR by Environment Class (make “false alarms” measurable)

  • Run “no intrusion” windows under representative conditions: wind, rain, traffic vibration, construction-like disturbances.
  • Export: zone_id, feature_id/classifier_id, confidence, snr_db, temp_c, vin_mv, link_state.
  • Compute: FAR/hour and FAR/km-hour, plus zone clustering index (hot zones usually indicate mounting/optics/power issues).
  • First fix order: (1) mounting/mechanical coupling, (2) optics health (power/reflectors), (3) power/EMI (vin/reset reasons), then (4) classifier thresholds.
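The P3 metrics can be sketched as simple counts over the no-intrusion windows. Here `feature_id` is used as the environment-class label, and the clustering index is taken as the hottest zone's share of false alarms (one simple choice among several):

```python
from collections import Counter

def far_by_class(false_events, hours, km):
    """FAR/hour and FAR/km-hour split by environment class, plus a zone
    clustering index (share of false alarms in the single hottest zone)."""
    by_class = Counter(e["feature_id"] for e in false_events)
    zones = Counter(e["zone_id"] for e in false_events)
    hottest = max(zones.values()) / len(false_events) if false_events else 0.0
    return {
        "far_per_hour": {c: n / hours for c, n in by_class.items()},
        "far_per_km_hour": {c: n / (hours * km) for c, n in by_class.items()},
        "zone_clustering_index": hottest,
    }
```

A clustering index near 1.0 points at one hot zone (mounting/optics/power at a location); a low index with high FAR points at a class-wide signature problem.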

Procedure P4 — Offline Survival (Ethernet drop) + Replay Consistency

  • Induce a controlled Ethernet drop while continuing standardized stimuli every fixed interval.
  • Export: packet_seq, retry_count, buffer_fill_pct, drop_cnt, link up/down times, boot_id.
  • Acceptance: after link recovery, events replay in order (monotonic packet_seq), duplicates are detectable, and any missing events are explained by drop_cnt with buffer limits.
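The P4 acceptance check can be sketched as an audit over arrival-ordered `packet_seq` values; this sketch assumes sequence numbers do not wrap during the test:

```python
def replay_consistency(received_seqs, drop_cnt):
    """Audit post-recovery replay: ordering, duplicates, and explained gaps.

    received_seqs: packet_seq values in arrival order after link recovery.
    drop_cnt: device-reported count of events dropped (e.g. buffer full).
    """
    dups = len(received_seqs) - len(set(received_seqs))
    ordered = all(b >= a for a, b in zip(received_seqs, received_seqs[1:]))
    expected = set(range(min(received_seqs), max(received_seqs) + 1))
    missing = len(expected - set(received_seqs))
    return {
        "in_order": ordered,
        "dup_rate": dups / len(received_seqs),
        "drop_explained": missing <= drop_cnt,
    }
```

A run passes only when all three flags are acceptable together: duplicates alone are tolerable (upstream de-dup), but unexplained gaps mean the buffer/retry loop is losing events silently.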

Procedure P5 — Temperature Drift + Re-calibration Effectiveness

  • At low/room/high temperatures, repeat the same known-point stimuli set (including boundary points).
  • Export: temp_c, zone_id, ground_truth_zone, zone_table_version, cal_id, drift alarms.
  • Acceptance: drift is visible in logs and reduced by re-calibration; mapping version changes are always traceable (version + CRC).
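The drift number in P5 can be reduced to a per-point bin comparison; a minimal sketch assuming each known point keeps a stable `point_id` across runs:

```python
def zone_drift(baseline_bins, temp_bins):
    """Max distance-bin shift per known point across temperature runs.

    baseline_bins / temp_bins: dicts point_id -> observed distance_bin.
    Re-calibration passes if this max shift falls back under the threshold.
    """
    return max(abs(temp_bins[p] - baseline_bins[p]) for p in baseline_bins)
```

Running the same comparison before and after re-calibration, with `cal_id` and `zone_table_version` logged, is what makes "re-calibration effectiveness" a number rather than an impression.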

Procedure P6 — Long-run Soak (field realism)

  • Run continuous operation for 24–168 hours with scheduled spot checks and at least one link/power disturbance cycle.
  • Track trends: FAR drift, link flap rate, retry_count distribution, temperature/vin excursions, and any reset_reason bursts.
  • Acceptance: no unexplained FAR spikes; no silent mapping changes; event export remains consistent across firmware builds.

11.4 Example MPNs (reference parts for evidence hooks & outdoor reliability)

The list below is not a mandated BOM; it is a concrete starting point to implement the logging hooks, time/sequence integrity, Ethernet/PoE robustness, and low-noise rails needed for repeatable validation. Always re-check power, temperature range, and package availability.

Function | MPN examples | Why it helps validation
Time-to-Digital (ToF / windowing) | TDC7200 | Enables precise time stamping / distance-bin mapping when a TDC-based architecture is used.
SAR ADC (aux channels / monitors) | ADS8866 | Useful for monitoring analog health channels (e.g., optical power monitor, AFE rails) with consistent logging.
PoE PD interface | TPS2373 (and TPS2373-4 variants) | Improves repeatable bring-up and logging of PD input behavior; helps correlate resets with PoE events.
Gigabit Ethernet PHY | KSZ9031RNX | Stable link + counters; supports diagnosing FAR spikes caused by link flaps or EMI on cabling.
Low-noise LDO (AFE rails) | LT3042 / TPS7A49 | Reduces rail noise coupling into AFE/SNR; makes “noise floor vs distance” curves stable and comparable.
I²C isolation (if sensor/RTC domains need isolation) | ADuM1250 | Helps isolate noisy domains; improves repeatability in outdoor ground potential differences.
High-accuracy temperature sensor | TMP117 | Improves drift correlation and thermal profiling; supports deterministic re-calibration triggers.
RTC with battery backup / timestamps | PCF2129 | Strengthens event time integrity during outages and simplifies audit logs.
Non-volatile event buffer (fast writes) | FM25V10 (SPI F-RAM) | Supports high-frequency event logging without long program/erase latency; improves power-loss survivability.
QSPI firmware/config storage | W25Q128JV | Concrete reference for versioned firmware/config storage; aids traceability across validation runs.
Low-cap TVS/ESD array (high-speed lines) | RClamp0524P | Reduces field failures and “mystery resets”; helps keep link stability during EFT/ESD tests.
Validation Data Loop: Stimulus → Edge Node → Event Log → Metrics → Pass/Fail (traceable by versions). Stimuli (known points: zone / boundary; strength sweep: light/med/heavy; environment: wind/rain/traffic; fault injection: link/power/temp) drive the system under test (optics + fiber → photonic AFE → ADC / TDC → DSP / decision → uplink + buffer, with hooks for SNR, confidence, zone, ts, seq, link, vin, temp, versions, CRC). The event log stores event records (zone / ts / SNR), a health snapshot (vin / temp / link), and traceability (fw/algo/map version + CRC + cal_id). Metrics (PD vs distance, FAR by class, latency p95, zone drift) decide PASS against targets or FAIL → inspect evidence. Key evidence fields: zone_id • start/end_ts • snr_db • confidence • feature_id • packet_seq • retry/buffer • link_state • vin_mv • temp_c • versions + CRC.
Figure F11. A validation closed-loop that forces every test to produce auditable evidence (fields + versions) and comparable metrics (PD/FAR/latency/drift) across sites and firmware builds.
Cite this figure: “Validation Data Loop for FO Perimeter Sensing (Figure F11)”, ICNavigator — Security & Surveillance › Fence/FO Perimeter Sensing. (Include firmware/algo versions and zone-table version/CRC used during the run.)


H2-12 • FAQs (12)

Each answer stays within this page boundary (optics → photonic AFE → sampling → DSP → event packet → Ethernet uplink → outdoor robustness), and maps back to the evidence fields used in the validation plan.

Q1. Interferometric vs Rayleigh distributed sensing—how do I choose?

Choose interferometric sensing when you need high vibration sensitivity and can control phase noise and stability; choose Rayleigh distributed (DAS-style) when you need distance-window zoning over long runs with clearer spatial bins. Decide by comparing your optical budget/backscatter margin and the SNR you can sustain per window at the far end. See H2-2 and H2-3.

Measure: optical_power / backscatter_level, window_SNR vs distance, phase_noise marker, distance_bin consistency.
Q2. Events are real, but zones keep drifting—sampling issue or a bad zone table?

If the same physical hit produces a different distance_bin over repeats, suspect sampling/window alignment (timestamp jitter, aliasing, or window boundaries). If distance bins are stable but zone_id changes, suspect the zone table (version/CRC mismatch, boundary definitions, or overwritten mapping). Lock zone_table_version/CRC first, then tune sampling. See H2-5 and H2-7.

Measure: distance_bin variance, timestamp_jitter / window_alignment_error, zone_table_version + CRC, zone_jitter count.
Q3. False alarms spike in heavy rain—check environmental spectrum or optical power changes first?

Start with the fastest discriminator: if optical power/backscatter margin shifts with rain (step changes or slow wandering), treat it as an optics/connector/micro-bend issue; if optical metrics stay flat, treat it as an environment-class signature problem and inspect spectral features and zone clustering. Rain-driven FAR should be explainable by either optics drift or repeatable feature patterns. See H2-8 and H2-3.

Measure: optical_power trend, reflection-point indicator (if available), FAR/hour by class, feature_id + SNR distribution, zone clustering.
Q4. Detection gets weaker with distance—optical budget problem or AFE noise floor?

Plot SNR versus distance. A smooth SNR roll-off that tracks optical loss/backscatter indicates budget margin; a high, flat noise floor or frequent clipping near the AFE points to photonic front-end limits (TIA noise, linearity, or rail noise). If near-end SNR is strong but far-end collapses abruptly, inspect optics loss/reflections; if noise floor rises everywhere, inspect AFE rails and TIA operating region. See H2-3 and H2-4.

Measure: optical_power/backscatter margin, TIA output noise floor, saturation/clipping counters, SNR vs distance slope.
Q5. Event latency swings wildly—is it the DSP window or the uplink queue?

Split latency into two parts: detection latency (raw-to-decision windowing) and transport latency (queue/retry/buffer). If start/end timestamps shift with processing windows, tune DSP aggregation, cooldown, or window size. If event timestamps are stable but delivery time fluctuates, inspect buffer_fill_pct, retry_count, and link_state flaps. Always log both decision time and packet sequence to avoid guessing. See H2-6 and H2-9.

Measure: start_ts/end_ts/event_ts, buffer_fill_pct, retry_count, packet_seq monotonicity, link up/down counters.
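The split above can be sketched from three timestamps; `delivered_ts` is an assumed collector-side arrival stamp, not a field from the device contract:

```python
def latency_split(ev):
    """Split one event's latency into detection and transport parts (sketch).

    ev carries 'end_ts' (disturbance window close), 'event_ts' (decision time),
    and 'delivered_ts' (uplink arrival, stamped by the collector).
    """
    return {
        "detection_ms": ev["event_ts"] - ev["end_ts"],
        "transport_ms": ev["delivered_ts"] - ev["event_ts"],
    }
```

Plotting the two distributions separately shows immediately which half owns the p95 tail.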
Q6. After replacing a fiber segment, everything is “off”—which calibrations must be redone?

Rebuild the distance-to-zone mapping whenever fiber length, splices, or connectors change enough to shift time-of-flight bins or reflection behavior. At minimum, rerun known-point strikes to regenerate the zone table, record a new calibration ID, and freeze zone_table_version/CRC so field edits cannot silently corrupt mapping. Then re-check drift alarms under temperature/tension swings. See H2-7.

Measure: distance_bin shift vs prior baseline, zone_table_version/CRC change, cal_id, drift_alarm count, zone accuracy at boundary points.
Q7. Strong stimulus is obvious, but nothing is detected—did anti-alias/bandpass filtering remove it?

Yes, this happens when the analog anti-alias filter and the DSP bandpass are tuned for a different disturbance spectrum. Confirm by exporting a short raw or lightly filtered trace and comparing energy before and after the filter stages. If the stimulus energy sits outside the configured passband or aliases into a rejected region, adjust sample rate, AAF cutoff, and DSP bands together. Avoid “threshold-only” tuning without spectrum evidence. See H2-5 and H2-6.

Measure: sample_rate, AAF configuration, bandpass settings, spectral peak location, aliasing markers, feature energy retained after filters.
Q8. Night-time false alarms rise—animals/wind or power noise? Which two evidences first?

Use two fast discriminators: (1) the event signature (feature_id, spectrum/energy shape, duration) to separate wind/animals from “electrical” patterns, and (2) power/health evidence (vin_mv trends, UVLO/reset_reason bursts) to confirm rail-induced artifacts. If power evidence is clean, tune immunity by class and check zone clustering; if power evidence is noisy, fix grounding, surge paths, and analog rail isolation first. See H2-8 and H2-10.

Measure: feature_id + SNR distribution, vin_mv + reset_reason/UVLO events, link_state flaps, zone clustering index.
Q9. One fixed location keeps alarming—is it a reflection/connector issue? How to verify?

A persistent “hot zone” usually means either a localized optical discontinuity (connector/splice reflection or micro-bend loss) or a mechanical coupling hotspot. Verify by correlating the zone with optical health: look for step changes in optical power, shifts in reflection-related indicators, or repeated SNR patterns unique to that segment. If optics look stable, inspect mounting, tension, and nearby vibration sources that repeatedly excite that location. See H2-3 and H2-8.

Measure: optical_power step/trend, zone clustering persistence, repeated feature_id patterns, link error counters during alarms.
Q10. Events are missing after a network outage—what’s the minimal buffer/retry closed loop?

Implement a monotonic packet_seq, a durable event queue, and explicit drop accounting. During outage, buffer_fill_pct should rise predictably; after recovery, replay must preserve sequence ordering, and duplicates must be detectable for de-duplication upstream. If events are dropped, drop_cnt must explain how many and why (buffer full, policy limits). This minimal loop makes outages auditable without requiring VMS platform logic. See H2-9.

Measure: packet_seq monotonicity, buffer_fill_pct, retry_count, drop_cnt, link_up/down time, replay_count (if logged).
Q11. After a surge, the device “works” but sensitivity is worse—check which chain first?

Start at the photonic AFE and its rails: surges often leave subtle damage or elevated noise that reduces SNR without killing the system. Re-run a known stimulus at a known point and compare SNR, noise floor, and clipping/saturation counters versus baseline. If AFE noise has risen, inspect TVS paths, ground return integrity, and low-noise LDO health before adjusting DSP thresholds. Sensitivity loss that correlates with vin ripple or reset bursts points to power domain injury. See H2-10 and H2-4.

Measure: TIA noise floor, saturation/clipping count, SNR at fixed stimulus, vin ripple trend, surge/ESD counters (if available).
Q12. How do I do a field health check with minimal instruments?

Use logs first, then only two measurements. From logs, confirm link stability (link_state flaps), uplink health (retry_count, buffer_fill), and event quality (SNR/confidence distribution) under a simple tap test. Then measure (1) input rail stability (vin_mv trend, UVLO/reset_reason) and (2) optics health if available (optical power monitor). If these are stable, focus on zone mapping/version integrity; if not, fix grounding, protection, and rail isolation before any algorithm tuning. See H2-11 and H2-10.

Measure: link_state + counters, retry_count/buffer_fill_pct, SNR/confidence stats, vin_mv + reset_reason, optical_power (if present).