Core idea: Fiber-optic perimeter sensing turns tiny fence vibrations into reliable, zone-tagged security events by controlling the full chain—optical power budget, photonic AFE noise/linearity, sampling + DSP, and robust Ethernet uplink. The goal is not “more sensitivity,” but repeatable detection with low false alarms, traceable zone mapping, and survivability in outdoor surge/EMI and network outages.
H2-1 • Definition & Boundary
One-sentence boundary: this page covers the end-to-end engineering chain of
fiber-optic (FO) perimeter sensing—from optical path to photonic AFE to
ADC/TDC sampling to edge event generation and Ethernet uplink.
Everything is written to be verifiable by field evidence (event logs, counters, and observable metrics).
What FO fence sensing solves (engineering outcomes):
Long distance coverage with minimal active electronics along the fence (central interrogator + passive fiber).
EMI immunity along the sensing line (fiber as the sensing medium), while keeping electronics measurable and protectable at the edge.
Operational traceability: each alarm can carry the “why” in metrics (SNR, confidence, health status) rather than a black-box trigger.
Not in scope (kept out on purpose):
Camera recognition/analytics, video pipeline tuning, NVR/VMS platform architecture, recording compliance.
Access control reader/panel design, building-wide intercom topology, door/relay linkage logic.
Network-wide timing architecture (PTP grandmaster, TSN system design) beyond local timestamp fields needed for events.
Evidence chain (everything ultimately lands here): FO perimeter sensing must produce
events and health telemetry that are easy to audit. A practical design starts by defining the
minimum event schema below—so validation and field debug have concrete “first checks”.
| Field | Meaning | Typical "first check" use |
| --- | --- | --- |
| zone_id | Zone index mapped from distance/time bins (the operator-facing location). | "Alarm is in wrong place" → verify zone map version and bin alignment. |
| start_ts / end_ts | Event time window; enables de-duplication and correlation with other systems. | "Duplicate or overlapping alarms" → check event windows against de-dup/cooldown logic. |
| snr_db | Signal-to-noise estimate at the decision point (not raw optical power). | "Detection drops with distance" → compare SNR trend vs optical power vs AFE noise. |
| confidence | Decision confidence based on features/thresholds; should be reproducible. | "False alarms" → see confidence distribution and which feature dominated. |
| link_status | Ethernet link up/down + error counters snapshot at event time. | "Missing events" → distinguish sensing failure from uplink loss/buffer overflow. |
| optical_power_dbm (optional) | Monitored optical Tx/Rx level or backscatter baseline proxy (implementation-dependent). | "Sudden sensitivity change" → detect connector/splice loss or reflection changes. |
| pipeline_version / zone_map_version | Traceability for firmware + calibration/mapping used to generate the event. | "Behavior changed after service" → confirm versions before chasing hardware. |
Note: this page does not prescribe a specific protocol or cloud backend. The focus is defining measurable
fields and designing the chain that makes them stable.
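As a concrete starting point, the sketch below assembles one such event record at the edge. Field names follow the table above; the dataclass layout, defaults, and JSON encoding are illustrative assumptions, not a prescribed wire format.

```python
# Minimal sketch of the event record described above (assumed layout/defaults;
# not a prescribed protocol or wire format).
from dataclasses import dataclass, asdict
from typing import Optional
import json, time

@dataclass
class FenceEvent:
    zone_id: int                  # operator-facing zone mapped from distance/time bins
    start_ts: float               # event window start (epoch seconds)
    end_ts: float                 # event window end
    snr_db: float                 # SNR at the decision point, not raw optical power
    confidence: float             # 0..1, reproducible from logged features
    link_status: str              # e.g. "up"/"down"; error counters snapshot logged separately
    pipeline_version: str         # firmware/DSP pipeline traceability
    zone_map_version: str         # calibration/mapping traceability
    optical_power_dbm: Optional[float] = None  # optional monitor/baseline proxy

def to_wire(evt: FenceEvent) -> bytes:
    """Serialize to JSON; the real uplink encoding is implementation-specific."""
    return json.dumps(asdict(evt)).encode()

if __name__ == "__main__":
    evt = FenceEvent(zone_id=12, start_ts=time.time(), end_ts=time.time() + 0.8,
                     snr_db=14.2, confidence=0.87, link_status="up",
                     pipeline_version="dsp-1.4.2", zone_map_version="zmap-7")
    print(to_wire(evt))
```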
Figure F1. FO fence perimeter sensing is best described as a measurable chain:
optical path → photonic AFE → sampling/ToF → edge event generation → Ethernet uplink.
Keeping a minimum event schema makes validation and field debug repeatable.
This section is not a physics lecture. It captures the two implementation routes an engineer
actually chooses between, and shows how each route shifts the dominant bottleneck and the
first evidence fields to check in validation and field debug.
Route A — Interferometry (phase-sensitive): highly sensitive to tiny strain/vibration; the system is
often phase-noise limited. Engineering focus becomes phase stability, AFE linearity, and drift control.
Dominant risk: phase noise / drift masquerading as intrusion, or eroding SNR over time.
Route B — Rayleigh distributed (DAS/DVS concept): segmentation by time windows / distance bins;
localization is tied to sampling strategy and ToF/bin alignment. Engineering focus becomes windowing, timing, and zone mapping robustness.
| Aspect | Route A — Interferometry | Route B — Rayleigh distributed |
| --- | --- | --- |
| Photonic AFE | Low-noise, wideband, low-distortion photonic AFE; saturation/AGC artifacts are critical. | Photonic AFE still matters, but bin consistency and dynamic baseline handling dominate field behavior. |
| Sampling strategy | Continuous sampling supports phase demod and feature extraction. | Windowed sampling drives range resolution and zone granularity; aliasing must be controlled. |
| Common false-alarm sources | Thermal/mechanical drift, phase instability, environmental coupling that resembles intrusion signatures. | Connector/splice reflections, micro-bend loss changes, bin/zone mis-mapping, windowing artifacts. |
| "First two fields" to check | snr_db + AFE state (saturation/AGC) at event time. | Window/bin consistency + zone_map_version (plus optical baseline proxy if available). |
Practical rule: choose the route by what must stay stable in the field. If stability is dominated by
phase drift, design for phase-noise control and AFE linearity. If stability is dominated by range/bin alignment,
design for sampling/window integrity and robust zone mapping.
Figure F2. Two routes share the same “blocks” but fail differently in the field.
Interferometry is commonly phase-noise limited; Rayleigh distributed sensing is commonly window/bin limited.
The decision should be based on which stability axis can be controlled and verified.
Goal: break the optical module into purchasable, tunable, and
measurable blocks. A stable optical front-end is not “nice to have”:
it defines the backscatter baseline and determines whether the downstream AFE and DSP can ever deliver consistent
event SNR and false-alarm immunity.
Block A — Laser & Modulation (engineering meaning):
Direct modulation favors simplicity, but couples driver noise into intensity/frequency behavior that may appear as event-like features downstream.
External modulation (EOM/AOM) separates “how to drive” from “how to sense”, enabling better control of modulation depth and spectral behavior, at the cost of insertion loss and added control points.
Tunable knobs to keep explicit: Tx power setpoint, modulation rate/duty, mod depth/bias (if external), warm-up/thermal stabilization.
Engineering check: modulation should improve distance-profile visibility without raising the noise floor or creating periodic artifacts in the event spectrum.
Block B — Coupling / Circulator / Split (return-path control):
Return-path separation is the main purpose: prevent direct leakage and strong reflections from dominating the receiver.
Isolation limits set how often the receiver sees “fixed hotspots” (strong, distance-locked peaks) that can saturate AFE stages and inflate false alarms.
Reflection sensitivity increases with connectors and splices; their position and reflection strength must be treated as measurable state, not a mystery.
Engineering check: a healthy return path shows a predictable baseline vs distance, without dominant fixed peaks that persist across environments.
Block C — Optical power budget (why distance fails first):
Budget components: Tx power, fiber attenuation over length, component insertion loss (couplers/circulators/connectors), and the usable backscatter baseline.
Design target: keep enough return margin so that the photonic AFE noise floor does not dominate event SNR at the far end.
Field reality: micro-bends, wet connectors, or a single splice issue can create step-loss or new hotspots that shift the entire SNR distribution.
Engineering check: compare baseline distance profile and optical monitor readings against commissioning baselines (trend matters more than absolute numbers).
Typical symptoms to watch: baseline wobble that tracks the environment, and event-rate changes that track baseline loss.
Practical field order: (1) optical monitor trend → (2) distance profile baseline → (3) fixed hotspot positions → (4) loss delta vs commissioning.
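A minimal sketch of steps (2)–(4) is shown below: it compares the current distance-profile baseline against the commissioning baseline to flag step-loss and new fixed hotspots. Array shapes and thresholds are illustrative assumptions.

```python
# Sketch: compare the current distance-profile baseline against commissioning
# to flag step-loss and new fixed hotspots. Thresholds are illustrative assumptions.
import numpy as np

def loss_delta_report(commissioning_db, current_db, step_db=1.0, hotspot_db=6.0):
    """Both inputs: per-bin baseline levels in dB (same length / same bin map)."""
    commissioning_db = np.asarray(commissioning_db, dtype=float)
    current_db = np.asarray(current_db, dtype=float)
    delta = current_db - commissioning_db

    # Step-loss: a sustained drop beyond `step_db` starting at some bin.
    drop = np.where(delta < -step_db)[0]
    step_bin = int(drop[0]) if drop.size else None

    # New hotspots: bins sitting far above the local median (fixed reflections).
    local_med = np.median(current_db)
    hotspots = np.where(current_db - local_med > hotspot_db)[0].tolist()

    return {"step_loss_start_bin": step_bin,
            "mean_delta_db": float(delta.mean()),
            "new_hotspot_bins": hotspots}

# Example: a 2 dB step after bin 300 and a new reflection spike at bin 120.
base = np.full(1000, -30.0)
cur = base.copy(); cur[300:] -= 2.0; cur[120] += 10.0
print(loss_delta_report(base, cur))
```

The trend against commissioning matters more than the absolute numbers, matching the engineering check above.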
Figure F3. The optical chain must be treated as a measurable state machine:
connectors create reflection peaks, splices can create step-loss, and micro-bends can raise attenuation.
These effects reshape the distance profile and propagate into event SNR and false alarms.
Why this is a depth core: many “cannot detect” and “false alarm” issues are not algorithmic.
They are caused by noise floor, nonlinearity, or gain-control behavior inside the photonic AFE.
A practical AFE write-up must always connect mechanisms to measurable evidence: noise floor, saturation count,
AGC state, and spectrum baseline.
A. Photodiode & Balanced Detection (engineering payoff):
Balanced receiver helps reject common-mode intensity noise and improves stability of the usable baseline.
Failure pattern: strong fixed reflections can unbalance the receiver and push one path into saturation, creating event-like artifacts.
Evidence: rising pd_saturation_cnt, noisy snr_db distribution, and distance-locked hotspots correlated with false alarms.
Design requirement: preserve linearity for strong returns while keeping enough sensitivity for far zones.
B. TIA focus (what decides “detectable”):
Input capacitance & stability affect usable bandwidth and noise behavior at the most sensitive node.
Bandwidth choice is a trade: too low distorts features; too high admits unnecessary noise and EMI coupling.
Supply isolation matters: rail ripple can appear as a raised spectrum floor and directly reduce event SNR.
Evidence: measured output noise (RMS or spectral), clipping markers, and correlation between rail noise and spectrum baseline.
Engineering rule: far-zone performance is usually limited by the TIA noise floor, not the DSP model.
C. VGA / AGC (when it helps, when it breaks repeatability):
When needed: large return dynamic range across distance bins, or baseline shift after installation changes.
Hidden risk: AGC hysteresis and gain hunting can make the same disturbance look different over time, breaking threshold-based immunity.
Evidence: agc_state oscillation, event latency drift, and a widened confidence distribution without a matching SNR improvement.
Practical mitigation: distance-aware gain profiles and cooldown logic often outperform aggressive global AGC.
Evidence checklist (what to log so issues are diagnosable):
noise_floor (spectrum baseline proxy), snr_db at decision point, and far-zone SNR distribution over time
pd_saturation_cnt / clipping markers near strong reflections or hotspots
agc_state or gain code timeline to correlate with false-alarm clusters
Optional: rail ripple snapshot and AFE temperature when drift is suspected
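The snippet below is a minimal sketch of how these logged counters can be correlated with false-alarm clusters per time window; the window granularity and the simple Pearson test are illustrative assumptions.

```python
# Sketch: correlate AFE health counters with false-alarm counts over fixed time windows.
# Field names follow the checklist above; the correlation test is an illustrative choice.
from statistics import correlation  # Python 3.10+

def afe_false_alarm_correlation(windows):
    """
    windows: list of dicts, one per fixed time window, e.g.
      {"noise_floor": -72.1, "pd_saturation_cnt": 3, "agc_changes": 5, "false_alarms": 2}
    Returns the Pearson correlation of each AFE metric against the false-alarm count.
    """
    fa = [w["false_alarms"] for w in windows]
    report = {}
    for key in ("noise_floor", "pd_saturation_cnt", "agc_changes"):
        series = [w[key] for w in windows]
        report[key] = correlation(series, fa) if len(set(series)) > 1 else 0.0
    return report

# Example: saturation count tracks false alarms, the noise floor does not.
log = [
    {"noise_floor": -72, "pd_saturation_cnt": 0, "agc_changes": 1, "false_alarms": 0},
    {"noise_floor": -71, "pd_saturation_cnt": 4, "agc_changes": 6, "false_alarms": 3},
    {"noise_floor": -72, "pd_saturation_cnt": 1, "agc_changes": 2, "false_alarms": 1},
    {"noise_floor": -71, "pd_saturation_cnt": 5, "agc_changes": 7, "false_alarms": 4},
]
print(afe_false_alarm_correlation(log))
```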
Figure F4. Photonic AFE behavior explains most field failures:
a raised noise floor kills far-zone detectability, saturation creates false features, and AGC hysteresis breaks repeatability.
Logging noise_floor, saturation_cnt, and agc_state makes root-cause diagnosis fast.
Why sampling decides everything: zoning accuracy, maximum detectable vibration bandwidth, and many
false alarms are sampling artifacts. A correct strategy ties time reference, windowing,
binning, and filters into a measurable chain: sample_rate,
timestamp_jitter, window_alignment_error, and aliasing_marker.
A. TDC for ToF windowing (Rayleigh / distributed routes):
Role: establish a stable time reference so distance bins map consistently to physical zones.
Failure pattern: time jitter becomes bin-boundary jitter → zone flicker near boundaries.
Evidence: rising timestamp_jitter, increased window_alignment_error,
and boundary zones showing unstable event clustering.
Engineering requirement: keep window alignment stable enough that a stationary reflector does not “move” across bins.
B. ADC for continuous sampling (interferometric routes):
Role: continuous waveform capture to support phase demod and spectral features.
Sampling reality: the usable feature bandwidth is bounded by the effective sampling rate and anti-aliasing chain.
Evidence: sample_rate (effective), adc_clip_cnt, and a stable spectral baseline
(noise floor and peak behavior over time).
Engineering requirement: preserve far-zone micro-features without being dominated by near-zone hotspots and clipping.
C. Anti-aliasing is an event-quality requirement:
Chain: analog AAF limits out-of-band energy → digital filtering selects the band of interest → sampling rate sets the boundary.
False-alarm mechanism: out-of-band energy folds into the decision band when aliasing is present.
Evidence: aliasing_marker (mirror-peak signatures or folded-energy score),
filter profile ID/version, and event-rate sensitivity to sampling changes.
Engineering requirement: any “new false alarm burst” must be explainable by a measurable aliasing or alignment marker.
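One way to make such a marker measurable is sketched below: a folded-energy score that compares decision-band energy with and without a crude anti-alias step before decimation. The rates, tone frequencies, and flag threshold are illustrative assumptions.

```python
# Sketch: a "folded-energy" aliasing marker. We compare decision-band energy with
# and without a crude anti-alias step before decimation; a large ratio flags aliasing.
import numpy as np

def band_energy(x, fs, f_lo, f_hi):
    """Energy of x in [f_lo, f_hi] Hz from an FFT power spectrum."""
    spec = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(len(x), d=1.0/fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spec[mask].sum())

def aliasing_marker(x, fs, decim, band=(50.0, 200.0), ratio_limit=3.0):
    """
    Naive downsample (no AAF) vs boxcar-averaged downsample (crude AAF).
    If decision-band energy grows strongly without the AAF, flag folded energy.
    """
    naive = x[::decim]                                                # aliasing-prone path
    crude = x[:len(x)//decim*decim].reshape(-1, decim).mean(axis=1)   # boxcar AAF + decimate
    fs_d = fs / decim
    e_naive = band_energy(naive, fs_d, *band)
    e_filt = band_energy(crude, fs_d, *band)
    ratio = e_naive / max(e_filt, 1e-12)
    return {"folded_energy_ratio": ratio, "aliasing_flag": ratio > ratio_limit}

# Example: a 900 Hz out-of-band tone folds to 100 Hz inside the 50–200 Hz decision
# band when decimating 8 kHz → 1 kHz without an anti-alias filter.
fs = 8000.0
t = np.arange(0, 2.0, 1.0/fs)
x = 0.2*np.sin(2*np.pi*120*t) + 1.0*np.sin(2*np.pi*900*t)
print(aliasing_marker(x, fs, decim=8))
```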
Sampling decision matrix (goal → what to tune → what it costs → what to verify):

| Goal | What to tune | Trade-off / cost | Evidence to verify |
| --- | --- | --- | --- |
| Finer zone accuracy | Window width, bin size, zone map granularity, TDC stability | More bins → more compute/memory; tighter windows → more alignment sensitivity | window_alignment_error ↓, boundary-zone flicker ↓, stable hotspot bin index |
| Higher detectable frequency | Effective sample_rate, AAF corner, digital band selection | Less bandwidth and less dynamic range; risk of missed far-zone events | Far-zone snr_db distribution remains acceptable; miss-rate does not spike |
Figure F5. Sampling and time reference define window alignment; windowing defines bin boundaries; binning defines zone stability.
Zone “jumping” near boundaries is often a measurable window_alignment_error + timestamp_jitter problem.
Cite this figure: Sampling → Distance Bins → Zones (F5).
Suggested caption: “Time-window alignment and distance-bin mapping determine zone accuracy and boundary stability.”
Intent: describe the pipeline as engineering blocks, not academic papers.
The output must be an auditable event packet so field failures can be traced to
versions, filters, and evidence fields.
Pipeline summary (what each stage must guarantee):
Preprocess: remove DC / drift so thresholds remain stable across environment and installation changes.
Filter: isolate the band of interest; prevent out-of-band energy from dominating features.
Feature: extract interpretable metrics (energy, spectral peak, duration, phase jump) that can be logged and audited.
Decision: apply thresholds/classifiers plus cooldown/dedup so one physical disturbance does not become an event storm.
Packet: produce a stable schema with IDs and versioning for traceability.
Engineering rule: every field-debug conversation should end at “which stage changed the evidence” rather than “the model feels wrong”.
This schema keeps field debugging bounded: any anomaly must map to a stage and a versioned configuration.
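A minimal sketch of the five stages as auditable blocks follows; the filter construction, feature set, thresholds, and cooldown value are illustrative assumptions rather than a reference implementation.

```python
# Sketch of the five stages above as small, auditable blocks. Filter design, feature
# choices, thresholds, and the cooldown value are illustrative assumptions.
import numpy as np

def preprocess(x):
    """Remove DC and slow drift with a simple linear detrend."""
    return x - np.linspace(x[:100].mean(), x[-100:].mean(), len(x))

def band_select(x, fs, f_lo=5.0, f_hi=200.0):
    """Crude FFT-mask bandpass; a real design would use a fixed, versioned filter."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0/fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def features(x, fs):
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1.0/fs)
    return {"energy": float(np.sum(x**2)),
            "peak_hz": float(freqs[int(np.argmax(spec))]),
            "duration_s": len(x)/fs}

def decide(feat, energy_thr=5.0, cooldown_s=2.0, last_event_ts=-1e9, now=0.0):
    """Threshold + cooldown so one disturbance does not become an event storm."""
    if feat["energy"] < energy_thr or (now - last_event_ts) < cooldown_s:
        return None
    return {"confidence": min(1.0, feat["energy"]/(5*energy_thr)), **feat}

def packetize(decision, zone_id, pipeline_version="dsp-1.4.2", zone_map_version="zmap-7"):
    return {"zone_id": zone_id, "pipeline_version": pipeline_version,
            "zone_map_version": zone_map_version, **decision}

# Example run on a synthetic burst inside one zone's window.
fs = 1000.0
t = np.arange(0, 1.0, 1/fs)
x = 0.05*np.random.randn(len(t)) + np.where((t > 0.4) & (t < 0.6), np.sin(2*np.pi*60*t), 0.0)
feat = features(band_select(preprocess(x), fs), fs)
evt = decide(feat, now=123.0)
print(packetize(evt, zone_id=12) if evt else "no event")
```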
Figure F6. A practical pipeline is a set of verifiable blocks.
Stable preprocessing and filtering protect threshold consistency; interpretable features enable audits; cooldown/dedup prevents event storms.
A versioned event packet makes field debugging bounded and repeatable.
Cite this figure: DSP Pipeline to Versioned Event Packet (F6).
Suggested caption: “Engineering DSP blocks from raw waveform to event packet with cooldown/dedup and traceable fields.”
H2-7 • Zone Mapping & Calibration (Distance Bins → Fence Segments)
Intent: turn distance-domain bins into a real-world fence map (posts, corners, gates, and segment chainage),
and keep the mapping traceable. A practical system treats the zone table as a protected asset:
it must be versioned, validated, and rollback-capable.
A. Commissioning calibration (build the first zone table):
Inputs: known physical markers (post IDs / chainage), plus a controllable excitation point (tap/pull/shaker).
Procedure: locate the excitation peak on the distance axis → choose a stable bin range → assign zone_id → bind to physical_marker.
Acceptance: repeated excitations at the same marker must land on the same bin range and the same zone_id.
Practical rule: zone boundaries should avoid “boundary-jitter” regions where window alignment error can move bins across a fence segment boundary.
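A minimal sketch of this commissioning step, assuming repeated tap profiles per marker and illustrative margin/repeatability rules:

```python
# Sketch of the commissioning step in A: locate the excitation peak on the distance
# axis, require it to repeat, and bind a bin range to a zone and physical marker.
import numpy as np

def locate_excitation_bin(profiles):
    """profiles: list of per-bin response arrays from repeated taps at one marker."""
    peaks = [int(np.argmax(p)) for p in profiles]
    return {"peak_bins": peaks, "spread_bins": max(peaks) - min(peaks)}

def build_zone_entry(profiles, zone_id, physical_marker, margin_bins=3, max_spread=2):
    loc = locate_excitation_bin(profiles)
    if loc["spread_bins"] > max_spread:
        raise ValueError(f"Excitation not repeatable: spread={loc['spread_bins']} bins")
    center = int(round(np.mean(loc["peak_bins"])))
    return {"zone_id": zone_id,
            "bin_lo": center - margin_bins,
            "bin_hi": center + margin_bins,
            "physical_marker": physical_marker}

# Example: three repeated taps at post P-17 land on the same bin.
rng = np.random.default_rng(0)
reps = []
for _ in range(3):
    p = rng.normal(0, 0.1, 1000)
    p[382] += 5.0          # excitation response near bin 382
    reps.append(p)
print(build_zone_entry(reps, zone_id=12, physical_marker="post P-17"))
```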
B. Drift monitoring (detect before alarms explode):
Causes: temperature, tension, support deformation, cable state changes.
Monitors: track stable reference peaks/hotspots and baseline statistics per zone; compute a drift_score.
Action: when drift exceeds threshold, raise a drift alert and require a targeted verification or recalibration.
Drift is not a feeling. It must be a number that correlates with zone stability and far-zone SNR changes.
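A sketch of one possible drift_score, assuming tracked reference-peak bin positions and an illustrative normalization and alert threshold:

```python
# Sketch of a per-zone drift_score: how far stable reference peaks/hotspots have moved
# from their commissioning bin positions, normalized by zone width.
import numpy as np

def drift_score(ref_bins_commissioning, ref_bins_now, zone_width_bins=6):
    """Both inputs: bin indices of the same stable reference peaks/hotspots."""
    shifts = np.abs(np.asarray(ref_bins_now) - np.asarray(ref_bins_commissioning))
    score = float(shifts.max() / zone_width_bins)   # 1.0 ≈ a full zone width of drift
    return {"max_shift_bins": int(shifts.max()),
            "mean_shift_bins": float(shifts.mean()),
            "drift_score": score,
            "drift_alert": score > 0.5}

# Example: one reference hotspot has moved 4 bins since commissioning.
print(drift_score([120, 382, 760], [121, 386, 760]))
```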
C. Recalibration & self-check (minimum effort, controlled change):
Quick verify: test only key markers (corners/gates) and confirm the bin-to-zone mapping still holds.
Full recalibration: rebuild the table when hardware/connector changes, step-loss events, or wide drift is observed.
Evidence: keep counts of recalibration attempts and whether each attempt improved drift and boundary stability.
A healthy deployment can prove “mapping confidence” without redoing the entire perimeter every time.
D. Configuration vs calibration boundary (protect the mapping asset):
The uplink event should reference zone_id only; the table provides the auditable mapping to physical markers and segment chainage.
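A minimal sketch of a versioned, checksummed zone table in the spirit of Figure F7; the schema and hash choice are illustrative assumptions, not a mandated format.

```python
# Sketch: seal the zone table with a version and a digest so silent corruption is
# detectable and rollback is possible. Schema and hash choice are assumptions.
import hashlib, json

def seal_zone_table(entries, version):
    """entries: list of {"zone_id", "bin_lo", "bin_hi", "physical_marker"} dicts."""
    body = {"zone_table_version": version,
            "entries": sorted(entries, key=lambda e: e["zone_id"])}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "zone_table_crc": digest[:8]}

def verify_zone_table(table):
    """Recompute the digest and compare before using the mapping."""
    body = {"zone_table_version": table["zone_table_version"], "entries": table["entries"]}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest[:8] == table["zone_table_crc"]

tbl = seal_zone_table([{"zone_id": 12, "bin_lo": 379, "bin_hi": 385,
                        "physical_marker": "post P-17"}], version=7)
print(verify_zone_table(tbl))          # True
tbl["entries"][0]["bin_hi"] = 400      # silent edit
print(verify_zone_table(tbl))          # False → reject and roll back
```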
Figure F7. A practical deployment needs a protected mapping asset: distance bins map to zone IDs, which map to physical markers (posts/corners/gates).
Versioning and CRC/hash prevent silent table corruption and enable rollback.
Cite this figure: Zone Table Schema (F7).
Suggested caption: “Distance-bin to zone-ID mapping with physical markers and versioned validation for traceability.”
Intent: replace “mystery alarms” with a measurable taxonomy.
Most false alarms fall into one of three buckets: Environment, Optical path changes,
or Electronics coupling. Each bucket has distinct evidence signals and a different first fix.
A. Environment (wind, rain, traffic, construction, animals):
Level 4 — Govern: protect calibration writes (zone table) and keep rollback anchors to prevent accidental mis-mapping.
Engineering rule: do not “fix false alarms” by editing the zone table unless evidence proves a mapping drift/corruption.
Figure F8. A measurable classification tree: Environment, Optical path, and Electronics coupling produce distinct evidence signals.
Selecting the right evidence path prevents “random tuning” and reduces false alarms with minimum changes.
Cite this figure: False Alarm Classification Tree (F8).
Suggested caption: “Evidence-first taxonomy for false alarms with bucket-specific indicators and first-fix actions.”
Intent: define how an edge node delivers events reliably over Ethernet, without drifting into VMS platform architecture.
The engineering goal is simple: no silent loss, dedup-able retries, and auditable link health.
A. Event contract (fields + semantics):
Dedup key: device_id + boot_id + packet_seq (stable across retries).
Time: event_ts (UTC or local+offset), optionally mono_counter to detect resets.
A deployment that can’t report these fields cannot reliably distinguish true silence from transport loss.
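A minimal sketch of the dedup key plus a bounded two-tier buffer/retry loop follows; queue depths, retry limits, and the send() stub are illustrative assumptions, not a specific transport.

```python
# Sketch of the event contract's dedup key and a bounded retry/journal loop (Figure F9).
# Queue sizes, retry limits, and send() are illustrative assumptions.
from collections import deque

class UplinkQueue:
    def __init__(self, device_id, boot_id, ram_depth=256):
        self.device_id, self.boot_id = device_id, boot_id
        self.seq = 0
        self.ram = deque(maxlen=ram_depth)   # tier 1: RAM queue
        self.journal = []                    # tier 2: stand-in for the NVM journal
        self.drop_cnt = 0

    def enqueue(self, event):
        self.seq += 1
        event["packet_seq"] = self.seq
        event["dedup_key"] = f"{self.device_id}:{self.boot_id}:{self.seq}"
        if len(self.ram) == self.ram.maxlen:
            self.drop_cnt += 1               # account for loss instead of losing silently
        self.ram.append(event)
        self.journal.append(event)           # survives power loss in a real design

    def flush(self, send, max_retries=3):
        """send(event) -> bool ACK; retries are bounded and dedup-able upstream."""
        while self.ram:
            evt = self.ram[0]
            for _ in range(max_retries):
                if send(evt):
                    self.ram.popleft()
                    break
            else:
                return False                 # keep buffering; link still down
        return True

q = UplinkQueue(device_id="edge-07", boot_id="b3f2")
q.enqueue({"zone_id": 12, "snr_db": 14.2})
print(q.flush(send=lambda e: True), q.drop_cnt)
```

The design intent matches the contract above: nothing is lost silently, retries carry the same dedup key, and drop_cnt explains any gap.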
Figure F9. Edge node uplink path with two-tier buffering (RAM queue + NVM journal), bounded retry/ACK,
and a heartbeat channel that makes offline intervals and buffer pressure measurable.
Cite this figure: Edge Uplink & Buffering (F9).
Suggested caption: “Event delivery with stable dedup keys, offline journaling, and auditable link/buffer counters.”
Intent: outdoor security nodes fail in repeatable ways: surge, ESD, lightning bypass currents, wet cables,
and power-domain coupling. This chapter turns “random resets and false alarms” into a measurable power/protection checklist
tied to evidence fields.
A. Power tree & domain isolation (analog vs digital):
Domains: photonic AFE analog rails, ADC/DSP digital rails, Ethernet PHY rail, outdoor I/O rail.
Goal: prevent digital/PHY transients from raising AFE noise floor or triggering saturation.
Evidence: correlate noise_floor and false-alarm bursts with rail dips or reset causes.
If the analog domain cannot be proven quiet, “algorithm tuning” will not stabilize false alarms.
B. Protection by interface (do not treat all ports the same):
Even if some fields are approximated (no direct sensor), the system must expose consistent counters and timestamps.
Figure F10. Outdoor interfaces must be treated as distinct energy paths: each port needs matching protection,
domain-isolated rails keep the photonic AFE quiet, and health monitors expose UVLO/reset/link/noise evidence for field debugging.
Cite this figure: Interface Protection Overview (F10).
Suggested caption: “Outdoor protection and domain isolation map with minimal health monitors for auditability.”
H2-11 • Validation Plan
This chapter defines a repeatable test matrix that proves the FO perimeter chain works end-to-end
(optics → photonic AFE → sampling → DSP → event packet → Ethernet uplink) with measurable PD/FAR/latency,
stable zone mapping, drift visibility, and offline survival—without expanding into VMS/NVR platform design.
11.0 What “Pass” Means (freeze acceptance metrics first)
Validation is only comparable across sites and firmware versions when acceptance metrics are frozen and
computed from the same event log contract. Use the targets below as placeholders and set final thresholds
per fence type, mounting method, and risk class.
Zone accuracy & boundary stability: correct zone rate at center points; boundary error rate to adjacent zones; zone jitter at repeated hits.
PD/FAR under real disturbances: PD vs distance/strength; FAR split by environment class (wind/rain/traffic/works) and by zone clustering.
Latency & ordering: latency distribution (p50/p95) from stimulus to event timestamp; monotonically increasing packet_seq.
Drift & re-calibration effectiveness: zone drift across temperature/tension change; re-calibration reduces drift below threshold; version traceability.
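A sketch of how these acceptance metrics can be computed from the exported event log and a ground-truth stimulus list; the pairing window and data shapes are illustrative assumptions, and field names follow the contract in 11.1.

```python
# Sketch: compute PD, FAR, and latency percentiles from an exported event log and a
# ground-truth stimulus list. Pairing window and class handling are assumptions.
import numpy as np

def acceptance_metrics(events, stimuli, run_hours, pair_window_s=2.0):
    """
    events:  list of {"event_ts", "zone_id", ...} from the device log
    stimuli: list of {"ts", "zone_id"} ground-truth hits (e.g. calibrated taps)
    """
    detected, latencies, matched = 0, [], set()
    for s in stimuli:
        hits = [i for i, e in enumerate(events)
                if i not in matched and e["zone_id"] == s["zone_id"]
                and 0.0 <= e["event_ts"] - s["ts"] <= pair_window_s]
        if hits:
            detected += 1
            matched.add(hits[0])
            latencies.append(events[hits[0]]["event_ts"] - s["ts"])
    false_alarms = len(events) - len(matched)
    return {"PD": detected / max(len(stimuli), 1),
            "FAR_per_hour": false_alarms / max(run_hours, 1e-9),
            "latency_p50_s": float(np.percentile(latencies, 50)) if latencies else None,
            "latency_p95_s": float(np.percentile(latencies, 95)) if latencies else None}

stims = [{"ts": 10.0, "zone_id": 12}, {"ts": 40.0, "zone_id": 13}]
evts = [{"event_ts": 10.4, "zone_id": 12}, {"event_ts": 55.0, "zone_id": 3}]
print(acceptance_metrics(evts, stims, run_hours=1.0))
```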
11.1 Validation Data Contract (event log schema required for every test)
Every test case must export the same minimal fields so PD/FAR/latency and drift are computed consistently.
Store them in a local ring buffer (with power-loss protection if needed), and export as CSV/JSON on demand.
| Field | Type / Example | Why it must exist (evidence) |
| --- | --- | --- |
| zone_id, distance_bin | int / “Z12”, bin=382 | Primary localization evidence; enables zone accuracy and clustering diagnostics. |
| start_ts, end_ts, event_ts | µs or ms epoch | Latency computation, de-dup, cooldown validation, and time alignment vs ground truth. |
| snr_db, peak | float | Separates “not detected” vs “detected but low quality”; supports threshold tuning without guesswork. |
| confidence, feature_id | 0..1, int | Explains why an event is accepted/rejected; supports immunity tuning by class signature. |
| algo_version, fw_version | string | Non-negotiable traceability for audit and regression comparisons. |
| zone_table_version, zone_table_crc, cal_id | int/hex/string | Proves which mapping/calibration produced the decision; prevents “mystery changes” in the field. |
| packet_seq, boot_id | uint32 | Ordering + non-repudiation basics; enables replay consistency after link/power loss. |
| retry_count, buffer_fill_pct, drop_cnt | int | Uplink health evidence; proves offline survival window and explains missing events. |
| link_state, link_up_ms, link_down_ms | enum + counters | Correlates FAR spikes with link flaps and cable/PoE issues; supports installation acceptance. |
| vin_mv, temp_c, reset_reason | int/float/enum | Outdoor robustness evidence; separates real intrusion from power/EMI-induced artifacts. |

In addition, log a ground-truth/stimulus reference (however it is named in your tooling): it is required for PD and zone-accuracy computation and makes test runs reproducible across teams.
Tip: if storage is constrained, log full fields for “accepted events” and compact fields for “rejected candidates,” but keep zone/time/SNR/confidence/version.
11.2 Test Matrix Overview (repeatable, comparable, audit-friendly)
The matrix below is intentionally structured as stimulus → exported evidence → computed metrics → pass/fail.
Keep each test case short and repeatable; increase coverage by sweeping distance, temperature, and mounting variants.
Matrix columns: Test ID | Stimulus (ground truth) | Coverage | Metrics (computed) | Evidence fields (must export).
T1 — Zone accuracy. Stimulus: known strike/pull points per zone (center + boundary). Compute zone_accuracy and boundary_confusion. If boundary errors dominate, verify zone-table edges and cooldown/de-dup rules before tuning thresholds.
Procedure P2 — PD vs Distance + Latency Distribution
Select near/mid/far points (e.g., 10%/50%/90% of fiber length).
For each point, sweep stimulus strength levels (light/medium/heavy) and repeat N times.
Procedure — Ethernet Link Drop & Replay (offline survival)
Induce a controlled Ethernet drop while continuing standardized stimuli at a fixed interval.
Export: packet_seq, retry_count, buffer_fill_pct, drop_cnt, link up/down times, boot_id.
Acceptance: after link recovery, events replay in order (monotonic packet_seq), duplicates are detectable, and any missing events are explained by drop_cnt with buffer limits.
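A minimal sketch of this acceptance check, assuming events carry the packet_seq and dedup key defined in H2-9; thresholds and field names are illustrative.

```python
# Sketch of the replay acceptance check after a link drop: monotonic packet_seq,
# duplicates detectable via the dedup key, and missing events explained by drop_cnt.
def check_replay(received, reported_drop_cnt):
    """received: list of events in arrival order, each with 'packet_seq' and 'dedup_key'."""
    seqs = [e["packet_seq"] for e in received]
    unique = sorted(set(seqs))
    duplicates = len(seqs) - len(unique)
    in_order = all(a <= b for a, b in zip(seqs, seqs[1:]))   # retries may repeat, never regress
    missing = (unique[-1] - unique[0] + 1 - len(unique)) if unique else 0
    return {"in_order": in_order,
            "duplicates_detected": duplicates,
            "missing": missing,
            "missing_explained_by_drop_cnt": missing <= reported_drop_cnt}

rx = [{"packet_seq": s, "dedup_key": f"edge-07:b3f2:{s}"} for s in (1, 2, 2, 3, 5)]
print(check_replay(rx, reported_drop_cnt=1))   # seq 4 missing, one duplicate of seq 2
```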
Procedure P5 — Temperature Drift + Re-calibration Effectiveness
At low/room/high temperatures, repeat the same known-point stimuli set (including boundary points).
Acceptance: drift is visible in logs and reduced by re-calibration; mapping version changes are always traceable (version + CRC).
Procedure P6 — Long-run Soak (field realism)
Run continuous operation for 24–168 hours with scheduled spot checks and at least one link/power disturbance cycle.
Track trends: FAR drift, link flap rate, retry_count distribution, temperature/vin excursions, and any reset_reason bursts.
Acceptance: no unexplained FAR spikes; no silent mapping changes; event export remains consistent across firmware builds.
11.4 Example MPNs (reference parts for evidence hooks & outdoor reliability)
The list below is not a mandated BOM; it is a concrete starting point to implement the logging hooks,
time/sequence integrity, Ethernet/PoE robustness, and low-noise rails needed for repeatable validation.
Always re-check power, temperature range, and package availability.
| Function | MPN examples | Why it helps validation |
| --- | --- | --- |
| Time-to-Digital (ToF / windowing) | TDC7200 | Enables precise time stamping / distance-bin mapping when a TDC-based architecture is used. |
| SAR ADC (aux channels / monitors) | ADS8866 | Useful for monitoring analog health channels (e.g., optical power monitor, AFE rails) with consistent logging. |
| PoE PD interface | TPS2373 (and TPS2373-4 variants) | Improves repeatable bring-up and logging of PD input behavior; helps correlate resets with PoE events. |
| Gigabit Ethernet PHY | KSZ9031RNX | Stable link + counters; supports diagnosing FAR spikes caused by link flaps or EMI on cabling. |
| Low-noise LDO (AFE rails) | LT3042 / TPS7A49 | Reduces rail noise coupling into AFE/SNR; makes “noise floor vs distance” curves stable and comparable. |
| I²C isolation (if sensor/RTC domains need isolation) | — | Improves drift correlation and thermal profiling; supports deterministic re-calibration triggers. |
| RTC with battery backup / timestamps | PCF2129 | Strengthens event time integrity during outages and simplifies audit logs. |
| Non-volatile event buffer (fast writes) | FM25V10 (SPI F-RAM) | Supports high-frequency event logging without long program/erase latency; improves power-loss survivability. |
| QSPI firmware/config storage | W25Q128JV | Concrete reference for versioned firmware/config storage; aids traceability across validation runs. |
| Low-cap TVS/ESD array (high-speed lines) | RClamp0524P | Reduces field failures and “mystery resets”; helps keep link stability during EFT/ESD tests. |
Figure F11. A validation closed-loop that forces every test to produce auditable evidence (fields + versions)
and comparable metrics (PD/FAR/latency/drift) across sites and firmware builds.
Cite this figure: “Validation Data Loop for FO Perimeter Sensing (Figure F11)”, ICNavigator — Security & Surveillance › Fence/FO Perimeter Sensing.
(Include firmware/algo versions and zone-table version/CRC used during the run.)
Each answer stays within this page boundary (optics → photonic AFE → sampling → DSP → event packet → Ethernet uplink → outdoor robustness),
and maps back to the evidence fields used in the validation plan.
Q1. Interferometric vs Rayleigh distributed sensing—how do I choose?
Choose interferometric sensing when you need high vibration sensitivity and can control phase noise and stability; choose Rayleigh
distributed (DAS-style) when you need distance-window zoning over long runs with clearer spatial bins. Decide by comparing your
optical budget/backscatter margin and the SNR you can sustain per window at the far end.
See H2-2 and H2-3.
Q2. Events are real, but zones keep drifting—sampling issue or a bad zone table?
If the same physical hit produces a different distance_bin over repeats, suspect sampling/window alignment (timestamp jitter,
aliasing, or window boundaries). If distance bins are stable but zone_id changes, suspect the zone table (version/CRC mismatch,
boundary definitions, or overwritten mapping). Lock zone_table_version/CRC first, then tune sampling.
See H2-5 and H2-7.
Q3. False alarms spike in heavy rain—check environmental spectrum or optical power changes first?
Start with the fastest discriminator: if optical power/backscatter margin shifts with rain (step changes or slow wandering),
treat it as an optics/connector/micro-bend issue; if optical metrics stay flat, treat it as an environment-class signature
problem and inspect spectral features and zone clustering. Rain-driven FAR should be explainable by either optics drift or
repeatable feature patterns.
See H2-8 and H2-3.
Measure: optical_power trend, reflection-point indicator (if available), FAR/hour by class, feature_id + SNR distribution, zone clustering.
Q4. Detection gets weaker with distance—optical budget problem or AFE noise floor?
Plot SNR versus distance. A smooth SNR roll-off that tracks optical loss/backscatter indicates budget margin; a high, flat
noise floor or frequent clipping near the AFE points to photonic front-end limits (TIA noise, linearity, or rail noise).
If near-end SNR is strong but far-end collapses abruptly, inspect optics loss/reflections; if noise floor rises everywhere,
inspect AFE rails and TIA operating region.
See H2-3 and H2-4.
Measure: optical_power/backscatter margin, TIA output noise floor, saturation/clipping counters, SNR vs distance slope.
Q5. Event latency swings wildly—is it the DSP window or the uplink queue?
Split latency into two parts: detection latency (raw-to-decision windowing) and transport latency (queue/retry/buffer).
If start/end timestamps shift with processing windows, tune DSP aggregation, cooldown, or window size. If event timestamps are
stable but delivery time fluctuates, inspect buffer_fill_pct, retry_count, and link_state flaps. Always log both decision time
and packet sequence to avoid guessing.
See H2-6 and H2-9.
Measure: start_ts/end_ts/event_ts, buffer_fill_pct, retry_count, packet_seq monotonicity, link up/down counters.
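A trivial sketch of that split, assuming the stimulus, decision, and delivery timestamps share a common reference; the field names follow the validation contract.

```python
# Sketch for Q5: split end-to-end latency into detection latency (stimulus → event_ts)
# and transport latency (event_ts → delivery time seen upstream).
def split_latency(stimulus_ts, event_ts, delivered_ts):
    detection_s = event_ts - stimulus_ts     # DSP windowing / aggregation / cooldown
    transport_s = delivered_ts - event_ts    # queueing, retries, link flaps
    return {"detection_s": detection_s, "transport_s": transport_s,
            "dominant": "dsp_window" if detection_s > transport_s else "uplink_queue"}

print(split_latency(stimulus_ts=100.00, event_ts=100.35, delivered_ts=102.90))
```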
Q6. After replacing a fiber segment, everything is “off”—which calibrations must be redone?
Rebuild the distance-to-zone mapping whenever fiber length, splices, or connectors change enough to shift time-of-flight bins or
reflection behavior. At minimum, rerun known-point strikes to regenerate the zone table, record a new calibration ID, and freeze
zone_table_version/CRC so field edits cannot silently corrupt mapping. Then re-check drift alarms under temperature/tension swings.
See H2-7.
Measure: distance_bin shift vs prior baseline, zone_table_version/CRC change, cal_id, drift_alarm count, zone accuracy at boundary points.
Q7. Strong stimulus is obvious, but nothing is detected—did anti-alias/bandpass filtering remove it?
Yes, this happens when the analog anti-alias filter and the DSP bandpass are tuned for a different disturbance spectrum.
Confirm by exporting a short raw or lightly filtered trace and comparing energy before and after the filter stages. If the
stimulus energy sits outside the configured passband or aliases into a rejected region, adjust sample rate, AAF cutoff, and
DSP bands together. Avoid “threshold-only” tuning without spectrum evidence.
See H2-5 and H2-6.
Measure: sample_rate, AAF configuration, bandpass settings, spectral peak location, aliasing markers, feature energy retained after filters.
Q8. Night-time false alarms rise—animals/wind or power noise? Which two pieces of evidence should I check first?
Use two fast discriminators: (1) the event signature (feature_id, spectrum/energy shape, duration) to separate wind/animals from
“electrical” patterns, and (2) power/health evidence (vin_mv trends, UVLO/reset_reason bursts) to confirm rail-induced artifacts.
If power evidence is clean, tune immunity by class and check zone clustering; if power evidence is noisy, fix grounding, surge paths,
and analog rail isolation first.
See H2-8 and H2-10.
Q9. One fixed location keeps alarming—is it a reflection/connector issue? How do I verify?
A persistent “hot zone” usually means either a localized optical discontinuity (connector/splice reflection or micro-bend loss)
or a mechanical coupling hotspot. Verify by correlating the zone with optical health: look for step changes in optical power,
shifts in reflection-related indicators, or repeated SNR patterns unique to that segment. If optics look stable, inspect mounting,
tension, and nearby vibration sources that repeatedly excite that location.
See H2-3 and H2-8.
Measure: optical_power step/trend, zone clustering persistence, repeated feature_id patterns, link error counters during alarms.
Q10. Events are missing after a network outage—what’s the minimal buffer/retry closed loop?
Implement a monotonic packet_seq, a durable event queue, and explicit drop accounting. During outage, buffer_fill_pct should rise
predictably; after recovery, replay must preserve sequence ordering, and duplicates must be detectable for de-duplication upstream.
If events are dropped, drop_cnt must explain how many and why (buffer full, policy limits). This minimal loop makes outages auditable
without requiring VMS platform logic.
See H2-9.
Q11. After a surge, the device “works” but sensitivity is worse—which part of the chain should I check first?
Start at the photonic AFE and its rails: surges often leave subtle damage or elevated noise that reduces SNR without killing the system.
Re-run a known stimulus at a known point and compare SNR, noise floor, and clipping/saturation counters versus baseline. If AFE noise
has risen, inspect TVS paths, ground return integrity, and low-noise LDO health before adjusting DSP thresholds. Sensitivity loss that
correlates with vin ripple or reset bursts points to power domain injury.
See H2-10 and H2-4.
Measure: TIA noise floor, saturation/clipping count, SNR at fixed stimulus, vin ripple trend, surge/ESD counters (if available).
Q12. How do I do a field health check with minimal instruments?
Use logs first, then only two measurements. From logs, confirm link stability (link_state flaps), uplink health (retry_count, buffer_fill),
and event quality (SNR/confidence distribution) under a simple tap test. Then measure (1) input rail stability (vin_mv trend, UVLO/reset_reason)
and (2) optics health if available (optical power monitor). If these are stable, focus on zone mapping/version integrity; if not, fix grounding,
protection, and rail isolation before any algorithm tuning.
See H2-11 and H2-10.