
Glass-Break & Shock Sensor Hardware Design & Debug


A glass-break & shock sensor is reliable only when every alarm is backed by an explainable evidence chain: acoustic features + vibration features + threshold strategy + self-test health. This page shows how to build, tune, validate, and field-debug that chain so detection stays high while false alarms and battery drain stay low.

H2-1. Definition & Scope: What This Sensor Proves

This page is strictly about edge-side glass-break and shock/vibration detection using a microphone path and an accelerometer path, including AFE, DSP feature extraction, thresholds, self-test, and low-power behavior. It does not expand into panels, platforms, cameras, NVR/VMS, or networking stacks.

Scope contract

A glass-break & shock sensor is an evidence-driven classifier at the sensor node: it converts two raw signals (acoustic + structural) into an explainable decision with enough context to support validation and field debugging.

  • Acoustic evidence chain: a time-windowed acoustic event (impact-like onset + shatter-like content) expressed as band energies, event order, and duration.
  • Structural evidence chain: vibration/shock signatures (peak/RMS, ring time, repetition) that depend on mounting coupling and enclosure mechanics.
  • Decision philosophy: detect-and-reject, not “always alarm”—every alarm should be explainable, and every reject should have a reason code.
Key properties: edge-only evidence · explainable outputs · mounting-aware · low-power state machine

What must be “proven” (not merely detected)

A robust detector proves consistency between what was sensed and what was decided. The minimum proof is not the raw waveform; it is the combination of feature scores, threshold profile identity, and health state at decision time.

  • Event type: glass / shock / both / unknown (or vendor-specific classes).
  • Confidence: per-path scores and fused confidence (to separate “weak evidence” from “strong evidence”).
  • Profile integrity: threshold/profile ID + version + CRC (to prevent “silent” behavior changes).
  • Health integrity: last self-test state + fail code + timestamp, plus tamper status if present.
Minimum explainable output fields:
event_type score_mic score_accel score_fused reject_reason profile_id profile_ver profile_crc selftest_state selftest_code
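The field list above could be carried as a single per-event record; the sketch below is one illustrative way to group it in Python (the class name, types, and example values are assumptions, not vendor definitions):

```python
from dataclasses import dataclass

@dataclass
class EventRecord:
    """Minimum explainable output fields for one decision."""
    event_type: str      # glass / shock / both / unknown
    score_mic: float
    score_accel: float
    score_fused: float
    reject_reason: str   # empty string when the event is accepted
    profile_id: int
    profile_ver: int
    profile_crc: int     # guards against silent threshold changes
    selftest_state: str  # e.g. "PASS" / "FAIL"
    selftest_code: int

# Hypothetical accepted event with a healthy self-test:
rec = EventRecord("glass", 0.91, 0.78, 0.86, "", 3, 2, 0xBEEF, "PASS", 0)
```

Logging this record per event (instead of raw waveforms) is what keeps field behavior replayable.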

What this page intentionally does not cover

To avoid scope creep and content overlap with sibling pages, only the sensor-node evidence chain is discussed here. System integration is limited to generic “outputs” (alarm line, relay driver, or radio uplink) without platform or panel architecture.

  • Not covered: alarm panel/VMS/NVR architecture, cloud dashboards, OS/app tutorials, PoE switching/PSE, timing-grandmaster design.
  • Only referenced (one-liners): output interface types and how to log/encode the explainable fields.
Figure F1. Dual-path sensing (microphone + accelerometer) feeds DSP features, fusion/voting, threshold profiles, and self-test. Outputs remain explainable via scores, reject reasons, and profile/self-test identifiers.

H2-2. System Architecture: Dual-Path Sensing and Decision Flow

The architecture goal is not maximum sensitivity; it is stable decisions across environments and installations. Dual-path design reduces false alarms and makes field behavior diagnosable by separating acoustic evidence from structural evidence.

Three architectures and how they fail

Selecting where “truth” comes from determines whether the product will be stable in the field. Single-path systems typically fail in predictable ways; dual-path systems fail in ways that can be explained and fixed.

  • Mic-only: vulnerable to “glass-like” sounds (metal clicks, sharp impacts, certain audio content). False alarms rise when background noise changes.
  • Accel-only: highly dependent on mounting coupling and enclosure mechanics. Same thresholds behave differently across installations.
  • Mic + Accel: use one path as a discriminator for the other. Design focuses on fused confidence + explicit reject reasons.

Where fusion happens (and where it does not)

Fusion should happen at the node’s DSP/MCU decision layer so that the alarm is explainable without relying on a backend. This page covers DSP-level fusion only: per-path feature vectors produce per-path scores, then a fused decision is computed.

DSP-level fusion contract:
features_mic → score_mic   +   features_accel → score_accel   ⇒   score_fused + reject_reason + profile_id/ver/CRC

Fusion policy is typically one of: AND (conservative), OR (sensitive), or weighted vote (field-tunable). Weighted vote is usually preferred because it can be tuned using validation datasets without changing hardware.
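A weighted vote reduces to a few lines; the sketch below follows the fusion contract above (the weights and confidence gate are illustrative tuning parameters, not values from the source):

```python
def fuse(score_mic, score_accel, w_mic=0.6, w_accel=0.4, gate=0.7):
    """Weighted-vote fusion: per-path scores in, fused score + reject reason out.
    w_mic/w_accel/gate are placeholder tuning values to be set from validation data."""
    score_fused = w_mic * score_mic + w_accel * score_accel
    reject_reason = "" if score_fused >= gate else "LOW_CONFIDENCE"
    return score_fused, reject_reason
```

Because only the weights and gate change during tuning, the same hardware and feature extraction can be re-validated against recorded datasets.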

Minimum decision inputs (feature vectors) for explainability

To stay robust and debuggable, the decision input set must be small, measurable, and logged. A minimal feature set should cover energy distribution, time order, and mechanical impulse characteristics.

  • Acoustic: band energies (low/high), energy ratio, envelope duration, event-order score, ADC clip count.
  • Structural: peak and RMS acceleration, ring time, trigger count, impulse repetition.
  • Context: noise-floor estimate, mount/profile identifier, self-test freshness (time since last pass).

The same fields also enable fast triage: high clip count indicates front-end saturation; abnormal ring time indicates mounting coupling shift; frequent triggers indicate wake-threshold drift or mechanical vibration sources.

Figure F2. Mic-only and accel-only architectures have predictable failure modes (false alarms vs installation variance). DSP-level fusion enables explainable decisions via per-path scores, profile integrity, and reject reasons.

H2-3. Acoustic Signature Engineering: What “Glass Break” Looks Like in Data

“Glass break” should be treated as a structured event, not a single spectral peak. Robust detection comes from time order, band-energy distribution, and window density that can be logged and verified in validation and field debug.

Event model: three windows that make the decision explainable

A practical edge model separates the acoustic stream into three windows so that the classifier can explain what happened and why it was accepted or rejected.

  • Impact window: short onset search for an impulse-like trigger (often broad/low-weighted energy and sharp envelope rise).
  • Shatter window: limited-time follow-on search for dense high-band micro-bursts and consistent “shatter-like” energy distribution.
  • Decision window: aggregate evidence into score_mic, then emit score_fused and reject_reason.
Logging tip: store per-window summaries so field behavior can be replayed without raw audio: E_low E_high E_ratio order_score burst_count duration_ms.

Three verifiable indicators (minimum set)

Detection should rely on a small feature set that is stable across rooms and background noise. These three families provide a compact, measurable evidence chain.

  • Energy ratio (E_high / E_low): reduces sensitivity to absolute gain and distance by comparing bands.
  • Event order (impact → shatter): requires high-band density to occur after an onset within a valid time gap.
  • Window density: burst_count, burst_spacing, and duration_ms reject sparse “clicks” and isolated noise.

How common confusers fail the evidence chain

False alarms often look “energetic” but fail one of the three constraints. Using explicit reject reasons prevents chasing random noise sources.

  • Sharp metal click: may have high-band energy but typically low burst density (fails burst_count).
  • Loud shout / bark: energy may be strong but order and shatter-window density do not match (fails order_score).
  • Door slam: strong onset but high-band signature may be weak (fails E_ratio or shatter-window rules).
Recommended reject_reason taxonomy (examples):
NO_IMPACT NO_SHATTER_DENSITY BAD_ORDER LOW_E_RATIO CLIPPED_AUDIO
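The three constraints and the reject taxonomy above can be applied in a fixed order so every confuser fails with a specific reason; the thresholds below are placeholders for illustration:

```python
def classify_acoustic(e_low, e_high, order_score, burst_count, clip_count,
                      min_ratio=2.0, min_order=0.5, min_bursts=4, max_clips=3):
    """Check distortion, energy ratio, event order, then burst density.
    All threshold defaults are illustrative, not tuned values.
    Returns (accepted, reject_reason)."""
    if clip_count > max_clips:
        return False, "CLIPPED_AUDIO"          # distorted evidence is invalid
    if e_low <= 0 or e_high / e_low < min_ratio:
        return False, "LOW_E_RATIO"            # e.g. door slam: weak high band
    if order_score < min_order:
        return False, "BAD_ORDER"              # e.g. shout/bark: no impact->shatter
    if burst_count < min_bursts:
        return False, "NO_SHATTER_DENSITY"     # e.g. metal click: sparse bursts
    return True, ""
```

A sharp metal click with strong high-band energy but only two bursts would reject as NO_SHATTER_DENSITY rather than vanishing silently.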
Figure F3. A robust glass-break model is a structured event: an impact window followed by a shatter window, then a decision window that emits verifiable features (energy ratio, order, density) and explainable fields (scores, reject reasons, clip flags).

H2-4. Microphone AFE: Bias, Gain, Band-Pass, and Dynamic Range

The microphone chain is engineered to protect features, not audio fidelity. A stable detector needs a controlled noise floor, enough headroom to avoid clipping on impacts, and filtering aligned to the feature bands used for decision-making.

Signal-chain intent: produce features that stay stable across environments

The AFE should deliver a predictable digital stream where the same real-world event produces similar feature values. The core trade-off is noise floor vs headroom—too much gain raises false alarms; too little gain increases misses.

  • Bias path: stable mic bias and bias health check reduce silent failures and drift.
  • Low-noise preamp: sets the noise floor that drives noise_floor_est and affects threshold stability.
  • Band-pass: aligns energy buckets (e.g., low vs high band) to the acoustic feature model.
  • Limiter/AGC (optional): prevents overload while preserving time-order and burst density.

Why clipping breaks detection (even when “energy looks high”)

ADC clipping does not only distort amplitude; it can reshape the envelope and spectral distribution that the DSP uses. In practice, clipping can create two bad outcomes: false positives (distorted bursts look “dense”) and false negatives (order/duration becomes inconsistent).

  • Mandatory evidence: log clip_count per window and treat it as a distortion flag in reject_reason.
  • Headroom rule: ensure expected impact peaks do not sit near full-scale; keep margin so order and density remain valid.
  • Fast triage: rising clip_count after installation changes usually indicates gain/bias changes or EMI coupling, not “new glass behavior”.

Where to measure (minimum 3 points) and what each proves

Debug should be anchored to measurable points that map directly to failure mechanisms. This avoids tuning thresholds blindly.

  • Point A — AFE output (pre-ADC): verify noise floor, DC offset/bias health, overload behavior, and filter shaping.
  • Point B — ADC/digital stats: verify clip_count, max-abs, and per-window saturation duration.
  • Point C — background estimate: verify noise_floor_est stability (drift here destabilizes thresholds).
First fixes (typical): reduce_gain adjust_bpf enable_soft_limiter improve_bias_filtering EMI_shielding
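Probe B's digital statistics can be summarized per window as below; the 16-bit full scale and 6 dB headroom margin are illustrative assumptions:

```python
import math

def clip_stats(samples, full_scale=32767, margin_db=6.0):
    """Per-window clip evidence: clip_count, max_abs, and whether the
    observed peak keeps the requested headroom margin.
    full_scale assumes a signed 16-bit ADC; margin_db is a placeholder rule."""
    max_abs = max(abs(s) for s in samples)
    clip_count = sum(1 for s in samples if abs(s) >= full_scale)
    headroom_db = (20 * math.log10(full_scale / max_abs)
                   if max_abs else float("inf"))
    return {"clip_count": clip_count, "max_abs": max_abs,
            "headroom_ok": headroom_db >= margin_db}
```

Trending clip_count and headroom_ok across windows distinguishes a gain/bias problem (Probe A territory) from genuine signal changes.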
Figure F4. Mic AFE blocks (bias, preamp, band-pass, limiter/AGC, ADC) should be engineered around noise floor and headroom. Debug must include a pre-ADC probe, digital clip statistics, and a noise-floor estimate to explain false alarms vs misses.

H2-5. Accelerometer/Shock Path: Mounting Coupling, Filtering, and Interrupts

The accelerometer path measures a structure-coupled response, not the “impact event” directly. Mounting and enclosure mechanics reshape the spectrum and ring-down, so detection must combine filtering, low-power interrupts, and loggable evidence (peak/RMS/ring time/trigger counts) to stay stable across installs.

Mounting coupling is the hidden transfer function

The same real-world shock can look completely different after it passes through the enclosure, bracket, and fastening method. Treat mounting as a “transfer function” that changes peak, ringing duration, and band energy distribution.

  • Placement: closer to glass/frame vs deep in the housing shifts the dominant resonance and peak shape.
  • Fixation: screw / adhesive / foam changes damping; damping mainly controls ring_time_ms.
  • Axis & range: axis alignment and full-scale range control saturation risk (acc_sat_flag).
Practical rule: thresholds should bind to a profile_id (mount variant), not a single “universal” value.

Filtering that serves features (not “pretty waveforms”)

Filters exist to make shock evidence consistent under gravity drift and environmental vibration, so peak/RMS/ring-time remain comparable between units.

  • High-pass (HP): removes gravity/slow drift so shocks become sharp and comparable in peak/RMS.
  • Low-pass (LP): suppresses high-frequency noise that inflates RMS and causes false wakeups.
  • Optional band buckets: if needed, compute a small set of band energies to separate “single hit” vs “resonant rattle”.
Evidence fields (minimum): acc_peak_g acc_rms_g ring_time_ms trigger_count
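The minimum evidence fields can be computed from one capture window; the sketch below uses a simple ring-time definition (time from the peak until the envelope last exceeds a fraction of the peak), which is one illustrative convention among several:

```python
import math

def shock_features(samples, fs_hz, ring_frac=0.1):
    """Compute acc_peak_g, acc_rms_g, and ring_time_ms from a capture window.
    ring_frac (fraction of peak that still counts as ringing) is illustrative."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    i_peak = max(range(len(samples)), key=lambda i: abs(samples[i]))
    last = i_peak
    for i in range(i_peak, len(samples)):
        if abs(samples[i]) >= ring_frac * peak:
            last = i
    ring_time_ms = (last - i_peak) / fs_hz * 1000.0
    return {"acc_peak_g": peak, "acc_rms_g": rms, "ring_time_ms": ring_time_ms}
```

Because ring time tracks damping, a sudden shift in ring_time_ms on the same unit is a mounting-coupling signal, not an event-population change.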

Interrupt + second-stage capture: the low-power pattern

A power-efficient shock detector uses a two-stage workflow: a tiny always-on threshold interrupt to wake the system, followed by a short capture window to classify the event with stable features.

  1. Stage 1 (always-on): threshold interrupt with hysteresis and debounce to reduce chatter.
  2. Stage 2 (short capture): sample for a bounded window, compute peak/RMS/ring-time and (optional) band energies.
  3. Stage 3 (classify): output score_accel, then pass to fusion with mic evidence.
Power KPI: log wake_count and trigger_count. A rising rate usually indicates a threshold or policy problem, not "more glass breaks."
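Stage 1 is simple enough to model directly; the sketch below counts wake events with debounce and hysteresis (threshold, release level, and debounce count are illustrative):

```python
def stage1_wake(samples, thresh, release, debounce_n=2):
    """Stage-1 threshold interrupt model: count wakes over a sample stream.
    A wake fires after debounce_n consecutive samples above thresh, and the
    detector re-arms only after the level falls below release (hysteresis).
    All parameter values are placeholders for illustration."""
    wakes, above, armed = 0, 0, True
    for x in samples:
        if armed:
            above = above + 1 if abs(x) > thresh else 0
            if above >= debounce_n:
                wakes, armed, above = wakes + 1, False, 0
        elif abs(x) < release:
            armed = True  # level decayed below release: re-arm
    return wakes
```

Running this model against recorded vibration traces gives an expected wake_count before committing a threshold to firmware.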

What to log so field behavior is diagnosable

Shock decisions must be explainable without streaming raw data. Log a compact record per event so installers can identify mounting changes, saturation, and vibration environments quickly.

  • Core evidence: acc_peak_g, acc_rms_g, ring_time_ms
  • Health flags: acc_sat_flag, selftest_state
  • Counts: trigger_count, wake_count (per hour/day bins)
  • Config identity: profile_id, profile_ver, profile_crc
Figure F5. Shock detection is dominated by mounting coupling. Use HP/LP filtering to stabilize peak/RMS/ring-time evidence, wake on a low-power interrupt, then classify using a short capture window to keep power and false wakes controlled.

H2-6. DSP Feature Extraction: From Raw Waveforms to Robust Decisions

This chapter defines the implementable DSP contract: raw mic + accel streams become compact features, per-path scores, a fused confidence, and an explainable decision record (event_type, confidence, reject_reason) that supports validation and field debugging without raw data capture.

Pipeline overview: raw → features → score → fusion → decision

The DSP pipeline should be built as stages with explicit health gates and outputs. Each stage must produce fields that can be logged so failures are diagnosable.

  1. Windowing: impact/shatter windows (mic) and capture windows (accel).
  2. Prechecks: noise floor estimation, clipping/saturation flags, self-test state.
  3. Feature extraction: acoustic family + shock family features.
  4. Per-path scoring: score_mic, score_accel with confidence modifiers.
  5. Fusion policy: AND/OR/weighted vote → score_fused, confidence, event_type.
  6. Explainability record: emit reject_reason + profile identity (profile_id/profile_ver/profile_crc).
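The six stages can be orchestrated as one decision function with health gates evaluated first; the scoring formulas below are stand-ins for illustration, while the field names follow the lists above:

```python
def decide(feat_mic, feat_acc, health, profile):
    """Sketch of the pipeline: prechecks gate the decision, then per-path
    scores feed a weighted fusion. Scoring constants are illustrative."""
    if health["selftest_state"] != "PASS":
        return {"event_type": "unknown",
                "reject_reason": "SELFTEST_FAIL", **profile}
    if feat_mic["clip_count"] > 3:
        return {"event_type": "unknown",
                "reject_reason": "CLIPPED_AUDIO", **profile}
    # Stand-in per-path scoring (replace with validated feature scoring):
    score_mic = min(1.0, feat_mic["E_ratio"] / 4.0)
    score_accel = min(1.0, feat_acc["acc_peak_g"] / 2.0)
    score_fused = 0.6 * score_mic + 0.4 * score_accel
    ok = score_fused >= 0.7
    return {"event_type": "glass" if ok else "unknown",
            "score_mic": score_mic, "score_accel": score_accel,
            "score_fused": score_fused,
            "reject_reason": "" if ok else "LOW_CONFIDENCE", **profile}
```

Note that a failed self-test or clipped audio never reaches scoring, so "false certainty" from broken evidence cannot occur.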

Acoustic feature family (mic): minimal set that stays stable

Acoustic features should match the event model from H2-3 and remain robust under gain changes and background noise. Keep the set small and loggable.

  • Band energies: E_low, E_high, E_ratio (ratio reduces absolute-level sensitivity).
  • Envelope: duration_ms, burst_count, burst_spacing (density rejects isolated clicks).
  • Spectral centroid (optional): centroid as a compact “brightness” indicator.
  • Event order: order_score to enforce impact → shatter structure.
Integrity gate: if clip_count is high, reduce mic confidence or force a reject reason like CLIPPED_AUDIO.

Shock feature family (accel): structure-coupled but explainable

Shock features must handle enclosure-dependent resonance. Favor metrics that explain what changed when installation changes.

  • Magnitude: acc_peak_g and acc_rms_g separate impulses from continuous vibration.
  • Ring-down: ring_time_ms is a mounting fingerprint (damping/looseness detection).
  • Optional band energies: coarse energy buckets can separate “rattle” patterns from single hits.
  • Counts: trigger_count/wake_count for power and nuisance-rate diagnosis.
Health gate: saturation should be explicit (acc_sat_flag) to avoid “false certainty” from clipped shocks.

Fusion policy: AND / OR / weighted vote with confidence

Fusion must be a policy that produces both a decision and a reason. Choose the policy by product risk (false alarm tolerance vs miss tolerance), and always emit a fused confidence.

  • AND (conservative): lowest false alarms; requires both paths to agree (may increase misses).
  • OR (sensitive): catches more events; requires stronger reject rules and nuisance control.
  • Weighted vote (recommended): compute score_fused from score_mic and score_accel, then apply a confidence threshold.
Required outputs: event_type score_fused confidence reject_reason

Explainability contract: reject reasons + profile identity

A detector is maintainable only when every “no alarm” is explainable. Use a compact reject taxonomy and tie decisions to the configuration identity.

Reject reason examples:
NO_IMPACT BAD_ORDER NO_SHATTER_DENSITY LOW_E_RATIO CLIPPED_AUDIO
NO_SHOCK_ENERGY CONTINUOUS_VIBRATION ACC_SATURATED LOW_CONFIDENCE
SELFTEST_FAIL PROFILE_MISMATCH

Always include profile_id, profile_ver, and profile_crc in the event record so field data can be compared across installs and firmware releases without ambiguity.
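One way to bind a profile to a verifiable identity is a CRC-32 over a canonical encoding of its contents; the encoding choice (sorted-key JSON) is an illustrative assumption:

```python
import binascii
import json

def profile_identity(profile: dict) -> dict:
    """Bind a threshold profile to a CRC so silent edits are detectable.
    CRC-32 over canonical (sorted-key) JSON is one illustrative choice;
    the 'id'/'ver' keys are hypothetical field names."""
    blob = json.dumps(profile, sort_keys=True).encode()
    return {"profile_id": profile["id"], "profile_ver": profile["ver"],
            "profile_crc": binascii.crc32(blob) & 0xFFFFFFFF}
```

Any change to a threshold value changes profile_crc, so two event logs with equal id/ver/crc are guaranteed to come from identical configurations.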

Figure F6. The DSP contract converts raw mic/accel into compact features, per-path scores, and a fused decision record. Prechecks (noise/clipping/saturation/self-test) and explicit reject codes make validation and field debug repeatable.

H2-7. Threshold Strategy: Hysteresis, Adaptation, and Environment Immunity

A “threshold” is not a single number. A robust detector uses a policy: multi-threshold state transitions (trigger/confirm/release), hysteresis and cooldown, environment adaptation (noise floor + mode + mount profile), and explicit immunity rules that emit explainable reject_reason and configuration identity (policy_ver/policy_crc).

Multi-threshold state machine (trigger / confirm / release)

Use three thresholds to separate “wake up quickly” from “confirm reliably” and “return safely”:

  • Trigger threshold: low-power wake gate (may allow more candidates).
  • Confirm threshold: applied to score_mic, score_accel, or score_fused inside a short decision window.
  • Release threshold: lower boundary to prevent chatter and repeated alarms during decay (hysteresis).
Minimum state flow: IDLE → TRIGGERED → CONFIRMING → (ALARM | REJECT) → COOLDOWN → IDLE
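The minimum state flow above maps to a small finite-state machine; the one-step confirm window and all level values below are illustrative simplifications:

```python
class ThresholdFSM:
    """IDLE -> TRIGGERED -> CONFIRMING -> (ALARM | REJECT) -> COOLDOWN -> IDLE.
    trig/confirm/release levels and cooldown_n are placeholder values; a real
    implementation would confirm over a bounded multi-frame decision window."""
    def __init__(self, trig=0.4, confirm=0.7, release=0.2, cooldown_n=2):
        self.trig, self.confirm, self.release = trig, confirm, release
        self.cooldown_n, self.state, self.cd = cooldown_n, "IDLE", 0

    def step(self, score):
        s = self.state
        if s == "IDLE" and score >= self.trig:
            self.state = "TRIGGERED"          # low-power wake gate fired
        elif s == "TRIGGERED":
            self.state = "CONFIRMING"         # open the decision window
        elif s == "CONFIRMING":
            self.state = "ALARM" if score >= self.confirm else "REJECT"
        elif s in ("ALARM", "REJECT"):
            self.state, self.cd = "COOLDOWN", self.cooldown_n
        elif s == "COOLDOWN":
            self.cd -= 1                      # lockout timer against retriggers
            if self.cd <= 0 and score < self.release:
                self.state = "IDLE"           # hysteresis: re-arm below release
        return self.state
```

Logging each state transition timestamp (state_enter_ts) makes chatter and cooldown behavior visible in field data.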

Hysteresis + cooldown: nuisance control without “turning sensitivity down”

Hysteresis and cooldown reduce nuisance events without destroying detection sensitivity. The goal is to stop repeated triggers from the same physical decay (ring-down, echo, enclosure rebound).

  • Hysteresis: require the signal/score to fall below release before arming again.
  • Cooldown: a short lockout timer after ALARM/REJECT to avoid rapid retriggers (cooldown_ms).
  • Debounce: ignore micro-bursts that do not meet minimum duration/density gates.
Evidence to log: state_enter_ts, cooldown_ms, retrigger_block_count

Adaptation layers: noise floor, mode, and mount profile

Adaptation should be bounded and auditable. Use slow-tracking baselines and explicit modes rather than “auto-magic” thresholds that drift unpredictably.

  • Noise floor tracking: estimate noise_floor_mic and vib_baseline_rms with rate limits and outlier rejection.
  • Mode selection: e.g., day/night or armed/disarmed modes; log mode_id to make field data comparable.
  • Mount profile binding: thresholds and weights depend on profile_id (mounting variant) to handle coupling changes.
  • Freeze-on-event: hold baseline updates during candidate events to prevent baseline “learning the alarm.”
Guardrails: cap adaptation range (min/max) and record the effective thresholds used per event.
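A bounded, auditable baseline update can be a single exponential-tracking step with clamps and a freeze flag; the smoothing factor and clamp range below are illustrative:

```python
def update_floor(floor, sample, alpha=0.05, lo=0.01, hi=1.0, freeze=False):
    """Slow-tracking noise-floor baseline with rate limit (alpha), adaptation
    clamp (lo/hi), and freeze-on-event so the baseline never 'learns the
    alarm'. All parameter defaults are placeholders for illustration."""
    if freeze:
        return floor  # hold updates during candidate events
    floor = (1 - alpha) * floor + alpha * sample
    return min(hi, max(lo, floor))
```

Because alpha bounds the per-update change and lo/hi cap the total range, the effective threshold derived from this baseline stays inside an auditable envelope.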

Environment immunity: reject rules must be explicit and explainable

Immunity is implemented as reject rules that invalidate unreliable evidence chains. Every reject should have a reason code.

  • Too noisy: noise_floor_mic above limit → ENV_TOO_NOISY
  • Audio clipped: clip_count high → CLIPPED_AUDIO (reduce mic confidence or hard reject)
  • Order/density mismatch: missing impact→shatter structure or burst density → BAD_ORDER / NO_SHATTER_DENSITY
  • Continuous vibration: elevated acc_rms_g baseline with long duration → CONTINUOUS_VIBRATION
  • Low confidence: fused score below gate → LOW_CONFIDENCE (with sub-reasons preserved)
Key requirement: immunity changes behavior without hiding information; emit reject_reason + supporting stats.

Reproducibility: versioning, CRC, and environment statistics

Threshold tuning must be traceable across firmware releases and installations. Record configuration identity and a compact environment summary so event logs remain comparable.

  • Policy identity: policy_ver, policy_crc, threshold_set_id
  • Profile identity: profile_id, profile_ver, profile_crc
  • Noise/vibration stats: noise_floor_mic, noise_p95, vib_baseline_rms, wake_rate
Figure F7. A threshold policy combines trigger/confirm/release levels with hysteresis and cooldown. Slow-tracking noise-floor statistics and explicit identity (policy/profile/mode + CRC) make field behavior reproducible.

H2-8. Self-Test & Sensor Health: Proving the Detector Still Works

Self-test must be a closed-loop proof: stimulus → measured response → verdict. The goal is to detect silent degradation (sensor faults, path opens/shorts, saturation, occlusion, tamper-induced damping changes) and to emit a diagnosable health record (selftest_state, pass/fail, response window, and fail_reason).

Closed-loop self-test: stimulus → response → verdict

Self-test should validate the critical chain used by detection, not just “register access.” A minimal implementation runs: (1) inject a known stimulus, (2) measure response metrics, (3) compare against acceptance windows, (4) emit verdict + reason.

Minimum outputs: selftest_state selftest_pass fail_reason resp_level resp_in_window

Microphone self-test: acoustic injection vs electrical injection

Two complementary approaches prove different parts of the chain:

  • Acoustic injection (beeper/speaker tone): covers the physical acoustic path + microphone + AFE + DSP; more environment-sensitive.
  • Electrical injection (test tone into AFE/ADC): stable and repeatable; proves the electronics path and DSP gating.

Use response metrics that match the detector’s features: band-energy ratio, SNR, and clipping flags.

Evidence fields: mic_stim_level mic_resp_level mic_resp_snr clip_count
Fail reasons: MIC_RESP_LOW MIC_TOO_NOISY MIC_CLIPPED

Accelerometer self-test: built-in excitation + acceptance windows

Many accelerometers provide an internal self-test excitation. Validate that the measured response falls within per-axis windows and that noise/baseline metrics remain plausible.

  • Response window: acc_st_resp must fall inside [acc_st_min, acc_st_max].
  • Saturation awareness: reject self-test results when acc_sat_flag is set.
  • Sanity checks: baseline RMS and axis consistency checks can detect stuck or severely degraded sensors.
Fail reasons: ACC_ST_FAIL ACC_SATURATED ACC_NOISE_HIGH
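The acceptance logic can be expressed as a small verdict function that checks gating flags before the response window (the noise limit and ordering of checks are illustrative choices):

```python
def accel_selftest(resp, st_min, st_max, sat_flag, noise_rms, noise_max=0.05):
    """Closed-loop verdict for the accelerometer self-test: gate on saturation
    and noise sanity first, then require the self-test response inside the
    per-axis window. noise_max is an illustrative limit; window values are
    device-specific. Returns (selftest_pass, fail_reason)."""
    if sat_flag:
        return False, "ACC_SATURATED"   # response measured under clipping
    if noise_rms > noise_max:
        return False, "ACC_NOISE_HIGH"  # baseline implausible, sensor suspect
    if not (st_min <= resp <= st_max):
        return False, "ACC_ST_FAIL"     # resp_in_window violated
    return True, ""
```

Running the gates before the window check prevents a saturated capture from ever counting as an in-window "pass".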

Tamper and abnormal conditions: detect health anomalies, not “attack narratives”

Tamper-related changes often appear as measurable health anomalies:

  • Mic occlusion / blockage: injected response drops, band-energy ratios shift, noise floor changes.
  • Cover removed / loosened mounting: vibration baseline and ring-time signatures drift; wake rate spikes.
  • Potting / strong damping: ring-down shortens and peak reduces; shock signatures become inconsistent with the selected profile.
Indicators to log: noise_floor_mic vib_baseline_rms ring_time_ms wake_rate profile_id profile_crc

Health record: make failures diagnosable and actionable

A health system is only useful when failures are explainable and comparable across firmware builds. Always include identity/version and a compact response summary.

  • Verdict: selftest_pass + fail_reason
  • Response summary: resp_level, resp_snr, resp_in_window
  • Identity: health_ver, policy_ver, policy_crc, profile_id
  • Policy reaction: degrade confidence or block alarms when health is failing, and emit HEALTH_BLOCKED in logs.
Figure F8. Self-test injection loop: stimulus → AFE → DSP checks → health verdict with fail reasons. Block diagram showing acoustic and electrical stimulus sources, mic and accelerometer paths, DSP health checks, verdict outputs, failure reason codes, and the emitted health record fields. Self-Test & Health — Stimulus → Response → Verdict (Closed Loop) Stimulus sources Acoustic injection beeper Electrical injection test tone Mic path under test Bias Preamp BPF ADC / clip Metrics resp_level • SNR Accel path under test Self-test excite Response resp window DSP health checks resp_in_window clip/saturation gating noise sanity tamper indicators Health verdict PASS / FAIL fail_reason codes MIC_RESP_LOW • ACC_ST_FAIL MIC_TOO_NOISY • TAMPER_SUSPECT Health record selftest_state selftest_pass resp_level / SNR resp_in_window fail_reason policy_ver / crc profile_id
Figure F8. Self-test is a closed-loop proof: inject stimulus, measure response metrics, apply acceptance windows and gating, then emit a health verdict with diagnosable fail-reason codes and a comparable health record.

H2-9. Low-Power Design: Wake-on-Event, Duty Cycling, and Energy Budget

Battery life is dominated by four measurable levers: always-on current, wake frequency, per-event processing time, and logging/report cost. A reproducible budget uses state currents and durations (sleep/standby/listen/wake/classify/report) plus daily counters such as wake_count_day and event_count_day.

Energy budget template (reproducible)

Compute daily consumption from an always-on baseline plus event-driven energy. Keep the budget auditable by logging the effective thresholds/modes used during the measurement period.

Budget: mAh/day = I_AO*24h + N_wake*E_event + N_report*E_report
Where: E_event = Σ (I_state * t_state) / 3600

Measure I_AO as the current in the “armed but idle” condition, then validate N_wake and E_event using counters and timing logs.
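The budget formula above can be computed directly. A minimal sketch in Python; the state currents and counts are illustrative numbers, not measured values:

```python
# Sketch: daily energy budget from the formula above.
# Units: currents in mA, durations in seconds, results in mAh.
def event_energy_mAh(state_profile):
    """E_event = sum(I_state * t_state) / 3600 for one wake cycle."""
    return sum(i_mA * t_s for i_mA, t_s in state_profile) / 3600.0

def daily_budget_mAh(i_ao_mA, n_wake, e_event_mAh, n_report, e_report_mAh):
    """mAh/day = I_AO*24h + N_wake*E_event + N_report*E_report."""
    return i_ao_mA * 24.0 + n_wake * e_event_mAh + n_report * e_report_mAh

# Illustrative numbers only: 5 uA always-on, 40 wakes/day, 2 reports/day.
e_event = event_energy_mAh([(2.0, 0.01),   # wake: 2 mA for 10 ms
                            (6.0, 0.20),   # capture: 6 mA for 200 ms
                            (12.0, 0.05)]) # classify: 12 mA for 50 ms
budget = daily_budget_mAh(0.005, 40, e_event, 2, 0.01)
```

Validating this budget means replacing every illustrative number with a measured one, then cross-checking n_wake against wake_count_day.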

Always-on inventory: what stays awake when “sleeping”

Identify the always-on blocks and pin their contribution first—this is often the #1 determinant of battery life.

  • Accel interrupt: threshold interrupt / wake-on-motion (typical always-on anchor).
  • Mic bias / ULP gate: bias + simple envelope/energy gate, or duty-cycled listen windows.
  • Retention + time base: RTC / retention RAM required for timestamps and health state.
Evidence fields: I_AO_uA ao_blocks_mask mode_id profile_id

Wake-on-event timeline: wake → capture → classify → report

Break event handling into states and measure both current and duration. The fastest wins often come from reducing N_wake (immunity rules) and shortening capture/classify windows.

  • Wake: domain bring-up and clock settle (t_wake_ms).
  • Capture: short audio/accel windows (t_capture_ms), freeze baseline updates during capture.
  • Classify: feature extraction + scoring + fusion (t_classify_ms).
  • Report/Log: store event record; optional uplink (t_report_ms).
Minimum timing log: t_wake_ms t_capture_ms t_classify_ms t_report_ms
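The timing log above can be captured by wrapping each state's handler. A minimal sketch; the handler names are placeholders, not a real driver API:

```python
import time

# Sketch: measure per-state durations for wake -> capture -> classify -> report.
# The handlers dict is a placeholder for real bring-up/capture/DSP/report code.
def run_event_cycle(handlers):
    """Run each state's handler and log its duration in ms."""
    timing = {}
    for state in ("wake", "capture", "classify", "report"):
        t0 = time.perf_counter()
        handlers[state]()
        timing["t_%s_ms" % state] = (time.perf_counter() - t0) * 1000.0
    return timing
```

On a real target the timestamps would come from the RTC or a hardware timer, but the log fields stay the same.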

Logging and reporting: the silent battery killer

Write logs in layers. Always keep a compact event record, and only capture heavy payloads (raw snippets) by sampling or in a diagnostic mode. Reporting spikes can dominate energy even when classification is efficient.

  • Always: small event record (tens of bytes).
  • Sampled: store a short snippet or feature vector every N events.
  • Diagnostic: temporarily increase detail when FAR rises (record mode_id and policy_crc).
Evidence fields: log_bytes_day report_count_day snip_store_rate
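The layered write policy can be sketched as one decision function. The sampling interval of 50 is an illustrative assumption:

```python
# Sketch: tiered logging policy -- compact record always, heavy raw snippet
# only sampled or in diagnostic mode. snip_every_n=50 is an assumption.
def plan_log(event_index, snip_every_n=50, diagnostic=False):
    """Return which log layers to write for this event."""
    layers = ["event_record"]                  # tens of bytes, always written
    if diagnostic or (event_index % snip_every_n == 0):
        layers.append("raw_snippet")           # heavy payload, rarely written
    return layers
```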

Field evidence: counters that explain battery drain

  • Daily wakes: wake_count_day, trigger_count_day
  • Processing load: event_count_day, avg_t_classify_ms
  • Resets: brownout_count, reset_reason, uvlo_events
  • Environment: noise_floor_mic, vib_baseline_rms (correlate with wake spikes)
Figure F9. Low-power state machine: sleep/standby/listen/wake/classify/report with budget levers. Block diagram showing low-power states, transitions, where currents and durations are measured, the daily budget equation, and key counters.
Figure F9. Low-power operation is a measured state machine. Battery life is driven by always-on current, wake frequency, per-event processing time, and logging/report cost.

H2-10. Validation Plan: How to Measure Detection vs False Alarms

Validation must be an SOP: define a coverage matrix (glass types, distance, angle, room acoustics, noise, and mounting profiles), compute metrics (TPR/FAR/latency/robustness), and capture enough evidence per event to replay decisions later (timestamps, snippet hashes, feature scores, and verdict reasons).

SOP overview: what to prove and what to export

The output of validation is not “it seems OK.” Export a comparable package per build: TPR, FAR, latency distribution, and reject_reason histogram, all tied to the exact policy/profile identity.

Identity fields: policy_ver policy_crc profile_id mode_id

Coverage matrix: structured, not combinatorial explosion

Cover primary variables fully, sample secondary variables, and keep control variables fixed per run.

  • Primary (must cover): glass type, distance, angle, room acoustics, background noise, mounting profile_id.
  • Secondary (sample): temperature, supply voltage level, unit-to-unit variation.
  • Control (fixed): sampling rates, window lengths, policy thresholds and weights.
Run label: test_id room_id glass_id distance_cm angle_deg

Metrics: definitions that can be computed

  • TPR (detection rate): fraction of defined glass-break events that reach ALARM within the allowed latency.
  • FAR (false alarm rate): false alarms per hour/day, reported per environment and per profile.
  • Latency: event timestamp → alarm timestamp (distribution, not just average).
  • Robustness: delta of TPR/FAR under noise, temperature drift, and mounting change.
Recommended exports: confusion summary + latency histogram + FAR by scenario + top reject reasons.
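These definitions can be computed mechanically from replayable event records. A minimal sketch, assuming a record schema with truth, verdict, and latency_ms fields (the schema layout is illustrative):

```python
# Sketch: compute TPR, FAR, and the latency distribution from event records.
# Record keys (truth, verdict, latency_ms) are an assumed minimal schema.
def compute_metrics(records, hours_observed, max_latency_ms=5000):
    """TPR counts only alarms within the allowed latency; FAR is per hour."""
    positives = [r for r in records if r["truth"] == "glass_break"]
    hits = [r for r in positives
            if r["verdict"] == "ALARM" and r["latency_ms"] <= max_latency_ms]
    false_alarms = [r for r in records
                    if r["truth"] == "none" and r["verdict"] == "ALARM"]
    tpr = len(hits) / len(positives) if positives else 0.0
    far_per_hour = len(false_alarms) / hours_observed
    latencies = sorted(r["latency_ms"] for r in hits)   # distribution, not mean
    return {"tpr": tpr, "far_per_hour": far_per_hour, "latency_ms": latencies}
```

Running this per scenario (and per profile_id) yields the "FAR by scenario" export directly.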

Data capture: event record + snippet hash + optional raw fragments

Store enough evidence to replay decisions without always storing large payloads.

  • Always: compact event record: event_ts, scores, verdict, reject_reason, identity fields.
  • Always: input_snippet_hash (hash of raw snippet or feature vector).
  • Sampled: store short raw audio/accel windows every N events or in diagnostic mode.
Per-event minimum: event_ts score_mic score_accel score_fused confidence verdict reject_reason policy_ver policy_crc profile_id mode_id input_snippet_hash
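The per-event minimum above can be assembled into one compact record. A sketch assuming SHA-256 for input_snippet_hash (the hash choice and dict layout are assumptions; field names follow this page):

```python
import hashlib

# Sketch: build the compact per-event record with input_snippet_hash.
# SHA-256 and the dict layout are illustrative assumptions.
def make_event_record(event_ts, scores, verdict, reject_reason,
                      identity, snippet_bytes):
    """scores: mic/accel/fused/confidence; identity: policy + profile fields."""
    rec = {
        "event_ts": event_ts,
        "score_mic": scores["mic"],
        "score_accel": scores["accel"],
        "score_fused": scores["fused"],
        "confidence": scores["confidence"],
        "verdict": verdict,
        "reject_reason": reject_reason,
        "input_snippet_hash": hashlib.sha256(snippet_bytes).hexdigest(),
    }
    rec.update(identity)   # policy_ver, policy_crc, profile_id, mode_id
    return rec
```

Hashing the snippet (or feature vector) keeps the record tens of bytes while still letting a stored raw fragment be matched to its decision later.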

Test report template: make failures diagnosable

For each scenario, include: event counts, TPR/FAR, latency distribution, and the top few misclassified cases with their input_snippet_hash and reject reasons. This enables threshold tuning without guesswork.

Failure case bundle: test_id + snippet_hash + scores + thresholds used + reject_reason
Figure F10. Validation setup: room layout, sensor position, noise source, impact point, and capture path. Room layout diagram with glass panel location, sensor node placement, impact point, noise source, and arrows to a recorder/logger carrying policy identity fields and the snippet hash.
Figure F10. A validation setup must bind detection metrics to controlled coverage variables and to replayable evidence: event records, snippet hashes, scores, and verdict reasons tied to policy/profile identity.

H2-11. Field Debug Playbook: Symptom → Evidence → Isolate → Fix

This chapter is an on-site decision tree designed for fast, repeatable diagnosis with minimal tools. Every branch is driven by two high-yield evidence sources: power-current waveform and feature/reject logs. The output is a first-fix action that reduces trial-and-error while keeping decisions explainable.

Universal entry: the first 2 measurements (do these before guessing)

  • Measurement #1 — Power current waveform: capture sleep baseline + wake spikes. Correlate spikes with counters to confirm whether the device is waking too often, retrying, or rebooting.
  • Measurement #2 — Feature & reject logs: pull per-event scores and reason codes to see why events were accepted or rejected (avoid “black box” tuning).
Minimum fields to read: wake_count_day event_count_day report_count_day score_mic score_accel score_fused confidence reject_reason noise_floor_mic vib_baseline_rms clip_count selftest_pass fail_reason policy_ver policy_crc profile_id mode_id
Example MPNs for measurement & logging support (optional)
  • Current sense / monitor: INA219AIDCNR, INA226AIDGSR, MAX9938 (high-side current sense amp)
  • Event storage: W25Q32JV (SPI NOR flash), FM24CL64B (I²C FRAM)
  • ESD protection (lines): PESD5V0S1UL, SMF05C
Evidence-first Explainable reasons Minimal tools

Symptom A — Frequent false alarms (spike in alarms or wake triggers)

Fast triage (what to check first):

  • Is wake_count_day rising with the same policy_crc? If yes, the trigger path is too sensitive or the environment changed.
  • Is clip_count elevated? Saturation often produces “big energy” but broken features → false accepts.
  • Is noise_floor_mic or vib_baseline_rms drifting upward? Environmental immunity may be missing.

Discriminators (quick branches):

  • Clip-driven: high clip_count and many accepts with low explainability → treat clipping as a reject gate or reduce AFE gain.
  • Noise-floor driven: high noise_floor_mic and reject_reason=ENV_TOO_NOISY missing/rare → add noise tracking + hysteresis + night mode.
  • Mounting/coupling driven: high vib_baseline_rms with repetitive trigger timing → adjust accel filters, interrupt threshold, and profile binding.

First fixes (minimum-change, highest win-rate):

  • Enable multi-threshold strategy: separate trigger / confirm / release thresholds with hysteresis + cooldown.
  • Clip gating: when clipping is detected, lower mic weight or force reject_reason=AUDIO_CLIPPED.
  • Bind tuning to installation: select thresholds/weights by profile_id (do not reuse one threshold across different housings).
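The first fix above, separate trigger/confirm/release thresholds with hysteresis and cooldown, can be sketched as a small state machine. The threshold values and cooldown length are illustrative and would be selected by profile_id in practice:

```python
# Sketch: multi-threshold strategy with trigger/confirm/release + cooldown.
# The numeric thresholds and cooldown of 5 windows are illustrative assumptions.
class ThresholdFSM:
    def __init__(self, trigger=0.4, confirm=0.7, release=0.3, cooldown=5):
        self.trigger, self.confirm, self.release = trigger, confirm, release
        self.cooldown, self.cool_left = cooldown, 0
        self.state = "IDLE"

    def step(self, score):
        """Feed one fused score per window; returns the resulting state."""
        if self.cool_left > 0:            # cooldown suppresses re-triggering
            self.cool_left -= 1
            return "COOLDOWN"
        if self.state == "IDLE" and score >= self.trigger:
            self.state = "TRIGGERED"
        elif self.state == "TRIGGERED":
            if score >= self.confirm:     # confirm threshold -> alarm
                self.state = "IDLE"
                self.cool_left = self.cooldown
                return "ALARM"
            if score < self.release:      # hysteresis: release below trigger
                self.state = "IDLE"
        return self.state
```

Because release sits below trigger, scores hovering near the trigger level cannot chatter between states, and the cooldown caps the alarm rate after a confirmed event.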
Example MPNs (to reduce false alarms via robustness)
  • MEMS microphones (analog/PDM): ICS-40720, ICS-43434, SPH0645LM4H-1, MP34DT06JTR
  • Low-power comparators / gating: TLV3691, MCP6541
  • Op-amps / AFE building blocks: TLV9062, OPA344
  • 3-axis accelerometers (stable interrupts): ADXL362, LIS2DW12, BMA400
Evidence to capture: current wake spikes + clip_count + noise_floor_mic + vib_baseline_rms + reject_reason histogram.

Symptom B — Missed detections (glass break happens, no alarm)

Fast triage:

  • Check selftest_pass. If self-test is failing, health gating may suppress detection.
  • Compare score_mic and score_accel: are both low (“did not see event”), or one high but fused decision low (“saw it but rejected”)?
  • Confirm identity: wrong profile_id/mode_id can systematically under-trigger.

Discriminators:

  • Sensor didn’t see it: both score_mic and score_accel low → suspect obstruction, gain too low, band-pass mismatch, weak mechanical coupling.
  • Policy too strict: high single-path score but low score_fused → fusion weights/confirm threshold too tight for the selected profile.
  • Health-gated: selftest_pass=0 → fix injection/response window before threshold tuning.

First fixes:

  • Verify installation/profile mapping: enforce correct profile_id selection and log it on every event window.
  • Adjust capture window alignment: ensure “impact → shatter” ordering windows are not truncated by sleep timing.
  • Retune confirm threshold using stored evidence: prioritize improving true positives without opening trigger too far.
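Binding fusion weights and the confirm threshold to profile_id can be sketched as a lookup. All weights, thresholds, and profile names here are placeholder assumptions:

```python
# Sketch: per-profile fused score with a confirm threshold.
# Profile names, weights, and thresholds are illustrative placeholders.
PROFILES = {
    "glass_door":  {"w_mic": 0.7, "w_acc": 0.3, "confirm": 0.60},
    "thick_panel": {"w_mic": 0.5, "w_acc": 0.5, "confirm": 0.55},
}

def fuse(score_mic, score_accel, profile_id):
    """Weighted fusion bound to the installation profile."""
    p = PROFILES[profile_id]
    score_fused = p["w_mic"] * score_mic + p["w_acc"] * score_accel
    verdict = "ALARM" if score_fused >= p["confirm"] else "REJECT"
    return score_fused, verdict
```

This makes the "policy too strict" discriminator testable: replay a missed event's scores through the active profile and see whether the confirm threshold or the weights rejected it.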
Example MPNs (to improve sensitivity without exploding false alarms)
  • Audio ADC / PDM interface options: ADAU7002 (PDM-to-I²S), TLV320ADC3101 (audio ADC)
  • Low-power MCUs with DSP headroom: STM32L432KCU6, STM32L476RG, ATSAML21E18B, MSP430FR2433
  • Accelerometers with self-test / stable motion detect: ADXL362, LIS2DW12
Evidence to capture: per-event scores + profile_id/mode_id + window timing (t_capture_ms) + reject_reason.

Symptom C — Battery drains too fast (unexpectedly short life)

Fast triage:

  • Use current waveform to separate high baseline vs frequent spikes vs reboot loops.
  • Check wake_count_day and report_count_day. Many systems die from “wake storms” or retry-heavy reporting.
  • Look at brownout_count and reset_reason. Reboots can create persistent high consumption.

Discriminators:

  • Wake storm: wake_count_day high → tighten trigger immunity; reduce duty-cycled listen rate; strengthen reject rules.
  • Report cost: report_count_day high or long t_report_ms → reduce report frequency, batch logs, limit retries.
  • Baseline too high: elevated idle current (I_AO) → audit always-on blocks and disable unnecessary domains.

First fixes:

  • Reduce N_wake first (immunity/threshold strategy), then reduce E_event (shorter capture/classify), and only then optimize I_AO.
  • Switch to event-layer logging: compact records always; raw snippet storage only sampled or diagnostic.
  • Guard against brownout loops: log reset reason and treat repeated resets as a “power integrity” alarm state.
Example MPNs (to cut consumption in the dominant paths)
  • Ultra-low power accelerometers (wake-on-motion): ADXL362, BMA400
  • Buck converters for low quiescent current: TPS62740, TPS62743, TPS62840
  • Load switch / power gating: TPS22916, TPS22918
  • RTC (low-power time base): PCF85263A, RV-3028-C7
Evidence to capture: current baseline + spike rate + wake_count_day/report_count_day + reset/brownout counters.

Symptom D — Self-test fails (or health is unstable)

Fast triage:

  • Confirm the self-test state machine runs: current waveform should show a short, repeatable activity signature during test.
  • Check fail_reason and the response window fields (example: response amplitude in-range vs timeout).
  • Separate “injection did not happen” from “injection happened but response not measured” from “window/threshold wrong.”

Discriminators:

  • No injection activity: no current signature + fail → scheduler/power domain gating or injection path not enabled.
  • Response too low: injection occurs but response amplitude below window → sensor/AFE path issue or obstruction/coupling.
  • Window mismatch: response exists but verdict fails → capture timing or acceptance window incorrect; baseline not frozen.

First fixes:

  • Align stimulus and capture: ensure injection start is inside the measured window; record policy_crc for traceability.
  • Make verdict explainable: store resp_level and resp_in_window with each self-test run.
  • Gate detection on stable health: when health is unstable, force conservative mode and request service action instead of tuning thresholds blindly.
Example MPNs (to implement reliable injection/health paths)
  • Analog switches for injection routing: TS5A23157, ADG884
  • DAC / waveform generation (if needed): MCP4725A0T-E/CH, DAC60501
  • Accelerometers with built-in self-test: ADXL362, LIS2DW12
  • Mic front-end building blocks: TLV9062, OPA344, TLV3691
Evidence to capture: self-test current signature + selftest_pass + fail_reason + response amplitude fields.
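The discriminators for Symptom D can be encoded in the verdict itself, so a fail is never ambiguous. A minimal sketch; the level limit, acceptance window, and RESP_OUT_OF_WINDOW code are illustrative assumptions:

```python
# Sketch: self-test verdict separating "response too low" from "window mismatch".
# min_level, the (5, 50) ms window, and RESP_OUT_OF_WINDOW are assumptions.
def selftest_verdict(resp_level, resp_t_ms, min_level=0.2, window=(5.0, 50.0)):
    """Classify one injection run into pass / low-response / window-mismatch."""
    resp_in_window = window[0] <= resp_t_ms <= window[1]
    if resp_level < min_level:
        return {"selftest_pass": 0, "fail_reason": "MIC_RESP_LOW",
                "resp_in_window": resp_in_window}
    if not resp_in_window:
        return {"selftest_pass": 0, "fail_reason": "RESP_OUT_OF_WINDOW",
                "resp_in_window": False}
    return {"selftest_pass": 1, "fail_reason": None, "resp_in_window": True}
```

Storing resp_level and resp_in_window alongside the verdict is what turns a bare fail bit into an actionable service record.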

On-site “minimum evidence pack” (copy into service tickets)

  • Identity: policy_ver, policy_crc, profile_id, mode_id
  • Event stats: wake_count_day, event_count_day, report_count_day
  • Decision evidence: score_mic, score_accel, score_fused, confidence, reject_reason
  • Signal health: clip_count, noise_floor_mic, vib_baseline_rms
  • Power & stability: current waveform snapshot, reset_reason, brownout_count, uvlo_events
  • Self-test: selftest_pass, fail_reason, response amplitude/window fields
Tip: attach 1–3 representative cases using input_snippet_hash so tuning can replay the exact evidence chain.
Figure F11. Field debug decision tree: symptom → evidence → isolate → first fix. Decision tree diagram for false alarms, missed detections, battery drain, and self-test failures, driven by two universal measurements (current waveform and feature/reject logs), with branches leading to isolate and first-fix actions.
Figure F11. A field-debug playbook should be evidence-first: current waveform + feature/reject logs drive fast isolation and high win-rate first fixes.

H2-12. FAQs ×12

Each answer stays inside this page’s evidence chain: current waveform + feature/reject logs + policy/profile IDs. Every item maps back to chapters for deeper action.
False alarms only at night—noise floor drift or threshold strategy issue?

Short answer: Prove whether night noise rises, or policy lacks a night-aware hysteresis strategy.

  • Measure: noise_floor_mic night vs day trend.
  • Measure: reject_reason histogram + wake_count_day.
  • First fix: Add night profile + trigger/confirm/release + cooldown.

Maps to: H2-7 / H2-9 / H2-11

Self-test passes but real events are missed—what’s the next discriminator?

Short answer: Decide whether sensors “didn’t see” the event, or saw it and rejected in fusion.

  • Measure: score_mic vs score_accel per window.
  • Measure: profile_id/mode_id + reject_reason.
  • First fix: Fix window alignment or fusion weights before thresholds.

Maps to: H2-8 / H2-11

Why do two installations behave differently with the same firmware?

Short answer: Different mounting changes coupling and baselines; one profile cannot fit every housing.

  • Measure: vib_baseline_rms and trigger rate per site.
  • Measure: active profile_id + policy_crc.
  • First fix: Bind thresholds/filters to installation profile and re-baseline.

Maps to: H2-5 / H2-7

Mic waveform clips during events—how does that break feature extraction?

Short answer: Clipping distorts band-energy ratios and event order, corrupting scores and decisions.

  • Measure: clip_count + ADC headroom settings.
  • Measure: score shifts in score_mic/reject_reason.
  • First fix: Reduce gain/limit; gate clipped frames (reject or down-weight).

Maps to: H2-4 / H2-6

Shock triggers but glass-break doesn’t—mounting or acoustic path?

Short answer: If accel fires but mic stays weak, the acoustic path is blocked, mis-tuned, or gated.

  • Measure: score_accel high but score_mic low cases.
  • Measure: noise_floor_mic + mic gain/band-pass config.
  • First fix: Verify mic port/bias/gain and shatter window before re-mounting.

Maps to: H2-3 / H2-5

How to tune thresholds without increasing false alarms?

Short answer: Keep trigger conservative; improve confirm logic with hysteresis and reject rules using a test matrix.

  • Measure: FAR/TPR on validation set, not one room.
  • Measure: reject_reason breakdown for error modes.
  • First fix: Separate trigger/confirm/release + cooldown, then re-validate.

Maps to: H2-7 / H2-10

Battery life is short—what 3 counters prove the root cause?

Short answer: Use counters to prove wake storms, expensive reporting, or reboot loops are dominating energy.

  • Measure: wake_count_day and report_count_day.
  • Measure: brownout_count / reset_reason frequency.
  • First fix: Reduce wakes, batch logs, cap retries, stop reset loops.

Maps to: H2-9 / H2-11

Wind/rain causes alarms—what rejection features help most?

Short answer: Reject sustained noise/vibration patterns and require the correct impact→shatter sequence.

  • Measure: noise_floor_mic + envelope duration stats.
  • Measure: vib_baseline_rms + reject_reason distribution.
  • First fix: Add environment gating + sequence checks + cooldown.

Maps to: H2-6 / H2-7

Can accelerometer-only be enough? What do you lose?

Short answer: It can catch impacts, but loses acoustic “shatter” evidence and becomes highly mounting-sensitive.

  • Measure: FAR across housings using vib_baseline_rms.
  • Measure: missed glass-break cases where mic would differentiate.
  • First fix: If accel-only, enforce strict per-profile thresholds and confirm windows.

Maps to: H2-2 / H2-5

What’s the safest self-test interval for low power vs integrity?

Short answer: Use layered self-test: frequent lightweight checks, infrequent full injection based on risk.

  • Measure: self-test energy per run + runs/day.
  • Measure: selftest_pass trend + fail_reason.
  • First fix: Lengthen interval if stable; shorten under extremes/tamper signals.

Maps to: H2-8 / H2-9

CRC ok but behavior changed after update—threshold table or feature weights?

Short answer: CRC may validate a table, but the active profile/mode or weight set can still change.

  • Measure: policy_ver/policy_crc/profile_id/mode_id.
  • Measure: score + reject_reason distribution pre/post update.
  • First fix: Pin active IDs, log them on boot, reproduce with snippet hash.

Maps to: H2-6 / H2-7 / H2-11

How to log events for audit without leaking power budget?

Short answer: Use tiered logging: compact records always, raw snippets only sampled or on-demand.

  • Measure: bytes/day + write time + current spikes.
  • Measure: report_count_day and retry behavior.
  • First fix: Batch writes, cap reporting, keep timestamp + snippet hash for audit.

Maps to: H2-9 / H2-10