Fingerprint Access & Time Attendance Hardware
Core idea: A reliable fingerprint access terminal is earned through evidence, not guesses: stabilize the capture chain (sensor/AFE/power), then control scoring (FAR/FRR), and finally lock templates and keys inside a trusted domain so that updates and board swaps cannot break identity. This page turns every common symptom into two measurements and a first fix, so field issues can be reproduced, isolated, and closed quickly.
H2-1. Definition & Boundary (What this page owns)
Core statement: A fingerprint access / time-attendance terminal is a device-local pipeline that captures a fingerprint signal, produces a match decision using encrypted templates, and exposes that decision through local UI and ports—without depending on cloud policy engines or building-wide panel logic.
This page owns (deep coverage):
- Capture chain: sensor modality → sensing matrix → AFE → ADC/quality gating (noise, saturation, baseline drift, ESD survivability).
- Template security chain: enrollment/verification data objects (raw/feature/template/score), template encryption, secure element (SE) storage and anti-clone boundaries.
- Terminal outputs: touch display/HMI behavior, BLE/Wi-Fi as a local transport, and port-level interfaces (e.g., RS-485/OSDP/Wiegand) as an output of the decision (not a building system design).
This page does NOT cover (explicit exclusions to prevent scope creep):
- Access Control Panel architecture, wiring topology, zone/door policies, or multi-door scheduling.
- Smart door lock motor/solenoid drive, latch mechanics, or full lock power stages.
- Face/depth/IR pipelines, cameras, PTZ/thermal/starlight/ANPR imaging blocks.
- NVR/VMS recording platforms, video integrity/compliance systems, or centralized audit services.
- PoE switch/PSE, fiber panels, or network infrastructure design (only terminal power/ports are in-scope).
Evidence-based acceptance (how to validate the writing and the design):
- Every claim must map to at least one measurable artifact: TP waveforms (AFE/output rails), score distributions (FAR/FRR), quality codes, or device-local logs.
- Every troubleshooting statement must specify a discriminator: a measurement or log field that separates two likely root causes.
- Every interface mention stays at port level (electrical timing, isolation, ESD path), avoiding system wiring/policy design.
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F1 (System Boundary)
H2-2. System Architecture: From Finger to Decision (End-to-end data flow)
Why this architecture matters: Most real failures (high rejects, false accepts, “works on one board only”, firmware update drift) occur when data objects (raw/feature/template/score) cross a boundary without a measurable gate. This section defines a single evidence-driven flow so every later chapter can “fall back” to the same chain.
A. Two canonical flows (Enroll vs Verify)
- Enroll (template creation): capture → quality gate → preprocess → feature extraction → template build + version tag → encrypt + store (SE/secure storage).
- Verify (decision): capture → quality gate → preprocess + feature → secure template fetch (or match-in-SE) → score + threshold → decision + event log.
B. Four data objects (and the “allowed-to-store” rule)
- RAW (sensor frames): transient only; long-term storage is a security and privacy risk.
- FEAT (intermediate features): transient; only permitted in controlled debug builds with explicit erase policy.
- TPL (template): the only object designed for long-term retention, and it must be encrypted + bound to device identity.
- SCORE (match score / decision): allowed for device-local audit and field debug; must not contain reversible biometric data.
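The "allowed-to-store" rule above can be encoded as a single device-local policy table so every storage request is checked in one place. This is an illustrative sketch only; the `PERSISTENCE_POLICY` table and `can_persist` helper are hypothetical names, not a vendor API.

```python
# Sketch: encode the "allowed-to-store" rule for the four data objects.
# Object names follow the text (RAW/FEAT/TPL/SCORE); the policy table and
# can_persist() helper are illustrative, not a vendor API.
PERSISTENCE_POLICY = {
    "RAW":   {"persist": False, "reason": "transient only; privacy/security risk"},
    "FEAT":  {"persist": False, "reason": "transient; controlled debug builds only, with erase policy"},
    "TPL":   {"persist": True,  "reason": "long-term, but must be encrypted + device-bound"},
    "SCORE": {"persist": True,  "reason": "device-local audit only; must not be reversible"},
}

def can_persist(obj_type: str, encrypted: bool = False, device_bound: bool = False) -> bool:
    """Return True only if this object type may be written to long-term storage."""
    rule = PERSISTENCE_POLICY.get(obj_type)
    if rule is None or not rule["persist"]:
        return False
    if obj_type == "TPL":
        # Templates are only storable as encrypted, device-bound blobs.
        return encrypted and device_bound
    return True
```

A check like this makes the storage rule testable in firmware CI instead of living only in a design document.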
C. Three boundaries (each needs a measurable gate)
- Analog → Digital boundary: sensor/AFE/ADC quality must be gated by measurable indicators (noise RMS, saturation flags, baseline drift).
- Normal → Trusted boundary: keys/templates must remain inside SE or a trusted execution boundary; if templates ever enter normal memory, the design must prove wipe + access control.
- Device → External boundary: external interfaces should see only decision + minimal event (not templates, not raw frames).
D. Minimal evidence set (logs that make failures diagnosable)
These fields are intentionally “small but sufficient” so they can be preserved even in constrained devices and still support root-cause isolation.
- Identity tags: `enroll_id`, `template_version`, `algo_version` (avoid plain user identifiers; use hashes if needed).
- Quality & capture: `capture_quality_code`, `retry_count`, `sensor_status` (noise/saturation/ESD flags).
- Match: `match_score`, `threshold`, `decision`.
- Security status: `se_attest_ok`, `rollback_counter`, `key_state` (device-local only).
- Power/reset context: `last_reset_reason`, `brownout_count`, `rf_tx_peak_event` (if applicable).
E. Fast isolation mapping (log → first 2 measurements)
- Low quality codes + high retries: measure AFE output noise (TP_AFE_OUT) and ADC reference stability (TP_VREF) during a capture burst.
- Score drift after firmware update: compare template_version/algo_version and check whether enrollment templates were regenerated or only thresholds changed.
- Brownout/reset spikes during match: probe 3V3 rail droop (TP_3V3) and RF burst current (TP_IPEAK) to separate power integrity from RF coexistence issues.
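The mapping above can be kept machine-readable so triage always starts the same way. A minimal sketch, with pattern keys and the `triage` helper as hypothetical names (test points follow the text):

```python
# Sketch: map the log patterns from section E to their first two measurements.
# Pattern keys and the triage() helper are illustrative; test points (TP_*)
# follow the text.
TRIAGE = {
    "low_quality_high_retries": ("TP_AFE_OUT output noise during a capture burst",
                                 "TP_VREF ADC reference stability during the same burst"),
    "score_drift_after_update": ("compare template_version / algo_version",
                                 "check whether templates were regenerated or only thresholds changed"),
    "brownout_during_match":    ("TP_3V3 rail droop",
                                 "TP_IPEAK RF burst current"),
}

def triage(pattern: str) -> tuple:
    """Return the first two measurements for a recognized log pattern."""
    return TRIAGE.get(pattern, ("export the full device-local log",
                                "reproduce with timestamps aligned"))
```

Unrecognized patterns fall back to "collect more evidence" rather than guessing.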
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F2 (End-to-End Data Flow)
H2-3. Sensor Modality & Selection (Why wet/dry/glove behaves differently)
Selection principle: Stability across wet fingers, dry cracks, low temperature, or gloves is primarily a SNR + failure-mode problem. Modality choice and matrix sizing must be justified by measurable outcomes: quality codes, retry counts, and FAR/FRR under controlled finger-condition sweeps.
A. Use one common comparison coordinate (signal → noise → failure)
- Signal source: what physical contrast is measured (capacitance / reflectance / acoustic impedance).
- Noise sources: what degrades that contrast (contact variation, ambient light, coupling/temperature/packaging).
- Failure modes: how the degradation presents in logs (low quality, saturation, drift, high retries).
B. Modality notes (engineering-level, not marketing)
- Capacitive: strong when skin contact is consistent; weak when dielectric/contact changes dominate (dry cracks, gloves, thick cover). Watch for baseline drift and contact-driven SNR collapse.
- Optical: relies on illumination uniformity and surface cleanliness; weak when ambient light or contamination reduces ridge/valley contrast. Watch for low-contrast frames and stray light artifacts.
- Ultrasonic: can tolerate certain surface conditions better; sensitive to coupling stack, temperature drift, and mechanical packaging consistency. Watch for coupling-related attenuation and temperature-dependent offsets.
C. Matrix size / resolution / frame rate → FAR/FRR behavior
- Matrix size: increases usable features and tolerance to partial contact/offset; typically reduces FRR in marginal fingers but may increase cost/power/scan time.
- Resolution: improves separability until the chain becomes noise-limited; beyond that point, higher resolution mostly amplifies noise and processing load rather than adding reliable features.
- Frame rate & capture strategy: higher rate helps motion/placement variability (shorter time-to-decision), but can stress AFE settling and power rails; multi-frame fusion can lower FRR without raising FAR if the quality gate is strict.
D. Cover material, ESD, contamination constraints (what must be specified)
- Cover stack: thickness/material affects coupling (capacitive/ultrasonic) and surface optics (optical). Treat as a design input, not an afterthought.
- ESD return path: define where discharge energy goes; if it enters the AFE reference or scan lines, expect post-ESD instability even when lab tests “pass”.
- Contamination: distinguish instant failures (low contrast) from slow drift (baseline shifts). Logs must separate these with quality codes and trend counters.
E. Evidence deliverable: “failure-rate matrix” template (device-local)
Each cell should be populated from repeated attempts per condition (not single trials). Use the same enrollment set and threshold policy during the sweep.
| Modality / Config | Wet | Dry-crack | Oily | Low-temp | Glove | Dust/dirty |
|---|---|---|---|---|---|---|
| Capacitive (M×N, dpi, fps) | FRR + retries + q-code | FRR + retries + q-code | FRR + retries + q-code | FRR + drift + q-code | FRR (often high) | FRR + drift |
| Optical (illum, sensor, fps) | contrast drop | OK / variable | smear artifacts | illum shift | blocked | contrast drop |
| Ultrasonic (stack, fps) | coupling check | usually stable | stable / variable | temp offset | depends on glove | stable / variable |
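Populating one row of the matrix is mechanical once attempts are logged per condition. A sketch, assuming an illustrative record shape (`condition`, `accepted`, `retries`, `quality_code`); substitute the device's own log fields:

```python
# Sketch: populate one row of the failure-rate matrix from repeated genuine
# attempts per condition. Record shape is illustrative; per the text, every
# cell must come from repeated attempts, never single trials.
from collections import defaultdict

def failure_matrix_row(attempts):
    """attempts: list of dicts with condition/accepted/retries/quality_code."""
    by_cond = defaultdict(list)
    for a in attempts:
        by_cond[a["condition"]].append(a)
    row = {}
    for cond, trials in by_cond.items():
        n = len(trials)
        rejects = sum(1 for t in trials if not t["accepted"])
        row[cond] = {
            "FRR": rejects / n,                                   # genuine reject rate
            "avg_retries": sum(t["retries"] for t in trials) / n,
            "min_q": min(t["quality_code"] for t in trials),
            "n": n,                                               # keep the sample size visible
        }
    return row
```

Keeping `n` in each cell makes it obvious when a cell is under-sampled and should not be trusted yet.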
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F3 (Modality Comparison)
H2-4. Fingerprint AFE Deep Dive (Excite, amplify, sample, noise budget)
Goal: map “false accepts / high rejects / all-black or all-white frames / slow drift / post-ESD instability” to measurable circuit evidence. Each symptom must be isolatable using a minimal set of test points and counters.
A. Excitation & scan timing (what must be stable)
- Excitation waveform must be repeatable across rows/columns; jitter or amplitude sag directly lowers contrast and inflates retries.
- Scan-to-sample alignment must ensure AFE settling before ADC capture; misalignment commonly appears as stripes/texture artifacts.
- Evidence: correlate capture quality drops with `TP_EXCITE` amplitude/phase drift during a burst.
B. Input protection & ESD return path (why “passes lab” still fails in field)
- ESD energy path must avoid injecting into AFE reference, scan lines, or bias networks; otherwise expect long-tail instability and intermittent lockups.
- Protection capacitance is not “free”: added C can reduce effective signal contrast (especially for capacitive sensing).
- Evidence: compare post-ESD capture: baseline shift, increased low-frequency noise, and abnormal I²C/SPI error counts (if the sensor is digital after AFE).
C. Noise budget (the four buckets that explain most “noisy images”)
- kT/C & bandwidth noise: set by sampling capacitance and front-end bandwidth; appears as broadband grain and raises the quality gate threshold.
- 1/f & slow drift: shows up as baseline wander and condition-dependent FRR (often worse at low temperature or after contamination).
- Coupled noise: RF TX bursts, backlight PWM, touch scan, DC/DC switching; diagnose via correlation (noise grows only when aggressor toggles).
- Reference movement: VREF ripple or bias drift can masquerade as “sensor problem”; always check `TP_VREF` alongside AFE output.
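The kT/C bucket is easy to sanity-check numerically before blaming the sensor: the RMS noise sampled onto a capacitance C is sqrt(kT/C). A small sketch (capacitance values are illustrative):

```python
# Sketch: RMS sampling (kT/C) noise for an assumed front-end capacitance.
# Useful as a floor check: if the expected ridge/valley contrast at the AFE
# input is not comfortably above this, the chain is noise-limited by design.
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def ktc_noise_rms(c_farads: float, temp_k: float = 300.0) -> float:
    """RMS voltage noise from sampling onto capacitance C: sqrt(kT/C)."""
    return math.sqrt(K_BOLTZMANN * temp_k / c_farads)

# Example: a 1 pF sampling cap at 300 K contributes roughly 64 uV RMS.
noise_1pF = ktc_noise_rms(1e-12)
```

Larger sampling capacitance lowers kT/C noise but trades off scan speed and, for capacitive sensing, signal contrast, so the value is a budget decision, not a free knob.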
D. ADC dynamic range, saturation, and calibration boundaries
- Saturation must be detectable (counter/flag). “All-white/all-black” frames typically correspond to saturated states or reference collapse.
- AGC / offset correction can stabilize amplitude and centering, but cannot recover lost information when the chain is noise-limited or clipped.
- Evidence: track saturation counter, VREF stability, and AGC/offset trajectories across temperature and finger conditions.
E. Minimal test-point set (TP) for field isolation
- TP_EXCITE: excitation amplitude and timing stability.
- TP_AFE_OUT: noise RMS, drift trend, clipping/rail hits.
- TP_ADC_IN (if accessible): verify front-end headroom and settling.
- TP_VREF: ripple, load steps, and post-ESD bias shifts.
F. Fast mapping (symptom → discriminator → first fix)
- High FRR only on wet fingers: discriminator = quality code drop + AFE contrast collapse; first fix = tighten quality gate + verify excitation stability and cover stack behavior.
- “All black/white” intermittently: discriminator = saturation counter spikes + VREF event; first fix = increase headroom / improve reference decoupling / confirm scan timing.
- Drift over hours/days: discriminator = baseline trend in TP_AFE_OUT vs temperature; first fix = bias stabilization + slow recalibration policy (without touching templates).
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F4 (AFE Chain & Test Points)
H2-5. Image/Feature Pipeline & Matching (Score distribution, thresholds)
Why scores drift: match_score stability depends on (1) capture quality, (2) pre-processing staying within its “repair boundary”, and (3) a threshold policy aligned with the genuine vs impostor score distributions under real finger conditions. This chapter keeps the discussion device-local: quality gates, score histograms, reject reasons, and retry patterns.
A. End-to-end chain (capture → quality gate → feature → score → decision)
- Capture quality gate first: if the input is noise-limited, no amount of post-processing can reliably recover discriminative detail.
- Feature stability second: the same user should produce a tight “genuine” distribution when quality is within spec.
- Threshold last: decision threshold must sit between the genuine and impostor distributions; a single fixed threshold often fails across wet/dry/low-temp.
B. Pre-processing boundary (what it may fix vs what it must not hide)
- Permitted fixes: mild denoise, local equalization, fixed-pattern removal, sparse bad-pixel mapping, gentle ridge enhancement.
- Do not mask hardware faults: AFE saturation, reference collapse, scan timing misalignment, large-area contamination, baseline drift after ESD.
- Practical rule: if processing “improves” the image but `reject_reason` and `retry_count` remain abnormal, the root cause is upstream (H2-3/H2-4 evidence).
C. Match score distributions and FAR/FRR trade-off (how to read the histogram)
- Genuine scores: same user vs enrolled template. Expect a cluster at higher scores when quality is good; drift/widening indicates unstable capture.
- Impostor scores: different users. Expect a low-score cluster; overlap with genuine scores is what drives FAR.
- Threshold position: raising threshold typically reduces FAR but increases FRR; lowering threshold does the opposite. The correct threshold is the one that meets the required FAR while keeping FRR acceptable under target conditions.
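The trade-off described above can be read directly off logged score lists. A minimal sketch, assuming device-local genuine/impostor score logs (function names are illustrative):

```python
# Sketch: compute FAR/FRR at a threshold from device-local score logs,
# then pick the lowest threshold (i.e. least FRR) whose measured FAR
# meets the target. Inputs: genuine = same-user scores, impostor =
# different-user scores, as defined in the text.
def far_frr(genuine, impostor, threshold):
    far = sum(1 for s in impostor if s >= threshold) / len(impostor)
    frr = sum(1 for s in genuine if s < threshold) / len(genuine)
    return far, frr

def pick_threshold(genuine, impostor, far_target=0.001):
    """Lowest candidate threshold whose measured FAR meets the target."""
    for t in sorted(set(genuine) | set(impostor)):
        far, _ = far_frr(genuine, impostor, t)
        if far <= far_target:
            return t
    return None  # no threshold meets the FAR target on this data
```

Running this per finger condition (wet/dry/low-temp) makes the "single fixed threshold often fails" point concrete: the picked thresholds will differ.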
D. Evidence: “reject reasons” and “retry patterns” (make every failure explainable)
Use short enumerations so every reject maps to a measurable cause and a chapter evidence chain.
| reject_reason | Meaning (device-local) | Primary evidence | First isolation step |
|---|---|---|---|
| quality_fail | capture quality below gate | capture_quality_code, noise_RMS | check TP_AFE_OUT + aggressor correlation |
| saturation | ADC/headroom clipped | saturation_counter, histogram pinned | check TP_VREF + TP_ADC_IN headroom |
| timeout | too many retries or slow settle | retry_count, capture_time_ms | verify scan timing + rail droop events |
| liveness_fail | fingerprint-only lightweight anti-spoof | liveness_flag, false reject patterns | check condition sensitivity (wet/low-temp) |
| template_mismatch | quality OK but score below threshold | match_score histogram, threshold | compare genuine drift vs impostor overlap |
E. Lightweight liveness (engineering boundary)
- Scope: fingerprint-only, device-local checks that add friction to static spoofing without requiring heavy models or cross-modal fusion.
- Integration rule: liveness is an additional reject reason, not a replacement for template encryption or anti-replay.
- Safety rule: if liveness_fail spikes under wet/low-temp, treat it as a tuning/quality-gate issue to avoid usability collapse.
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F5 (Score Histogram & Threshold)
H2-6. Template Security Model (Encryption, key ownership, anti-clone)
Security objective: templates must not be readable, exportable, clonable, or replayable. The enforceable boundary is device-local: secure boot checkpoints, key ownership in a Secure Element (SE), wrapping, attestation, counters/nonces, and auditable template versioning.
A. Template export policy (the first question auditors ask)
- Default stance: template export is disabled. If export exists (service/backup), it must require explicit authorization and produce a wrapped blob only.
- No plaintext on bus: templates must not appear as plaintext over debug ports or field interfaces.
- Evidence: export path audit record + template blob format (encrypted + versioned + integrity protected).
B. Key ladder (device-unique → wrap key → session key)
- Device-unique key (DUK): bound to the device/SE; not readable; used as the root for wrapping.
- Wrap key: encrypts (wraps) the template blob for storage; supports rotation without re-enrolling users if designed with a re-wrap mechanism in SE.
- Session key: protects per-transaction operations (e.g., MAC/sign) to reduce long-term key exposure and enable anti-replay.
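The ladder structure can be sketched in a few lines. This illustrates only the *shape* (DUK → wrap key → binding); a real design derives and uses these keys inside the SE with a proper KDF and AEAD cipher, and the HMAC-SHA256 chain and function names below are assumptions for illustration:

```python
# Sketch of the key-ladder STRUCTURE only: device-unique key -> wrap key ->
# wrapped template blob. Real designs keep derivation and wrap/unwrap inside
# the SE and use a standard KDF + AEAD; this HMAC chain just demonstrates
# device binding (anti-clone).
import hmac, hashlib

def derive(parent: bytes, label: bytes) -> bytes:
    """One ladder step: child key = HMAC(parent, label)."""
    return hmac.new(parent, label, hashlib.sha256).digest()

def wrap_template(duk: bytes, template: bytes) -> bytes:
    wrap_key = derive(duk, b"template-wrap-v1")
    # Stand-in for AEAD: keyed integrity tag prepended to the payload.
    tag = hmac.new(wrap_key, template, hashlib.sha256).digest()
    return tag + template  # a real blob is ciphertext + tag + version metadata

def unwrap_ok(duk: bytes, blob: bytes) -> bool:
    wrap_key = derive(duk, b"template-wrap-v1")
    tag, template = blob[:32], blob[32:]
    return hmac.compare_digest(tag, hmac.new(wrap_key, template, hashlib.sha256).digest())
```

Anti-clone falls out of the structure: the same blob fails `unwrap_ok` under another device's DUK, so copying flash between boards yields unusable templates.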
C. Anti-clone (why copying flash should not work)
- Binding rule: a template blob encrypted under device-owned keys is useless on another device.
- Attestation rule: template operations should require SE attestation (device state OK, rollback counter OK) before unwrap/match.
- Evidence: attestation result field logged during enroll/verify or during key use (pass/fail + reason).
D. Anti-replay (counter/nonce/signature — device-local only)
- Nonce/challenge: each verification session uses a fresh nonce; responses are authenticated (MAC/signature) using a session key.
- Monotonic counter: stored in SE or protected storage; prevents rolling back to older states or replaying older authorization contexts.
- Evidence: counter increments + replay detection flag; secure-boot checkpoints ensure firmware/state not downgraded.
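The nonce and counter rules combine into one acceptance check. A sketch with illustrative field names (the counter itself would live in the SE or protected storage, per section D):

```python
# Sketch: anti-replay acceptance combining a fresh nonce with a monotonic
# counter. Names are illustrative; reject reasons map onto loggable codes.
def accept_session(resp_nonce: bytes, issued_nonce: bytes,
                   resp_counter: int, stored_counter: int):
    """Return (accepted, reason, new_counter)."""
    if resp_nonce != issued_nonce:
        # Stale or foreign nonce: classic replay of an old response.
        return False, "stale_or_wrong_nonce", stored_counter
    if resp_counter <= stored_counter:
        # Counter did not advance: rollback or replayed authorization context.
        return False, "rollback_or_replay", stored_counter
    return True, "ok", resp_counter  # counter only ever moves forward
```

Logging the reason string alongside the counter values gives exactly the "counter increments + replay detection flag" evidence the text requires.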
E. Template integrity and versioning (tamper visibility)
- Blob metadata: template_version + algo_version + policy_version + integrity tag (CRC/MAC).
- Rotation & erase: support secure erase (and optional rotation) with auditable event logs.
- Evidence: template CRC/version checks and failure codes; erase events stored in an append-only local log (device-local).
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F6 (Template Lifecycle & Boundary)
H2-7. Secure Element / TEE Integration (Make SE non-cosmetic)
Goal: prevent “keys/templates dumped” and “board swap breaks rollback protection” by enforcing a hard trust boundary: keys never leave SE/TEE, template wrap/unwrap happens only in trusted code, and monotonic counters are stored in non-rollbackable protected storage.
A. Responsibility split (SE vs MCU) — the boundary must be testable
- SE/TEE MUST own: device identity, device-unique key root, template wrap/unwrap, attestation, monotonic counter.
- MCU MUST own: enroll/verify state machine, policy (when to allow operations), UI prompts, retries/lockout, local logging.
- Red-line rule: plaintext templates and long-term keys must not appear in normal RAM, debug ports, or field interfaces.
B. Command channel (I2C/SPI) and the “no-secret-on-the-bus” rule
- Transport: MCU issues signed/authorized commands; SE returns status + wrapped blobs only.
- Data types allowed outside SE: template_blob (wrapped), attestation_result, policy/version metadata, failure codes.
- Data types forbidden outside SE: device private keys, plaintext template, raw key ladder material.
C. Monotonic counter placement (rollback protection that survives power loss and cloning)
- Where it lives: inside SE or protected non-rollbackable storage; never in MCU flash where images can be rolled back.
- What it gates: firmware accept, key usage, template unwrap, and any privileged export/maintenance operation.
- What to log: expected_counter vs observed_counter, rollback_detected flag, deny_reason code.
D. Board swap / field replacement (rebind policy that does not brick devices)
- When MCU changes but SE stays: require secure rebind (attestation + policy match + counter continuity) before enabling unwrap/match.
- When SE changes: device identity changes; templates wrapped for the old SE must become unusable by design (re-enroll required).
- Evidence metric: successful recovery rate after controlled board swap tests and clear failure codes when recovery is rejected.
E. Debug states (dev → production lock) — enforceable and auditable
- State model: define dev / test / production. Production state disables sensitive reads and requires signed updates.
- Lock proof: readback of lock state (fuse/secure state register) is recorded in manufacturing log.
- Field service boundary: allow health diagnostics and signed firmware update; disallow any template/key extraction path.
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F7 (MCU/SE Boundary)
H2-8. Touch Display & HMI (Touch noise, ESD black screen, coupling paths)
Goal: eliminate “ghost touch”, “I2C lockups”, and “ESD black screen” by separating HMI into three domains (touch, display, backlight), then controlling power/ground reference and ESD return paths. Every failure must map to counters and test points.
A. Split HMI into 3 domains (touch / display / backlight)
- Touch domain: touch controller + I2C/SPI + INT/RESET. Sensitive to ground bounce and conducted noise.
- Display domain: panel interface (DSI/SPI/RGB) + reset/enable. Sensitive to ESD-induced latchup and I/O damage.
- Backlight domain: boost/constant-current driver + PWM dimming + LED strings. Primary noise injector if not contained.
B. Three coupling paths that explain most “random” HMI failures
- Power coupling: backlight switching ripple modulates HMI rails → touch baseline drifts → ghost touches.
- Interface coupling: ESD on I2C lines causes NACK/bus hang → touch freezes or spams interrupts.
- Ground/reference coupling: ESD return current takes the wrong path → reference shifts → display black / MCU reset.
C. Evidence counters to require in firmware logs (device-local)
- I2C health: NACK_count, bus_hang_count, recovery_count, max_stretch_time_us.
- Backlight health: led_current_mA, boost_ovp_flag, ocp_flag, dimming_freq_Hz.
- Touch noise: noise_metric, baseline_drift, ghost_rate_per_min, frame_drop_count.
D. Minimal 2-point isolation (fast triage without scope overload)
- Ghost touch: correlate `touch_noise_metric` with `backlight_pwm` and HMI rail ripple (TP_HMI_3V3).
- ESD black screen: distinguish “backlight still on” (TP_BL_I) vs “panel interface dead” (reset/enable/IRQ flags).
- I2C lock: detect SCL stuck low; record recovery attempts and whether ESD event preceded it.
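The ghost-touch correlation test reduces to comparing the touch noise metric with the aggressor on vs off. A sketch (the function name and the 1.5x ratio threshold are illustrative; counters mirror section C):

```python
# Sketch: the "aggressor correlation" check for ghost touch. Compare the
# touch noise metric sample-by-sample against the backlight state; a large
# on/off gap implicates backlight-domain coupling. Ratio threshold is an
# illustrative starting point, not a spec value.
def backlight_coupling_suspected(noise_samples, backlight_on, ratio_threshold=1.5):
    on  = [n for n, bl in zip(noise_samples, backlight_on) if bl]
    off = [n for n, bl in zip(noise_samples, backlight_on) if not bl]
    if not on or not off:
        return False  # cannot discriminate without both aggressor states
    mean_on, mean_off = sum(on) / len(on), sum(off) / len(off)
    return mean_off > 0 and (mean_on / mean_off) > ratio_threshold
```

The same pattern works for any aggressor in section B: substitute RF Tx bursts or DC/DC switching state for the backlight flag.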
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F8 (HMI Domains)
H2-9. Connectivity & Interfaces (BLE/Wi-Fi + port-level access interfaces)
Goal: make “dropouts / slow provisioning / unstable links” explainable by hardware evidence (RF, power, antenna, coexistence) while keeping access interfaces strictly at port level (protection, isolation boundary, grounding, ESD localization).
A. Wireless evidence chain (RF / power / antenna / coexistence)
- Coexistence: Wi-Fi Tx bursts and scan/associate phases can disturb BLE through current spikes and in-band coupling.
- Power-to-RF coupling: RF_VDD droop during peak current → PER/retry increases → disconnect events.
- Antenna detune: enclosure/hand/metal shifts matching → RSSI variance rises and retry patterns become posture-dependent.
- EMI injectors: backlight switching and long interface cables can inject noise into RF ground/reference if boundaries are weak.
B. Required device-local metrics (no cloud dependency)
- RF link: `rssi_min`, `rssi_avg`, `rssi_std`, `retry_rate`, `per`, `disconnect_reason`.
- Provisioning: `assoc_time_ms`, `scan_time_ms`, `auth_fail_count` (reason codes only).
- Power correlation: `peak_current_mA`, `min_rf_vdd_mV`, `brownout_count` with aligned timestamps.
C. Port-level access interfaces (RS-485 / OSDP / Wiegand) — electrical only
- Protection chain: connector → TVS → common-mode choke/series R → transceiver → (optional) isolation barrier.
- Grounding rule: define a single reference and control return current paths; avoid ESD current crossing RF/HMI sensitive grounds.
- Isolation decision: use isolation when ground potential difference or surge environment can exceed transceiver common-mode limits.
- ESD localization: identify damage side by clamp behavior (TVS), leakage/short at connector pins, and transceiver thermal signature.
D. Minimal 2-point triage (fast, repeatable)
- Wireless dropout: measure `TP_RF_VDD` droop + log `retry_rate`/`disconnect_reason` at the same timestamp.
- Slow provisioning: compare `assoc_time_ms` against `peak_current_mA` (scan/associate power event).
- RS-485 instability: capture differential waveform at `TP_485_A/B` + check clamp/ESD stress around the connector.
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F9 (Wireless + Interfaces)
H2-10. Power Tree & Low-Power Operation (domains, transients, wake state machine)
Goal: solve “high standby current / reboot on unlock event / colder weather failures” by defining power domains (RF/MCU/Sensor/HMI), capturing inrush/UVLO/brownout evidence, and enforcing a low-power state machine with measurable wake sources.
A. Domain partition (each domain must have a switch strategy and a test point)
- RF domain: BLE/Wi-Fi module + RF LDO/DC-DC; sensitive to droop during Tx peaks.
- MCU domain: MCU/MPU + storage; brownout/reset reason must be logged.
- Sensor domain: fingerprint sensor/AFE; requires clean reference and controlled ramp.
- HMI domain: touch/display/backlight; major transient and noise injector if not isolated.
B. Transients and protection (inrush → UVLO → brownout)
- Inrush: simultaneous rail enable (RF + backlight + sensor) can exceed source impedance → voltage sag → reboot.
- UVLO behavior: define UVLO thresholds and holdoff; record `uvlo_flag` and `reset_reason`.
- Brownout counters: `brownout_count` must align with `event_id` (unlock, report, scan, etc.).
C. Low-power state machine (power domains per state)
- Deep sleep: MCU retention on; RF/sensor/HMI off.
- Idle standby: MCU on; sensor periodic or interrupt-armed; RF off.
- Verify: sensor + MCU on; HMI optional; RF off.
- Report/sync: RF short-duty on; measure Tx peak current and RF_VDD droop.
- Update/maintenance: RF on longer + storage writes; requires droop margin and rollback-safe policy.
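The state machine above is most useful when each state's domain expectations are explicit and testable. A sketch as a plain table (state and domain names follow section C; the on/off/retention labels are illustrative):

```python
# Sketch: the low-power state machine as an explicit per-domain table, so
# every state has a testable expectation. States/domains follow the text;
# labels ("retention", "armed", "burst", "optional") are illustrative.
STATE_DOMAINS = {
    "deep_sleep": {"MCU": "retention", "RF": "off",   "SENSOR": "off",   "HMI": "off"},
    "idle":       {"MCU": "on",        "RF": "off",   "SENSOR": "armed", "HMI": "off"},
    "verify":     {"MCU": "on",        "RF": "off",   "SENSOR": "on",    "HMI": "optional"},
    "report":     {"MCU": "on",        "RF": "burst", "SENSOR": "off",   "HMI": "off"},
    "update":     {"MCU": "on",        "RF": "on",    "SENSOR": "off",   "HMI": "optional"},
}

def domain_state(state: str, domain: str) -> str:
    return STATE_DOMAINS[state][domain]

def rf_active(state: str) -> bool:
    """RF should stay off during capture-critical states (verify)."""
    return STATE_DOMAINS[state]["RF"] != "off"
```

Encoding the table lets firmware assert, for example, that RF is never enabled while a capture is in flight, which is the coexistence rule behind several H2-9/H2-10 symptoms.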
D. Evidence set (scope + logs) to make failures repeatable
- Current waveforms: startup, verify, report, HMI on/off (capture peak and average).
- Reset telemetry: `reset_reason`, `brownout_count`, `min_sys_vdd_mV`.
- Wake statistics: `wake_source` histogram (finger touch / touch / GPIO / timer).
- Cold behavior: compare `min_sys_vdd` and `brownout_count` at low temperature vs room.
Security & Surveillance — Fingerprint Access / Time Attendance, Fig. F10 (Power Domains + State Machine)
H2-11. Validation & Field Debug Playbook (Symptom → Evidence → Isolate → Fix)
Purpose: a repeatable SOP that localizes failures using only a multimeter + oscilloscope + device-local logs. Each symptom is forced into: First 2 measurements → Discriminator → First fix. This prevents “guess-and-swap” troubleshooting and makes outcomes auditable.
Field kit (minimum)
- Multimeter + oscilloscope (≥100 MHz recommended for rail droop/edges)
- Current probe or a temporary shunt (for event peak current)
- ESD gun (if validation lab) or controlled ESD test point
- Device-local log export (UART/USB) with timestamps
Tip: time-align scope captures and logs using a shared timestamp_ms / event_id.
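The time-alignment tip can be automated offline: pair each scope event with log events inside a tolerance window. A sketch with an illustrative record shape and window (`tol_ms`):

```python
# Sketch: pair scope event timestamps with device log events via the shared
# timestamp_ms, within a tolerance window. Record shape and the 50 ms window
# are illustrative; tune tol_ms to the actual clock alignment quality.
def align_events(scope_ts_ms, log_events, tol_ms=50):
    """Return [(scope_ts, [matching log events]), ...]."""
    pairs = []
    for ts in scope_ts_ms:
        matches = [e for e in log_events if abs(e["timestamp_ms"] - ts) <= tol_ms]
        pairs.append((ts, matches))
    return pairs
```

Scope events with an empty match list are exactly the unexplained transients worth reproducing first.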
Non-negotiable log fields (device-local)
- `timestamp_ms`, `event_id` (verify/enroll/report/update)
- `match_score`, `reject_reason`, `retry_count`
- `sensor_status`, `bad_frame_count`, `saturation_count`
- `reset_reason`, `brownout_count`, `min_sys_vdd_mV`
- `rssi_avg`, `rssi_std`, `retry_rate`, `disconnect_reason`
- `i2c_nack_count`, `bus_hang_count` (for HMI/sensor buses)
S1 — High reject rate (FRR) / “slow or inconsistent matching”
First 2 measurements
- Log: capture `match_score` distribution + top `reject_reason` + `retry_count` for the same finger (10–30 trials).
- Scope: measure `TP_SENS_VDD` min/ripple during `event_id=verify`; correlate to `bad_frame_count`/`saturation_count`.
Discriminator
- If score shifts left but sensor rail is clean and frames are stable → pipeline/threshold/template issue (H2-5/H2-6).
- If score variance spikes with TP_SENS_VDD ripple/droop or rising `bad_frame_count` → AFE/power integrity issue (H2-4/H2-10).
First fix
- Stabilize sensor domain first: add/verify local decoupling, ensure clean reference, and gate noisy domains (HMI/RF) away during capture.
- Only after electrical stability is proven, tighten reject reasons/threshold policy based on score histogram (H2-5) without changing other subsystems.
- Low-noise LDO (sensor domain): TI `TPS7A20`, ADI `ADP150`
- Load switch (domain gating): TI `TPS22918`, onsemi `NCP45520`
- Voltage supervisor (clean reset behavior): TI `TPS3839`, Maxim `MAX16054`
S2 — False accept / “wrong person accepted” (FAR concern)
First 2 measurements
- Log: extract the accepted event’s `match_score`, `enroll_id`, `template_version`, and whether a replay/nonce/counter check was performed.
- Security status: record secure-boot and SE/TEE status flags (pass/fail only) + rollback/counter values at the time of accept.
Discriminator
- If “high score accepts” cluster around a specific `template_version` or migration step → template lifecycle / compatibility / corruption (H2-6).
- If issues appear after board swap or downgrade attempts → device binding / monotonic counter / rollback protection misplacement (H2-7).
First fix
- Enforce template non-export + wrap templates with device-unique keys inside a secure element, and require freshness (nonce/counter) on match sessions.
- Lock production debug and ensure rollback protection uses a monotonic counter stored in the trusted domain (SE/secure storage).
- Secure element (keys + attestation): Microchip `ATECC608B`, NXP `SE050`, ST `STSAFE-A110`
- MCU with secure boot features (example families): ST `STM32U5`, NXP `LPC55Sxx` (use built-in secure boot + external SE)
S3 — Wet/dirty finger failure / “works dry but fails wet”
First 2 measurements
- Log: compare `sensor_status` flags (wet/partial/low-contrast) + `bad_frame_count` between dry vs wet trials (same user).
- Scope: measure sensor capture chain stability (`TP_SENS_VDD` + `TP_AFE_OUT` noise RMS / saturation markers) during wet trials.
Discriminator
- If wet trials mainly show low contrast / partial contact with normal rails → modality/cover/glass contamination sensitivity (H2-3/H2-4).
- If wet trials trigger saturation / baseline drift → AFE excitation/protection/leakage paths or reference stability issue (H2-4).
First fix
- Harden the front-end against leakage and baseline drift: tighten ESD/leak paths, review excitation and guard strategies, and validate baseline recovery time after each scan.
- Use an explicit wet-condition reject reason and require a retry pattern that prevents repeated noisy frames from entering the feature pipeline.
- ESD diode array (sensor/bus lines): TI `TPD4E02B04`, Nexperia `PESD5V` series
- Analog switch / input protection building blocks (example): TI `TS5A23157` (use as needed for gating/protection)
S4 — ESD event causes hang / black screen / “touch stops responding”
First 2 measurements
- Log: capture `bus_hang_count`/`i2c_nack_count` and whether automatic bus recovery ran; note `reset_reason` if watchdog fired.
- Scope: measure `TP_HMI_3V3` stability and observe I²C/SPI line stuck-low behavior during/after ESD (if accessible).
Discriminator
- If rails are stable but the bus is stuck (SCL/SDA low) → interface upset + missing recovery path; ESD current likely returned through the wrong ground boundary (H2-8).
- If rail dips or resets coincide with the ESD hit → power/UVLO margin or protection clamp placement is insufficient (H2-10).
First fix
- Fix the ESD return path first: place clamp devices at the connector/entry, control the return to chassis/ground, and keep ESD current out of RF/sensor references.
- Add deterministic bus recovery + watchdog policy (reset the peripheral domain, not the full system when possible).
- ESD protection (I/O lines): TI `TPD2E2U06`/`TPD4E02B04`, Littelfuse `SP3012` series
- TVS for power entry (if outdoor / long cable): Littelfuse `SMBJ` series (select voltage per rail)
S5 — Wireless drops / unstable link / slow provisioning (BLE/Wi-Fi)
First 2 measurements
- Log: capture rssi_std, retry_rate/per, disconnect_reason, assoc_time_ms and align with timestamp_ms.
- Scope: measure TP_RF_VDD droop and peak current during scan/associate/Tx bursts (same time window as the disconnect).
Discriminator
- If disconnects align with RF_VDD droop and current peaks → RF domain margin issue (H2-9/H2-10).
- If rssi_std is high and failures are posture/enclosure dependent → antenna detune / keep-out violation (H2-9).
First fix
- Increase RF domain transient margin: shorter supply path, stronger local decoupling, and isolate noisy domains during provisioning bursts.
- Enforce antenna keep-out and re-check matching network placement; confirm improvements via reduced rssi_std and retry_rate.
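The discriminator for this symptom is a timestamp alignment question, which can be sketched directly. This assumes disconnect timestamps from the log and droop windows read off the scope capture, both in milliseconds; the window format is an assumption:

```python
# Evidence-alignment sketch: do disconnect timestamps fall inside measured
# TP_RF_VDD droop windows? Log keys follow this chapter; formats are examples.
def align_disconnects(disconnect_ts_ms, droop_windows_ms, guard_ms=5):
    """Return the fraction of disconnects landing inside (or within guard_ms
    of) a rail-droop window. Near 1.0 points at H2-9/H2-10 supply margin;
    near 0.0 with high rssi_std points at antenna detune instead."""
    if not disconnect_ts_ms:
        return 0.0
    hits = 0
    for t in disconnect_ts_ms:
        if any(start - guard_ms <= t <= end + guard_ms
               for start, end in droop_windows_ms):
            hits += 1
    return hits / len(disconnect_ts_ms)
```

Running this before and after the supply fix gives a number to put in the closure report instead of "seems better."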
- RF buck converter (high efficiency for bursts): TI TPS62840, TI TPS62133
- Low-noise RF LDO (post-reg): TI TPS7A02, ADI ADP150
- BLE/Wi-Fi module examples: u-blox NINA-W156, Espressif ESP32-WROOM-32E
S6 — Random reboot / “reboots during unlock or reporting” (brownout-like)
First 2 measurements
- Log: check reset_reason + brownout_count and the preceding event_id (verify/report/HMI on).
- Scope: capture TP_MCU_VDD min voltage and inrush current when domains switch on (RF + HMI + sensor).
Discriminator
- If resets occur at domain transitions with a clear droop → inrush/UVLO margin or sequencing issue (H2-10).
- If resets correlate with cable events/ESD/surge on ports → entry protection/ground boundary issue coupling into core rails (H2-9/H2-10).
First fix
- Apply domain sequencing and inrush limiting (stagger RF/HMI enable, add soft-start where needed) and confirm droop margin with repeat captures.
- Add/verify eFuse or hot-swap protection at the input domain to prevent collapses during transients.
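The staggered-enable idea can be sketched as a sequencing policy that checks rail margin between steps. `enable()` and `read_mcu_vdd()` are hypothetical platform hooks, and the UVLO/margin numbers are examples, not recommendations:

```python
# Staggered domain-enable sketch: turn domains on one at a time and verify
# TP_MCU_VDD recovered before enabling the next, so a brownout is caught
# before it happens. Hooks and thresholds are illustrative.
def sequence_domains(domains, enable, read_mcu_vdd,
                     uvlo_v=2.9, settle_margin_v=0.15):
    """Enable domains in order; stop (and name the culprit) instead of
    letting the rail collapse into UVLO."""
    powered = []
    for name in domains:                     # e.g. ["sensor", "hmi", "rf"]
        enable(name)                         # soft-start / load switch per domain
        vmin = read_mcu_vdd()                # min-hold measurement of TP_MCU_VDD
        if vmin < uvlo_v + settle_margin_v:  # too close to UVLO: abort here
            return powered, name             # culprit domain for the log
        powered.append(name)
    return powered, None
```

The returned culprit name is exactly the evidence ("resets occur at domain transitions with a clear droop") that the discriminator above needs.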
- eFuse / inrush limiter: TI TPS25942, TI TPS2595
- Surge/hot-swap style protection (DC input): ADI LTC4367 (use per input constraints)
- Load switch (domain sequencing): TI TPS22918, onsemi NCP45520
MPN cheat-sheet (common building blocks used across fixes)
These are example parts frequently used in access terminals; choose voltage/current/package equivalents per your BOM.
- RS-485 transceiver (port-level): TI SN65HVD72, Maxim MAX3485, ADI ADM3065E
- Digital isolator (if required): TI ISO7721, ADI ADuM1201
- TVS for data lines (ESD): Nexperia PESD families, Littelfuse SP families
- Common-mode choke (cable EMI control): TDK ACM series (select impedance per interface)
H2-12. FAQs ×12 (Evidence-Backed, No Scope Creep)
Each answer is constrained to three parts (Short answer, What to measure (2), First fix) and maps back to the chapter evidence chain.
1) Wet fingers cause high reject rate — AFE noise or threshold?
Short answer: Decide whether capture quality collapsed or the accept threshold is too strict for low-contrast wet ridges. What to measure (2): (1) TP_AFE_OUT noise RMS plus saturation_count during verify; (2) match_score histogram with top reject_reason. First fix: stabilize sensor rails (e.g., TPS7A20) and clamp leakage/ESD paths (TPD4E02B04) before tuning thresholds.
2) FAR increased after firmware update — template versioning or scoring drift?
Short answer: Most “FAR jumps” come from a template migration/version mismatch or a shifted score distribution after pipeline changes. What to measure (2): (1) correlation of accepts with template_version and enroll_id; (2) before/after match_score histogram and tail overlap. First fix: enforce versioned migration + rollback blocking, and keep template wrapping bound to SE keys (e.g., ATECC608B/SE050).
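The first measurement in FAQ 2 can be sketched as a per-version FAR breakdown, so a migration-skewed template version stands out in the accept logs. The record layout is an assumption; the field names follow the log keys used in this chapter:

```python
# Sketch for FAQ 2: group impostor trials by template_version and compute
# FAR per version. Record fields are illustrative, not a fixed log schema.
def far_by_template_version(events):
    """events: dicts with template_version, is_impostor_trial, accepted.
    Returns {version: FAR}; a version with an elevated FAR after an update
    points at migration/version drift rather than sensor trouble."""
    trials, accepts = {}, {}
    for e in events:
        if not e["is_impostor_trial"]:
            continue                     # genuine trials feed FRR, not FAR
        v = e["template_version"]
        trials[v] = trials.get(v, 0) + 1
        accepts[v] = accepts.get(v, 0) + (1 if e["accepted"] else 0)
    return {v: accepts[v] / trials[v] for v in trials}
```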
3) Can templates be backed up safely without enabling cloning?
Short answer: Yes—backup must store only device-bound, wrapped blobs that cannot be replayed on another board. What to measure (2): (1) backup contents: only encrypted/wrapped template + metadata (template_version, CRC), never raw features; (2) restore requires SE attestation and device-unique key binding. First fix: wrap/unwrap inside SE (SE050, STSAFE-A110) and require nonce/counter freshness on restore.
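The restore policy in FAQ 3 can be illustrated host-side. Real wrapping happens inside the SE; this sketch only shows the policy shape, with stdlib HMAC standing in for the SE's authenticated wrap, a device-unique key for board binding, and a monotonic counter for freshness. Names and layout are assumptions:

```python
# Policy sketch for FAQ 3: a backup blob is bound to a device-unique key and
# a monotonic counter, so a replayed or foreign blob fails verification.
# HMAC here is a stand-in for the SE's authenticated wrap, not a design.
import hmac, hashlib

def tag_backup(device_key: bytes, counter: int, wrapped_blob: bytes) -> bytes:
    msg = counter.to_bytes(8, "big") + wrapped_blob
    return hmac.new(device_key, msg, hashlib.sha256).digest()

def restore_allowed(device_key: bytes, counter: int, last_seen_counter: int,
                    wrapped_blob: bytes, tag: bytes) -> bool:
    if counter <= last_seen_counter:          # rollback/replay: refuse
        return False
    expect = tag_backup(device_key, counter, wrapped_blob)
    return hmac.compare_digest(expect, tag)   # wrong board key -> False
```

A blob restored with a stale counter, or on a board with a different device key, is rejected; that is the anti-clone property the short answer requires.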
4) Board swap breaks authentication — where should the monotonic counter live?
Short answer: The monotonic counter must live in the trusted domain that survives swaps and resists rollback—typically the secure element. What to measure (2): (1) rollback_fail/counter_mismatch logs after swap; (2) confirm it’s not power corruption by checking brownout_count and reset_reason around the event. First fix: store counter in SE secure NVM (ATECC608B/SE050) and define a verified re-provision flow.
5) ESD test passes but field units still freeze — return path or touch I²C lock?
Short answer: “Pass in lab, freeze in field” is usually a non-equivalent return path or a bus stuck-low that lacks deterministic recovery. What to measure (2): (1) bus_hang_count / i2c_nack_count spikes after ESD-like events; (2) TP_HMI_3V3 droop and SCL/SDA stuck-low duration. First fix: move clamps to the entry and control return (TPD2E2U06/SP3012), then add bus-recovery + watchdog policy.
6) CRC is OK but matching is wrong — what evidence next?
Short answer: CRC proves bits arrived intact, not that the correct template/pipeline was used. What to measure (2): (1) match_score shift plus reject_reason (low-score vs abnormal high-score); (2) template_version and template CRC/ID against the running pipeline build ID. First fix: if versions mismatch, block verify and force re-enroll/migration; if scores shift globally, validate capture quality (bad_frame_count, saturation_count) before threshold edits.
7) Why does Wi-Fi drop during unlock bursts?
Short answer: Unlock bursts often coincide with RF supply droop or ground noise that collapses link margin. What to measure (2): (1) TP_RF_VDD min voltage during scan/Tx + peak current; (2) retry_rate and disconnect_reason aligned to timestamp_ms. First fix: increase RF transient headroom (buck TPS62840 + low-noise LDO TPS7A02), and sequence RF away from noisy domain transitions.
8) High standby current — which power domains are leaking?
Short answer: Treat standby current as a domain-by-domain step test until one rail explains the excess. What to measure (2): (1) sleep current steps while forcibly gating domains (RF/HMI/sensor) with a load switch; (2) wakeup_src and domain-enable flags to confirm the state machine truly entered deep sleep. First fix: hard-gate the culprit domain (e.g., TPS22918) and remove leakage paths from clamps/pullups before firmware tuning.
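The step test in FAQ 8 reduces to simple attribution arithmetic, sketched here with example microamp readings and the domain names used in this chapter:

```python
# Domain step-test sketch for FAQ 8: gate one domain at a time and attribute
# the standby excess to the rail whose removal explains it. Currents are
# example measurements in microamps.
def isolate_leaky_domain(baseline_ua, gated_ua, budget_ua):
    """gated_ua: {domain: sleep current with that domain hard-gated off}.
    Returns (domain, saving_ua) for the largest saving, or None if the
    baseline already meets the budget."""
    if baseline_ua <= budget_ua:
        return None
    savings = {d: baseline_ua - i for d, i in gated_ua.items()}
    culprit = max(savings, key=savings.get)
    return culprit, savings[culprit]
```

If gating RF drops 900 µA standby to 300 µA while gating HMI or sensor barely moves it, the RF domain owns the leak and is the one to hard-gate first.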
9) Some users always fail — sensor modality mismatch or enrollment quality?
Short answer: Persistent single-user failures are more often poor enrollment quality than “bad hardware,” unless a specific skin condition defeats the modality. What to measure (2): (1) enrollment quality/coverage metric (or equivalent) for that user; (2) verify-time match_score paired with bad_frame_count under consistent conditions. First fix: enforce an enrollment quality gate + guided re-enroll; only then consider modality/cover constraints (no global FAR threshold changes).
10) How to detect sensor contamination vs hardware failure?
Short answer: Contamination improves after cleaning; hardware failure does not and usually shows persistent saturation or rail anomalies. What to measure (2): (1) before/after cleaning change in bad_frame_count and sensor_status flags; (2) TP_AFE_OUT baseline/noise RMS and saturation_count stability. First fix: add a contamination-detect threshold and maintenance alert; if electrical metrics stay abnormal, inspect ESD damage paths and replace the sensor/front-end module.
11) What should never be overwritten during field update?
Short answer: Never overwrite identity-anchoring assets: keys, monotonic counters, template store, and calibration/trim that defines capture correctness. What to measure (2): (1) secure-boot + SE attestation status before/after update; (2) protected-partition integrity: template_version, template CRC, and counter continuity. First fix: split code/data partitions, require signed updates, and keep secrets/counters inside SE (SE050/STSAFE-A110) with explicit write policies.
12) How to set pass/fail criteria for FAR/FRR in production?
Short answer: Production criteria must be histogram-based and tied to a minimal condition matrix, not a single “looks OK” run. What to measure (2): (1) per-condition match_score histogram and reject tail overlap; (2) top reject_reason plus stability counters (bad_frame_count, brownout_count). First fix: if tails overlap, fix capture/power noise first (LDO TPS7A20, eFuse TPS25942) before adjusting thresholds.
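The histogram-based criterion in FAQ 12 can be sketched as a threshold sweep over per-condition score samples: keep the lowest threshold whose FAR meets target in every condition, and report the FRR cost of doing so. The condition matrix and score samples are illustrative:

```python
# Production pass/fail sketch for FAQ 12: sweep candidate thresholds and
# pick the lowest one whose FAR meets target in *every* condition (dry,
# wet, cold, ...). Inputs are example match_score samples.
def pick_threshold(per_condition, far_target):
    """per_condition: {cond: (genuine_scores, impostor_scores)}.
    Returns (threshold, worst_frr) or None if no threshold meets FAR."""
    all_scores = sorted({s for g, i in per_condition.values() for s in g + i})
    for t in all_scores:
        fars, frrs = [], []
        for genuine, impostor in per_condition.values():
            fars.append(sum(s >= t for s in impostor) / len(impostor))
            frrs.append(sum(s < t for s in genuine) / len(genuine))
        if max(fars) <= far_target:
            return t, max(frrs)  # FRR cost of meeting FAR everywhere
    return None
```

If the returned worst-case FRR is unacceptable, the tails overlap too much, and per the first fix the capture/power noise has to be reduced before the threshold is touched.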