Eartip Fit & Bio-Sensing Module: Fit, Temp & Acoustic Cal
Core idea: This module turns “eartip fit” into measurable evidence by combining pressure/contact stability, in-ear temperature sanity, and an acoustic leak signature, then reporting a repeatable fit score at ultra-low power.
Instead of guessing, it uses a two-proof method (signal feature + power/timing correlation) to quickly separate mechanical seal issues from AFE leakage/offset and scheduling/rail collisions—so the first fix is obvious and testable.
H2-1. What the Module Is and What “Good Fit” Means (Engineering Definition)
This page defines an eartip fit & bio-sensing module as a measurable, testable sub-system: pressure/contact stability + a compact ear-canal acoustic signature for seal/leak indication, plus in-ear temperature with low-power telemetry. The boundary is the module (sensors + AFEs + ULP PMIC + BLE), not the full earbud audio product.
Module boundary (what is inside vs. outside)
The module includes the eartip mechanics (seal/vent/membrane), pressure/contact sensing, temperature sensing, sensor AFEs (excitation/IA/PGA/filter/ADC), a small controller or BLE SoC for telemetry, and an ultra-low-power PMIC that gates rails and enforces sleep. The host interface is limited to I²C/SPI/GPIO (configuration + readback) and optional “calibration trigger” for an acoustic test tone.
What “fit / seal / contact” means in signals (not opinions)
- Contact presence is an event + persistence problem: the module must detect “in-contact / out-of-contact” with controlled debounce/latency and a low false-trigger rate under motion.
- Stability is the “does it settle?” question: after insertion, the pressure/contact signal should converge to a stable band (no random walk, no step-like toggling).
- Seal quality is a cross-check: an ear-canal acoustic calibration feature (transfer signature) indicates “leak-like” vs “sealed-like” behavior, and is used to validate or correct pressure-only conclusions.
What “bio-sensing” means here (strictly scoped)
“Bio-sensing” on this page is limited to in-ear temperature and contact presence as the quality gate (temperature is meaningful only when contact is stable). It does not include PPG/SpO₂/EDA or clinical-grade sensing.
Minimal acceptance criteria (written as testable statements)
- Repeatable fit score: repeated insertions of the same user produce a tight distribution (no bimodal “sometimes good, sometimes bad” without a matching mechanical explanation).
- Stable temperature reading: temperature output includes a validity flag (warming/settled), and drift correlates with thermal physics (not PMIC self-heating artifacts).
- Low energy cost: continuous monitoring uses duty-cycled sensing + event-driven BLE; acoustic calibration is an occasional burst with bounded energy impact.
Evidence & checks (first two measurements)
- Fit score repeatability across insertions: record “stable-window” fit metrics after each insertion, inspect distribution shape (spread, outliers, bimodality) and time-to-stable.
- Contact latency & false positives: log the raw contact/pressure waveform and the reported event timestamps; verify debounce/hysteresis removes motion chatter without adding unacceptable lag.
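The first repeatability check can be scripted directly from logged stable-window fit scores. A minimal sketch in Python; the function name and the gap-ratio heuristic for spotting bimodality are illustrative assumptions, not part of the module spec:

```python
from statistics import median

def insertion_repeatability(scores):
    """Summarize fit-score spread across N re-insertions of the same user.

    Returns the median, the total spread (max - min), and a crude
    bimodality hint: the largest gap between adjacent sorted scores
    relative to the total spread. A gap ratio near 1.0 suggests two
    clusters ("sometimes good, sometimes bad") rather than one noisy one.
    """
    s = sorted(scores)
    spread = s[-1] - s[0]
    if spread == 0:
        return {"median": s[0], "spread": 0.0, "gap_ratio": 0.0}
    largest_gap = max(b - a for a, b in zip(s, s[1:]))
    return {"median": median(s), "spread": spread,
            "gap_ratio": largest_gap / spread}
```

A high gap ratio with a wide spread is the bimodal pattern that should trigger a mechanical investigation before any threshold tuning.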
H2-2. System Architecture: Signals, Interfaces, and Data Paths
The architecture is best understood as three evidence paths sharing one low-power controller: (A) pressure/contact for presence + stability, (B) temperature with validity state, and (C) an acoustic calibration loop that produces a compact seal/leak signature. Each path has measurable test points and a defined timing window to avoid power/EMI coupling.
Signal inputs and outputs (what is carried across the interface)
- Inputs: pressure/contact sensor raw channel(s), temperature sensor channel, optional “calibration trigger” event (to run a short test tone + mic capture).
- Outputs: fit metrics (stability + seal quality), contact events (edge-triggered), temperature value plus validity (warming/settled), and diagnostics flags (sensor open/short, saturation, low-battery inhibit).
- Integrity fields: sequence counters + rolling CRC for telemetry packets (to detect gaps without heavy logging).
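The sequence-counter + rolling-CRC idea can be sketched as follows; the CRC-8 polynomial (0x07) and the one-byte sequence field are assumptions for illustration, not a mandated packet format:

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8 (poly 0x07 is a common default; the choice is an assumption)."""
    crc = init
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def make_packet(seq: int, payload: bytes) -> bytes:
    # [seq][payload][crc over seq+payload]
    body = bytes([seq & 0xFF]) + payload
    return body + bytes([crc8(body)])

def check_packet(pkt: bytes, expected_seq: int):
    """Return (crc_ok, gap): gap == 0 means in-order, >0 means missed packets."""
    body, crc = pkt[:-1], pkt[-1]
    return crc8(body) == crc, (body[0] - expected_seq) & 0xFF
```

On the receiver, a nonzero gap flags loss without any heavy logging, and a CRC mismatch flags corruption (often correlated with rail transients during RF bursts).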
Three paths, three failure patterns (kept hardware-first)
- (A) Pressure/Contact Path — event + stability: excitation → sensor → IA/PGA → LPF → ADC → debounce/hysteresis → contact & stability metrics. Common root causes: leakage on high-impedance nodes, moisture-induced bias drift, mechanical rebound.
- (B) Temperature Path — slow signal + thermal coupling: bias/ADC → smoothing matched to thermal time constant → temp + validity flag. Common root causes: PMIC self-heating coupling, insulation by wax/sweat, placement-driven lag.
- (C) Acoustic Calibration Loop — feature signature: short test tone → ear canal → mic capture → feature extraction → seal/leak signature → cross-check with pressure stability. Common root causes: noisy environments, blocked mic port, timing collisions with radio events.
Interfaces to host (minimal set to preserve module independence)
- I²C/SPI: configure thresholds, sampling duty-cycle, calibration enable; read back metrics/flags.
- IRQ/GPIO: contact change, fit-fail, cal-done (event-driven, reduces polling power).
- Optional trigger: host requests a calibration burst (test tone + capture). No full audio pipeline is described here.
Timing budget (why windowing matters)
The module should be scheduled around non-overlapping windows to preserve measurement fidelity: (1) sensing window (quiet rails, stable bias), (2) compute window (feature + thresholds), (3) BLE window (radio burst), and (4) optional calibration window (tone + capture). In practice, radio current spikes and ground bounce can corrupt high-impedance sensing unless windows are explicitly separated.
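The non-overlap invariant can be checked offline against a logged window timeline. A sketch, assuming windows are logged as (name, start_ms, end_ms) tuples:

```python
from itertools import combinations

def overlapping_windows(windows):
    """Return name pairs of scheduled windows that overlap in time.

    `windows` is a list of (name, start_ms, end_ms). The invariant to
    protect is that sensing windows never overlap RF or rail-switching
    activity; touching endpoints (end == start) do not count as overlap.
    """
    bad = []
    for (n1, s1, e1), (n2, s2, e2) in combinations(windows, 2):
        if max(s1, s2) < min(e1, e2):
            bad.append((n1, n2))
    return bad
```

Running this over the event timeline from the evidence checks below makes "window collision" a binary finding instead of a suspicion.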
Evidence & checks (what to measure before changing anything)
- Timing trace: log the sensing/compute/BLE event timeline and correlate with fit-score jitter or contact flicker.
- Current profile: capture the rail current waveform during (A) normal monitoring and (B) calibration burst to ensure the average budget holds and no state gets stuck “on.”
- Test points (recommended): AFE output/ADC input (TP1), temperature ADC read (TP2), mic feature capture (TP3), PMIC rail current sense/shunt (TP4), BLE sequence counter & retry stats (TP5).
H2-3. Pressure & Contact Sensing Fundamentals (What the Sensor Really Measures)
Fit sensing becomes reliable only when it is anchored to what the sensor actually measures. Many “pressure” or “contact” channels are dominated by material compression, contact area, and vent/leak dynamics rather than a clean, absolute ear-canal pressure number. This chapter maps sensing modalities to their dominant error sources and the waveforms that prove each root cause.
Pressure sensing options (what each modality is truly sensitive to)
- Piezoresistive (strain/compression): strong response to mechanical deformation of foam/silicone and support structures. Typical signature is a fast step at insertion followed by creep (slow drift) as the material relaxes. Best for: contact/stability cues. Watch for: hysteresis and temperature drift.
- Capacitive (gap/area changes): measures changes in distance or effective contact area. It can be extremely sensitive to small geometric changes, but also sensitive to moisture/contamination and parasitic coupling in compact assemblies. Best for: proximity/contact gating. Watch for: humidity shifts and parasitic capacitance.
- Repurposed MEMS barometer (micro-pressure/leak dynamics): becomes useful when the eartip + ear canal forms a quasi-cavity. The value is often the seal/leak behavior (how pressure decays) rather than absolute pressure. Best for: leak/vent discrimination. Watch for: “not a real cavity” cases and vent geometry changes.
Contact sensing options (quality gate before other measurements)
- Capacitive proximity/contact: good for “present/not present” gating and can be low power, but requires careful control of parasitics and stable referencing. Failure pattern: moisture films mimic contact; motion changes coupling.
- Resistive contact: simple and robust if contact surfaces remain stable. Failure pattern: sweat/wax changes contact resistance; micro-slip causes chatter without true loss of seal.
- Impedance-based “skin contact” concept: can separate “touch” vs “firm contact” by observing impedance changes under controlled excitation. Failure pattern: excitation/AFE leakage makes the channel look “always contacted.”
Dominant error sources (symptom → physical cause → discriminator)
- Vent leakage: signals show faster decay and poor low-frequency retention. Discriminator: a stable “contact present” with a drifting/decaying channel suggests leak/vent dynamics.
- Insertion depth variance: changes the initial amplitude and geometry but can still produce a stable plateau. Discriminator: wide insertion-to-insertion spread with otherwise clean stabilization.
- Jaw motion (chew/talk): introduces repeatable low-frequency modulation. Discriminator: periodic waveform tied to jaw cadence rather than random noise.
- Cable/structure tug: produces short step-like events and rapid recovery if the seal is intact. Discriminator: sharp spikes with fast return vs slow drift.
- Foam compliance & creep: causes slow drift after insertion and clear hysteresis over cycles. Discriminator: drift time constant matches material relaxation; repeated cycles show looped trajectories.
Evidence & checks (what to run before changing thresholds)
- Hysteresis vs insertion cycle: run repeated insert/remove cycles, capture peak → plateau → release paths. Confirm whether the “same fit” produces consistent plateaus or shows cycle-dependent offsets.
- Motion artifact signatures (step vs drift): use two controlled disturbances: (1) short tug/tap (step-like), (2) chew/turn (slow modulation/drift). Compare recovery time and event chatter rate.
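The two controlled disturbances map to a simple signal-shape discriminator: step-like events recover quickly, drift-like events do not. A hedged sketch; the tolerance band and the 0.5 s recovery criterion are placeholder assumptions to be fitted to real captures:

```python
def classify_disturbance(samples, baseline, tol, fs_hz, step_recovery_s=0.5):
    """Crude step-vs-drift discriminator over a captured waveform.

    A tug/tap produces a sharp excursion that re-enters the tolerance
    band quickly; chew/creep produces a deviation that persists beyond
    `step_recovery_s` (or never recovers within the capture).
    """
    dev = [abs(x - baseline) > tol for x in samples]
    if not any(dev):
        return "stable"
    first = dev.index(True)
    for i in range(first, len(samples)):
        if not dev[i]:  # signal back inside the band
            recovery_s = (i - first) / fs_hz
            return "step" if recovery_s <= step_recovery_s else "drift"
    return "drift"  # never recovered within the capture
```

Labeling each disturbance this way, alongside the event chatter rate, turns the "step vs drift" comparison into a countable statistic.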
H2-4. Temperature Sensing in the Ear: Accuracy, Lag, and Drift Budget
In-ear temperature is only meaningful when the system accounts for placement, the thermal time constant, and self-heating coupling. A robust module treats temperature as a measured value plus a validity state (warming vs settled), and it separates true ear-canal trends from PMIC/battery thermal artifacts.
Sensor placement trade-offs (thermal coupling vs protection)
- Near canal: faster response and closer to actual ear-canal temperature, but higher exposure to moisture/wax and mechanical abrasion. Protection layers (membranes/coatings) improve reliability but can increase thermal resistance and slow response.
- Shell-mounted: easier to protect and integrate, but more sensitive to self-heating from nearby PMIC/radio activity and less representative of canal temperature during short windows.
- Embedded: stable mechanically and manufacturable, but large thermal time constant; it becomes a “trend sensor” unless you explicitly model warm-up and validity.
Thermal time constant and “warm-up” behavior (why instant readings mislead)
Temperature sensing is governed by a simple thermal RC: heat must flow from the ear canal through materials and interfaces (thermal resistance) into the sensor mass (thermal capacitance). After insertion, a “warm-up curve” is expected. Robust designs define a settled criterion (e.g., slope below a threshold over a window) rather than trusting the first few seconds.
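The settled criterion — slope below a threshold over a window — can be sketched as a least-squares slope on the most recent window. The threshold and window values here are illustrative assumptions:

```python
def is_settled(samples, fs_hz, max_slope_c_per_min=0.1, window_s=10.0):
    """True when the fitted temperature slope over the last `window_s`
    seconds is below `max_slope_c_per_min` (placeholder limits)."""
    n = int(window_s * fs_hz)
    if len(samples) < n:
        return False  # not enough data: still report "warming"
    w = samples[-n:]
    t = [i / fs_hz for i in range(n)]
    tm, wm = sum(t) / n, sum(w) / n
    num = sum((ti - tm) * (wi - wm) for ti, wi in zip(t, w))
    den = sum((ti - tm) ** 2 for ti in t)
    slope_per_s = num / den
    return abs(slope_per_s * 60.0) < max_slope_c_per_min
```

The fitted slope is less jitter-sensitive than a two-sample difference, which matters at the low sample rates a duty-cycled sensor actually runs.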
Calibration strategy (production-friendly, module-scoped)
- Offset trim: corrects device-to-device baseline errors with minimal cost. Useful when placement is consistent.
- Slope trim: improves cross-temperature accuracy but requires two-point characterization or tighter test control.
- Ambient compensation proxy: when direct ambient sensing is unavailable, use module state cues (sleep/active, radio burst) as a proxy to prevent reporting “disturbed” temperature as true ear temperature.
Moisture and wax: protection vs thermal coupling
Hydrophobic membranes, coatings, and sealing features reduce contamination and corrosion risk, but they can reduce thermal coupling and increase response time. The design should be validated with step response tests and long-duration drift tests under sweat/wax exposure, then adjusted with placement, materials, or reporting logic (validity flags) rather than over-fitting thresholds.
Evidence & checks (two tests that reveal most issues)
- Step response & settling time: record the temperature curve at insertion under controlled conditions to extract time-to-settle and verify the validity state thresholds.
- 30–60 minute drift + correlation with heating: log temperature alongside PMIC/radio activity markers and current profile. If temperature jumps align with radio or power bursts, self-heating coupling is likely dominating.
H2-5. AFEs for Pressure/Contact/Temp: Noise, Offset, Excitation, ADC Choices
A reliable fit/bio module is limited less by “sensor type” and more by the measurement chain: excitation → sensor interface → IA/PGA → filtering → ADC. In this form factor, the dominant failure modes are usually low-frequency noise (1/f), offset drift, and leakage paths (including ESD clamp leakage and surface contamination). This chapter turns those into testable budgets.
Excitation strategies (resistive vs capacitive) — why they reshape noise and power
Resistive sensors / resistive contact
Constant-current excitation supports ratiometric interpretation and consistent sensitivity, but can introduce self-heating and longer settling after duty-cycling.
Constant-voltage excitation is simple but makes measurements more sensitive to supply variation and contact resistance changes.
Duty-cycled excitation reduces energy, but increases sensitivity to transient settling, making “false contact spikes” more likely if sampling occurs too early.
Capacitive proximity/contact
Charge-transfer / switched-cap methods can be efficient, but parasitics dominate unless sensor routing and reference structures are controlled.
Oscillator / frequency methods are easy to digitize, but can be affected by coupling from nearby clocks and RF bursts.
Synchronous excitation/demod improves immunity, but raises design complexity and can cost energy if windows are too long.
Chopper/auto-zero vs bandwidth; bias/leakage; ESD clamp leakage risks
- Chopper / auto-zero: reduces low-frequency offset and 1/f noise, which directly improves “stable plateau” behavior. The trade-off is added ripple or bandwidth constraints, so timing windows must avoid sampling during switching artifacts.
- Input bias & high-Z leakage: even tiny bias currents or surface leakage can shift high-impedance nodes, creating “always-contact” or “never-contact” behavior that looks like a mechanical problem.
- ESD clamp leakage: protection devices can add parasitic leakage paths that change with humidity, contamination, and temperature. This is a common root cause when lab units pass but field units drift or latch into wrong states.
ADC selection: resolution × sampling window × energy; ratiometric measurement
- Resolution: choose effective resolution to keep event thresholds stable (avoid jitter-triggered false toggles), not to chase headline bits.
- Sample rate & windows: transient events (insert/tug) need short high-rate bursts; steady-state needs low-rate monitoring. Windowing is often a bigger energy lever than ADC architecture choice.
- Ratiometric measurement: when possible, measure sensor output against the same excitation/reference so supply drift cancels. This reduces apparent “offset drift” that is actually supply movement.
Guarding and shielding in a tiny form factor (what breaks first)
- High-Z nodes near fast edges: RF clocks, DC-DC switching nodes, and GPIO edges couple into capacitive/resistive interfaces.
- Parasitics dominate: long sensor traces behave as antennas; small geometry changes shift capacitance and bias.
- Guard / driven shield: used to stabilize high-impedance sensing by controlling the electric field around the node.
- Cleanliness & coatings: surface resistance and leakage can change dramatically with moisture; production process matters as much as schematics.
Evidence & checks (turn hardware choices into pass/fail proof)
- Input-referred noise target: measure noise in representative modes (excitation on/off, RF bursts, different windows). Evaluate as false event rate and threshold jitter, not only as µV/√Hz.
- Leakage validation (high-Z node test): apply a known bias/charge to the sensing node and record decay across humidity, temperature, and post-ESD conditions. Use decay time constant and residual offset as acceptance criteria.
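The decay-time-constant acceptance criterion can be extracted from the captured high-Z node waveform with a log-linear fit, assuming a roughly single-exponential decay:

```python
import math

def decay_time_constant(samples, fs_hz):
    """Estimate tau of v(t) = V0 * exp(-t/tau) from a captured decay.

    Log-linear least squares: fit log(v) = log(V0) - t/tau and return
    -1/slope. Samples must be positive (stop the capture before the
    waveform reaches the noise floor or residual offset).
    """
    t = [i / fs_hz for i in range(len(samples))]
    y = [math.log(v) for v in samples]
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    slope = (sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
             / sum((ti - tm) ** 2 for ti in t))
    return -1.0 / slope
```

Comparing tau across humidity, temperature, and post-ESD conditions then gives a single number per condition instead of overlaid waveforms to eyeball.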
H2-6. Ear-Canal Acoustic Calibration: What You Calibrate and What “Seal” Looks Like
Acoustic calibration is treated here as an engineering measurement, not a DSP theory lesson. A short stimulus and a controlled capture window can extract a small set of interpretable metrics: resonance shift, a low-frequency leakage indicator, and transfer magnitude consistency. These metrics cross-validate pressure/contact channels to reduce misclassification.
Test stimulus types (chirp / sweep / multitone) and why windowing matters
- Chirp: compact in time; useful when the system needs a quick measurement window and minimal user disruption.
- Sweep: easier to reason about in controlled tests; can be longer, which impacts energy and susceptibility to motion during the window.
- Multitone: captures sparse spectral points quickly; can be robust when only a few features are needed for seal/leak inference.
Extractable metrics (interpretable, module-scoped)
- Resonance shift: changes in ear-canal geometry and insertion depth shift the resonance location/shape. This helps separate “depth variance” from pure sensor drift.
- Low-frequency leakage indicator: seal degradation tends to reduce low-frequency retention. A stable contact signal with an abnormal LF leakage metric strongly suggests vent/leak behavior.
- Transfer magnitude consistency: repeatability across re-insertion is often more diagnostic than absolute magnitude. Wide variance indicates mechanical/fit instability rather than algorithm instability.
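For a multitone capture, these metrics reduce to a few lines over sparse (frequency, magnitude) points. A sketch; the 300 Hz LF cutoff and using the magnitude peak as a resonance proxy are simplifying assumptions:

```python
def seal_metrics(freqs_hz, mags_db, ref_mags_db, lf_cutoff_hz=300.0):
    """Compact seal/leak metrics from sparse multitone magnitudes.

    - lf_loss_db: mean low-frequency attenuation vs a sealed reference
      capture (seal degradation mainly erodes LF retention).
    - peak_shift_hz: movement of the magnitude peak, a crude proxy for
      resonance shift from geometry/insertion-depth changes.
    """
    lf = [(m, r) for f, m, r in zip(freqs_hz, mags_db, ref_mags_db)
          if f <= lf_cutoff_hz]
    lf_loss_db = sum(r - m for m, r in lf) / len(lf) if lf else 0.0
    peak = freqs_hz[mags_db.index(max(mags_db))]
    ref_peak = freqs_hz[ref_mags_db.index(max(ref_mags_db))]
    return {"lf_loss_db": lf_loss_db, "peak_shift_hz": peak - ref_peak}
```

A large `lf_loss_db` with a near-zero `peak_shift_hz` points at leak/vent behavior; a shifted peak with normal LF retention points at insertion-depth variance.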
Failure modes (what “bad seal” vs “blocked mic” looks like)
- Poor seal / leakage: LF leakage indicator abnormal; metrics fluctuate with small motion; repeatability is weak.
- Venting effects: systematic leakage patterns that align with pressure decay behavior; often stable but consistently “leaky.”
- Occlusion / geometry anomaly: resonance feature shifts beyond typical re-insertion spread; may appear as a consistent but “off” signature.
- Mic port blockage: transfer magnitude becomes abnormal (often broad attenuation), producing a failure pattern that does not match pressure/contact evidence.
Cross-validation with pressure/contact (reduce misclassification)
- Contact present + acoustic LF leak abnormal: likely vent/leak or insufficient seal, not “sensor noise.”
- Pressure/contact stable + acoustic magnitude abnormal: suspect mic port blockage/contamination before changing fit thresholds.
- All channels unstable: suspect micro-slip/jaw motion coupling or structural looseness; focus on repeatability tests and mechanical stabilization.
Evidence & checks (two pattern tests that catch most issues)
- Leak signature vs blocked-mic signature: build a small pattern library by capturing features in three states: normal, intentionally loosened seal, and simulated mic blockage. Use the library to classify field logs.
- Repeatability across re-insertion: run N re-insertions and compare feature distributions. If variance stays high, treat it as a mechanical/assembly problem before refining the scoring logic.
H2-7. Mechanics & Materials: Where Sensors Live and Why It Dominates Stability
In an eartip module, many “fit sensing” errors are not electronic—they are mechanical: material hysteresis, micro-slip, moisture/wax contamination, and strain-induced microphonics. Sensor placement and protective membranes can improve robustness, but they can also reshape the acoustic signature used for calibration. This chapter keeps scope strictly inside the eartip module (not the full earbud).
Sensor placement: stem vs skirt vs core (trade-offs tied to artifact patterns)
| Location | What it sees well | Typical stability risks (artifact sources) |
|---|---|---|
| Stem | Routing-friendly zone; less direct compression; good for insertion-depth cues and “presence” stability. | May under-represent true seal region; depth variance can look like fit changes; cable/strain coupling can dominate. |
| Skirt | Closest to sealing interface; strongest sensitivity to leak and contact stability at the boundary. | Higher hysteresis and shear; micro-slip under jaw motion; sweat film and wax contamination shift leakage and electrical bias. |
| Core | More controlled geometry; potential for mechanical isolation; repeatability can be higher if assembly is consistent. | Packaging stress and thermal paths can bias sensors; harder routing; protection layers can alter acoustic response if not modeled. |
Foam vs silicone vs hybrid: compliance, hysteresis, and moisture behavior
Foam
Compliance: seals well at low force; can improve leak resistance.
Hysteresis: higher; “insert–remove” cycles can shift baseline and mimic slow drift.
Moisture: absorbs sweat; can change surface resistance and acoustic damping across time.
Silicone
Compliance: predictable; easier to model; often better repeatability for contact metrics.
Hysteresis: lower than foam; micro-slip can still occur under shear/jaw motion.
Moisture: less absorption but sweat film can form leakage paths on high-Z sensing nodes.
Hybrid
Goal: combine sealing at the boundary with structural stability around sensor seats.
Risk: interface adhesion/process variability becomes the main repeatability limiter.
Check: treat it as a manufacturing consistency problem; validate across lots, not only samples.
Routing constraints (tiny space)
Short paths: reduce parasitics and motion-induced coupling.
Fixation points: prevent strain from pulling on sensor seats.
Isolation: keep sensitive runs away from flex zones that amplify microphonics.
Membranes and hydrophobic vents: protection vs altered acoustic signature
- Protection benefit: membranes and vents reduce sweat/wax ingress and help keep mic/ports functional over time.
- Measurement cost: they can reshape the calibration transfer path, shifting resonance and leakage indicators. A “perfectly protected” design can still misclassify fit if the membrane/vent effect is not modeled.
- Module-level requirement: treat membrane/vent variants as a controlled configuration and include them in the pattern library.
Strain relief and microphonics (mechanical-to-electrical coupling)
- Strain-induced artifacts: pulling, twisting, or flexing can inject step-like changes into contact/pressure signals.
- Microphonics: mechanical vibration can couple into high-impedance nodes and appear as false activity.
- Mitigations: dedicated strain relief, stable fixation points, and avoiding long flexible spans near sensor routing.
Evidence & checks (make mechanical stability measurable)
- Compression cycle test: fixed compression/relaxation for N cycles; track baseline return and hysteresis spread to quantify repeatability loss.
- Wash/sweat test: sweat exposure + dry cycles; monitor contact false positives, leakage decay behavior, and calibration feature shift.
- Wax contamination test: simulate partial/complete port blockage; verify that “blocked-mic” patterns separate cleanly from “leak” patterns.
H2-8. Ultra-Low-Power Power Tree: ULP PMIC, Duty Cycling, and Energy Budget
Ultra-low-power is achieved by architecture and scheduling: define power domains, gate rails aggressively, and keep “quiet windows” for sensing and calibration. The limiting factors are often sleep leakage (nA–µA class), rail settling, and RF burst peak current, which can create false events or brownout resets if not budgeted.
Power domains and rail gating (what stays on vs what must be windowed)
- AON domain: wake logic/RTC and minimal state retention; target the lowest leakage and stable wake thresholds.
- Sense domain: AFE + sensor bias; typically duty-cycled with controlled settling time to avoid transient misreads.
- Compute domain: MCU/BLE baseband; short compute bursts following sensing windows.
- RF domain: TX/RX bursts; highest peak current; must be isolated from sensitive measurement windows.
Regulators: LDO vs buck; load-switch leakage as the real sleep limiter
- LDO: low noise and predictable behavior; efficiency penalty grows when input-to-output ratio is large.
- Buck: improves efficiency at higher loads; requires switching-noise management and careful placement around high-Z sensing.
- Load switch: enables hard power-off of domains; the key parameter is off-state leakage, not only on-resistance.
Energy modes (state machine) and minimal telemetry behavior
- Idle: AON only; contact detection may run in sparse windows.
- Detect: short AFE window for contact/pressure; quick decision + validity flag.
- Calibrate: longer window for stimulus + capture; feature extraction; cross-check with pressure/contact.
- Advertise: short RF bursts; keep scheduling away from sense windows when possible.
- Connected telemetry (minimal): transmit only events/summary stats; avoid long on-time patterns that look like “streaming.”
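The mode set above implies a small, explicit transition table; encoding it as data makes "unknown event → stay put" the default, so no rail is enabled by accident. A sketch, with illustrative (assumed) event names:

```python
# Allowed transitions between energy modes; anything not listed is ignored.
TRANSITIONS = {
    ("idle", "contact_edge"): "detect",
    ("detect", "stable"): "advertise",
    ("detect", "no_contact"): "idle",
    ("advertise", "host_connect"): "telemetry",
    ("telemetry", "cal_request"): "calibrate",
    ("calibrate", "cal_done"): "telemetry",
    ("telemetry", "disconnect"): "idle",
}

def step(state, event):
    """Advance the energy-mode state machine; unknown events keep state."""
    return TRANSITIONS.get((state, event), state)
```

The table doubles as documentation for the power-tree review: each edge corresponds to a rail being gated on or off, so a missing edge is a missing gating decision.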
Brownout/UVLO behavior and data integrity (module-safe state, not a storage tutorial)
- UVLO hysteresis: prevents repeated reset oscillation during marginal battery conditions.
- Validity flags: mark samples collected during rail settling/RF overlap as invalid; prefer retry over silent acceptance.
- Minimal integrity strategy: record a compact sequence counter + CRC for critical events so field logs are diagnosable after power dips.
Evidence & checks (two measurements that close the power budget)
- Current profiling (two-point method): measure at PMIC input (system energy) and at a key rail (domain attribution). Overlay the waveform on the state timeline to confirm peak current, settling time, and duty ratio.
- Sleep leakage audit: isolate leakage by disabling domains and toggling GPIO states; verify nA–µA targets across temperature/humidity.
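Closing the average budget from a current profile is time-weighted arithmetic over the state timeline. A sketch with placeholder numbers:

```python
def average_current_ua(phases):
    """Time-weighted average current over one scheduling period.

    `phases` is a list of (current_uA, duration_ms) segments taken from
    the rail-current waveform overlaid on the state timeline.
    """
    total_ms = sum(d for _, d in phases)
    return sum(i * d for i, d in phases) / total_ms
```

For example, 2 µA sleep for 980 ms, 300 µA sensing for 15 ms, and an 8 mA TX burst for 5 ms average to roughly 46.5 µA: the TX burst dominates the budget even at 0.5% duty, which is why RF scheduling is the biggest energy lever.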
H2-9. BLE Telemetry & Robustness: Scheduling, Latency, and Data Integrity
BLE in this module exists only to move small fit/contact/temperature metrics reliably. Robustness comes from aligning radio activity with sensing windows, minimizing “always-on” behavior, and adding lightweight integrity tools: sequence counters, simple CRC, and an optional rolling event log.
What gets reported (and what does not): a strict “metric contract”
| Report (small, high value) | Purpose | Typical trigger |
|---|---|---|
| Contact state + stability flag | Fast presence / insertion changes without streaming raw data | Edge-triggered: 0→1 / 1→0, debounce complete |
| Fit score + validity flag | Summarize seal/fit quality; avoid transmitting raw waveforms | Periodic low-rate update or on fail-to-fit |
| Temperature + warm-up state | Thermal trends; explicitly mark settling/warm-up phases | Low-rate periodic + event on threshold crossing |
| Error flags (brownout seen / retry / self-test fail) | Field diagnosability with minimal bandwidth | Event-driven, sticky until acknowledged |
| Sequence counter + payload CRC | Detect loss/duplication/corruption without heavy protocols | Attached to every critical metric packet |
Advertising vs connection: event-driven reporting is the default
- Advertising mode: low duty cycle for discovery and status beacons (minimal payload).
- Connection mode: short-lived sessions for metric bursts; avoid permanent connections unless required by the host.
- Event-driven triggers: contact change, fit fail, calibration completed, self-test fail, brownout detected.
Latency vs energy: connection interval and slave latency as the control knobs
- Lower energy: longer connection interval + higher slave latency + sparse updates.
- Lower latency: shorter interval (higher RF duty), but it raises the chance of overlap with sensing windows.
- Module requirement: protect sensing and acoustic capture with critical sections (quiet windows) and validity flags.
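The two knobs combine into simple first-order estimates. The worst-case expression assumes the peripheral may skip up to `slave_latency` connection events before it must listen, which is standard BLE behavior; exact figures depend on the stack:

```python
def worst_case_latency_ms(conn_interval_ms, slave_latency):
    """Worst-case wait before a queued metric can be sent: the link may
    skip `slave_latency` connection events before the next mandatory one."""
    return conn_interval_ms * (slave_latency + 1)

def radio_duty(conn_interval_ms, event_len_ms, slave_latency):
    """First-order radio duty when the peripheral skips idle events
    (ignores retries and advertising; a budgeting approximation)."""
    return event_len_ms / (conn_interval_ms * (slave_latency + 1))
```

With a 100 ms interval and slave latency 4, a contact-change report can wait up to ~500 ms, while radio duty drops to ~0.4% for 2 ms events: the trade the module must pick per metric.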
Packet loss handling: SEQ + CRC + rolling log (lightweight but complete)
- Sequence counter: increments per report; receiver detects gaps and reorders without ambiguity.
- Simple CRC: catches corruption, especially near rail transitions or RF peak-current events.
- Rolling event log (optional): a small ring buffer storing the last N events (contact edges, fit fails, brownouts) for field forensics.
Evidence & checks: reproduce dropouts and separate RF from power causes
- RF shadowing reproduction: fixed posture + head/hand blocking sequences; compare SEQ gaps against RSSI trend. Burst losses indicate margin issues or scheduling collisions.
- Peak TX current vs rail droop: capture VBAT/PMIC input waveform during TX bursts; correlate CRC fails/SEQ gaps with droop or UVLO events.
H2-10. Factory Calibration & Self-Test: Make It Repeatable at Scale
Factory calibration prevents “lab-only” solutions. The goal is not perfect absolute accuracy in one sample, but repeatability across units and lots. This requires consistent trimming, screening of hysteresis outliers, fixture-based acoustic baselines, and on-device self-tests that detect open/short and path failures before shipment.
Pressure/contact calibration: trim plus hysteresis screening
- Offset trim: remove static bias from assembly stress and sensor tolerance.
- Gain trim (where applicable): align sensitivity enough for comparable fit scoring across units.
- Hysteresis screening: run a short compression/insert cycle and measure baseline return + spread. Outliers are binned or failed because they break “repeatable reinsertion.”
Temperature calibration: 1-point vs 2-point strategy (cost vs robustness)
- 1-point calibration: lower cost and faster throughput; assumes good linearity and stable slope across lots.
- 2-point calibration: better slope control; requires longer thermal settling and more complex fixtures.
- Module discipline: calibrate the sensor’s electrical behavior in production; handle warm-up/lag behavior as a runtime state (reported as a flag).
Acoustic calibration: fixtures make “ear canal” repeatable
- Coupler fixture concept: a controlled acoustic load with stable volume/impedance to produce comparable transfer signatures.
- Three fast modes: baseline (normal), controlled leak, controlled blockage—used to build a robust pattern library.
- Stored results: high-level feature baselines rather than raw waveforms, so field checks can detect drift.
On-device self-test: verify paths, detect open/short, enforce sanity checks
- Excitation path check: confirm bias/excitation reaches the sensor and returns expected load range.
- Mic path sanity: quick noise-floor and response window check (no DSP deep dive required).
- Open/short detection: catch assembly faults and contamination-induced shorts early.
- Cross-check sanity: simple consistency rules (e.g., contact=0 should not look like a strong sealed acoustic signature).
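The cross-check rules are naturally expressed as a few guard clauses. A sketch; the fault strings and the physiological temperature window are illustrative assumptions:

```python
def self_test_cross_checks(contact, acoustic_sealed, temp_settled, temp_c):
    """Simple consistency rules across channels; returns a fault list.

    Placeholder limits: a settled in-ear reading outside 30-42 degC is
    treated as implausible for a worn device.
    """
    faults = []
    if not contact and acoustic_sealed:
        faults.append("sealed-signature-without-contact")  # mic/logic fault
    if temp_settled and not contact:
        faults.append("settled-temp-without-contact")      # stale validity flag
    if contact and temp_settled and not (30.0 <= temp_c <= 42.0):
        faults.append("temp-out-of-physiological-window")
    return faults
```

Because each rule names its own fault, EOL logs stay diagnosable: a failing unit reports which consistency rule broke, not just "self-test fail."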
EOL flow and pass/fail mindset: a minimal but complete production sequence
| Step | Action | Pass/Fail evidence type |
|---|---|---|
| 1 | Power-on, ID/version read, baseline leakage sanity | Signature OK, leakage within window |
| 2 | Pressure/contact offset & gain trim | Trim converges, residual offset bounded |
| 3 | Short cycle screen (compression/insert simulation) | Hysteresis spread & baseline return bounded |
| 4 | Temperature calibration (1-pt or 2-pt) + warm-up flag setup | Offset/slope within limits |
| 5 | Acoustic coupler: baseline/leak/block quick signatures | Feature separation margins met |
| 6 | Self-test suite (excitation/mic/open-short/cross-check) | All flags clear |
| 7 | Write NVM: cal params + version + CRC/signature | Read-back matches; CRC OK |
H2-11. Validation & Field Debug Playbook (Symptom → Evidence → Fix)
This playbook is built for fast isolation with minimal tools. Every symptom uses the same rule: capture two proofs only — one signal proof (sensor/feature) and one power proof (rail/current). If the two proofs do not agree, the problem is usually timing/windowing or leakage, not “noise”.
Recommended test points: TP-S (sensor/feature output or CDC reading), TP-P (VBAT or VDD_AFE/VDD_RF), plus one event counter log (SEQ / retry / fail-reason).
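The two-proof rule itself is mechanical enough to encode. A minimal routing sketch of the playbook's decision logic (the return strings are illustrative labels):

```python
def route_fault(signal_anomalous: bool, power_anomalous: bool) -> str:
    """Two-proof routing: when the signal proof (TP-S) and the power proof
    (TP-P) disagree, suspect timing/windowing or leakage before 'noise'."""
    if signal_anomalous and power_anomalous:
        return "correlated: power-domain or rail-coupled root cause"
    if signal_anomalous != power_anomalous:
        return "disagree: check timing/window overlap or leakage first"
    return "both clean: re-run capture with the same script"
```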
Symptom 1 — Fit score unstable between insertions
First 2 measurements
- Signal proof: fit score spread across N re-insertions (same user, same eartip). Record min/median/max.
- Power proof: TP-P rail droop or peak current during the fit-evaluation window (compare “good” vs “bad” insertion).
Discriminator (what proves what)
- If contact baseline is still jittering after debounce → mostly mechanics/material hysteresis (micro-slip, foam compliance).
- If contact is stable but fit spread is large → suspect vent/membrane acoustic shift or pressure sensor hysteresis.
- If spread appears only when radio/calibration runs → suspect window overlap (feature captured during rail/RF disturbance).
First fix (do first, not “tune forever”)
- Mechanics first: lock sensor seat, reduce shear path, add strain relief; re-check spread.
- Timing next: move feature window into a “quiet slot” (no RF burst, no rail switching transient).
- Threshold last: add hysteresis gate only after mechanical repeatability improves.
MPN suspects (examples)
- Pressure baseline sensor: Bosch BMP390 / Infineon DPS368 / ST LPS22HH
- Capacitive contact CDC: TI FDC2214 / ADI AD7746
- ULP BLE SoC (for clean scheduling): Nordic nRF52832 / TI CC2340R5 / Renesas DA14531 / Silicon Labs EFR32BG22
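The Symptom 1 signal proof (min/median/max over N re-insertions) reduces to a small summary function. A sketch assuming a 20% spread budget, which is an illustrative cutoff rather than a sourced spec:

```python
from statistics import median

def fit_spread(scores: list[float]) -> dict:
    """Summarize fit-score spread across N re-insertions (Symptom 1 signal proof).
    Records min/median/max plus a normalized spread; the 20% 'unstable'
    cutoff is an assumed budget, to be set from real fleet data."""
    lo, mid, hi = min(scores), median(scores), max(scores)
    spread = (hi - lo) / mid if mid else float("inf")
    return {"min": lo, "median": mid, "max": hi, "unstable": spread > 0.2}
```

Running this before and after each mechanical fix gives the "re-check spread" evidence the first-fix list asks for.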
Symptom 2 — False “good fit” when a leak exists
First 2 measurements
- Signal proof: acoustic leak indicator feature (LF loss / transfer magnitude anomaly) vs a known-good insertion.
- Power proof: check if the “good-fit” decision happens during a rail transient (buck burst / RF spike) → false feature.
Discriminator
- Leak feature says “leak”, but pressure/contact says “stable” → suspect blocked mic port or membrane/vent changing transfer.
- Leak feature unstable across repeats → suspect stimulus window too short or capture overlaps with disturbances.
- Leak feature consistent but decision wrong → suspect gating logic (pressure/contact should gate acoustic “good”).
First fix
- Pattern check: run a forced “blocked-port” vs “leak” controlled test (fixture or simple cap/vent jig) and store signatures.
- Mechanical: revise vent/membrane to preserve LF leakage observability (do not over-damp the leak cue).
- Logic: require contact/pressure stability before allowing acoustic “good-fit”.
MPN suspects (examples)
- Mic for acoustic capture: TDK InvenSense ICS-40730 (analog, differential)
- Pressure sensor: Bosch BMP390 / Infineon DPS368 / ST LPS22HH
- ULP LDO for quiet mic/AFE rail: TI TPS7A02
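The "Logic" item in the first-fix list (require contact/pressure stability before allowing acoustic "good-fit") can be sketched as a gate. The status strings are illustrative labels:

```python
def fit_decision(contact_stable: bool, pressure_stable: bool,
                 acoustic_says_sealed: bool) -> str:
    """Gating rule: acoustic 'good-fit' is only allowed once contact/pressure
    stability is established, so a transient cannot produce a false 'good'."""
    if not (contact_stable and pressure_stable):
        return "fit-unknown"  # never report 'good' from acoustics alone
    return "good-fit" if acoustic_says_sealed else "leak-suspected"
```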
Symptom 3 — Temperature reads high/low or drifts during use
First 2 measurements
- Signal proof: temperature step response and settling time (warm-up curve) at insertion and after 10–30 min.
- Power proof: correlate temperature drift with current spikes and regulator self-heating (radio/calibration bursts).
Discriminator
- Drift matches load spikes → self-heating coupling (sensor too close to PMIC/MCU or poor thermal isolation).
- Offset is stable but wrong → calibration strategy issue (1-point vs 2-point; assembly-to-assembly spread).
- Slow creep with moisture/wax exposure → thermal path variability (sealants, membranes, contamination layers).
First fix
- Scheduling: sample temperature in a low-power quiet window; reduce adjacent rail activity near sample.
- Placement: increase thermal isolation from hot rails; keep repeatable thermal coupling to canal.
- Calibration: store per-unit offset (and slope only when needed); re-check drift after 30–60 min soak.
MPN suspects (examples)
- Digital temperature sensor: TI TMP117
- ULP PMIC / charger: Nordic nPM1300 / TI BQ25120A
- Rail isolation / gating: TI TPS22910A load switch + TI TPS7A02 LDO
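The Symptom 3 discriminator "drift matches load spikes" is a correlation question. A minimal sketch using plain Pearson correlation over time-aligned samples; the 0.8 threshold is an assumed cutoff:

```python
def pearson(x: list[float], y: list[float]) -> float:
    """Plain Pearson correlation (no NumPy) for short telemetry logs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def self_heating_suspected(temp: list[float], current: list[float],
                           r_min: float = 0.8) -> bool:
    """If temperature drift tracks load current tightly, suspect self-heating
    coupling (placement/thermal isolation) before the thermal path."""
    return pearson(temp, current) > r_min
```

Samples must be time-aligned (same quiet-window scheduler) for the correlation to mean anything.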
Symptom 4 — Random contact dropouts during motion
First 2 measurements
- Signal proof: dropout histogram classifying short glitches (ms) vs long gaps (≥ connection interval), logged against the SEQ counter.
- Power proof: check TP-P for droop/UVLO during dropout; compare with TX bursts.
Discriminator
- Mostly short glitches → mechanical micro-slip or high-impedance node sensitivity (sweat leakage, cable tug).
- Long gaps aligned with droop → power domain collapse or load-switch timing.
- Dropouts only during radio activity → sensing window overlaps with RF/rail critical section.
First fix
- Mechanics: add strain relief, reduce shear at electrode/sensor interface; re-run motion script.
- AFE robustness: increase hysteresis and validate leakage paths; add guard/shield routing where possible.
- Timing: shift contact sampling away from TX burst; enforce critical sections for read/commit.
MPN suspects (examples)
- Capacitive contact CDC: TI FDC2214 / ADI AD7746
- Low-leak front-end amplifier option: ADI AD8237 (zero-drift INA, micropower)
- ULP BLE SoC: Nordic nRF52832 / TI CC2340R5 / Renesas DA14531 / Silicon Labs EFR32BG22
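The short-glitch vs long-gap split drives the Symptom 4 discriminator directly. A sketch assuming a 30 ms connection interval (an example value, not a spec):

```python
def classify_dropouts(gaps_ms: list[float], conn_interval_ms: float = 30.0) -> dict:
    """Histogram the dropout log into short glitches vs long gaps (Symptom 4
    signal proof). 30 ms connection interval is an assumed example value."""
    short = [g for g in gaps_ms if g < conn_interval_ms]
    long_ = [g for g in gaps_ms if g >= conn_interval_ms]
    return {"short_glitches": len(short), "long_gaps": len(long_),
            # routing hint per the discriminator: mostly-short -> mechanics,
            # otherwise look at power domains / scheduling first
            "hint": "mechanical/micro-slip" if len(short) > len(long_)
                    else "power or scheduling"}
```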
Symptom 5 — Battery drain spikes during calibration
First 2 measurements
- Signal proof: calibration timeline (stimulus length, sample rate, retry count, fail-reason counter).
- Power proof: current profile overlay (calibration window + radio events + rail enables).
Discriminator
- Spikes coincide with RF bursts → reporting schedule too dense or connection interval too aggressive.
- Spikes coincide with stimulus/capture only → capture window too long or repeated retries due to invalid features.
- Spikes cause brownout → peak current margin insufficient or rail gating order wrong.
First fix
- Cut retries first: cap retry count; if invalid → degrade and report “fit-unknown” rather than endless loops.
- Shorten window: measure only the minimum features that separate leak vs blocked vs good.
- Power state: separate calibration rail from RF rail; ensure UVLO margin and controlled turn-on.
MPN suspects (examples)
- ULP PMIC / charger: Nordic nPM1300 / TI BQ25120A
- Load switch for rail gating: TI TPS22910A
- Nano-IQ LDO for quiet sensing: TI TPS7A02
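The "cut retries first" fix is a small control loop: cap the retry count and degrade to "fit-unknown" instead of looping. A minimal sketch; `capture` is a hypothetical callable standing in for one stimulus-plus-feature-extraction pass:

```python
def run_calibration(capture, max_retries: int = 3) -> dict:
    """Retry-capped calibration loop (Symptom 5 first fix): if no valid
    feature set arrives within max_retries, report 'fit-unknown' rather
    than draining the battery in an endless loop. `capture` returns a
    feature dict, or None for an invalid capture."""
    for attempt in range(max_retries):
        features = capture()
        if features is not None:
            return {"status": "calibrated", "features": features,
                    "retries": attempt}
    return {"status": "fit-unknown", "retries": max_retries}
```

Logging `retries` alongside the fail-reason counter gives exactly the calibration-timeline evidence the signal proof asks for.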
MPN Starter List (fast A/B isolation — examples)
These part numbers are practical "swap candidates" to validate root-cause hypotheses quickly. Final selection should be driven by package, leakage, and mechanical integration constraints.
| Function | MPN examples | When it helps in debug |
|---|---|---|
| Pressure baseline | Bosch BMP390 · Infineon DPS368 · ST LPS22HH | Fit repeatability vs insertion depth; leak-vs-contact cross-check; sensor hysteresis screening |
| Capacitive contact CDC | TI FDC2214 · ADI AD7746 | False contact events, motion micro-slip signatures, high-impedance robustness comparison |
| Mic for acoustic capture | TDK InvenSense ICS-40730 | Distinguish “blocked port” vs “true leak” patterns; improve SNR margin for short windows |
| Temp sensor | TI TMP117 | Separate thermal path issues vs calibration offset; drift vs self-heating correlation |
| Instrumentation / low-drift front-end option | ADI AD8237 | Leakage/offset-driven false edges; compare chopper/zero-drift behavior vs bandwidth needs |
| ULP BLE SoC | Nordic nRF52832 · TI CC2340R5 · Renesas DA14531 · Silicon Labs EFR32BG22 | Scheduling determinism; connection interval/latency trade; SEQ/CRC integrity implementation |
| PMIC / charger | Nordic nPM1300 · TI BQ25120A | Calibration current spikes; rail partitioning; battery path stability during bursts |
| Quiet LDO | TI TPS7A02 | Mic/AFE quiet rail; reduce feature corruption from rail noise; improve repeatability |
| Load switch (rail gating) | TI TPS22910A | Duty-cycling AFE/sensors; isolate brownout; enforce clean on/off edges |
Figure F11 — Field Debug Decision Tree (Two-Proof Method)
Use the left column to pick a symptom, then collect exactly two proofs. Route to the first-fix bucket and re-test with the same script.
H2-12. FAQs ×12 (Evidence-First, No Scope Creep)
Each answer forces a two-proof method: one signal proof (baseline/feature/histogram) plus one power/timing proof (rail/peak current/scheduling overlap). If the two proofs disagree, the root is usually timing-window corruption or leakage, not “random noise”.
1) Fit score changes every insertion — mechanical variance or sensor offset?
Start by separating insertion-to-insertion spread from static drift. Proof #1: record fit score spread across 8–10 reinserts, plus the pressure/contact baseline “return-to-band”. Proof #2: hold a stable insertion and watch baseline drift for 60–120 s. Large spread with good static stability points to mechanics/material hysteresis; static drift points to AFE offset/leakage or calibration trim limits.
2) “Good seal” but bass still leaks — pressure says OK, acoustic says leak: which to trust?
Treat pressure/contact as “stable placement,” not guaranteed acoustic sealing. Proof #1: compare the acoustic leak feature (LF loss / transfer magnitude cue) against a known-good insertion; also run a “blocked-port” control. Proof #2: confirm pressure/contact stability is not merely averaging transient states. If acoustic says leak while contact is stable, prioritize vent/membrane transfer changes or mic-port partial blockage before blaming pressure.
3) Contact detection flickers during running — thresholding or cable/strain microphonics?
Convert “flicker” into a histogram. Proof #1: classify dropouts as short glitches (ms) versus long gaps (≥ connection interval). Proof #2: repeat with a controlled tug/strain script; if edge rate rises with strain, the root is mechanical microphonics/routing. If flicker clusters near a threshold without strain sensitivity, the fix is hysteresis and a clean debounce window (not heavy filtering).
4) Temperature rises during music — real ear temperature or PMIC self-heating?
Use correlation. Proof #1: temperature curve with a warm-up flag (settling versus continuous climb). Proof #2: overlay current/rail activity during playback and radio bursts. If temperature steps or ramps tightly track load peaks, self-heating coupling dominates (placement/thermal isolation/schedule). If temperature changes persist when load is flat and follow a slow thermal constant, the reading likely reflects the ear-canal thermal path.
5) Temperature reads slow — how to reduce lag without losing protection?
Lag is usually set by the thermal RC of packaging + protective layers. Proof #1: measure step response time constant (insertion into a stable environment or controlled coupler). Proof #2: compare two placements (near canal vs shell) while keeping firmware constant. If the time constant is dominated by membranes/sealants, improve thermal coupling consistency (thin protective stack, controlled contact pressure) rather than removing protection. Validate with soak drift.
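Proof #1 (step-response time constant) has a simple first-order estimate: the time at which the reading crosses 63.2% of the final step. A sketch assuming the capture starts at the step and runs long enough to see the final value:

```python
def thermal_tau(samples: list) -> float:
    """Estimate the thermal time constant from a step response (FAQ 5,
    Proof #1): time to reach 63.2% of the total step, assuming roughly
    first-order behavior. `samples` are (time_s, temp_C) pairs starting
    at the step, with the last sample taken as settled."""
    t0, start = samples[0]
    _, final = samples[-1]
    target = start + 0.632 * (final - start)
    for t, temp in samples:
        if temp >= target:
            return t - t0
    return float("inf")  # never settled within the capture window
```

Comparing tau for "near canal" vs "shell" placements, with firmware held constant, isolates the packaging contribution as the FAQ suggests.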
6) After sweat exposure, pressure baseline shifts — contamination or membrane/venting change?
Separate “surface leakage” from “transfer change.” Proof #1: baseline offset and drift direction before/after sweat, plus reversibility after drying/cleaning. Proof #2: check if the acoustic signature shifts in the same direction (vent/membrane change typically moves acoustic features). If pressure shifts but acoustics stay stable, suspect leakage paths on high-impedance nodes or sensor port contamination. If both shift, prioritize vent/membrane impedance changes.
7) Acoustic calibration fails only in noisy places — mic SNR or timing window?
Determine whether the failure is SNR-limited or collision-limited. Proof #1: capture a simple SNR proxy for the calibration feature (noise floor vs stimulus response) and compare quiet vs noisy sites. Proof #2: log whether failures cluster at specific times aligned with BLE events or rail switching. If SNR collapses, increase robustness (shorter window, stronger stimulus, better mic path). If timing clusters, move the window into a quiet slot and lock critical sections.
8) Battery drain spikes when the user reinserts often — what power state is stuck?
Look for a retry loop or an “exit condition” failure. Proof #1: event counters (calibration starts, retries, fail reasons, time spent in each state). Proof #2: current profile with rail enables (VDD_AFE/VDD_RF) over time. If reinsertion triggers repeated calibrations, the system needs retry caps and a degrade-to-unknown path. If a rail stays on after failure, fix symmetry of enable/disable and validate UVLO margins. Duty-cycle audit should hit nA–µA leakage targets in idle.
9) BLE dropouts correlate with sensing bursts — rail droop or scheduling collision?
Use time alignment between packets and rails. Proof #1: sequence-counter gaps or CRC failures with timestamps. Proof #2: rail droop/peak current during sensing bursts and radio TX. If dropouts align with droop (or UVLO flags), solve peak-current margin and rail partitioning first. If dropouts align with sensing windows but rails are clean, it’s a scheduling collision: move sensing to a quiet slot, enforce critical sections around read/commit, and reduce BLE density during bursts.
10) Factory yield is poor on the contact sensor — fixture issue or hysteresis spec?
Production needs repeatability, not “lab perfection.” Proof #1: run the same unit across two fixtures/operators and compare pass/fail consistency; large changes indicate fixture force/angle/placement variability. Proof #2: apply a short hysteresis loop test (compression/relax or controlled proximity) and check if the loop width exceeds the decision band. If fixture dominates, control insertion depth/force and add alignment keys. If hysteresis dominates, adjust the spec, screen parts, or change modality/AFE biasing so the hysteresis is measurable and bounded.
11) Fit score drifts over weeks — material creep or AFE leakage aging?
Separate mechanical creep from electrical leakage. Proof #1: run a standardized compression/relax script and measure baseline return over multiple cycles; worsening return indicates material creep or seat deformation. Proof #2: perform a high-impedance leakage audit (static offset drift, humidity sensitivity, recovery after drying). If drift tracks compression history, change material stack or sensor seat geometry. If drift tracks humidity and becomes less reversible, prioritize leakage paths, ESD clamp leakage, and bias/excitation strategy. Store “age flags” and trend metrics to catch degradation early.
12) How to set hysteresis so it’s stable but responsive?
Hysteresis should be set between the noise/drift envelope and the true event amplitude. Proof #1: collect distributions for real events (reinsertion, walking, jaw motion) and find the low-percentile event amplitude. Proof #2: measure the high-percentile noise/drift amplitude in quiet steady wear. Set hysteresis slightly above the noise high-percentile but below the event low-percentile. Validate with two scripts: (a) no false toggles during steady wear, (b) fast detection during reinsertion. Avoid “bigger is safer” bias.
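The percentile rule above can be sketched directly. Assumptions: amplitudes are on a common scale, and the 95th/5th percentiles are illustrative choices for the noise high-percentile and event low-percentile:

```python
def pick_hysteresis(noise_amps: list, event_amps: list,
                    noise_pct: float = 0.95, event_pct: float = 0.05):
    """Set hysteresis between the noise/drift envelope and the true event
    amplitude (FAQ 12): above the noise high percentile, below the event
    low percentile. Returns None if the bands overlap (no safe setting:
    fix mechanics/AFE first rather than widening the band)."""
    def pct(vals, p):
        s = sorted(vals)
        return s[min(int(p * len(s)), len(s) - 1)]
    hi_noise = pct(noise_amps, noise_pct)
    lo_event = pct(event_amps, event_pct)
    if hi_noise >= lo_event:
        return None
    return (hi_noise + lo_event) / 2.0  # midpoint of the safe band
```

The `None` branch encodes the "avoid bigger-is-safer" advice: an overlapping distribution is a hardware problem, not a threshold problem.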
Figure F12 — FAQ → Evidence Chain Map (Chapter Anchors)
Each FAQ is forced back to the same evidence chain: sensing fundamentals, mechanics, calibration, power, BLE robustness, factory/self-test, and field decision tree.