
Eartip Fit & Bio-Sensing Module: Fit, Temp & Acoustic Cal


Core idea: This module turns “ear-tip fit” into measurable evidence by combining pressure/contact stability, in-ear temperature sanity, and an acoustic leak signature, and then reporting a repeatable fit score at ultra-low power.

Instead of guessing, it uses a two-proof method (signal feature + power/timing correlation) to quickly separate mechanical seal issues from AFE leakage/offset and scheduling/rail collisions—so the first fix is obvious and testable.

H2-1. What the Module Is and What “Good Fit” Means (Engineering Definition)

This page defines an eartip fit & bio-sensing module as a measurable, testable sub-system: pressure/contact stability + a compact ear-canal acoustic signature for seal/leak indication, plus in-ear temperature with low-power telemetry. The boundary is the module (sensors + AFEs + ULP PMIC + BLE), not the full earbud audio product.

Inputs: pressure/contact, temperature, calibration test-tone trigger • Outputs: fit metrics/flags, contact events, temperature + validity • Proof: waveforms + feature signature + current profile

Module boundary (what is inside vs. outside)

The module includes the eartip mechanics (seal/vent/membrane), pressure/contact sensing, temperature sensing, sensor AFEs (excitation/IA/PGA/filter/ADC), a small controller or BLE SoC for telemetry, and an ultra-low-power PMIC that gates rails and enforces sleep. The host interface is limited to I²C/SPI/GPIO (configuration + readback) and optional “calibration trigger” for an acoustic test tone.

Anti-overlap rule: anything that depends on full earbud audio architecture (ANC, spatial audio pipelines, LE Audio features, charging case design) stays outside this page. Only the minimal “test tone → mic capture” loop is referenced, solely to calibrate the ear-canal seal signature.

What “fit / seal / contact” means in signals (not opinions)

  • Contact presence is an event + persistence problem: the module must detect “in-contact / out-of-contact” with controlled debounce/latency and a low false-trigger rate under motion.
  • Stability is the “does it settle?” question: after insertion, the pressure/contact signal should converge to a stable band (no random walk, no step-like toggling).
  • Seal quality is a cross-check: an ear-canal acoustic calibration feature (transfer signature) indicates “leak-like” vs “sealed-like” behavior, and is used to validate or correct pressure-only conclusions.
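The debounce/latency requirement above can be made concrete with a small event-detector sketch. The thresholds, sample counts, and the `contact_events` name are illustrative assumptions, not a reference implementation:

```python
def contact_events(samples, t_hi, t_lo, debounce_n):
    """Hysteresis + debounce: toggle the contact state only after the
    signal stays past a threshold for debounce_n consecutive samples.
    t_hi/t_lo are the assert/release thresholds (t_hi > t_lo)."""
    state, run, events = False, 0, []
    for i, x in enumerate(samples):
        crossing = (not state and x > t_hi) or (state and x < t_lo)
        run = run + 1 if crossing else 0
        if run >= debounce_n:
            state = not state          # debounced transition
            events.append((i, state))  # (sample index, new contact state)
            run = 0
    return events
```

Because release also needs `debounce_n` consecutive sub-threshold samples, a one-sample dropout under motion does not emit a false "out-of-contact" event; the cost is the bounded latency the text calls for.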

What “bio-sensing” means here (strictly scoped)

“Bio-sensing” on this page is limited to in-ear temperature and contact presence as the quality gate (temperature is meaningful only when contact is stable). It does not include PPG/SpO₂/EDA or clinical-grade sensing.

Minimal acceptance criteria (written as testable statements)

  • Repeatable fit score: repeated insertions of the same user produce a tight distribution (no bimodal “sometimes good, sometimes bad” without a matching mechanical explanation).
  • Stable temperature reading: temperature output includes a validity flag (warming/settled), and drift correlates with thermal physics (not PMIC self-heating artifacts).
  • Low energy cost: continuous monitoring uses duty-cycled sensing + event-driven BLE; acoustic calibration is an occasional burst with bounded energy impact.

Evidence & checks (first two measurements)

  • Fit score repeatability across insertions: record “stable-window” fit metrics after each insertion, inspect distribution shape (spread, outliers, bimodality) and time-to-stable.
  • Contact latency & false positives: log the raw contact/pressure waveform and the reported event timestamps; verify debounce/hysteresis removes motion chatter without adding unacceptable lag.
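As a sketch of the first check, the stable-window fit metrics can be summarized per batch of insertions. The `fit_repeatability` helper and the 2-sigma outlier cue are hypothetical analysis choices, not a defined scoring method:

```python
def fit_repeatability(scores):
    """Summarize fit-score spread across repeated insertions of one user.
    A wide sigma or nonzero outlier count is the 'investigate mechanics'
    cue described in the acceptance criteria."""
    n = len(scores)
    mean = sum(scores) / n
    sigma = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    # crude bimodality/outlier cue: scores more than 2 sigma from the mean
    outliers = sum(1 for s in scores if abs(s - mean) > 2 * sigma) if sigma else 0
    return {"mean": mean, "sigma": sigma, "outliers": outliers}
```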
Figure F1. Module boundary for an eartip fit & bio-sensing sub-system: sensors + AFEs + BLE telemetry + ULP PMIC + minimal host interface. Full earbud audio features remain out of scope.

H2-2. System Architecture: Signals, Interfaces, and Data Paths

The architecture is best understood as three evidence paths sharing one low-power controller: (A) pressure/contact for presence + stability, (B) temperature with validity state, and (C) an acoustic calibration loop that produces a compact seal/leak signature. Each path has measurable test points and a defined timing window to avoid power/EMI coupling.

Path A: contact/stability metrics • Path B: temperature + validity • Path C: seal/leak signature

Signal inputs and outputs (what is carried across the interface)

  • Inputs: pressure/contact sensor raw channel(s), temperature sensor channel, optional “calibration trigger” event (to run a short test tone + mic capture).
  • Outputs: fit metrics (stability + seal quality), contact events (edge-triggered), temperature value plus validity (warming/settled), and diagnostics flags (sensor open/short, saturation, low-battery inhibit).
  • Integrity fields: sequence counters + rolling CRC for telemetry packets (to detect gaps without heavy logging).
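A minimal sketch of the integrity fields, assuming a 1-byte sequence counter and CRC-8; the frame layout and function names are illustrative, not a defined wire format:

```python
def crc8(data, poly=0x07, init=0x00):
    """Bitwise CRC-8 (polynomial x^8 + x^2 + x + 1) over a byte sequence."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def make_packet(seq, payload):
    """Telemetry frame: [seq (1 byte)] + payload + [CRC-8 over seq+payload]."""
    body = bytes([seq & 0xFF]) + bytes(payload)
    return body + bytes([crc8(body)])

def check_stream(packets, expected_seq=0):
    """Count gaps and CRC errors without storing full logs."""
    gaps = crc_errs = 0
    for pkt in packets:
        if crc8(pkt[:-1]) != pkt[-1]:
            crc_errs += 1
            continue
        if pkt[0] != expected_seq & 0xFF:
            gaps += 1  # missing frame(s) detected from the counter jump
        expected_seq = pkt[0] + 1
    return gaps, crc_errs
```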

Three paths, three failure patterns (kept hardware-first)

The goal is not “more algorithms.” The goal is discriminators: pressure-only vs acoustic-leak, real temperature vs PMIC self-heating, true contact drop vs motion chatter.
  • (A) Pressure/Contact Path — event + stability: excitation → sensor → IA/PGA → LPF → ADC → debounce/hysteresis → contact & stability metrics. Common root causes: leakage on high-impedance nodes, moisture-induced bias drift, mechanical rebound.
  • (B) Temperature Path — slow signal + thermal coupling: bias/ADC → smoothing matched to thermal time constant → temp + validity flag. Common root causes: PMIC self-heating coupling, insulation by wax/sweat, placement-driven lag.
  • (C) Acoustic Calibration Loop — feature signature: short test tone → ear canal → mic capture → feature extraction → seal/leak signature → cross-check with pressure stability. Common root causes: noisy environments, blocked mic port, timing collisions with radio events.

Interfaces to host (minimal set to preserve module independence)

  • I²C/SPI: configure thresholds, sampling duty-cycle, calibration enable; read back metrics/flags.
  • IRQ/GPIO: contact change, fit-fail, cal-done (event-driven, reduces polling power).
  • Optional trigger: host requests a calibration burst (test tone + capture). No full audio pipeline is described here.

Timing budget (why windowing matters)

The module should be scheduled around non-overlapping windows to preserve measurement fidelity: (1) sensing window (quiet rails, stable bias), (2) compute window (feature + thresholds), (3) BLE window (radio burst), and (4) optional calibration window (tone + capture). In practice, radio current spikes and ground bounce can corrupt high-impedance sensing unless windows are explicitly separated.
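The windowing rule can be expressed as a tiny frame scheduler that refuses overlap by construction. The frame length, window names, and the `build_schedule` helper are assumptions for illustration only:

```python
def build_schedule(frame_ms, windows):
    """Lay out strictly non-overlapping activity windows inside one frame.
    windows: list of (name, duration_ms) in execution order.
    Returns [(name, start_ms, end_ms)]; raises if the frame overflows."""
    t, plan = 0, []
    for name, dur in windows:
        if t + dur > frame_ms:
            raise ValueError(f"frame overflow at {name!r}")
        plan.append((name, t, t + dur))
        t += dur  # the next window starts only after the previous one ends
    return plan
```

Sequential placement guarantees the radio burst can never land inside the quiet sensing window, which is the property the timing budget is protecting.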

Evidence & checks (what to measure before changing anything)

  • Timing trace: log the sensing/compute/BLE event timeline and correlate with fit-score jitter or contact flicker.
  • Current profile: capture the rail current waveform during (A) normal monitoring and (B) calibration burst to ensure the average budget holds and no state gets stuck “on.”
  • Test points (recommended): AFE output/ADC input (TP1), temperature ADC read (TP2), mic feature capture (TP3), PMIC rail current sense/shunt (TP4), BLE sequence counter & retry stats (TP5).
Figure F2. Three-path architecture with recommended test points and timing windows. The acoustic loop is used only to generate a compact seal/leak signature, not to describe full audio DSP pipelines.

H2-3. Pressure & Contact Sensing Fundamentals (What the Sensor Really Measures)

Fit sensing becomes reliable only when it is anchored to what the sensor actually measures. Many “pressure” or “contact” channels are dominated by material compression, contact area, and vent/leak dynamics rather than a clean, absolute ear-canal pressure number. This chapter maps sensing modalities to their dominant error sources and the waveforms that prove each root cause.

Goal: physics-first discriminators • Outputs: contact presence + stability • Evidence: hysteresis + step-vs-drift

Pressure sensing options (what each modality is truly sensitive to)

  • Piezoresistive (strain/compression): strong response to mechanical deformation of foam/silicone and support structures. Typical signature is a fast step at insertion followed by creep (slow drift) as the material relaxes. Best for: contact/stability cues. Watch for: hysteresis and temperature drift.
  • Capacitive (gap/area changes): measures changes in distance or effective contact area. It can be extremely sensitive to small geometric changes, but also sensitive to moisture/contamination and parasitic coupling in compact assemblies. Best for: proximity/contact gating. Watch for: humidity shifts and parasitic capacitance.
  • MEMS barometer repurpose (micro-pressure/leak dynamics): becomes useful when the eartip + ear canal forms a quasi-cavity. The value is often the seal/leak behavior (how pressure decays) rather than absolute pressure. Best for: leak/vent discrimination. Watch for: “not a real cavity” cases and vent geometry changes.
Practical rule: treat “pressure amplitude” as secondary. The primary fit proof is stability over time and how the signal responds to controlled disturbances (insert, tug, chew, re-seat).

Contact sensing options (quality gate before other measurements)

  • Capacitive proximity/contact: good for “present/not present” gating and can be low power, but requires careful control of parasitics and stable referencing. Failure pattern: moisture films mimic contact; motion changes coupling.
  • Resistive contact: simple and robust if contact surfaces remain stable. Failure pattern: sweat/wax changes contact resistance; micro-slip causes chatter without true loss of seal.
  • Impedance-based “skin contact” concept: can separate “touch” vs “firm contact” by observing impedance changes under controlled excitation. Failure pattern: excitation/AFE leakage makes the channel look “always contacted.”
Anti-overlap note: this page treats contact as a module-level quality gate. It does not expand into clinical impedance sensing, physiology interpretation, or wearable bio-signal modalities outside in-ear temperature + contact presence.

Dominant error sources (symptom → physical cause → discriminator)

  • Vent leakage: signals show faster decay and poor low-frequency retention. Discriminator: a stable “contact present” with a drifting/decaying channel suggests leak/vent dynamics.
  • Insertion depth variance: changes the initial amplitude and geometry but can still produce a stable plateau. Discriminator: wide insertion-to-insertion spread with otherwise clean stabilization.
  • Jaw motion (chew/talk): introduces repeatable low-frequency modulation. Discriminator: periodic waveform tied to jaw cadence rather than random noise.
  • Cable/structure tug: produces short step-like events and rapid recovery if the seal is intact. Discriminator: sharp spikes with fast return vs slow drift.
  • Foam compliance & creep: causes slow drift after insertion and clear hysteresis over cycles. Discriminator: drift time constant matches material relaxation; repeated cycles show looped trajectories.

Evidence & checks (what to run before changing thresholds)

  • Hysteresis vs insertion cycle: run repeated insert/remove cycles, capture peak → plateau → release paths. Confirm whether the “same fit” produces consistent plateaus or shows cycle-dependent offsets.
  • Motion artifact signatures (step vs drift): use two controlled disturbances: (1) short tug/tap (step-like), (2) chew/turn (slow modulation/drift). Compare recovery time and event chatter rate.
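The step-vs-drift check can be sketched as a two-threshold classifier; the thresholds, labels, and `classify_disturbance` name are fixture-dependent assumptions:

```python
def classify_disturbance(samples, step_thresh, drift_thresh):
    """'step'  : any single-sample jump above step_thresh (tug/tap),
       'drift' : no jump, but the trace moves more than drift_thresh
                 end-to-end (chew/creep),
       'stable': neither."""
    max_jump = max(abs(b - a) for a, b in zip(samples, samples[1:]))
    if max_jump > step_thresh:
        return "step"
    if abs(samples[-1] - samples[0]) > drift_thresh:
        return "drift"
    return "stable"
```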
Figure F3. Modality map linking pressure/contact sensing options to dominant mechanical error sources and the two fastest evidence checks: insertion hysteresis loops and step-vs-drift motion signatures.

H2-4. Temperature Sensing in the Ear: Accuracy, Lag, and Drift Budget

In-ear temperature is only meaningful when the system accounts for placement, the thermal time constant, and self-heating coupling. A robust module treats temperature as a measured value plus a validity state (warming vs settled), and it separates true ear-canal trends from PMIC/battery thermal artifacts.

Placement: response vs robustness • Model: thermal RC + heat coupling • Evidence: step response + 60-min drift

Sensor placement trade-offs (thermal coupling vs protection)

  • Near canal: faster response and closer to actual ear-canal temperature, but higher exposure to moisture/wax and mechanical abrasion. Protection layers (membranes/coatings) improve reliability but can increase thermal resistance and slow response.
  • Shell-mounted: easier to protect and integrate, but more sensitive to self-heating from nearby PMIC/radio activity and less representative of canal temperature during short windows.
  • Embedded: stable mechanically and manufacturable, but large thermal time constant; it becomes a “trend sensor” unless you explicitly model warm-up and validity.
Engineering rule: temperature output should always carry a validity flag (warming / settled / disturbed), because insertion, radio bursts, and calibration activity can temporarily bias the reading.

Thermal time constant and “warm-up” behavior (why instant readings mislead)

Temperature sensing is governed by a simple thermal RC: heat must flow from the ear canal through materials and interfaces (thermal resistance) into the sensor mass (thermal capacitance). After insertion, a “warm-up curve” is expected. Robust designs define a settled criterion (e.g., slope below a threshold over a window) rather than trusting the first few seconds.
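A minimal sketch of the settled criterion, assuming a slope-over-window test; the slope limit, window length, and `temp_validity` name are placeholders to be characterized per design:

```python
def temp_validity(readings, dt_s, slope_limit, window_n):
    """Report 'warming' until the slope over the last window_n samples
    stays below slope_limit (degrees per second); then 'settled'."""
    if len(readings) < window_n:
        return "warming"  # not enough history to judge
    window = readings[-window_n:]
    slope = abs(window[-1] - window[0]) / (dt_s * (window_n - 1))
    return "settled" if slope < slope_limit else "warming"
```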

Calibration strategy (production-friendly, module-scoped)

  • Offset trim: corrects device-to-device baseline errors with minimal cost. Useful when placement is consistent.
  • Slope trim: improves cross-temperature accuracy but requires two-point characterization or tighter test control.
  • Ambient compensation proxy: when direct ambient sensing is unavailable, use module state cues (sleep/active, radio burst) as a proxy to prevent reporting “disturbed” temperature as true ear temperature.
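Offset and slope trim together amount to a two-point linear correction. This sketch assumes two reference conditions (e.g. calibration baths) and hypothetical raw readings:

```python
def two_point_trim(raw_lo, raw_hi, ref_lo, ref_hi):
    """Derive slope/offset correction from two reference points.
    Returns a function mapping raw sensor readings to corrected values.
    With only one reference point, fix gain = 1.0 for offset-only trim."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return lambda raw: gain * raw + offset
```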

Moisture and wax: protection vs thermal coupling

Hydrophobic membranes, coatings, and sealing features reduce contamination and corrosion risk, but they can reduce thermal coupling and increase response time. The design should be validated with step response tests and long-duration drift tests under sweat/wax exposure, then adjusted with placement, materials, or reporting logic (validity flags) rather than over-fitting thresholds.

Evidence & checks (two tests that reveal most issues)

  • Step response & settling time: record the temperature curve at insertion under controlled conditions to extract time-to-settle and verify the validity state thresholds.
  • 30–60 minute drift + correlation with heating: log temperature alongside PMIC/radio activity markers and current profile. If temperature jumps align with radio or power bursts, self-heating coupling is likely dominating.
Figure F4. Placement options and a simplified thermal RC model showing how true ear-canal temperature couples to the sensor node, while PMIC/radio activity can inject self-heating bias. Temperature should be reported with a validity state.

H2-5. AFEs for Pressure/Contact/Temp: Noise, Offset, Excitation, ADC Choices

A reliable fit/bio module is limited less by “sensor type” and more by the measurement chain: excitation → sensor interface → IA/PGA → filtering → ADC. In this form factor, the dominant failure modes are usually low-frequency noise (1/f), offset drift, and leakage paths (including ESD clamp leakage and surface contamination). This chapter turns those into testable budgets.

Priority: stability over peak sensitivity • Risks: drift + leakage • Proof: noise & decay tests

Excitation strategies (resistive vs capacitive) — why they reshape noise and power

Resistive sensors / resistive contact

  • Constant-current excitation: supports ratiometric interpretation and consistent sensitivity, but can introduce self-heating and longer settling after duty-cycling.
  • Constant-voltage excitation: simple, but makes measurements more sensitive to supply variation and contact-resistance changes.
  • Duty-cycled excitation: reduces energy, but increases sensitivity to transient settling, making “false contact spikes” more likely if sampling occurs too early.

Capacitive proximity/contact

  • Charge-transfer / switched-cap methods: can be efficient, but parasitics dominate unless sensor routing and reference structures are controlled.
  • Oscillator / frequency methods: easy to digitize, but can be affected by coupling from nearby clocks and RF bursts.
  • Synchronous excitation/demod: improves immunity, but raises design complexity and can cost energy if windows are too long.

Engineering rule: when the output is an event (contact present / stable), prioritize excitation schemes that minimize threshold jitter and slow bias drift rather than maximizing raw sensitivity.

Chopper/auto-zero vs bandwidth; bias/leakage; ESD clamp leakage risks

  • Chopper / auto-zero: reduces low-frequency offset and 1/f noise, which directly improves “stable plateau” behavior. The trade-off is added ripple or bandwidth constraints, so timing windows must avoid sampling during switching artifacts.
  • Input bias & high-Z leakage: even tiny bias currents or surface leakage can shift high-impedance nodes, creating “always-contact” or “never-contact” behavior that looks like a mechanical problem.
  • ESD clamp leakage: protection devices can add parasitic leakage paths that change with humidity, contamination, and temperature. This is a common root cause when lab units pass but field units drift or latch into wrong states.
Field reality: an ESD event or contamination can convert a stable sensor node into a slow “bias ramp.” Without a leakage validation test, this often gets misdiagnosed as seal/foam creep.

ADC selection: resolution × sampling window × energy; ratiometric measurement

  • Resolution: choose effective resolution to keep event thresholds stable (avoid jitter-triggered false toggles), not to chase headline bits.
  • Sample rate & windows: transient events (insert/tug) need short high-rate bursts; steady-state needs low-rate monitoring. Windowing is often a bigger energy lever than ADC architecture choice.
  • Ratiometric measurement: when possible, measure sensor output against the same excitation/reference so supply drift cancels. This reduces apparent “offset drift” that is actually supply movement.
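The ratiometric benefit can be demonstrated with an ideal-ADC simulation. The 12-bit converter, the 3.3 V nominal rail assumed by firmware, and the function names are all illustrative assumptions:

```python
def adc_code(v_in, v_ref, bits=12):
    """Ideal ADC: code proportional to v_in / v_ref, clamped to range."""
    full = (1 << bits) - 1
    return max(0, min(full, round(v_in / v_ref * full)))

def measured_fraction(supply_v, sensor_frac, ratiometric, bits=12):
    """The sensor output rides on the excitation rail (sensor_frac * supply).
    Ratiometric: the ADC reference is that same rail, so rail drift cancels.
    Non-ratiometric: firmware assumes a fixed 3.3 V reference even when
    the rail sags, so drift shows up as apparent signal change."""
    v_sensor = sensor_frac * supply_v
    v_ref = supply_v if ratiometric else 3.3
    return adc_code(v_sensor, v_ref, bits) / ((1 << bits) - 1)
```

With the rail sagged from 3.3 V to 3.0 V, the ratiometric reading stays at the true sensor fraction while the absolute reading shifts by several percent, which is exactly the “offset drift that is actually supply movement” described above.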

Guarding and shielding in a tiny form factor (what breaks first)

  • High-Z nodes near fast edges: RF clocks, DC-DC switching nodes, and GPIO edges couple into capacitive/resistive interfaces.
  • Parasitics dominate: long sensor traces behave as antennas; small geometry changes shift capacitance and bias.
  • Guard / driven shield: used to stabilize high-impedance sensing by controlling the electric field around the node.
  • Cleanliness & coatings: surface resistance and leakage can change dramatically with moisture; production process matters as much as schematics.

Evidence & checks (turn hardware choices into pass/fail proof)

  • Input-referred noise target: measure noise in representative modes (excitation on/off, RF bursts, different windows). Evaluate as false event rate and threshold jitter, not only as µV/√Hz.
  • Leakage validation (high-Z node test): apply a known bias/charge to the sensing node and record decay across humidity, temperature, and post-ESD conditions. Use decay time constant and residual offset as acceptance criteria.
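The decay test reduces to estimating a time constant from the recorded node voltage. This log-linear fit sketch assumes an ideal single-exponential decay and positive samples:

```python
import math

def decay_tau(samples, dt_s):
    """Estimate the decay time constant of a charged high-Z node by a
    least-squares line fit to ln(v): v(t) = v0 * exp(-t / tau).
    A longer tau means less leakage; compare tau across humidity,
    temperature, and post-ESD conditions as the acceptance criterion."""
    xs = [i * dt_s for i in range(len(samples))]
    ys = [math.log(v) for v in samples]  # requires v > 0
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope
```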
Figure F5. AFE chain options for pressure/contact/temperature sensing. The reliability drivers are low-frequency noise, offset drift, and leakage (including ESD clamp leakage). Evidence checks translate these risks into measurable pass/fail tests.

H2-6. Ear-Canal Acoustic Calibration: What You Calibrate and What “Seal” Looks Like

Acoustic calibration is treated here as an engineering measurement, not a DSP theory lesson. A short stimulus and a controlled capture window can extract a small set of interpretable metrics: resonance shift, a low-frequency leakage indicator, and transfer magnitude consistency. These metrics cross-validate pressure/contact channels to reduce misclassification.

Stimulus: short & windowed • Metrics: resonance, LF leak, consistency • Cross-check: pressure/contact

Test stimulus types (chirp / sweep / multitone) and why windowing matters

  • Chirp: compact in time; useful when the system needs a quick measurement window and minimal user disruption.
  • Sweep: easier to reason about in controlled tests; can be longer, which impacts energy and susceptibility to motion during the window.
  • Multitone: captures sparse spectral points quickly; can be robust when only a few features are needed for seal/leak inference.
Windowing principle: the measurement window should be short enough to reduce motion contamination, but long enough to stabilize transients from the stimulus source and microphone front-end. This trade-off is usually more important than the exact stimulus waveform.

Extractable metrics (interpretable, module-scoped)

  • Resonance shift: changes in ear-canal geometry and insertion depth shift the resonance location/shape. This helps separate “depth variance” from pure sensor drift.
  • Low-frequency leakage indicator: seal degradation tends to reduce low-frequency retention. A stable contact signal with an abnormal LF leakage metric strongly suggests vent/leak behavior.
  • Transfer magnitude consistency: repeatability across re-insertion is often more diagnostic than absolute magnitude. Wide variance indicates mechanical/fit instability rather than algorithm instability.
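Two of these metrics can be sketched directly from a measured magnitude response; the band edges, peak-picking rule, and `seal_features` name are simplifying assumptions:

```python
def seal_features(freqs_hz, magnitude_db, lf_band=(50, 200)):
    """Extract two interpretable seal metrics from a transfer magnitude:
    - lf_retention_db: mean level in a low-frequency band (drops when
      the seal leaks),
    - resonance_hz: frequency of the magnitude peak, a crude resonance
      locator for depth-variance discrimination."""
    lf = [m for f, m in zip(freqs_hz, magnitude_db)
          if lf_band[0] <= f <= lf_band[1]]
    lf_retention = sum(lf) / len(lf)
    peak_hz = max(zip(magnitude_db, freqs_hz))[1]
    return {"lf_retention_db": lf_retention, "resonance_hz": peak_hz}
```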

Failure modes (what “bad seal” vs “blocked mic” looks like)

  • Poor seal / leakage: LF leakage indicator abnormal; metrics fluctuate with small motion; repeatability is weak.
  • Venting effects: systematic leakage patterns that align with pressure decay behavior; often stable but consistently “leaky.”
  • Occlusion / geometry anomaly: resonance feature shifts beyond typical re-insertion spread; may appear as a consistent but “off” signature.
  • Mic port blockage: transfer magnitude becomes abnormal (often broad attenuation), producing a failure pattern that does not match pressure/contact evidence.

Cross-validation with pressure/contact (reduce misclassification)

  • Contact present + acoustic LF leak abnormal: likely vent/leak or insufficient seal, not “sensor noise.”
  • Pressure/contact stable + acoustic magnitude abnormal: suspect mic port blockage/contamination before changing fit thresholds.
  • All channels unstable: suspect micro-slip/jaw motion coupling or structural looseness; focus on repeatability tests and mechanical stabilization.
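The cross-check rules above can be captured as a small decision helper; the flag names and returned labels are illustrative, and the boolean inputs are assumed to come from the pressure/contact and acoustic paths:

```python
def diagnose(contact_stable, lf_leak_abnormal, magnitude_abnormal):
    """Map the cross-validation table to a first-fix hypothesis."""
    if contact_stable and lf_leak_abnormal:
        return "vent/leak or insufficient seal"
    if contact_stable and magnitude_abnormal:
        return "mic port blockage/contamination"
    if not contact_stable:
        return "micro-slip / mechanical instability"
    return "no fault indicated"
```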

Evidence & checks (two pattern tests that catch most issues)

  • Leak signature vs blocked-mic signature: build a small pattern library by capturing features in three states: normal, intentionally loosened seal, and simulated mic blockage. Use the library to classify field logs.
  • Repeatability across re-insertion: run N re-insertions and compare feature distributions. If variance stays high, treat it as a mechanical/assembly problem before refining the scoring logic.
Figure F6. Calibration loop modeled as a measurement: a short stimulus is captured through the ear-canal path, features are extracted (resonance shift, LF leakage indicator, magnitude consistency), and cross-validated against pressure/contact to produce a fit score with repeatability/validity flags.

H2-7. Mechanics & Materials: Where Sensors Live and Why It Dominates Stability

In an eartip module, many “fit sensing” errors are not electronic—they are mechanical: material hysteresis, micro-slip, moisture/wax contamination, and strain-induced microphonics. Sensor placement and protective membranes can improve robustness, but they can also reshape the acoustic signature used for calibration. This chapter keeps scope strictly inside the eartip module (not the full earbud).

Root cause: mechanics first • Drivers: hysteresis, slip, contamination • Proof: cycles, sweat, wax

Sensor placement: stem vs skirt vs core (trade-offs tied to artifact patterns)

  • Stem — sees well: routing-friendly zone; less direct compression; good for insertion-depth cues and “presence” stability. Stability risks: may under-represent the true seal region; depth variance can look like fit changes; cable/strain coupling can dominate.
  • Skirt — sees well: closest to the sealing interface; strongest sensitivity to leak and contact stability at the boundary. Stability risks: higher hysteresis and shear; micro-slip under jaw motion; sweat film and wax contamination shift leakage and electrical bias.
  • Core — sees well: more controlled geometry; potential for mechanical isolation; repeatability can be higher if assembly is consistent. Stability risks: packaging stress and thermal paths can bias sensors; harder routing; protection layers can alter acoustic response if not modeled.
Practical rule: placement should be chosen by the dominant artifact you can tolerate: “depth variance,” “hysteresis,” or “routing/strain coupling.” The module must log enough evidence to distinguish them.

Foam vs silicone vs hybrid: compliance, hysteresis, and moisture behavior

Foam

  • Compliance: seals well at low force; can improve leak resistance.
  • Hysteresis: higher; “insert–remove” cycles can shift the baseline and mimic slow drift.
  • Moisture: absorbs sweat; can change surface resistance and acoustic damping over time.

Silicone

  • Compliance: predictable; easier to model; often better repeatability for contact metrics.
  • Hysteresis: lower than foam; micro-slip can still occur under shear/jaw motion.
  • Moisture: less absorption, but a sweat film can form leakage paths on high-Z sensing nodes.

Hybrid

  • Goal: combine sealing at the boundary with structural stability around sensor seats.
  • Risk: interface adhesion/process variability becomes the main repeatability limiter.
  • Check: treat it as a manufacturing-consistency problem; validate across lots, not only samples.

Routing constraints (tiny space)

  • Short paths: reduce parasitics and motion-induced coupling.
  • Fixation points: prevent strain from pulling on sensor seats.
  • Isolation: keep sensitive runs away from flex zones that amplify microphonics.

Membranes and hydrophobic vents: protection vs altered acoustic signature

  • Protection benefit: membranes and vents reduce sweat/wax ingress and help keep mic/ports functional over time.
  • Measurement cost: they can reshape the calibration transfer path, shifting resonance and leakage indicators. A “perfectly protected” design can still misclassify fit if the membrane/vent effect is not modeled.
  • Module-level requirement: treat membrane/vent variants as a controlled configuration and include them in the pattern library.

Strain relief and microphonics (mechanical-to-electrical coupling)

  • Strain-induced artifacts: pulling, twisting, or flexing can inject step-like changes into contact/pressure signals.
  • Microphonics: mechanical vibration can couple into high-impedance nodes and appear as false activity.
  • Mitigations: dedicated strain relief, stable fixation points, and avoiding long flexible spans near sensor routing.

Evidence & checks (make mechanical stability measurable)

  • Compression cycle test: fixed compression/relaxation for N cycles; track baseline return and hysteresis spread to quantify repeatability loss.
  • Wash/sweat test: sweat exposure + dry cycles; monitor contact false positives, leakage decay behavior, and calibration feature shift.
  • Wax contamination test: simulate partial/complete port blockage; verify that “blocked-mic” patterns separate cleanly from “leak” patterns.
Pass condition mindset: a design is stable when it can distinguish “leak,” “blockage,” and “motion” using repeatable signatures, not when it produces a high fit score in a single insertion.
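The compression-cycle check above reduces to two numbers per unit. A minimal sketch, assuming each cycle logs a pre-load and post-relax baseline in sensor counts; the limit values are illustrative, not from any datasheet:

```python
# Sketch: quantify compression-cycle repeatability from logged baselines.
# Each cycle yields a (pre_load, post_relax) baseline pair in sensor counts.
# return_limit / spread_limit are illustrative screening thresholds.

def cycle_metrics(baselines):
    """baselines: list of (pre, post) pairs, one per compression cycle."""
    returns = [abs(post - pre) for pre, post in baselines]  # baseline return error
    spread = max(returns) - min(returns)                    # hysteresis spread
    return max(returns), spread

def passes_screen(baselines, return_limit=5.0, spread_limit=3.0):
    worst_return, spread = cycle_metrics(baselines)
    return worst_return <= return_limit and spread <= spread_limit
```

A unit that "returns to band" on every cycle but with widely varying error still fails the spread check, which is exactly the repeatability loss the test is meant to catch.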
[Figure F7 graphic: eartip module cross-section (mechanics → artifacts) — ear canal, core seat/structure, skirt, stem; pressure/contact/temperature sensor positions; membrane, hydrophobic vent, routing, strain relief; artifact sources: hysteresis, micro-slip, moisture film, wax, microphonics.]
Figure F7. Eartip module mechanics and material interfaces dominate sensing stability. Placement (stem/skirt/core), membranes/vents, routing and strain relief create repeatable artifact sources (hysteresis, micro-slip, moisture film, wax blockage, microphonics) that must be validated with cycle and contamination tests.
Cite this figure: Figure F7 — Eartip cross-section: placement & artifact sources
Suggested caption: “Mechanical placement and protection layers shape both electrical stability and acoustic signature; artifact sources should be tracked via cycle/sweat/wax tests.”

H2-8. Ultra-Low-Power Power Tree: ULP PMIC, Duty Cycling, and Energy Budget

Ultra-low-power is achieved by architecture and scheduling: define power domains, gate rails aggressively, and keep “quiet windows” for sensing and calibration. The limiting factors are often sleep leakage (nA–µA class), rail settling, and RF burst peak current, which can create false events or brownout resets if not budgeted.

Domains: AON • Sense • Compute • RF | Key risk: leakage & peak I | Proof: profiling & audit

Power domains and rail gating (what stays on vs what must be windowed)

  • AON domain: wake logic/RTC and minimal state retention; target the lowest leakage and stable wake thresholds.
  • Sense domain: AFE + sensor bias; typically duty-cycled with controlled settling time to avoid transient misreads.
  • Compute domain: MCU/BLE baseband; short compute bursts following sensing windows.
  • RF domain: TX/RX bursts; highest peak current; must be isolated from sensitive measurement windows.
Quiet window requirement: schedule AFE sampling and acoustic capture away from RF bursts and regulator switching edges whenever possible. If overlap is unavoidable, require “validity flags” and retry logic rather than accepting contaminated samples.
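The "validity flag over contaminated samples" policy can be sketched as a simple interval-overlap check. The interval representation and field names are assumptions, not a defined API:

```python
# Sketch: mark AFE samples captured during RF bursts or rail-settling windows
# as invalid instead of silently accepting them. Intervals are (start_us, end_us).

def overlaps(window, disturbances):
    s, e = window
    # half-open intervals: touching edges do not count as overlap
    return any(not (e <= ds or s >= de) for ds, de in disturbances)

def tag_sample(value, window, rf_bursts, settling_windows):
    valid = not overlaps(window, rf_bursts) and not overlaps(window, settling_windows)
    return {"value": value, "valid": valid}  # retry logic keys off "valid"
```

The scheduler first tries to place the AFE window clear of both lists; when that fails, the sample still gets captured but carries `valid=False` so downstream logic retries rather than averaging in a corrupted reading.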

Regulators: LDO vs buck; load-switch leakage as the real sleep limiter

  • LDO: low noise and predictable behavior; efficiency penalty grows when input-to-output ratio is large.
  • Buck: improves efficiency at higher loads; requires switching-noise management and careful placement around high-Z sensing.
  • Load switch: enables hard power-off of domains; the key parameter is off-state leakage, not only on-resistance.
Common trap: a design that profiles “good average current” in active modes can still fail battery-life targets if sleep leakage is left uncontrolled or varies strongly with humidity/temperature.

Energy modes (state machine) and minimal telemetry behavior

  • Idle: AON only; contact detection may run in sparse windows.
  • Detect: short AFE window for contact/pressure; quick decision + validity flag.
  • Calibrate: longer window for stimulus + capture; feature extraction; cross-check with pressure/contact.
  • Advertise: short RF bursts; keep scheduling away from sense windows when possible.
  • Connected telemetry (minimal): transmit only events/summary stats; avoid long on-time patterns that look like “streaming.”
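The modes above form a small state machine. A minimal sketch with an assumed transition map (which transitions are legal is a design choice, not specified here):

```python
# Sketch: energy-mode state machine that rejects illegal transitions.
# State names follow the list above; the ALLOWED map is an assumption.

ALLOWED = {
    "IDLE":      {"DETECT"},
    "DETECT":    {"IDLE", "CALIBRATE", "ADVERTISE"},
    "CALIBRATE": {"IDLE", "ADVERTISE"},
    "ADVERTISE": {"IDLE", "CONNECTED"},
    "CONNECTED": {"IDLE"},
}

class PowerFSM:
    def __init__(self):
        self.state = "IDLE"

    def go(self, nxt):
        if nxt not in ALLOWED[self.state]:
            # an illegal transition is a firmware bug, not a retry case
            raise ValueError(f"illegal transition {self.state} -> {nxt}")
        self.state = nxt
        return self.state
```

Making transitions explicit is what lets the current waveform be checked against the state timeline later: every rail-enable edge should correspond to exactly one `go()` call.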

Brownout/UVLO behavior and data integrity (module-safe state, not a storage tutorial)

  • UVLO hysteresis: prevents repeated reset oscillation during marginal battery conditions.
  • Validity flags: mark samples collected during rail settling/RF overlap as invalid; prefer retry over silent acceptance.
  • Minimal integrity strategy: record a compact sequence counter + CRC for critical events so field logs are diagnosable after power dips.

Evidence & checks (two measurements that close the power budget)

  • Current profiling (two-point method): measure at PMIC input (system energy) and at a key rail (domain attribution). Overlay the waveform on the state timeline to confirm peak current, settling time, and duty ratio.
  • Sleep leakage audit: isolate leakage by disabling domains and toggling GPIO states; verify nA–µA targets across temperature/humidity.
Pass condition mindset: the power tree is correct when (1) the state timeline matches the current waveform, and (2) sleep leakage is predictable and bounded across environment, not only low in a single lab snapshot.
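The two-point profile closes the budget only if the state timeline reproduces the measured average current. A minimal sketch of that cross-check, with illustrative (not measured) durations and currents:

```python
# Sketch: fold a state timeline into an average-current estimate so the
# Point-A waveform can be checked against the budget. Values are placeholders.

def average_current_ua(timeline):
    """timeline: list of (duration_ms, current_ua) per state window."""
    total_ms = sum(d for d, _ in timeline)
    charge = sum(d * i for d, i in timeline)  # uA*ms
    return charge / total_ms

def battery_life_hours(capacity_mah, avg_ua):
    return capacity_mah * 1000.0 / avg_ua

# illustrative 1 s superframe: sleep, AFE window, RF burst
timeline = [(990, 2.0), (8, 300.0), (2, 8000.0)]
```

With these placeholder numbers the sleep floor (2 µA over 99% of the frame) and the RF burst (8 mA for 0.2% of the frame) contribute comparable charge, which is why sleep leakage and peak current both have to be audited.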
[Figure F8 graphic: ULP power timeline — states (Idle, Detect, Calibrate, Advertise, Conn) over rail enables (VBAT, AON, AFE, MCU, RF); quiet window, RF burst peak current, settling/inrush marked; measurement Point A at PMIC/VBAT input (system energy) and Point B at a key rail (domain attribution); sleep leakage audit target nA–µA across environment.]
Figure F8. ULP power architecture expressed as a state timeline with rail enables. The key engineering checks are (1) current profiling aligned to state windows, and (2) sleep leakage audit (nA–µA), while managing RF burst peak current, rail settling, and validity flags for contaminated samples.
Cite this figure: Figure F8 — Power timeline & rail enable chart
Suggested caption: “Duty-cycled domains and scheduled quiet windows dominate battery life and measurement integrity; validate with two-point current profiling and sleep leakage audit.”

H2-9. BLE Telemetry & Robustness: Scheduling, Latency, and Data Integrity

BLE in this module exists only to move small fit/contact/temperature metrics reliably. Robustness comes from aligning radio activity with sensing windows, minimizing “always-on” behavior, and adding lightweight integrity tools: sequence counters, simple CRC, and an optional rolling event log.

Model: event-driven telemetry | Trade: latency ↔ energy | Integrity: SEQ + CRC

What gets reported (and what does not): a strict “metric contract”

Report (small, high value) | Purpose | Typical trigger
Contact state + stability flag | Fast presence / insertion changes without streaming raw data | Edge-triggered: 0→1 / 1→0, debounce complete
Fit score + validity flag | Summarize seal/fit quality; avoid transmitting raw waveforms | Periodic low-rate update or on fail-to-fit
Temperature + warm-up state | Thermal trends; explicitly mark settling/warm-up phases | Low-rate periodic + event on threshold crossing
Error flags (brownout seen / retry / self-test fail) | Field diagnosability with minimal bandwidth | Event-driven, sticky until acknowledged
Sequence counter + payload CRC | Detect loss/duplication/corruption without heavy protocols | Attached to every critical metric packet
Scope discipline: sending raw waveforms or long continuous telemetry turns BLE into a data pipe and breaks the ULP budget. The module should only transmit metrics that are already “decision-level.”

Advertising vs connection: event-driven reporting is the default

  • Advertising mode: low duty cycle for discovery and status beacons (minimal payload).
  • Connection mode: short-lived sessions for metric bursts; avoid permanent connections unless required by the host.
  • Event-driven triggers: contact change, fit fail, calibration completed, self-test fail, brownout detected.

Latency vs energy: connection interval and slave latency as the control knobs

  • Lower energy: longer connection interval + higher slave latency + sparse updates.
  • Lower latency: shorter interval (higher RF duty), but it raises the chance of overlap with sensing windows.
  • Module requirement: protect sensing and acoustic capture with critical sections (quiet windows) and validity flags.
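The knobs above follow first-order relations: worst-case notification latency is roughly connection interval × (slave latency + 1), and radio energy scales with the attended-event rate. A sketch with placeholder per-event charge and sleep current:

```python
# Sketch: first-order latency/energy trade for the BLE connection parameters
# named above. event_charge_uc and sleep_ua are illustrative placeholders.

def worst_case_latency_ms(conn_interval_ms, slave_latency):
    # the peripheral may skip up to slave_latency connection events
    return conn_interval_ms * (slave_latency + 1)

def avg_radio_current_ua(conn_interval_ms, slave_latency,
                         event_charge_uc=15.0, sleep_ua=1.5):
    # one attended event per (slave_latency + 1) intervals
    period_s = conn_interval_ms * (slave_latency + 1) / 1000.0
    return event_charge_uc / period_s + sleep_ua
```

The same pair of numbers also bounds how often a radio event can collide with a sensing window, which is why the interval should be chosen together with the quiet-window schedule, not after it.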

Packet loss handling: SEQ + CRC + rolling log (lightweight but complete)

  • Sequence counter: increments per report; receiver detects gaps and reorders without ambiguity.
  • Simple CRC: catches corruption, especially near rail transitions or RF peak-current events.
  • Rolling event log (optional): a small ring buffer storing the last N events (contact edges, fit fails, brownouts) for field forensics.
Do not hide bad samples: if RF activity overlaps a sensitive measurement window, mark the metric as invalid and retry, instead of smoothing it into a “normal-looking” value that masks the root cause.
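A lightweight SEQ + CRC scheme can be sketched as follows. The 4-byte layout and the CRC-8 polynomial 0x07 are common choices, not a defined wire format:

```python
# Sketch: compact metric packet with sequence counter + CRC-8, plus receiver
# gap detection. Layout: [SEQ][FLAGS][METRIC][CRC], all single bytes (assumed).

def crc8(data, poly=0x07):
    crc = 0
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def encode(seq, flags, metric):
    body = bytes([seq & 0xFF, flags & 0xFF, metric & 0xFF])
    return body + bytes([crc8(body)])

def decode(pkt, last_seq):
    body, rx_crc = pkt[:-1], pkt[-1]
    if crc8(body) != rx_crc:
        return None, last_seq, "crc_fail"
    seq = body[0]
    gap = (seq - last_seq - 1) & 0xFF  # packets lost since last good one
    return body, seq, f"gap={gap}"
```

On the receiver, nonzero gap counts with clean CRCs point at loss (RF shadowing), while CRC failures clustering near TX bursts point at power-domain coupling: the two failure signatures the field playbook needs to separate.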

Evidence & checks: reproduce dropouts and separate RF from power causes

  • RF shadowing reproduction: fixed posture + head/hand blocking sequences; compare SEQ gaps against RSSI trend. Burst losses indicate margin issues or scheduling collisions.
  • Peak TX current vs rail droop: capture VBAT/PMIC input waveform during TX bursts; correlate CRC fails/SEQ gaps with droop or UVLO events.
Pass condition mindset: metric delivery is robust when packet gaps can be explained by RF shadowing or power droop evidence, and the module can recover using retries without silently accepting corrupted samples.
[Figure F9 graphic: BLE schedule vs sensing windows — parallel sensing/acoustic/BLE tracks showing AFE windows, acoustic capture + feature extraction, ADV and CONN events; critical sections marked "avoid TX"; compact telemetry payload: SEQ counter, FLAGS (validity/errors), METRICS, CRC.]
Figure F9. BLE activity should be scheduled around sensing and acoustic capture windows. “Critical sections” protect measurement integrity; SEQ + FLAGS + CRC provide lightweight detection of loss and corruption without heavy protocols.
Cite this figure: Figure F9 — BLE schedule vs sensing windows
Suggested caption: “Align BLE connection events with quiet sensing windows; use sequence counters and CRC to maintain integrity under RF shadowing and peak-current events.”

H2-10. Factory Calibration & Self-Test: Make It Repeatable at Scale

Factory calibration prevents “lab-only” solutions. The goal is not perfect absolute accuracy in one sample, but repeatability across units and lots. This requires consistent trimming, screening of hysteresis outliers, fixture-based acoustic baselines, and on-device self-tests that detect open/short and path failures before shipment.

EOL: trim + screen + store | Screen: hysteresis outliers | Self-test: excitation + mic + open/short

Pressure/contact calibration: trim plus hysteresis screening

  • Offset trim: remove static bias from assembly stress and sensor tolerance.
  • Gain trim (where applicable): align sensitivity enough for comparable fit scoring across units.
  • Hysteresis screening: run a short compression/insert cycle and measure baseline return + spread. Outliers are binned or failed because they break “repeatable reinsertion.”
Why screening matters: the most damaging unit-to-unit variation is often not mean error, but “state memory” (hysteresis) that changes fit interpretation after repeated insertions.
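The trim and screening steps can be sketched as follows. The reference loads, bin names, and decision-band fractions are illustrative EOL parameters:

```python
# Sketch: derive per-unit offset/gain trim from two reference loads, then
# bin on hysteresis loop width. Bin limits (0.5x / 0.8x of the decision
# band) are illustrative screening policy, not a standard.

def fit_trim(raw_lo, raw_hi, ref_lo, ref_hi):
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

def apply_trim(raw, gain, offset):
    return gain * raw + offset

def bin_unit(loop_width, decision_band):
    if loop_width <= 0.5 * decision_band:
        return "PASS"
    if loop_width <= 0.8 * decision_band:
        return "PASS_B"   # usable, but needs wider thresholds
    return "FAIL"         # "state memory" breaks repeatable reinsertion
```

Note the separation of concerns: trim removes static bias, but only the loop-width bin catches the hysteresis outliers that trim cannot fix.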

Temperature calibration: 1-point vs 2-point strategy (cost vs robustness)

  • 1-point calibration: lower cost and faster throughput; assumes good linearity and stable slope across lots.
  • 2-point calibration: better slope control; requires longer thermal settling and more complex fixtures.
  • Module discipline: calibrate the sensor’s electrical behavior in production; handle warm-up/lag behavior as a runtime state (reported as a flag).
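The two strategies differ only in whether slope is corrected per unit. A minimal sketch with illustrative reference points:

```python
# Sketch: 1-point vs 2-point temperature correction. 1-point fixes offset
# only (slope assumed nominal); 2-point fixes offset and slope. Reference
# values are illustrative fixture temperatures in degrees C.

def cal_1pt(raw_at_ref, ref_c):
    offset = ref_c - raw_at_ref
    return lambda raw: raw + offset

def cal_2pt(raw_lo, ref_lo, raw_hi, ref_hi):
    slope = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    return lambda raw: ref_lo + slope * (raw - raw_lo)
```

The cost difference is in the fixture, not the math: the 2-point version needs two stable thermal soaks per unit, which is why it is reserved for parts whose slope spread actually exceeds the accuracy budget.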

Acoustic calibration: fixtures make “ear canal” repeatable

  • Coupler fixture concept: a controlled acoustic load with stable volume/impedance to produce comparable transfer signatures.
  • Three fast modes: baseline (normal), controlled leak, controlled blockage—used to build a robust pattern library.
  • Stored results: high-level feature baselines rather than raw waveforms, so field checks can detect drift.
Common failure: “acoustic calibration” that relies on human insertion or uncontrolled ear geometry will not scale. A coupler-like fixture is the minimum requirement for repeatability.
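Stored feature baselines turn the field check into a nearest-centroid comparison with a reject margin. A sketch, assuming 2-D feature vectors and an illustrative margin:

```python
# Sketch: classify a calibration feature vector against stored fixture
# centroids (normal / leak / blocked) by nearest centroid, rejecting
# ambiguous results. Feature space and margin are illustrative.

def classify(features, library, reject_margin=2.0):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted((dist(features, c), label) for label, c in library.items())
    (d1, best), (d2, _) = ranked[0], ranked[1]
    # require clear separation, else report unknown and retry
    return best if d2 - d1 >= reject_margin else "unknown"
```

Returning "unknown" instead of the nearest label is the classifier-level version of the validity-flag rule: an ambiguous signature triggers a retry, never a guessed fit verdict.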

On-device self-test: verify paths, detect open/short, enforce sanity checks

  • Excitation path check: confirm bias/excitation reaches the sensor and returns expected load range.
  • Mic path sanity: quick noise-floor and response window check (no DSP deep dive required).
  • Open/short detection: catch assembly faults and contamination-induced shorts early.
  • Cross-check sanity: simple consistency rules (e.g., contact=0 should not look like a strong sealed acoustic signature).

EOL flow and pass/fail mindset: a minimal but complete production sequence

Step | Action | Pass/Fail evidence type
1 | Power-on, ID/version read, baseline leakage sanity | Signature OK, leakage within window
2 | Pressure/contact offset & gain trim | Trim converges, residual offset bounded
3 | Short cycle screen (compression/insert simulation) | Hysteresis spread & baseline return bounded
4 | Temperature calibration (1-pt or 2-pt) + warm-up flag setup | Offset/slope within limits
5 | Acoustic coupler: baseline/leak/block quick signatures | Feature separation margins met
6 | Self-test suite (excitation/mic/open-short/cross-check) | All flags clear
7 | Write NVM: cal params + version + CRC/signature | Read-back matches; CRC OK
[Figure F10 graphic: EOL calibration & self-test — the seven-step manufacturing flow above; fixture concepts (compression cycle jig with clamp, hot/cold thermal block, acoustic coupler load); high-level NVM map: cal version + device signature; pressure offset/gain + contact thresholds; temp offset/slope + warm-up params; acoustic baselines + CRC; self-test flags + EOL pass bin.]
Figure F10. A scalable EOL flow combines trimming, hysteresis screening, fixture-based acoustic baselines, self-test gates, and a high-level NVM map that stores parameters with versioning and CRC for traceability.
Cite this figure: Figure F10 — EOL flow + fixtures + NVM map
Suggested caption: “Production repeatability requires trim + hysteresis screening + fixture-based acoustic signatures, stored with versioning and CRC for field traceability.”

H2-11. Validation & Field Debug Playbook (Symptom → Evidence → Fix)

This playbook is built for fast isolation with minimal tools. Every symptom uses the same rule: capture two proofs only — one signal proof (sensor/feature) and one power proof (rail/current). If the two proofs do not agree, the problem is usually timing/windowing or leakage, not “noise”.

Proof #1: signal baseline / feature | Proof #2: rail / current correlation | Goal: first fix action in <30 min | Scope: module-only evidence

Recommended test points: TP-S (sensor/feature output or CDC reading), TP-P (VBAT or VDD_AFE/VDD_RF), plus one event counter log (SEQ / retry / fail-reason).

Symptom 1 — Fit score unstable between insertions

First 2 measurements

  • Signal proof: fit score spread across N re-insertions (same user, same eartip). Record min/median/max.
  • Power proof: TP-P rail droop or peak current during the fit-evaluation window (compare “good” vs “bad” insertion).

Discriminator (what proves what)

  • If contact baseline is still jittering after debounce → mostly mechanics/material hysteresis (micro-slip, foam compliance).
  • If contact is stable but fit spread is large → suspect vent/membrane acoustic shift or pressure sensor hysteresis.
  • If spread appears only when radio/calibration runs → suspect window overlap (feature captured during rail/RF disturbance).

First fix (do first, not “tune forever”)

  • Mechanics first: lock sensor seat, reduce shear path, add strain relief; re-check spread.
  • Timing next: move feature window into a “quiet slot” (no RF burst, no rail switching transient).
  • Threshold last: add hysteresis gate only after mechanical repeatability improves.

MPN suspects (examples)

  • Pressure baseline sensor: Bosch BMP390 / Infineon DPS368 / ST LPS22HH
  • Capacitive contact CDC: TI FDC2214 / ADI AD7746
  • ULP BLE SoC (for clean scheduling): Nordic nRF52832 / TI CC2340R5 / Renesas DA14531 / Silicon Labs EFR32BG22

Symptom 2 — False “good fit” when a leak exists

First 2 measurements

  • Signal proof: acoustic leak indicator feature (LF loss / transfer magnitude anomaly) vs a known-good insertion.
  • Power proof: check if the “good-fit” decision happens during a rail transient (buck burst / RF spike) → false feature.

Discriminator

  • Leak feature says “leak”, but pressure/contact says “stable” → suspect blocked mic port or membrane/vent changing transfer.
  • Leak feature unstable across repeats → suspect stimulus window too short or capture overlaps with disturbances.
  • Leak feature consistent but decision wrong → suspect gating logic (pressure/contact should gate acoustic “good”).

First fix

  • Pattern check: run a forced “blocked-port” vs “leak” controlled test (fixture or simple cap/vent jig) and store signatures.
  • Mechanical: revise vent/membrane to preserve LF leakage observability (do not over-damp the leak cue).
  • Logic: require contact/pressure stability before allowing acoustic “good-fit”.

MPN suspects (examples)

  • Mic for acoustic capture: TDK InvenSense ICS-40730 (analog, differential)
  • Pressure sensor: Bosch BMP390 / Infineon DPS368 / ST LPS22HH
  • ULP LDO for quiet mic/AFE rail: TI TPS7A02

Symptom 3 — Temperature reads high/low or drifts during use

First 2 measurements

  • Signal proof: temperature step response and settling time (warm-up curve) at insertion and after 10–30 min.
  • Power proof: correlate temperature drift with current spikes and regulator self-heating (radio/calibration bursts).

Discriminator

  • Drift matches load spikes → self-heating coupling (sensor too close to PMIC/MCU or poor thermal isolation).
  • Offset is stable but wrong → calibration strategy issue (1-point vs 2-point; assembly-to-assembly spread).
  • Slow creep with moisture/wax exposure → thermal path variability (sealants, membranes, contamination layers).

First fix

  • Scheduling: sample temperature in a low-power quiet window; reduce adjacent rail activity near sample.
  • Placement: increase thermal isolation from hot rails; keep repeatable thermal coupling to canal.
  • Calibration: store per-unit offset (and slope only when needed); re-check drift after 30–60 min soak.

MPN suspects (examples)

  • Digital temperature sensor: TI TMP117
  • ULP PMIC / charger: Nordic nPM1300 / TI BQ25120A
  • Rail isolation / gating: TI TPS22910A load switch + TI TPS7A02 LDO

Symptom 4 — Random contact dropouts during motion

First 2 measurements

  • Signal proof: a dropout histogram separating short glitches (ms) from long gaps (≥ connection interval), built from the SEQ counter.
  • Power proof: check TP-P for droop/UVLO during dropout; compare with TX bursts.
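The dropout histogram can be built directly from logged gap durations. The glitch/gap split values below are illustrative:

```python
# Sketch: turn a dropout log into the short-glitch vs long-gap histogram
# used as the signal proof. 7.5 ms glitch limit and 30 ms connection
# interval are illustrative parameters.

def classify_dropouts(gaps_ms, conn_interval_ms=30.0, glitch_max_ms=7.5):
    hist = {"glitch": 0, "medium": 0, "long": 0}
    for g in gaps_ms:
        if g <= glitch_max_ms:
            hist["glitch"] += 1   # suspect mechanics / Hi-Z coupling
        elif g < conn_interval_ms:
            hist["medium"] += 1
        else:
            hist["long"] += 1     # suspect power collapse / RF schedule
    return hist
```

The histogram shape is the discriminator: a glitch-dominated distribution routes to the mechanics bucket, a long-gap-dominated one to power/timing, before any threshold tuning is attempted.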

Discriminator

  • Mostly short glitches → mechanical micro-slip or high-impedance node sensitivity (sweat leakage, cable tug).
  • Long gaps aligned with droop → power domain collapse or load-switch timing.
  • Dropouts only during radio activity → sensing window overlaps with RF/rail critical section.

First fix

  • Mechanics: add strain relief, reduce shear at electrode/sensor interface; re-run motion script.
  • AFE robustness: increase hysteresis and validate leakage paths; add guard/shield routing where possible.
  • Timing: shift contact sampling away from TX burst; enforce critical sections for read/commit.

MPN suspects (examples)

  • Capacitive contact CDC: TI FDC2214 / ADI AD7746
  • Low-leak front-end amplifier option: ADI AD8237 (zero-drift INA, micropower)
  • ULP BLE SoC: Nordic nRF52832 / TI CC2340R5 / Renesas DA14531 / Silicon Labs EFR32BG22

Symptom 5 — Battery drain spikes during calibration

First 2 measurements

  • Signal proof: the calibration timeline (stimulus length, sample rate, retry count, fail-reason counter).
  • Power proof: current profile overlay (calibration window + radio events + rail enables).

Discriminator

  • Spikes coincide with RF bursts → reporting schedule too dense or connection interval too aggressive.
  • Spikes coincide with stimulus/capture only → capture window too long or repeated retries due to invalid features.
  • Spikes cause brownout → peak current margin insufficient or rail gating order wrong.

First fix

  • Cut retries first: cap retry count; if invalid → degrade and report “fit-unknown” rather than endless loops.
  • Shorten window: measure only the minimum features that separate leak vs blocked vs good.
  • Power state: separate calibration rail from RF rail; ensure UVLO margin and controlled turn-on.

MPN suspects (examples)

  • ULP PMIC / charger: Nordic nPM1300 / TI BQ25120A
  • Load switch for rail gating: TI TPS22910A
  • Nano-IQ LDO for quiet sensing: TI TPS7A02

MPN Starter List (fast A/B isolation — examples)

These part numbers are practical “swap candidates” to validate root-cause hypotheses quickly. Selection should be finalized by package, leakage, and mechanical integration constraints.

Function | MPN examples | When it helps in debug
Pressure baseline | Bosch BMP390 · Infineon DPS368 · ST LPS22HH | Fit repeatability vs insertion depth; leak-vs-contact cross-check; sensor hysteresis screening
Capacitive contact CDC | TI FDC2214 · ADI AD7746 | False contact events, motion micro-slip signatures, high-impedance robustness comparison
Mic for acoustic capture | TDK InvenSense ICS-40730 | Distinguish “blocked port” vs “true leak” patterns; improve SNR margin for short windows
Temp sensor | TI TMP117 | Separate thermal path issues vs calibration offset; drift vs self-heating correlation
Instrumentation / low-drift front-end option | ADI AD8237 | Leakage/offset-driven false edges; compare chopper/zero-drift behavior vs bandwidth needs
ULP BLE SoC | Nordic nRF52832 · TI CC2340R5 · Renesas DA14531 · Silicon Labs EFR32BG22 | Scheduling determinism; connection interval/latency trade; SEQ/CRC integrity implementation
PMIC / charger | Nordic nPM1300 · TI BQ25120A | Calibration current spikes; rail partitioning; battery path stability during bursts
Quiet LDO | TI TPS7A02 | Mic/AFE quiet rail; reduce feature corruption from rail noise; improve repeatability
Load switch (rail gating) | TI TPS22910A | Duty-cycling AFE/sensors; isolate brownout; enforce clean on/off edges
Tip: For each A/B swap, keep firmware constant and only change one hardware variable; log SEQ counters + fail reason codes to avoid “felt better” bias.

Figure F11 — Field Debug Decision Tree (Two-Proof Method)

Use the left column to pick a symptom, then collect exactly two proofs. Route to the first fix bucket and re-test the same script.

[Figure F11 graphic: field debug decision tree (two-proof method) — symptoms S1 (fit unstable across re-insertions), S2 (false “good fit” with a real leak), S3 (temp drift or creep), S4 (motion-triggered contact dropouts), S5 (drain spikes during acoustic calibration) each route through Proof #1, signal evidence at TP-S (CDC/AFE/mic feature: baseline/feature/histogram), and Proof #2, power evidence at TP-P (VBAT / VDD_AFE / VDD_RF: rail droop / peak current), into fix buckets: Bucket A — mechanics (micro-slip/hysteresis/vents; first fix: seat/strain/vent), Bucket B — AFE leakage (offset/drift/Hi-Z sensitivity; first fix: guard/bias/hysteresis), Bucket C — timing & rails (window overlap/RF bursts/UVLO; first fix: schedule + rail gating).]
Cite this figure: ICNavigator — Eartip Fit & Bio-Sensing Module — Fig F11 (Decision Tree)


H2-12. FAQs ×12 (Evidence-First, No Scope Creep)

Each answer forces a two-proof method: one signal proof (baseline/feature/histogram) plus one power/timing proof (rail/peak current/scheduling overlap). If the two proofs disagree, the root is usually timing-window corruption or leakage, not “random noise”.

Two proofs only | Discriminator included | First fix action | Maps back to chapters
1) Fit score changes every insertion — mechanical variance or sensor offset?

Start by separating insertion-to-insertion spread from static drift. Proof #1: record fit score spread across 8–10 reinserts, plus the pressure/contact baseline “return-to-band”. Proof #2: hold a stable insertion and watch baseline drift for 60–120 s. Large spread with good static stability points to mechanics/material hysteresis; static drift points to AFE offset/leakage or calibration trim limits.

Maps to: H2-3 / H2-7 / H2-10. Quick swap candidates (for A/B): BMP390 / DPS368 / LPS22HH.
2) “Good seal” but bass still leaks — pressure says OK, acoustic says leak: which to trust?

Treat pressure/contact as “stable placement,” not guaranteed acoustic sealing. Proof #1: compare the acoustic leak feature (LF loss / transfer magnitude cue) against a known-good insertion; also run a “blocked-port” control. Proof #2: confirm pressure/contact stability is not merely averaging transient states. If acoustic says leak while contact is stable, prioritize vent/membrane transfer changes or mic-port partial blockage before blaming pressure.

Maps to: H2-6 / H2-3. First fix: build a leak-vs-block signature library and gate “good” by both cues.
3) Contact detection flickers during running — thresholding or cable/strain microphonics?

Convert “flicker” into a histogram. Proof #1: classify dropouts as short glitches (ms) versus long gaps (≥ connection interval). Proof #2: repeat with a controlled tug/strain script; if edge rate rises with strain, the root is mechanical microphonics/routing. If flicker clusters near a threshold without strain sensitivity, the fix is hysteresis and a clean debounce window (not heavy filtering).

Maps to: H2-7 / H2-3 / H2-11. First fix: strain relief + threshold hysteresis tuned to noise floor.
4) Temperature rises during music — real ear temperature or PMIC self-heating?

Use correlation. Proof #1: temperature curve with a warm-up flag (settling versus continuous climb). Proof #2: overlay current/rail activity during playback and radio bursts. If temperature steps or ramps tightly track load peaks, self-heating coupling dominates (placement/thermal isolation/schedule). If temperature changes persist when load is flat and follow a slow thermal constant, the reading likely reflects the ear-canal thermal path.

Maps to: H2-4 / H2-8 / H2-11. Quick parts: TMP117 (temp), nPM1300 or BQ25120A (power path).
5) Temperature reads slow — how to reduce lag without losing protection?

Lag is usually set by the thermal RC of packaging + protective layers. Proof #1: measure step response time constant (insertion into a stable environment or controlled coupler). Proof #2: compare two placements (near canal vs shell) while keeping firmware constant. If the time constant is dominated by membranes/sealants, improve thermal coupling consistency (thin protective stack, controlled contact pressure) rather than removing protection. Validate with soak drift.

Maps to: H2-4 / H2-7. First fix: adjust sensor seat + protective stack to reduce RC without exposing to sweat.
6) After sweat exposure, pressure baseline shifts — contamination or membrane/venting change?

Separate “surface leakage” from “transfer change.” Proof #1: baseline offset and drift direction before/after sweat, plus reversibility after drying/cleaning. Proof #2: check if the acoustic signature shifts in the same direction (vent/membrane change typically moves acoustic features). If pressure shifts but acoustics stay stable, suspect leakage paths on high-impedance nodes or sensor port contamination. If both shift, prioritize vent/membrane impedance changes.

Maps to: H2-7 / H2-3 / H2-11. First fix: add contamination controls + leakage audit (Hi-Z validation).
7) Acoustic calibration fails only in noisy places — mic SNR or timing window?

Determine whether the failure is SNR-limited or collision-limited. Proof #1: capture a simple SNR proxy for the calibration feature (noise floor vs stimulus response) and compare quiet vs noisy sites. Proof #2: log whether failures cluster at specific times aligned with BLE events or rail switching. If SNR collapses, increase robustness (shorter window, stronger stimulus, better mic path). If timing clusters, move the window into a quiet slot and lock critical sections.

Maps to: H2-6 / H2-9 / H2-11. Quick parts: ICS-40730 (mic), TPS7A02 (quiet LDO) to protect capture.
8) Battery drain spikes when the user reinserts often — what power state is stuck?

Look for a retry loop or an “exit condition” failure. Proof #1: event counters (calibration starts, retries, fail reasons, time spent in each state). Proof #2: current profile with rail enables (VDD_AFE/VDD_RF) over time. If reinsertion triggers repeated calibrations, the system needs retry caps and a degrade-to-unknown path. If a rail stays on after failure, fix symmetry of enable/disable and validate UVLO margins. Duty-cycle audit should hit nA–µA leakage targets in idle.

Maps to: H2-8 / H2-11. Quick parts: TPS22910A (rail gating), nPM1300 / BQ25120A (power path).
9) BLE dropouts correlate with sensing bursts — rail droop or scheduling collision?

Use time alignment between packets and rails. Proof #1: sequence-counter gaps or CRC failures with timestamps. Proof #2: rail droop/peak current during sensing bursts and radio TX. If dropouts align with droop (or UVLO flags), solve peak-current margin and rail partitioning first. If dropouts align with sensing windows but rails are clean, it’s a scheduling collision: move sensing to a quiet slot, enforce critical sections around read/commit, and reduce BLE density during bursts.

Maps to: H2-8 / H2-9 / H2-11. Quick parts: TPS7A02 (quiet rail), TPS22910A (gating), nRF52832/CC2340R5 (BLE).
10) Factory yield is poor on the contact sensor — fixture issue or hysteresis spec?

Production needs repeatability, not “lab perfection.” Proof #1: run the same unit across two fixtures/operators and compare pass/fail consistency; large changes indicate fixture force/angle/placement variability. Proof #2: apply a short hysteresis loop test (compression/relax or controlled proximity) and check if the loop width exceeds the decision band. If fixture dominates, control insertion depth/force and add alignment keys. If hysteresis dominates, adjust the spec, screen parts, or change modality/AFE biasing so the hysteresis is measurable and bounded.

Maps to: H2-10 / H2-3. Quick parts: FDC2214 / AD7746 for CDC comparison, with controlled fixture capacitance targets.
11) Fit score drifts over weeks — material creep or AFE leakage aging?

Separate mechanical creep from electrical leakage. Proof #1: run a standardized compression/relax script and measure baseline return over multiple cycles; worsening return indicates material creep or seat deformation. Proof #2: perform a high-impedance leakage audit (static offset drift, humidity sensitivity, recovery after drying). If drift tracks compression history, change material stack or sensor seat geometry. If drift tracks humidity and becomes less reversible, prioritize leakage paths, ESD clamp leakage, and bias/excitation strategy. Store “age flags” and trend metrics to catch degradation early.

Maps to: H2-7 / H2-5 / H2-11. First fix: mechanical fatigue tests + Hi-Z leakage tests under humidity.
12) How to set hysteresis so it’s stable but responsive?

Hysteresis should be set between the noise/drift envelope and the true event amplitude. Proof #1: collect distributions for real events (reinsertion, walking, jaw motion) and find the low-percentile event amplitude. Proof #2: measure the high-percentile noise/drift amplitude in quiet steady wear. Set hysteresis slightly above the noise high-percentile but below the event low-percentile. Validate with two scripts: (a) no false toggles during steady wear, (b) fast detection during reinsertion. Avoid “bigger is safer” bias.

Maps to: H2-3 / H2-6 / H2-11. First fix: calibrate hysteresis using controlled leak/block/good patterns.
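The percentile rule in FAQ 12 can be sketched directly. The 95th/5th percentile choices and the 1.2× headroom are illustrative:

```python
# Sketch: place the hysteresis band between the noise envelope and the true
# event amplitude. Percentile choices and headroom factor are illustrative.

def percentile(xs, p):
    xs = sorted(xs)
    k = max(0, min(len(xs) - 1, round(p / 100 * (len(xs) - 1))))
    return xs[k]

def pick_hysteresis(noise_amps, event_amps, headroom=1.2):
    hi_noise = percentile(noise_amps, 95)   # steady-wear noise/drift envelope
    lo_event = percentile(event_amps, 5)    # smallest real event amplitude
    if hi_noise * headroom >= lo_event:
        return None   # no stable gap: fix mechanics/AFE first, do not tune
    return hi_noise * headroom  # just above noise, below smallest real event
```

Returning `None` when the distributions overlap encodes the "avoid bigger-is-safer bias" rule: if there is no gap, the fix is mechanical or AFE-level, not a larger threshold.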

Figure F12 — FAQ → Evidence Chain Map (Chapter Anchors)

Each FAQ is forced back to the same evidence chain: sensing fundamentals, mechanics, calibration, power, BLE robustness, factory/self-test, and field decision tree.

[Figure F12 graphic: FAQ → evidence chain map — FAQ clusters (fit repeatability & hysteresis: Q1, Q11, Q12; seal vs acoustic leak/blockage: Q2, Q7; motion artifacts & contact flicker: Q3, Q9; temperature lag/self-heating: Q4, Q5; sweat aging, yield, and drain spikes: Q6, Q8, Q10) mapped to chapter anchors: H2-3/H2-5 sensor fundamentals and AFE noise/offset/leakage; H2-6 acoustic calibration leak vs blockage signatures; H2-7 mechanics/materials, sweat/wax effects; H2-4/H2-8 temperature lag/drift, power tree and duty cycling; H2-9/H2-10/H2-11 BLE robustness, factory repeatability, field decision tree.]
Cite this figure: ICNavigator — Eartip Fit & Bio-Sensing Module — Fig F12 (FAQ Map)