
Sleep Monitoring Headband Hardware Blueprint: EEG AFE + ULP Power


A Sleep Monitoring Headband works only when µV-level EEG remains clean and time-consistent all night—so the design must control contact impedance, common-mode interference, power/RF coupling, and data integrity end-to-end, then gate smart-wake decisions on signal quality rather than raw features.

This page shows the evidence chain (what to measure first, what proves the root cause) that turns “noisy nights” into reproducible fixes and reliable overnight logging.

H2-1. System Boundary & Signal Pipeline (EEG → insights → smart wake)

What this chapter locks down

A sleep monitoring headband succeeds or fails at the system boundary—not at any single IC. This chapter defines (1) what the headband measures, (2) where decisions are made, and (3) what “done right” means using measurable acceptance metrics.

At a glance: EEG µV-level sensing · optional PPG + accel · on-device smart-wake loop · BLE + local logging · ULP power states

Scope note: This page focuses on hardware evidence chains (noise, artifacts, timing, power, data integrity). It does not claim medical diagnosis or cover cloud/app architecture.

Target signals and what each contributes (engineering view)

  • EEG (microvolt domain): primary channel. The engineering challenge is not “band selection,” but input-referred noise, common-mode rejection, and electrode impedance drift that can bury low-frequency content. EEG is a “contact + AFE + system coupling” problem.
  • Acceleration (optional): used mainly as a discriminator. If EEG variance spikes exactly when acceleration indicates a roll-over, treat it as artifact until proven otherwise.
  • PPG (optional): a secondary channel for trend features (e.g., HR/variability cues) and quality gating. Its LED switching introduces a high-risk coupling path into the EEG chain, so LED timing and power partitioning must be designed in from the start.
Design intent: define each signal by the evidence it can provide, not by a feature wish list. Every later chapter should map back to: noise / artifact / timing / power / integrity.

Pipeline overview (where smart-wake actually lives)

The pipeline must be split into two loops: (A) an on-device closed loop that can trigger wake reliably without depending on a phone, and (B) a data delivery loop that preserves a clean timeline for review and analysis.

  • Closed loop (real-time): electrodes → EEG AFE → ADC → MCU/DSP features → smart-wake decision → vib/buzzer driver. This loop is judged by latency and false trigger control.
  • Delivery loop (integrity): sample timestamp → packetization (seq + CRC + gap marker) → BLE transport → phone (optional sync) + local flash ring buffer (for reconnection recovery).

Key boundary decision: smart-wake should be able to operate with BLE temporarily unavailable. Therefore, local buffering and quality flags must exist on the device.

“Done right” success criteria (measurable, testable)

Define acceptance in metrics that can be measured on the bench and in overnight pilots:

Metric | Why it matters | How to measure (evidence)
EEG baseline noise (µVrms in band) | Sets the usable dynamic range for low-frequency content; defines headroom vs artifacts. | Record in a controlled “quiet” posture; compute band-limited RMS; repeat across contact conditions.
Data completeness (% samples present) | Sleep insights fail silently when timelines contain gaps or duplicates. | Sequence counter + timestamps; summarize missing/duplicate frames per hour and per night.
Battery per night (mAh/night + peak current) | One-night reliability depends on both average drain and burst margins (BLE + haptics). | Current profile with firmware markers; verify rail droop margin during peak events.
Wake latency (decision → actuator) | Smart-wake must feel responsive while avoiding false triggers from artifacts. | Timestamp decision + actuator enable edge; measure worst-case under low battery and BLE bursts.
Rule for the rest of the page: every design recommendation must tie back to at least one of the four metrics above (noise, completeness, battery, latency).
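The first metric above (band-limited µVrms) is easy to compute offline from a bench capture. A minimal numpy sketch, assuming samples are already scaled to microvolts; the function name and band edges are illustrative, not a fixed convention:

```python
import numpy as np

def band_limited_rms_uv(x_uv, fs, f_lo=0.5, f_hi=40.0):
    """Band-limited RMS (µVrms) via FFT band selection.

    x_uv : 1-D EEG samples in microvolts (hypothetical capture)
    fs   : sampling rate in Hz
    f_lo, f_hi : band edges in Hz (illustrative defaults)
    """
    x = np.asarray(x_uv, dtype=float)
    x = x - x.mean()                      # remove DC before banding
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Parseval-style scaling: in-band |X|^2 back to time-domain power
    power = np.sum(np.abs(spec[band]) ** 2) * 2.0 / (x.size ** 2)
    return np.sqrt(power)
```

Repeating this computation across contact conditions (stable vs deliberately degraded) gives the evidence the table asks for.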

First measurements to de-risk the whole project

  • Two-channel capture: EEG AFE output + a chosen rail ripple point (AFE rail or PMIC node).
  • Contact quality trace: impedance estimate (or proxy) over time alongside EEG baseline.
  • Timeline proof: packet sequence counter + timestamp drift across a full overnight session.
  • Current profile: average current + peak current during BLE bursts and haptics events.

These measurements will directly support H2-2 (reality check), H2-8 (power), H2-9 (BLE), and H2-11 (validation).

F1 — Sleep Headband System Boundary and Signal Pipeline. [Block diagram: electrodes (EEG+/EEG−/Ref) → EEG AFE (PGA, CMRR, 1/f) → ADC (anti-alias) → MCU/BLE SoC (feature extraction, quality gating, packetization with seq + CRC) → smart-wake trigger → haptics (vib/buzzer); BLE link plus flash ring buffer for overnight local logging; optional accel (artifact discriminator) and PPG (LED driver + TIA); ULP power tree: battery → protection → ULP PMIC (buck/LDO, sequencing) → AFE, digital, and RF rails. Night-long runtime depends on power states plus burst margins.]
Figure F1. End-to-end pipeline and power tree. The smart-wake loop must remain functional even with temporary BLE unavailability.

H2-2. EEG Signal Reality Check (µV-level, bandwidth, interference map)

Why EEG is “system-hard” (not component-hard)

EEG lives in the microvolt domain and is dominated by low-frequency noise and common-mode interference. In practice, the limiting factors are often electrode contact, CMRR under real impedance imbalance, and coupling from power/RF switching—not the advertised ADC resolution.

Non-negotiable mindset: treat “noise budget” and “interference map” as primary requirements. Everything else (firmware, features, smart-wake logic) is gated by them.

Bandwidth planning (engineering-first, minimal theory)

  • Passband intent: set a low-frequency corner that preserves slow dynamics while avoiding baseline runaway. Keep room for artifact discrimination rather than chasing aggressive filtering.
  • Anti-alias reality: the ADC sampling rate and analog front-end bandwidth must ensure that out-of-band interference (switching edges, RF burst harmonics) does not fold into the EEG band through non-idealities.
  • Notch strategy: 50/60 Hz mitigation is a symptom response. The real goal is to maximize CMRR + contact balance so notch filtering is not doing all the work.

Rule of thumb for writing the rest of the page: avoid “filter-first” thinking. Always ask: “What coupling path injected this energy into the front end?”

Noise budget (who dominates, and when)

Organize noise into three buckets that can be measured independently:

  • AFE intrinsic noise (incl. 1/f): dominates when contact is stable and layout is quiet. Signature: smooth low-frequency “grain” with no event correlation.
  • Electrode/skin interface noise: dominates when impedance drifts or becomes imbalanced. Signature: baseline wander, slow steps, sporadic saturation; often correlates with posture changes or sweat.
  • System coupling noise (power/RF/switching): dominates when rails bounce or RF bursts inject common-mode. Signature: periodic spikes or envelope modulation aligned with BLE events, charging, or LED pulses.
Practical target setting: define an EEG baseline noise goal in µVrms (band-limited), then verify it under at least two conditions: (1) stable contact, (2) intentionally imbalanced impedance. This reveals how much “real-world CMRR” is being lost.

Interference map (source → coupling path → signature → proof)

Each interference source must be paired with a proof measurement. The goal is to avoid ambiguous blame (e.g., “the algorithm is bad”) when the root cause is a measurable coupling path.

Source | Coupling path | Typical signature | Fast proof (2 things to capture)
Mains 50/60 Hz | Common-mode injection + reduced real-world CMRR from impedance imbalance | Stable hum peak; amplitude rises with poorer contact | EEG output spectrum + contact impedance proxy (or imbalance indicator)
BLE TX bursts | Ground bounce / rail ripple / near-field coupling into AFE input | Periodic spikes or modulation aligned with BLE events | EEG output + rail ripple at AFE supply or PMIC node + BLE event marker
Charging / USB noise | Supply ripple + leakage paths changing common-mode conditions | Noise increases when connected; may appear as low-frequency drift or bursts | EEG output + charger input ripple / PMIC switching node (safe probe point)
PPG LED switching (optional) | Shared return path / LED driver edges coupling to AFE reference | Artifacts at LED cadence; changes with LED current | EEG output + LED drive waveform (or timing marker) with on/off subtraction
Motion / rollover | Electrode micro-slip + impedance steps | Baseline steps, saturation, “slow recovery” | EEG output + accel magnitude + contact impedance trace (time-aligned)
ESD / transient events | Front-end recovery, latch-up risk, reference disturbance | Sudden step + long settling or temporary flatline | Event timestamp + brownout/reset counters + post-event noise floor check
Evidence rule: every “noise complaint” must be converted into: signature + correlation + discriminator. If two causes produce the same signature, add one more channel (rail ripple, impedance, accel, event marker) until they separate.
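The “signature + correlation + discriminator” step can be scored automatically. A small numpy sketch (hypothetical helper, not a library API) that measures time alignment between an EEG artifact envelope and a 0/1 event-marker trace such as BLE TX windows:

```python
import numpy as np

def event_correlation(eeg_env, marker, max_lag):
    """Best lag and normalized correlation between an EEG envelope and a
    0/1 event-marker trace (e.g., BLE TX windows), over ±max_lag samples.
    Positive correlation at some lag = candidate coupling path."""
    a = (np.asarray(eeg_env, float) - np.mean(eeg_env)) / (np.std(eeg_env) + 1e-12)
    b = (np.asarray(marker, float) - np.mean(marker)) / (np.std(marker) + 1e-12)
    lags = range(-max_lag, max_lag + 1)
    # pair a[j] with b[j + l] for each candidate lag l
    corr = [np.mean(a[max(0, -l): len(a) - max(0, l)] *
                    b[max(0, l): len(b) - max(0, -l)]) for l in lags]
    k = int(np.argmax(np.abs(corr)))
    return list(lags)[k], corr[k]
```

A strong peak at a consistent lag across many events is the correlation evidence; if two sources still overlap, add another channel (rail ripple, impedance) and repeat.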
F2 — EEG Interference Map. [Diagram: sources (mains 50/60 Hz common-mode injection; motion/slip impedance drift and steps; BLE TX bursts via ground bounce/RF; charging/USB rail ripple + leakage; optional PPG LED switching edges) all coupling into the EEG AFE + electrodes, where real-world CMRR depends on contact impedance balance. How to use the map: convert every symptom into signature + correlation + discriminator (EEG plus one extra channel).]
Figure F2. Interference sources and coupling paths. The fastest diagnosis uses EEG plus one correlating channel (rail ripple, impedance, accel, or event markers).

H2-3. Ultra-Low-Noise EEG AFE Architecture (INA + ADC + reference)

Decision goal: pick a topology that survives real-world impedance imbalance

EEG is a microvolt signal chain that fails most often in the real-world CMRR regime: electrode impedance is not balanced, contact changes over the night, and the system injects common-mode energy (mains, charging, RF bursts). This chapter selects the AFE architecture by the specs that actually map to field symptoms and measurable evidence.

Boundary: This is an engineering AFE blueprint (noise, CMRR, headroom, reference/ground). It does not claim medical diagnosis or discuss cloud/app processing.

Front-end options: integrated EEG AFE vs discrete INA + ADC

Both approaches can work, but they fail differently. Choose based on what you can control and validate.

  • Integrated EEG AFE: typically combines input biasing/lead-off, high input impedance, low-noise PGA/ADC, and sometimes DRL/RLD support. Strength is faster convergence to a stable baseline and fewer “board-level” traps. Risk is less freedom to tune protection, reference, and gain staging.
  • Discrete INA + ADC: allows selecting a specific INA, anti-alias network, reference, and ADC bandwidth/ENOB. Strength is controllability; risk is that layout/return paths, reference noise, and protection leakage can dominate unless the team can measure and iterate.

Practical selection rule: if the project needs quick, repeatable overnight performance with limited analog-layout iteration, integrated AFE reduces risk. If the project must customize electrode models, protection, and coupling mitigation aggressively, discrete can be justified—but only with a strong validation plan.

Specs that matter (and what they actually protect you from)

The most expensive mistake is optimizing a “headline spec” while ignoring the spec that dominates the field failure mode. Use this list as the minimum checklist.

  • Input impedance (and balance): high input impedance reduces loading, but the critical issue is impedance imbalance across electrodes. Even a great datasheet CMRR can collapse when imbalance grows.
  • Input bias current / leakage paths: bias current times electrode impedance becomes a slow-varying offset. In a headband, this shows up as baseline drift, slow recovery after motion, or “saturation that takes minutes to settle.”
  • CMRR (real-world): the true target is not the lab CMRR number, but CMRR under intentionally imbalanced electrode impedances. This is the strongest predictor of 50/60 Hz hum complaints.
  • Input range / headroom: EEG is small, but artifacts are not. Headroom must survive motion-induced steps, half-cell potentials, and recovery from transients without flatlining.
  • Programmable gain (PGA): gain must be set by worst-case artifact headroom, not by “max sensitivity.” Over-gain yields clipping, then downstream filters create false structure.
  • ADC ENOB in-band (not nominal bits): evaluate ENOB or noise density in the EEG band. “High resolution” without low in-band noise does not improve readability.
Fast proof concept: test with two electrode impedance conditions—(1) balanced, (2) deliberately imbalanced. If hum rises dramatically in condition (2), the system is limited by real-world CMRR and contact balance, not ADC bits.
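The two-condition proof needs a repeatable hum metric. A minimal numpy sketch, assuming samples in microvolts; the function name, bandwidth, and dB reference are illustrative:

```python
import numpy as np

def hum_peak_db(x, fs, mains_hz=50.0, bw=1.0):
    """Mains-hum peak level in dB (re. 1 µVrms) within ±bw Hz of mains_hz.

    x  : 1-D EEG samples in microvolts; fs : sampling rate in Hz.
    """
    x = np.asarray(x, float) - np.mean(x)
    spec = np.abs(np.fft.rfft(x)) * 2.0 / x.size    # single-sided amplitude
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    band = (freqs >= mains_hz - bw) & (freqs <= mains_hz + bw)
    peak_uv = spec[band].max() / np.sqrt(2)          # amplitude → rms
    return 20.0 * np.log10(peak_uv + 1e-12)
```

Run it once with balanced dummy impedances and once with an intentional imbalance: a large rise in the second condition points at real-world CMRR and contact balance, not at ADC bits.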

Reference and ground strategy (the hidden AFE input)

In practice, the reference node and return paths behave like an additional input to the AFE. The goal is to keep power/RF switching energy out of the AFE reference and to control the return current geometry.

  • Analog ground island: keep the EEG AFE + reference network inside a quiet return region. Join to digital ground at a controlled point near the ADC/AFE boundary.
  • Reference buffering + decoupling: the ADC/AFE reference noise directly appears as measurement noise. Place short decoupling loops; avoid routing switching nodes near the reference network.
  • ADC clock hygiene: avoid routing clock edges through sensitive input regions; clock-coupled interference can look like periodic texture or elevated noise floor.

Evidence-based approach: correlate EEG artifacts with a second channel (AFE rail ripple, reference pin noise, BLE event markers). If correlation exists, treat it as coupling until disproven.

Spec → symptom mapping (convert complaints into measurable causes)

Use this table as a diagnostic bridge between datasheet thinking and field behavior.

Spec / Design lever | Typical field symptom | What proves it (evidence)
Low real-world CMRR (impedance imbalance sensitivity) | 50/60 Hz hum grows as contact worsens; “works for some users, not others.” | Spectrum hum peak increases when injecting imbalance; hum tracks contact imbalance indicator.
High bias current / leakage (incl. protection leakage) | Baseline drifts; slow recovery after motion; periodic saturations. | DC offset trend correlates with impedance drift; recovery time constant stays long even when quiet.
Insufficient input range / headroom | Clipping or flatline during roll-over; “signal disappears then returns slowly.” | Clipping counter + waveform saturation segments; recovery time varies with contact and motion.
Over-gain (PGA set too high) | Frequent clipping; downstream filters create fake oscillations or “structured noise.” | Clipping events rise with gain; noise floor appears lower but artifact rate rises.
Reference/ground contamination (return path coupling) | Artifacts align with BLE bursts or charging; periodic spikes. | EEG spikes correlate with BLE markers or rail ripple; improves with power partitioning/decoupling changes.

This mapping is designed to be reused in H2-11 (validation) and H2-12 (FAQ) without scope creep.

F3 — Ultra-Low-Noise EEG AFE Architecture. [Block diagram of AFE internals: electrodes (EEG+/EEG−/Ref-DRL) → input protection (ESD, RC, clamps) → bias/lead-off contact detect → INA/PGA (low noise, high input impedance, real-world CMRR) → anti-alias band-limit → ADC (in-band ENOB) → digital filtering (notch, HP, quality flags); reference buffer with low-noise decoupling at the REF pin; AGND island tied to DGND at a single point; optional DRL/RLD common-mode loop (stability matters). Aim: low in-band noise + stable real-world CMRR + safe headroom.]
Figure F3. AFE internals emphasizing real-world CMRR, bias/leakage effects, reference/ground hygiene, and headroom against artifacts.

H2-4. Electrode & Skin Interface (impedance drift, contact quality, comfort)

Why electrodes are a first-class design variable

The electrode-skin interface sets the headband’s real-world noise floor and real-world CMRR. Over an overnight session, sweat, pressure relaxation, and micro-slip can change impedance by orders of magnitude, which converts common-mode energy into in-band noise and causes saturation and slow recovery.

Engineering target: design for impedance stability and imbalance control, not only “low impedance.” A stable medium impedance often beats an unstable low impedance in field performance.

Electrode options (dry / semi-dry / fabric) and what they tend to break

  • Dry electrodes: simplest and cleanest mechanically, but impedance is highly skin-dependent. Failure mode is impedance steps and imbalance during motion, which reduces real-world CMRR.
  • Semi-dry / gel-assisted: improves impedance stability and reduces hum sensitivity, but introduces risks: leakage paths, contamination, and time-varying wetting that can change bias conditions.
  • Fabric electrodes: comfortable and wearable-friendly, but performance depends on pressure distribution, sweat paths, and connector strain relief. Failure mode is slow drift plus sporadic motion artifacts.

Design framing: choose electrode type based on the expected overnight environment (sweat, hair, motion) and the validation method you can run (impedance tracking + artifact correlation).

Measuring contact quality (impedance trend + imbalance + artifact counters)

“Good contact” must be measurable. A practical headband uses a layered approach: impedance estimation, imbalance detection, and signal-quality counters that feed a single Contact Score.

  • Impedance estimate (trend): use a low-impact check (test tone or bias observation) to track impedance over time. The trend is more valuable than an absolute number.
  • Imbalance indicator: when one side drifts more than the other, the system’s real-world CMRR collapses. Track a simple imbalance metric and flag “hum-risk.”
  • Artifact counters: count clipping segments, saturation recovery time, and sudden baseline steps. These counters separate “contact failure” from “algorithm failure.”
  • Contact Score (0–100): combine impedance stability, imbalance, hum peak proxy, and clipping rate into one score. Use it to gate smart-wake decisions and to annotate logs.
Evidence chain: impedance jump → hum peak rises and/or saturation count increases → Contact Score drops. If this chain holds, fix mechanics/contact first before changing filters or algorithms.
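One way to combine the four inputs into a single Contact Score is a weighted penalty. A minimal sketch; the weights and normalization are purely illustrative and would be tuned against overnight pilot data:

```python
def contact_score(imp_stability, imbalance, hum_proxy, clip_rate):
    """Hypothetical Contact Score (0-100). Weights are illustrative.

    All inputs are normalized to 0..1, where 0 = good and 1 = worst-case:
    imp_stability : impedance trend instability
    imbalance     : left/right imbalance indicator
    hum_proxy     : mains-band power proxy
    clip_rate     : clipping/saturation rate
    """
    penalty = (0.35 * imp_stability + 0.30 * imbalance +
               0.20 * hum_proxy + 0.15 * clip_rate)      # weights sum to 1
    return round(100.0 * (1.0 - min(max(penalty, 0.0), 1.0)))
```

The score then gates smart-wake decisions and annotates logs, exactly as the bullet above describes.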

Mechanical layout: tension zones, strain relief, and sweat paths

Mechanical design is signal conditioning. The goal is stable pressure at electrodes without discomfort, and isolation of motion and sweat from the electrode interface.

  • Tension zones: define a stable electrode zone (controlled pressure), a comfort buffer zone, and a routing zone that avoids pulling on electrode nodes.
  • Strain relief: every connector or flex transition must protect the electrode node from tugging, otherwise micro-slip appears as baseline steps.
  • Sweat paths: route sweat away from the reference/electrode junctions; avoid creating electrolyte bridges that change leakage and bias conditions.

Field clue: if noise worsens late-night, suspect pressure relaxation + sweat-driven impedance drift. Validate by plotting impedance trend versus hum peak and saturation events.

Impedance-to-symptom evidence (what to log overnight)

What to log | Why it matters | How it isolates root cause
Impedance trend (relative) | Detect drift, wetting, relaxation, contamination. | Separates contact-driven drift from electronic baseline shifts.
Imbalance indicator | Predicts real-world CMRR loss and hum sensitivity. | Hum rise with imbalance implies contact, not ADC resolution, is limiting.
Saturation/clipping count | Counts hard failures: headroom exceeded or sudden steps. | Correlates with motion or contact steps; helps tune headroom and mechanics.
Hum proxy / bandpower | Tracks interference that leaks into the band. | Hum tracking impedance imbalance indicates CMRR collapse path.
Accel magnitude (optional) | Discriminator for motion artifact. | If events align with motion, fix mechanics and filtering; if not, inspect coupling paths.

These logs feed H2-10 smart-wake quality gating and H2-11 validation pass/fail criteria.
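The log fields above map naturally onto a per-window record. A sketch of one possible layout; the names and types are illustrative, not a fixed wire format:

```python
from dataclasses import dataclass

@dataclass
class ContactLogRecord:
    """One overnight log window (field set mirrors the table above)."""
    t_sample: int          # monotonic sample-counter timestamp
    imp_rel: float         # relative impedance trend (unitless)
    imbalance: float       # left/right imbalance indicator
    clip_count: int        # saturation/clipping events this window
    hum_proxy_db: float    # mains-band power proxy
    accel_mag: float       # optional motion magnitude
```

Keeping the record small and fixed-width makes it cheap to store in the flash ring buffer and trivial to summarize per hour and per night.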

F4 — Electrode & Skin Interface: Mechanical Zones and Contact Score. [Diagram: headband mechanical zones (tension zone for stable pressure, comfort buffer zone for fit, routing zone with strain relief); EEG+/EEG−/Ref-DRL electrode nodes; sweat-path management avoiding electrolyte bridges near electrode junctions; impedance monitor (trend + imbalance + artifact counters, saturation/hum proxies) feeding the MCU/BLE SoC, which computes the Contact Score (0–100) used for quality gating and log annotation. Objective: stable impedance + low imbalance → higher real-world CMRR → cleaner EEG.]
Figure F4. Treat contact as measurable: impedance trend + imbalance + artifact counters feed a Contact Score used for quality gating and log annotation.

H2-5. Motion Artifact & Common-Mode Control (RLD/DRL + filtering)

Objective: prove “artifact vs real EEG change” before filtering

Motion artifacts are not just “noise”—they have repeatable signatures that can be distinguished from physiological changes using a minimal evidence chain. This chapter builds a discriminator that combines EEG waveform cues with accelerometer correlation and contact/impedance indicators, then applies common-mode control (DRL/RLD) and minimal filtering without erasing sleep-relevant features.

At a glance: saturation signature · baseline wander · transient spikes · accel correlation · quality flags

Artifact signatures: a practical waveform dictionary

Treat each artifact class as a diagnosable failure mode. The goal is not to “make the plot look smooth,” but to isolate the cause so the fix survives overnight variability.

  • Saturation / clipping: flat-topped segments or rails-hit behavior, often followed by slow recovery. Typical causes include insufficient headroom, sudden contact steps, or bias/leakage shifts. Evidence: clipping counter, % time clipped, recovery time constant.
  • Baseline wander: low-frequency drift that rides on the signal like a slow wave. Typical causes include impedance drift, sweat-driven leakage, or electrode imbalance that collapses real-world CMRR. Evidence: drift trend correlates with impedance trend or imbalance indicator.
  • Transient spikes / bursts: narrow impulses or spike trains that appear during micro-slip, cable strain, ESD recovery, or coupling from switching/RF events. Evidence: time alignment with accel spikes or event markers; spike rate increases during motion windows.
Rule: If a symptom is repeatable and correlates with motion/contact metrics, treat it as artifact until proven otherwise.
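The clipping-class evidence (counter plus % time clipped) is straightforward to compute per window. A minimal numpy sketch; the threshold fraction and function name are illustrative:

```python
import numpy as np

def clipping_stats(x, full_scale, frac=0.98):
    """Clipping counter + fraction-of-time-clipped for one window.

    x          : 1-D samples (either polarity)
    full_scale : ADC/AFE full-scale amplitude; frac sets the rail threshold
    Returns (number of distinct clipped segments, fraction of samples clipped).
    """
    clipped = np.abs(np.asarray(x, float)) >= frac * full_scale
    # rising edges of the clipped mask = distinct clipping segments
    edges = np.flatnonzero(np.diff(clipped.astype(int)) == 1).size
    segments = edges + (1 if clipped.size and clipped[0] else 0)
    return segments, clipped.mean()
```

Tracking the segment count separately from the clipped fraction distinguishes many short contact steps from one long saturation with slow recovery.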

Discriminators: how to separate artifact from real EEG change

A single signal channel is rarely enough. Use a minimal multi-sensor discriminator so decisions remain stable across users and overnight conditions.

Observation | Primary discriminator | Most likely root cause | First fix direction
EEG spikes align with accel events | High EEG–accel time correlation | Motion coupling / micro-slip | Mechanics + strain relief + quality gating
Hum increases while impedance imbalance rises | Imbalance indicator tracks hum proxy | Real-world CMRR collapse | Contact balancing + DRL tuning + layout hygiene
Clipping bursts during roll-over | Clipping counter spikes + accel spike | Headroom exceeded by artifact | Gain/headroom + contact stability
Baseline drifts without accel change | Impedance trend drifts while accel is flat | Sweat/leakage/bias effects | Electrode interface + leakage control
Change persists with stable contact and low motion | Low accel + stable impedance + no clipping | Candidate real EEG change | Do not over-filter; label as “high-confidence” window

Minimal evidence capture: record EEG AFE output + accel magnitude + contact/impedance metric. Add an event marker for DRL state or RF bursts if available.
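The discriminator table can be sketched as a simple priority-ordered decision function. This is a hypothetical illustration with pre-thresholded boolean inputs, not a production classifier:

```python
def classify_window(eeg_accel_corr, imbalance_rising, clip_spike,
                    accel_active, impedance_stable):
    """Label one analysis window, mirroring the discriminator table.

    All inputs are booleans produced upstream by thresholding the
    corresponding metrics (correlation, imbalance trend, clip counter,
    accel magnitude, impedance trend).
    """
    if accel_active and eeg_accel_corr:
        return "motion-artifact"          # mechanics / strain relief first
    if imbalance_rising:
        return "cmrr-collapse"            # contact balancing + DRL tuning
    if clip_spike:
        return "headroom-exceeded"        # gain/headroom + contact stability
    if impedance_stable and not accel_active:
        return "candidate-real-eeg"       # do not over-filter this window
    return "uncertain"
```

Ordering matters: motion and contact causes are checked before a window is allowed to count as a candidate real EEG change.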

Common-mode control: DRL/RLD loop (engineering-level)

DRL/RLD is a common-mode control loop: it senses common-mode behavior at the AFE input and drives the reference electrode to pull the system into a more controlled operating region. The goal is improved effective CMRR under real-world impedance imbalance.

  • What it improves: reduces sensitivity to mains/common-mode injection and impedance imbalance.
  • What can go wrong: unstable or over-aggressive loop gain can introduce low-frequency oscillation-like behavior, extra artifacts, or “texture” that looks like signal.
  • Stability mindset: validate worst-case electrode impedance conditions (high + imbalanced) and confirm the loop remains well-behaved under motion and sweat-driven drift.
Evidence target: hum proxy decreases when DRL is enabled, without increasing clipping rate or baseline wander.

Filtering strategy: minimal notch + controlled high-pass (do not erase features)

Filtering should be a controlled, evidence-driven step—not a replacement for contact and common-mode control. Over-filtering can suppress sleep-relevant dynamics and leave the system blind to quality issues.

  • Notch (50/60 Hz): use as an assist, not as the primary fix. If notch is required constantly, the system is likely limited by real-world CMRR or contact imbalance.
  • High-pass (baseline control): apply conservatively to reduce drift, then verify that discriminators still separate artifact windows from stable windows.
  • Quality flags: when artifacts are detected, mark the window rather than hiding it—so later logic can avoid false decisions.

Pass/fail check: after filtering, hum peak should reduce while the “stable-window” percentage (low motion + stable contact + low clipping) stays high.
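The “minimal notch” half of this strategy can be implemented with a single biquad; the coefficients below follow the widely used audio-EQ-cookbook notch formulas. A numpy-only sketch (no DSP library assumed; function names are illustrative):

```python
import numpy as np

def notch_coeffs(f0, fs, q=30.0):
    """Cookbook-style notch biquad, normalized so a[0] == 1.

    f0 : notch center (e.g., 50 or 60 Hz); q : quality factor (bandwidth).
    """
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
    a = np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]

def biquad(x, b, a):
    """Direct-form I biquad filter applied sample by sample."""
    x = np.asarray(x, float)
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y
```

Keeping the notch this narrow is deliberate: it attenuates the mains line while leaving nearby in-band dynamics (and artifact signatures) visible for the quality-flag logic.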

F5 — Motion Artifact Discriminator + DRL/RLD Common-Mode Loop. [Block diagram: electrodes (EEG+/EEG−/Ref-DRL) → EEG AFE (INA/PGA/ADC with a common-mode sense node) → DSP filtering (notch + HP + quality flags); accelerometer motion magnitude and contact metrics (impedance/imbalance) feed an artifact detector and decision logic that raise quality flags and gate low-quality windows; a DRL/RLD driver (loop gain tuned for stability) closes the common-mode loop via the reference electrode. Order of attack: discriminator first → DRL for common-mode → minimal filtering + quality flags.]
Figure F5. Combine accel/contact evidence with EEG signatures, then use DRL/RLD as common-mode control and apply minimal filtering with quality flags.

H2-6. Optional PPG in Headband (why/when, optical chain, EMI coexistence)

Scope: PPG as an optional helper signal (coexistence-first)

If PPG is included in a sleep headband, it should be treated as an optional helper channel for trend metrics and quality gating—not as a separate product category. The engineering focus is the optical chain and the coexistence problem: LED switching and return currents can contaminate the EEG front-end unless power/ground partitioning and scheduling are designed in from the start.

At a glance: optical chain · LED pulses · TIA sensitivity · scheduling windows · coupling paths

When PPG is worth adding (and when it is not)

  • Worth adding: when HR/HRV trends help smart-wake gating, and when PPG can provide a robust “LED on/off subtraction” self-check to validate optical integrity.
  • Risky to add: when the system cannot schedule LED pulses or cannot isolate LED current return paths, because EEG cleanliness will degrade and debugging becomes ambiguous.
Coexistence rule: if LED activity measurably raises EEG noise floor, fix power/return/scheduling first before adding more filtering.

Optical chain (engineering view): LED → tissue → PD → TIA → ADC

A headband PPG chain is a time-structured measurement. LED pulses are a strong interference source, while the photodiode/TIA stage is highly sensitive to ground/reference integrity.

  • LED driver: controlled current pulses; edge rates and peak current define EMI and rail droop risk.
  • Photodiode + TIA: converts small current to voltage; vulnerable to ground noise and coupling.
  • Ambient cancellation timing: measure LED-on and LED-off (or multi-phase) to subtract ambient and isolate the optical component.

Evidence metric: “LED on/off subtraction” should remain stable across the night; instability points to motion/fit or return-path contamination.
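Stability of the LED on/off subtraction can be scored as a coefficient of variation across windows. A minimal numpy sketch (hypothetical helper; frame layout is an assumption):

```python
import numpy as np

def led_subtraction_stability(on_frames, off_frames):
    """Coefficient of variation of per-window (LED-on − LED-off) energy.

    on_frames / off_frames : arrays of shape (n_windows, samples_per_window)
    captured in alternating LED phases. Lower CV = more stable optics;
    a rising CV points at motion/fit or return-path contamination.
    """
    diff = np.asarray(on_frames, float) - np.asarray(off_frames, float)
    energy = np.mean(diff ** 2, axis=1)          # one energy per window
    return float(np.std(energy) / (np.mean(energy) + 1e-12))
```

Tracking this one number overnight turns “optical integrity” into a plottable trend rather than a visual judgment call.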

EEG–PPG coexistence: coupling paths and scheduling windows

The primary hazard is that LED switching introduces noise into the EEG chain through shared rails, shared return paths, and near-field coupling. The fix is to treat coexistence as a system requirement.

  • Power coupling: LED current pulses cause rail droop and ripple; if EEG AFE shares that rail or poor decoupling exists, in-band noise rises.
  • Return-path coupling: LED current return flowing through AFE reference/AGND regions injects common-mode energy.
  • EM coupling: LED driver switching nodes routed near electrodes/reference or AFE inputs create spikes and texture.
  • Scheduling windows: schedule LED pulses in bounded windows, generate event markers, and flag EEG windows affected by LED activity.
Pass/fail coexistence: EEG noise floor and hum proxy should not rise significantly during LED pulses; if it does, improve partitioning and scheduling.

What to measure (minimal) to prove coexistence is healthy

Metric | How to compute | What it proves
LED on/off subtraction stability | Compare LED-on minus LED-off waveform energy over time windows | Optical chain integrity and ambient cancellation timing health
EEG noise rise during LED pulses | Compare EEG bandpower/noise proxy inside vs outside LED windows | Power/return/EM coupling from LED chain into EEG front-end
Event-aligned spikes | Check EEG spikes aligned to LED pulse markers | Direct coupling path (layout/return) rather than “random noise”
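The second metric (noise rise inside vs outside LED windows) reduces to one ratio per session. A minimal numpy sketch, assuming a boolean LED-activity mask aligned to the EEG samples:

```python
import numpy as np

def led_window_noise_rise_db(eeg, led_mask):
    """Ratio (dB) of EEG noise power inside vs outside LED windows.

    eeg      : 1-D samples (baseline-removed)
    led_mask : boolean array, True while the LED is active
    ~0 dB means healthy coexistence; a clear positive rise means the LED
    chain is coupling into the EEG front-end.
    """
    eeg = np.asarray(eeg, float)
    m = np.asarray(led_mask, bool)
    p_in = np.mean(eeg[m] ** 2)
    p_out = np.mean(eeg[~m] ** 2)
    return 10.0 * np.log10((p_in + 1e-24) / (p_out + 1e-24))
```

Because the scheduling controller already emits LED window markers, the mask comes for free from the event log.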
F6 — EEG + Optional PPG Coexistence (Power, Ground, Scheduling). [Two-path diagram: EEG path (electrodes → EEG AFE → ADC, with quality flags marking LED/motion windows) and optional PPG path (LED driver → LED + tissue → PD + TIA, with ambient-cancellation timing via LED on/off subtraction); the PMIC partitions a quiet AFE rail with its AGND island from a pulsed LED rail with a controlled return; a scheduling controller bounds LED windows and emits event markers that become EEG quality flags. Goal: isolated LED rail + controlled return + scheduled windows protect EEG cleanliness.]
Figure F6. Keep PPG bounded and coexistence-focused: separate rails/returns where possible, schedule LED pulses, and flag affected EEG windows.

H2-7. Timebase, Synchronization & Data Integrity (timestamps matter)

Objective: prevent “quiet failure” from drift and gaps

Sleep analytics can fail silently when timestamps drift, packets reorder, or sampling gaps are masked by retry logic. A reliable system treats time as an engineered asset: the analysis timeline should be anchored to a sample counter, while RTC provides coarse session alignment. Transport events (e.g., BLE connection timing) are not a substitute for a trustworthy sampling timebase.

Sample clock · ppm drift · Sample counter · Seq + CRC · Gap markers

Clock + timestamp policy (counter-first, RTC for alignment)

The sampling timeline should be defined by the device, not inferred from the phone. Use a monotonic sample counter (or frame counter) as the primary axis for analysis, then attach coarse wall time using RTC to segment a night and support session boundaries.

  • Sampling clock choices: ADC/AFE clock, MCU timer-derived clock, or external crystal-driven clock. The key is predictable rate and bounded drift over hours.
  • Drift mindset: ppm-level error accumulates across an entire night; treat drift per night as a tracked metric, not as an assumption.
  • Timestamp layering: (1) sample counter for per-sample spacing, (2) RTC for coarse absolute alignment, (3) transport timing only as a delivery marker.
Design rule: never allow transport timing jitter to redefine the sampling timeline. Transport can reorder/retry; counters expose it.
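The counter-first layering above can be sketched in a few lines; the 250 Hz nominal rate is an assumed, device-specific value, and the RTC span is whatever the session boundaries provide.

```python
NOMINAL_RATE_HZ = 250.0  # assumed EEG sampling rate; device-specific

def counter_to_seconds(sample_counter, rate_hz=NOMINAL_RATE_HZ):
    """Primary analysis axis: per-sample spacing from the monotonic counter."""
    return sample_counter / rate_hz

def drift_ppm(counter_span, rtc_span_s, rate_hz=NOMINAL_RATE_HZ):
    """Coarse per-night drift estimate: counter-implied time vs RTC span.

    Positive means the sample clock ran fast relative to the RTC reference.
    """
    implied_s = counter_span / rate_hz
    return (implied_s - rtc_span_s) / rtc_span_s * 1e6
```

Transport timestamps never enter either function, which is the point: delivery jitter cannot redefine the analysis timeline.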

Packetization strategy: seq + CRC + explicit gap markers

Packetization should make integrity observable. The system should detect missing, duplicate, and out-of-order payloads, and it should explicitly encode sampling discontinuities so the analytics stack does not “smooth over” real gaps.

  • Sequence counters: detect drop, reorder, and duplicate delivery at the packet level. Use wrap-safe arithmetic.
  • CRC: detect payload corruption in transport or storage. Track CRC-fail counts as a nightly metric.
  • Gap markers: insert an explicit gap event when sampling pauses (brownout, buffer overflow, sensor restart), including the counter range or duration.
Field | Purpose | Failure detected | Nightly metric
Sample counter | Primary analysis timeline | Hidden gaps, rate-drift symptoms | Missing sample rate, max gap
Sequence counter | Transport integrity | Drop/reorder/duplicate packets | Reorder count, duplicate count
CRC | Content integrity | Corrupt payload or storage | CRC-fail count
Gap marker | Explicit discontinuity | Sampling pause & cause tracking | Gap count, max gap duration
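One way such a frame could be laid out; the field widths (16-bit seq and length, CRC-32 over header plus payload) are assumptions for illustration, not a specified wire format.

```python
import struct
import zlib

SEQ_MOD = 1 << 16  # wrap-safe 16-bit sequence counter (assumed width)

def build_packet(seq, payload: bytes) -> bytes:
    """Frame = seq(2B) + len(2B) + payload + crc32(4B over header+payload)."""
    header = struct.pack("<HH", seq % SEQ_MOD, len(payload))
    body = header + payload
    return body + struct.pack("<I", zlib.crc32(body))

def parse_packet(frame: bytes):
    """Return (seq, payload), or None on CRC failure (count this nightly)."""
    body, (crc,) = frame[:-4], struct.unpack("<I", frame[-4:])
    if zlib.crc32(body) != crc:
        return None
    seq, length = struct.unpack("<HH", body[:4])
    return seq, body[4:4 + length]

def seq_delta(prev, cur):
    """Wrap-safe forward distance; 1 is in-order, >1 implies dropped frames."""
    return (cur - prev) % SEQ_MOD
```

`seq_delta` is the wrap-safe arithmetic the bullet list asks for: a delta of 3 across the 16-bit wrap still means two frames were lost, not sixty-five thousand.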

Local buffering vs streaming (what to log when the link drops)

Streaming alone can hide data loss behind reconnects and retries. Local buffering preserves continuity and allows integrity auditing even when the phone link is unstable.

  • Ring buffer concept: store frames + integrity fields into flash/FRAM with write pointer wrap. Track high-water marks and overwrite events.
  • Link drop behavior: continue sampling into the buffer; when the link recovers, transmit either buffered data or a summarized integrity report, depending on power/bandwidth constraints.
  • Minimum metadata to keep: max continuous gap, gap count, buffer overflow count, reset/brownout reason, and a timebase drift estimate per night.
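A toy in-RAM sketch of the ring-buffer telemetry idea; a real implementation writes frames to flash/FRAM and persists the counters, but the high-water and overwrite accounting looks the same.

```python
class FrameRing:
    """Fixed-capacity ring of frames with overwrite and high-water telemetry."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0          # next write slot
        self.count = 0         # frames currently held
        self.high_water = 0    # worst-case fill level seen
        self.overwrites = 0    # frames lost to wrap-around

    def push(self, frame):
        if self.count == len(self.buf):
            self.overwrites += 1        # oldest frame is silently replaced
        else:
            self.count += 1
        self.buf[self.head] = frame
        self.head = (self.head + 1) % len(self.buf)
        self.high_water = max(self.high_water, self.count)

    def drain(self):
        """Pop all frames oldest-first (e.g., after the BLE link recovers)."""
        start = (self.head - self.count) % len(self.buf)
        out = [self.buf[(start + i) % len(self.buf)] for i in range(self.count)]
        self.count = 0
        return out
```

`overwrites` and `high_water` are exactly the "minimum metadata" the list above asks to keep per night.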

Evidence-first: if analytics quality drops, integrity logs should explain whether the root cause is drift, gaps, or transport reorder—not guesswork.

Nightly integrity metrics (minimum required)

Convert integrity into measurable pass/fail indicators. These numbers should be computed every night and retained alongside the session summary.

  • Missing sample rate: missing samples / expected samples (from counter continuity)
  • Max continuous gap: worst discontinuity duration (counter delta to time)
  • Drift per night: relative drift estimate vs RTC/session boundary references
  • Reorder/duplicate counts: from sequence counter anomalies
  • CRC fail count: from payload verification
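The first two bullets can be computed directly from counter continuity. A sketch, assuming the received sample counters are sorted and the 250 Hz rate is the device's nominal value:

```python
def integrity_metrics(counters, rate_hz=250.0):
    """Nightly integrity numbers from the received sample counters.

    counters: sorted monotonic sample-counter values actually received.
    Returns missing-sample rate, max continuous gap (seconds), and gap count.
    """
    expected = counters[-1] - counters[0] + 1
    received = len(counters)
    gaps = []
    for prev, cur in zip(counters, counters[1:]):
        if cur - prev > 1:
            gaps.append(cur - prev - 1)   # samples missing in this hole
    return {
        "missing_sample_rate": (expected - received) / expected,
        "max_gap_s": (max(gaps) / rate_hz) if gaps else 0.0,
        "gap_count": len(gaps),
    }
```

Drift, reorder/duplicate, and CRC counts come from the timestamp model, sequence counter, and payload verification respectively, and are appended to the same nightly record.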
[Figure F7 diagram — Timebase → Timestamp → Packet Builder → BLE → Ring Buffer: the sampling clock and sample counter feed a timestamp layer; the packet builder adds a sequence counter, CRC, and explicit gap markers; BLE transport plus a local flash ring buffer survive link drops, and per-night integrity metrics are computed from counter continuity.]
Figure F7. Anchor analysis to a sample counter, layer RTC for alignment, and make integrity observable via seq/CRC/gap markers with a local ring buffer.

H2-8. ULP Power Tree & PMIC Strategy (night-long runtime, burst handling)

Objective: runtime is an architecture problem, not a parts list

Night-long operation depends on state architecture, rail partitioning, sequencing, and burst handling. A “low-power part” can still drain the battery if the system spends too much time in high-current states, or if burst events cause droop, brownouts, and data gaps.

Power states · Rail partition · Buck vs LDO · Burst handling · Current profiling

Power states: define and mark them (firmware markers)

Build the power tree around a state machine. Each state should be observable in logs and current traces using firmware markers, enabling fast correlation between energy events and data integrity issues.

  • Deep sleep: RTC alive, wake sources armed, minimal rails enabled.
  • Sensing-only: EEG AFE + ADC running, MCU/DSP steady, transport minimized.
  • BLE burst: packet transmit / retry, RF rail peak current, memory reads, scheduler activity.
  • Vib/buzzer event: short high-current pulse, potential ground/rail injection risk.
Evidence target: current profile should show stable plateaus per state, with bounded peaks during burst events.
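A minimal marker model for the four states above, assuming the device boots in deep sleep at t = 0; real firmware would emit these transitions into the session log so they can be overlaid on the shunt capture.

```python
import enum

class PowerState(enum.Enum):
    DEEP_SLEEP = 0
    SENSING = 1
    BLE_BURST = 2
    VIB_EVENT = 3

class PowerMarkers:
    """Log state transitions so current traces can be segmented per state."""

    def __init__(self):
        self.state = PowerState.DEEP_SLEEP   # assumed boot state
        self.log = []                        # (timestamp_s, new_state) pairs

    def transition(self, t_s, new_state):
        if new_state is not self.state:
            self.log.append((t_s, new_state))
            self.state = new_state

    def time_in_state(self, state, t_end_s):
        """Seconds spent in `state` up to t_end_s (for energy budgeting)."""
        total, t_prev, s_prev = 0.0, 0.0, PowerState.DEEP_SLEEP
        for t, s in self.log:
            if s_prev is state:
                total += t - t_prev
            t_prev, s_prev = t, s
        if s_prev is state:
            total += t_end_s - t_prev
        return total
```

With this in place, "stable plateaus per state" becomes checkable: segment the current trace by `log` and verify each segment's statistics.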

Rail partition: keep the AFE quiet while RF and haptics burst

Partition rails so that pulsed loads do not share sensitive reference paths with the EEG front-end. The goal is not maximum separation everywhere, but controlled return paths and predictable behavior under bursts.

  • AFE rail: prioritize low noise and stable reference; validate ripple and transient response.
  • Digital rail: efficiency-driven, but with controlled switching ripple and return routing.
  • RF rail: supports peak current; local decoupling and short loops matter.
  • Haptics rail: isolate pulsed return paths; avoid contaminating AFE ground island.

Cross-link: rail droop and brownouts translate directly into H2-7 gap markers and max-gap metrics.

PMIC strategy: buck + LDO where it matters (noise vs efficiency)

Use DC-DC conversion for efficiency where ripple tolerance exists, and use LDO (or a quiet supply strategy) where the EEG AFE requires isolation. Sequencing should ensure the AFE enters a stable operating region before bursty domains begin aggressive switching.

  • Buck domains: digital + RF rails with validated peak current support.
  • LDO domains: AFE rail, references, and any ultra-sensitive analog nodes.
  • Sequencing: avoid “RF starts first” scenarios; ensure AFE bias/reference stability before high-activity windows.
  • Brownout robustness: track UVLO/brownout counters and reset reasons; treat them as nightly health metrics.

Current profiling plan: shunt + coulomb counter + state markers

Current profiling should yield repeatable waveform archetypes for each state, plus rail noise checkpoints. Combine fast waveform capture (shunt + scope) with long-term energy accounting (coulomb counter/fuel gauge) and firmware markers so spikes can be attributed, not guessed.

  • Shunt + scope: capture burst peaks and inrush; verify the droop margin on RF and haptics.
  • Coulomb counter: estimate nightly energy per state; compare against expected duty cycles.
  • Markers: log transitions (deep sleep → sensing → BLE burst → vib) to overlay on current traces.
  • Noise points: measure ripple on AFE rail and reference nodes during worst-case bursts.
Pass/fail: peaks must not cause brownout counters to increment; AFE rail ripple must remain stable during RF/haptics bursts.
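Energy accounting from marker-segmented plateaus might look like the sketch below; the state names and currents are illustrative, and a real build cross-checks the total against the coulomb counter.

```python
def mah_per_state(segments):
    """Accumulate mAh per state from marker-segmented current plateaus.

    segments: (state_name, duration_s, avg_current_ma) tuples derived from
    the shunt capture overlaid with firmware state markers.
    Returns mAh per state plus a nightly total for fuel-gauge cross-checks.
    """
    totals = {}
    for state, dur_s, i_ma in segments:
        totals[state] = totals.get(state, 0.0) + i_ma * dur_s / 3600.0
    totals["total"] = sum(v for k, v in totals.items() if k != "total")
    return totals
```

A night that "should" last the battery but does not will show up here as one state consuming far more than its expected duty cycle predicts.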
[Figure F8 diagram — ULP Power Tree: battery and protection feed a PMIC with buck and LDO outputs; partitioned AFE, digital, RF, and haptics rails feed their loads; wake sources, probe points, and coupling-risk arrows from burst rails toward the AFE rail are shown.]
Figure F8. Partition rails around bursty loads, validate sequencing, and profile current with markers so brownouts and data gaps become explainable.

H2-9. BLE Link Strategy for Overnight Logging (robustness over peak throughput)

Objective: make BLE auditable, recoverable, and front-end friendly

Overnight sessions need predictable delivery and fast recovery, not peak throughput. Robustness means: the link exposes quality counters, reconnection behavior is deterministic, and RF bursts do not inject noise into sensitive analog domains. Transport success is measured by integrity metrics, not by RSSI alone.

Interval / Latency · PHY choice · PER / Retry · Reconnection windows · RF burst coexistence

Connection policy: stable rhythm beats raw bandwidth

Treat BLE as a scheduled delivery channel. Choose connection settings that keep energy and burst activity bounded, while maintaining a consistent reporting cadence that aligns with local buffering and integrity checks.

  • Connection interval: shorter intervals increase airtime and burst frequency; overnight logging often prefers a calmer cadence with periodic bursts.
  • Slave latency: skipping events can save power, but must not hide prolonged delivery stalls; use quality counters to detect “connected but not delivering.”
  • PHY choice: select for robustness vs airtime as a system tradeoff; verify using PER/retry counters rather than assumptions.
  • Application integrity: rely on seq/CRC/gap markers for truth; link-layer retries must not mask gaps in the analysis timeline.
Engineering rule: if a packet arrives late, integrity fields should still prove whether the sampling timeline remained continuous.

Reconnection playbook: advertise in windows, keep sampling locally

Disconnections should be treated as routine. The device continues sampling into local storage and uses windowed advertising to regain the link without draining the battery.

  • Detect: missing connection events and rising retry/PER counters trigger a “quality degraded” state.
  • Recover: use a fast reconnection window first, then shift to low-duty advertising windows to protect battery life.
  • Resume: after reconnect, transmit either buffered data or an integrity summary first (gap count/max gap/seq continuity), then backfill.
  • Never guess continuity: reconnection success does not imply data continuity; gaps must be explicit (H2-7).
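The detect/recover windowing above could be expressed as a simple policy function; the window length and advertising intervals are placeholder assumptions, not recommended values.

```python
def advertising_plan(t_since_drop_s, fast_window_s=30.0,
                     fast_interval_ms=50, slow_interval_ms=1000):
    """Windowed advertising policy after a link drop (parameters assumed).

    Fast advertising for an initial window to recover quickly, then fall
    back to low-duty advertising to protect the battery. Sampling continues
    into the local buffer regardless of link state.
    """
    if t_since_drop_s < fast_window_s:
        return {"mode": "fast", "interval_ms": fast_interval_ms}
    return {"mode": "low_power", "interval_ms": slow_interval_ms}
```

On reconnect, the integrity summary (gap count, max gap, seq continuity) is sent before any backfill, so the receiver never has to infer continuity from the reconnect itself.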

Evidence-first logging: record reconnect latency, retries, and buffer high-water marks so “why the night failed” is explainable.

RF burst coexistence: prevent ground bounce from contaminating EEG

BLE transmissions can create pulsed current demand on the RF rail and return paths. If analog reference nodes share these paths, microvolt-level EEG measurements can be contaminated. Coexistence must be engineered across placement, decoupling, and scheduling.

  • Placement: antenna keep-out near sensitive analog nodes; avoid routing RF return loops near AFE reference/inputs.
  • Decoupling: dedicate RF rail decoupling close to the radio; avoid sharing burst return paths with AFE ground islands.
  • Scheduling: when feasible, align transmit bursts with less sensitive analysis windows, while keeping sampling continuous.
Proof target: correlate RF burst markers with EEG noise flags to confirm coupling (or rule it out) using evidence, not theory.

Privacy (high-level): identity rotation without breaking integrity

Keep privacy mechanisms high-level and conservative. Identity rotation can reduce tracking risk, but it must not compromise session integrity, reconnection reliability, or the ability to audit nightly data continuity.

  • Rotate identifiers: limit persistent identity exposure while maintaining stable session accounting on-device.
  • Integrity first: seq/CRC/gap markers and quality counters remain the primary truth source.
Evidence | What it proves | How to measure | Output metric
Reconnection histogram | Recovery speed under real use | timestamp reconnection events | p50 / p95 / p99 reconnect time
PER / retry vs RSSI | Link robustness beyond RSSI | log counters with RSSI samples | retry rate, error bursts
RF burst ↔ EEG noise | Coupling / ground bounce presence | marker + AFE noise flag | noise rise during bursts
[Figure F9 diagram — BLE Robustness for Overnight Logging: link counters (PER/retry, RSSI/SNR, reconnect timers, seq/CRC/gaps) feed a quality monitor that selects connection policy and reconnection windows; antenna keep-out and RF burst arrows indicate coupling risk toward the analog rails.]
Figure F9. BLE robustness is a closed loop: counters → quality state → connection/reconnect policy, with RF coexistence engineered via keep-out and rail decoupling.

H2-10. Smart-Wake Algorithms (on-device boundary, features, confidence)

Objective: a quality-controlled decision pipeline, not a black box

Smart wake should be explained as an engineering pipeline: signals are converted into simple features, quality gates prevent decisions on noisy data, and a scheduler enforces a wake window with a hard deadline. The system must always fail safe to a normal alarm.

On-device boundary · Features · Quality gate · Wake window · Fail-safe

On-device boundary: what runs locally vs what is post-processed

The headband should make wake decisions locally, using features that are compute- and power-feasible. Post-processing can refine reporting, but the wake action must not depend on unpredictable phone availability.

  • Device-side: sampling → feature extraction → quality gating → lightweight classifier → wake scheduling → haptics.
  • Session logging: store reason codes and quality counters for explainability and validation.
  • Constraints: bounded CPU load, fixed memory, predictable latency, and minimal burst activity.
Design rule: a wake trigger must be reproducible from logged features and reason codes, not from opaque runtime behavior.

Engineering features: simple, stable, and testable

Features should be selected for robustness under real contact variability and limited compute. Keep the feature set small and interpretable so failures can be attributed to signal quality, not to unknown model behavior.

  • EEG: bandpower ratios, band energy stability, and coarse trend indicators over fixed windows.
  • Motion: movement index and event density (micro-movements vs large motion).
  • Optional PPG: HR trend and coarse variability trend, used only when optical quality flags are good.
Input | Feature family | When it fails | Guard flag
EEG | bandpower ratio / stability | contact noise, mains injection, saturation | contact score, saturation count
Accel | movement index / density | sensor bias drift, loose-strap motion | motion sanity checks
PPG (opt.) | HR trend (coarse) | optical coupling loss, LED interference windows | optical quality flag

Confidence gating: do not wake on questionable data

A quality gate should block smart wake decisions when signal integrity is compromised. Use quality inputs that are already engineered elsewhere: contact score, noise flags, saturation events, and gap markers.

  • Inputs to the gate: contact score, AFE saturation events, noise rise flags, gap markers (H2-7), and optional optical quality.
  • Outputs: allow smart-wake decision, block and wait, or force fallback to normal alarm scheduling.
  • Explainability: record the gate decision and the top blocking reason as a reason code.
Truth-table mindset: quality flags should deterministically control whether the classifier is allowed to influence wake time.
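The truth-table mindset can be made literal. The thresholds and reason-code strings below are illustrative assumptions; what matters is that the mapping from flags to decisions is fixed and loggable.

```python
def quality_gate(contact_score, saturation_events, noise_flag, gap_open,
                 contact_min=0.7, saturation_max=0):
    """Deterministic gate: returns (allow, reason_code).

    The first failing check becomes the logged "top blocking reason",
    so every blocked trigger is explainable from one field.
    """
    if gap_open:
        return False, "GAP_MARKER_ACTIVE"     # timeline not trustworthy (H2-7)
    if contact_score < contact_min:
        return False, "CONTACT_LOW"
    if saturation_events > saturation_max:
        return False, "AFE_SATURATION"
    if noise_flag:
        return False, "NOISE_RISE"
    return True, "OK"
```

Because the function is pure, any nightly decision can be replayed offline from the logged inputs, which is the reproducibility requirement stated above.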

Scheduler + fail-safe: windowed wake with a hard deadline

The wake scheduler enforces safety and predictability. It allows smart wake only inside a defined window, applies cooldown to avoid repeated triggers, and guarantees a hard deadline fallback to the normal alarm.

  • Wake window: only allow early wake decisions within a bounded time range.
  • Hard deadline: at the scheduled alarm time, wake must occur regardless of smart-wake confidence.
  • Cooldown: prevent repeated wake attempts from transient spikes or unstable contact.
  • Metrics: false-wake count, blocked-by-quality count, and reason-code distribution per night.
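A sketch of the window/deadline/cooldown ordering; all thresholds are placeholder values, and the returned strings stand in for logged reason codes.

```python
def wake_decision(now_s, alarm_s, window_s, confidence, last_trigger_s,
                  conf_min=0.8, cooldown_s=300.0):
    """Windowed smart wake with a hard deadline and cooldown (values assumed)."""
    if now_s >= alarm_s:
        return "WAKE_HARD_DEADLINE"      # alarm time: wake unconditionally
    if now_s < alarm_s - window_s:
        return "SLEEP_OUTSIDE_WINDOW"    # too early for any smart wake
    if last_trigger_s is not None and now_s - last_trigger_s < cooldown_s:
        return "BLOCKED_COOLDOWN"        # damp transient spikes
    if confidence >= conf_min:
        return "WAKE_SMART"
    return "WAIT_LOW_CONFIDENCE"
```

Note the deadline check runs first: no confidence value, gate state, or cooldown can delay the scheduled alarm, which is the fail-safe guarantee.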

Evidence target: false-wake rate should drop sharply as contact score improves; blocked triggers should be explainable by quality flags.

[Figure F10 diagram — Smart-Wake Decision Pipeline: EEG, accelerometer, and optional PPG feed feature extraction; a quality gate driven by contact/noise/gap flags feeds a lightweight classifier and a wake scheduler with a window and hard deadline, producing haptics output and logging reason codes for explainability.]
Figure F10. Smart wake is a quality-controlled pipeline with a scheduler window and a hard fail-safe deadline; logs make every trigger explainable.

H2-11. Validation Test Plan (bench → pilot users → field evidence)

Intent: produce proof, not opinions

Validation is treated as a closed loop: fixtures create controlled stress, the device logs markers/counters, metrics are computed from raw streams, and pass/fail gates generate an actionable result. Bench tests establish hardware limits; pilot sessions quantify real wear variables; field evidence packages make failures reproducible.

Bench: noise & CMRR · Bench: impedance & rails · Bench: ESD · Pilot: overnight completeness · Field: evidence package
Deliverables: Test matrix (cases × variables × outputs) + Metrics dashboard definition + Acceptance thresholds + Failure → root-cause mapping.

Bench (1) EEG noise floor & band integrity

Lock the baseline performance using consistent input conditions (impedance emulation + stable references). Measure in the analysis band with a long enough capture to expose 1/f and slow drift behavior.

  • Capture: long record with markers (start/stop, mode transitions, radio bursts, LED pulses if present).
  • Metrics: band-limited noise (0.5–40 Hz), mains peak magnitude (50/60 Hz), saturation event rate, recovery time after saturation.
  • Outputs: a single “noise report” per firmware build, directly comparable across revisions.
Example fixture parts (MPN): precision buffer op-amp TI OPA188 / ADI ADA4522-2; reference DAC ADI AD5686R or TI DAC8568; low-noise instrumentation amp TI INA333 / ADI AD8422.
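The two headline metrics (band-limited noise and the mains peak) can be computed from a capture. This is a bench sketch using a plain DFT so it stays self-contained; a real pipeline would use numpy/scipy with proper windowing and averaging.

```python
import math

def band_metrics(samples, fs, band=(0.5, 40.0), mains=(50.0, 60.0)):
    """Band-limited RMS and worst mains-bin magnitude via a plain DFT.

    samples: one capture segment; fs: sample rate in Hz. Returns the
    one-sided in-band RMS and the larger of the 50/60 Hz bin magnitudes.
    """
    n = len(samples)

    def bin_mag(k):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        return math.hypot(re, im) / n   # per-sample normalized amplitude / 2

    lo, hi = (int(round(f * n / fs)) for f in band)
    inband = [bin_mag(k) for k in range(max(lo, 1), hi + 1)]
    band_rms = math.sqrt(2 * sum(m * m for m in inband))  # fold one-sided power
    mains_peak = max(bin_mag(int(round(f * n / fs))) for f in mains)
    return band_rms, mains_peak
```

Run once per firmware build on the same fixture conditions and the two numbers become the revision-comparable "noise report" described above.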

Bench (2) CMRR injection & common-mode sensitivity

Prove (or rule out) “mains hum” by injecting controlled common-mode disturbance into the electrode-equivalent network. Effective CMRR should be derived from injected amplitude versus observed output response.

  • Injection: sweep frequency around 50/60 Hz and a few harmonics; vary amplitude in safe steps.
  • Readouts: output spectral peak, noise delta in-band, saturation counts, and any quality-flag changes.
  • Interpretation: if output hum scales with injection, prioritize CMRR/return-path/reference strategy; if not, suspect contact instability or rail coupling.
Example injection interface (MPN): analog switch matrix ADI ADG1606 (16:1) / TI TMUX1574 (4:1); isolation for safe stimulus routing ADI ADuM141E / TI ISO7741 (where isolation is required by the setup).
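The "injected amplitude versus observed output response" derivation reduces to one formula. The gain and voltages are whatever the fixture provides; this sketch just fixes the arithmetic and the sign convention.

```python
import math

def effective_cmrr_db(v_cm_in, v_out_diff, diff_gain):
    """Effective CMRR from a common-mode injection test.

    v_cm_in: injected common-mode amplitude at the electrode network (V),
    v_out_diff: observed differential output at that frequency (V),
    diff_gain: the channel's differential gain.
    CMRR = differential gain / common-mode-to-output gain.
    """
    a_cm = v_out_diff / v_cm_in          # common-mode-to-output gain
    return 20.0 * math.log10(diff_gain / a_cm)
```

Sweeping frequency and plotting this value shows whether rejection collapses precisely at mains harmonics, which points at the return-path/reference strategy rather than the amplifier datasheet number.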

Bench (3) Impedance emulation: contact drift, imbalance, and saturation

Convert “skin contact variability” into a reproducible bench variable. Use a resistance decade box or a programmable network to emulate electrode impedance magnitude, imbalance, and time-varying drift.

  • Variables: impedance level (kΩ to MΩ range), electrode mismatch, step changes, slow ramps (sweat/loosening proxy).
  • Checks: contact score sensitivity, bias-current induced drift, baseline wander, saturation probability under high impedance.
  • Outputs: contact-score vs ground-truth curve; saturation rate vs impedance curve.
Commercial fixture options (MPN examples): IET Labs HARS series resistance decade (e.g., HARS-X family) or IET Labs ohmSOURCE decade boxes; for fully programmable substitution, IET Labs PRS-330 programmable resistor is a common lab option.

Bench (4) Rail-noise injection & RF burst correlation

Quantify how supply ripple and burst current events translate into EEG noise and quality-flag behavior. The goal is a numeric transfer metric (rail disturbance → in-band noise delta), plus correlation evidence between RF markers and analog noise rise.

  • Rail injection: inject ripple/steps onto AFE rail and RF rail independently (same amplitude, separate paths).
  • Correlation: log firmware markers (TX start/stop) and compute EEG noise delta during RF windows.
  • Outputs: “noise delta during burst” and “rail-to-EEG sensitivity” metrics, tracked over time.
Example load-step / injection BOM (MPN): MOSFET for pulsed load Infineon BSC340N08NS3; gate driver TI TPS2829; current sense amplifier TI INA240A1; coulomb counter TI BQ27441-G1 or MAXIM MAX17055 for mAh/night cross-check.

Bench (5) ESD spot checks: “pass in lab, fail in field” prevention

ESD checks at key external touch points (electrode area, buttons, charge contacts, enclosure seams) should validate both survivability and graceful degradation (no latch-up, no silent data corruption, no stuck states).

  • Targets: reset reasons, brownout counters, AFE saturation behavior after discharge, BLE reconnection behavior after event.
  • Evidence: event timestamp + reset reason + integrity report (seq continuity, gap markers, quality counters).
ESD simulator examples (MPN): EM Test esd NX30 (up to 30 kV) or Teseq/Schaffner NSG 435 (up to 16.5 kV; legacy/discontinued in some markets). Use IEC 61000-4-2 compliant setups as applicable.

Pilot users: overnight completeness under real wear variables

Pilot validation converts real-world variability into annotated, analyzable data. Each overnight session should produce a standardized “session package” with synchronized markers, counters, and summary metrics.

  • Completeness: dropout %, max continuous gap, reconnection p50/p95/p99, buffer high-water mark.
  • Motion scenarios: side-sleep, roll-over, sit-up; compare artifact flags vs accelerometer correlation.
  • Sweat/temperature drift: correlate contact score drift and noise rise with temperature/skin coupling changes.
  • Smart-wake reliability: blocked-by-quality count, false-wake count, reason-code distribution.
Session package fields (must-have): session_id, firmware build, config hash, seq/CRC/gap summary, contact score trace, PER/retry trace, reset reasons, wake reason codes.

Metrics dashboard: field names that stay stable across revisions

Define a single dashboard schema and never rename metrics casually. Stable field names enable trend tracking across AFE/layout/firmware iterations.

Metric field | Definition | Where it comes from | Why it matters
EEG_noise_uVrms_0p5_40 | Band-limited noise in 0.5–40 Hz | bench + overnight segments | Sleep features collapse when noise rises silently
mains_peak_50_60 | Peak magnitude near 50/60 Hz | bench injection + field captures | CMRR/return-path issues show up as hum
dropout_percent | Missing samples / total expected | seq + gap markers | Analytics fail quietly when data holes accumulate
max_gap_s | Longest continuous missing interval | gap markers | Single long gaps break staging windows
drift_ppm_per_night | Timebase drift estimate per session | timestamp model | Timing drift corrupts alignment and trend features
battery_mAh_per_night | Total charge consumed overnight | coulomb counter + markers | Battery budget is architecture-driven
false_wake_rate | Unwanted early triggers per night | reason codes | Quality gating and scheduling must protect users
reconnect_p95_s | 95th-percentile reconnect time | link markers | Robustness depends on recovery speed
Example lab instrument MPNs (common choices): Keysight 33600A Series waveform generator (stimulus/injection); stable timebase option Keysight 33600U-OCX; resistance decade IET Labs HARS series; ESD gun EM Test esd NX30.

Test matrix + acceptance thresholds (spec-style gates)

Use a matrix so every “fix” maps to a test case. Gates should be separated into hard pass/fail thresholds, soft optimization targets, and alarm triggers that increase logging depth.

Case ID | Stress / variable | Measurement points | Output metrics | Gate style
B-01 | Noise floor (0.5–40 Hz) | AFE output + markers | EEG_noise_uVrms_0p5_40 | Hard + trend
B-02 | CM injection @ 50/60 Hz | inject node + output spectrum | mains_peak_50_60 | Hard
B-03 | Impedance ramp / steps | impedance truth + contact score | contact_score_trace, saturation rate | Hard + alarm
B-04 | Rail ripple / load steps | rail probe + EEG flags | noise delta during stress | Hard + trend
P-01 | Overnight base session | log package | dropout_percent, max_gap_s, battery_mAh_per_night | Hard
P-02 | Motion scenarios | accel + artifact flags | artifact correlation indicators | Soft + alarm
F-01 | Field failure reproduction | evidence package | root-cause mapping completeness | Hard
Acceptance thresholds (example starting points): define numeric limits for EEG noise, max_gap, reconnect_p95, and false_wake_rate as “Hard Gates”. Keep “Soft Targets” for PER/retry improvements and battery optimization. Add “Alarm Triggers” for sudden noise deltas, repeated resets, or gap bursts.
[Figure F11 diagram — Validation Flow: bench fixtures (impedance box, CM/signal injector, rail-noise/load-step rig) feed an instrumented headband DUT; logs and counters are converted into stable dashboard metrics and evaluated by hard/soft/alarm pass-fail gates, with feedback loops to root-cause mapping (H2-3/H2-4/H2-8/H2-9/H2-10) and test-matrix updates.]
Figure F11. A proof-oriented validation loop: controlled fixtures → instrumented DUT → evidence logs → stable metrics → gated decisions → root-cause mapping and re-test.


H2-12. FAQs ×12 (Evidence-linked)

Q1. EEG looks fine sitting still but collapses when turning over—contact or CMRR first?

Start with two traces: contact/impedance (or contact score) and the EEG output around the rollover window. If contact score jumps or impedance steps coincide with collapse, treat it as contact mechanics first. If contact remains stable but a strong 50/60 Hz component appears or grows, prioritize common-mode control/CMRR and return-path integrity.

Q2. 50/60 Hz hum suddenly appears after charging—rail noise coupling or impedance drift? What proves it?

Compare three items before/after charging: AFE rail ripple, the 50/60 Hz spectral peak, and electrode impedance/contact score. Rail coupling is proven when hum rise tracks charger connection and rail ripple increases during charge/plug events. Impedance drift is proven when hum correlates with slow impedance rise/instability even without rail ripple change.

Q3. Signal saturates periodically every few seconds—DRL loop instability or motion spikes?

Log saturation timestamps and compare them against accelerometer peaks and any DRL/CM control status flags. Motion spikes usually align with acceleration bursts and appear irregular. A loop stability issue tends to produce quasi-periodic behavior with weak accel correlation and consistent recovery patterns. Confirm by temporarily disabling DRL/CM loop (if supported) and checking whether periodicity disappears.

Q4. Data gaps only happen in the second half of the night—BLE link or flash buffering issue?

Use sequence counters and gap markers to separate “sampling stopped” from “transport/storage failed.” If seq numbers continue without gaps in local logs but the phone timeline shows holes, focus on BLE. If gaps appear in local records when the buffer nears high-water mark, focus on flash ring-buffer policy, wear leveling, or write-latency spikes. Verify with buffer level telemetry over time.

Q5. Battery says 30% but the device reboots during wake vibration—brownout or PMIC current limit?

Check reset reason (brownout/UVLO vs watchdog) and measure the battery/PMIC rail dip during the vibration pulse. Brownout is proven when rails cross UVLO thresholds exactly at the haptic burst. Current-limit issues show as a flat-topped rail collapse or repeated short pulses. Reproduce on bench with a controlled load step and compare to field counters.

Q6. Smart-wake triggers too early on some nights—feature drift or missing quality gate?

Inspect the trigger window with three signals: quality flags (contact/impedance, artifact score), extracted feature values, and the final decision reason code. A missing gate is proven when triggers occur while quality is degraded (contact score low or artifact flag high). Feature drift is proven when quality remains good but feature baselines shift systematically over the night, pushing thresholds early.

Q7. False wakes increase in winter—skin impedance change or strap mechanics? How to verify quickly?

Run a fast A/B check: keep the same user and alarm window, then change only strap tension and electrode cleaning/rehydration condition. If impedance/contact score improves and false wakes drop immediately, skin/impedance is the driver. If false wakes persist while impedance stays stable, suspect mechanics (slip/micro-motion) and validate by correlating wake triggers with accelerometer micro-bursts.

Q8. Adding PPG makes EEG noisier—LED driver EMI or shared ground return?

Toggle PPG LED pulses while keeping everything else constant and compute EEG noise delta in LED-on windows. EMI is proven when noise spikes time-align with LED edges even if rails are stable. Shared return is proven when AFE rail/ground noise rises during LED current pulses and the EEG baseline shifts accordingly. A scheduling fix is validated when LED windows avoid sensitive EEG epochs.
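The LED-on vs. LED-off noise delta described above is a simple masked comparison. A minimal sketch, assuming EEG samples and a per-sample LED-on mask aligned by the scheduler (names are illustrative):

```python
from statistics import pstdev

def led_noise_delta(eeg_uv, led_on_mask):
    """Noise (std dev) in LED-on windows minus LED-off windows."""
    on  = [x for x, m in zip(eeg_uv, led_on_mask) if m]
    off = [x for x, m in zip(eeg_uv, led_on_mask) if not m]
    return pstdev(on) - pstdev(off)  # positive => LED windows are noisier
```

A delta that stays positive with stable rails implicates radiated/edge coupling; a delta that tracks rail ripple implicates the shared return path.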

Q9. Two units show different sleep metrics on the same user—clock drift or gain calibration mismatch?

First compare timestamp drift per night and sequence continuity to confirm both devices maintain consistent sampling time. If one unit shows measurable drift or irregular gap patterns, suspect timebase and packetization. If time aligns but signal amplitude/noise statistics differ under similar contact, suspect gain/offset calibration mismatch or AFE configuration differences. Confirm with a known input or impedance-box session.
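Per-night timebase drift can be estimated directly from packet timestamps against the nominal sample rate. A sketch, assuming first/last timestamps from a trusted reference clock and a contiguous sample count (all names are illustrative):

```python
def drift_ppm(first_ts_s, last_ts_s, n_samples, fs_nominal_hz):
    """Timebase drift vs. the nominal sample rate, in parts per million."""
    expected_s = (n_samples - 1) / fs_nominal_hz  # span by the nominal clock
    actual_s = last_ts_s - first_ts_s             # span by the reference clock
    return (actual_s - expected_s) / expected_s * 1e6  # + => unit runs slow
```

Compare the two units' per-night ppm values: a material difference points at the timebase, matching values push the investigation toward gain/offset calibration.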

Q10. “Good contact” indicator is green but EEG still noisy—what two measurements reveal the truth?

Measure (1) EEG band-limited noise and mains peak magnitude, and (2) AFE rail ripple/ground noise during normal operation. If noise is dominated by 50/60 Hz despite “green contact,” the contact metric is likely blind to common-mode coupling. If EEG noise tracks rail ripple or RF bursts, the issue is power/return-path coupling rather than electrode impedance alone.
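Measurement (1), the mains peak magnitude, can be estimated without a full FFT using a single-bin Goertzel DFT. A minimal sketch with no DSP dependency (sample rate and target frequency are parameters; the window is assumed to contain an integer number of mains cycles for a clean bin):

```python
import math

def goertzel_mag(samples, fs_hz, f_target_hz):
    """Approximate peak amplitude of one frequency bin (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * f_target_hz / fs_hz)       # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
    return math.sqrt(max(power, 0.0)) * 2 / n  # |X[k]| scaled to amplitude
```

Trend this 50/60 Hz magnitude alongside the contact indicator: a large mains peak behind a green light is the proof that the contact metric is blind to common-mode coupling.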

Q11. BLE reconnects but data timeline has overlaps—sequence counter/CRC design mistake?

Check whether the sequence counter is strictly monotonic across reconnect boundaries and whether gap markers are emitted when buffers rewind or replay. Overlaps usually happen when the sender resends buffered frames without a clear “replay window” tag or when the receiver merges streams using timestamps that can repeat. Fix is proven when reconnect sessions show monotonic seq + non-overlapping time bins.
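The monotonicity-plus-replay-tag rule above can be checked mechanically on a captured frame stream. A sketch, assuming each frame carries a `seq` field and an optional `replay` flag for tagged resends (field names are illustrative, not a defined protocol):

```python
def find_overlaps(frames):
    """Sequence numbers that repeat after a reconnect without a replay tag."""
    last_seq, overlaps = None, []
    for f in frames:  # f: {"seq": int, "replay": bool (optional)}
        if last_seq is not None and f["seq"] <= last_seq and not f.get("replay"):
            overlaps.append(f["seq"])  # untagged resend => timeline overlap
        last_seq = f["seq"] if last_seq is None else max(last_seq, f["seq"])
    return overlaps
```

An empty result across reconnect sessions is the "monotonic seq + non-overlapping time bins" proof; any hit identifies the exact frame where the replay window was untagged.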

Q12. Nightly noise slowly rises over weeks—electrode aging, contamination, or bias/leakage?

Separate reversible contact effects from electrical leakage by running a controlled comparison: clean/replace electrodes and repeat the same bench impedance profile. If noise returns to baseline quickly after cleaning/replacement, contamination or electrode aging dominates. If noise remains elevated under known impedance, suspect bias/leakage paths or reference drift in the AFE domain. Confirm with a periodic “known input” validation case.
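The week-scale rise is easiest to quantify as a least-squares slope over nightly noise figures, computed before and after the cleaning/replacement step. A stdlib-only sketch (units and metric name are illustrative):

```python
def noise_trend_per_night(nightly_rms_uv):
    """Least-squares slope of nightly noise (units per night)."""
    n = len(nightly_rms_uv)
    mx = (n - 1) / 2                       # mean of night indices 0..n-1
    my = sum(nightly_rms_uv) / n
    num = sum((i - mx) * (y - my) for i, y in enumerate(nightly_rms_uv))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den
```

A slope that collapses to ~0 after cleaning points at contamination/aging; a slope that persists under known impedance points at bias/leakage or reference drift in the AFE domain.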

[Figure F12 — FAQ → Chapter Evidence Chain Map. Symptom boxes Q1–Q12 (turn-over drop; hum after charging; periodic saturation; gaps late in the night; reboot on vibration; early smart-wake; winter false wakes; PPG makes EEG noisy; two units disagree; green contact but noisy; overlap after reconnect; noise rises over weeks) map via arrows to the evidence chapters: Analog & Contact (H2-2 Interference, H2-3 EEG AFE, H2-4 Contact, H2-5 CM/Artifact), Timing & Integrity (H2-7 Seq/CRC/Time), Power & Coexistence (H2-6 PPG, H2-8 Power, H2-9 BLE Robustness), Decision & Proof (H2-10 Smart-wake Gate, H2-11 Validation Proof).]
Figure F12. Each FAQ symptom maps to the chapter blocks that provide measurable evidence (contact, AFE, interference, timing, power, BLE, smart-wake, validation).