
Smart Scale / Health Gadgets: Bridge AFE, BIA, BLE & Power


This page focuses on the device-side hardware and firmware interfaces that determine measurement integrity (load-cell bridge and consumer BIA), battery life, BLE data reliability, and display/power interaction. It is written as an evidence-first engineering guide.


Quick Answer

Definition

A smart scale / health gadget is a low-power measurement system that converts (1) a Wheatstone-bridge load-cell signal and optionally (2) consumer bio-impedance into stable, repeatable digital results, while maintaining reliable BLE transfer and predictable battery behavior.

Signal Chain in Four Steps
  • Bridge excitation → differential µV/mV signal → bridge AFE/PGA → ADC filtering
  • BIA injection → sense amp → synchronous demod (I/Q) → magnitude/phase + quality score
  • MCU/BLE schedules a quiet measurement window → packages data with timestamps + flags
  • PMIC enforces power states; display refresh is separated from measurement
4 Common Failure Modes (Device-Side)
  • Weight drift: mechanical creep / AFE drift / noisy window coupling
  • BIA jumps: poor electrode contact / phase instability / demod misalignment
  • Random resets: UVLO + pulsed loads (BLE TX / display / DC-DC transitions)
  • Missing records: non-atomic data packets / retry gaps / flash-write issues

H2-1 — Page Promise, Boundaries, and “What Good Looks Like”

Goal: make the page instantly answer-like for engineers and sourcing by defining measurable outcomes, hard boundaries, and the minimum evidence set required to debug accuracy, stability, low-power behavior, and BLE data integrity.

One-sentence promise (engineering-grade)

Design and validate a smart scale / consumer BIA gadget that stays accurate and stable across temperature and time, achieves predictable battery life, and remains debuggable in the field using a small, repeatable evidence set.

Hard boundaries (in-scope vs out-of-scope)

In-scope: bridge/BIA measurement chains, device-side BLE reliability, power states/rails, display interfaces, validation and field evidence.
Out-of-scope: medical diagnosis claims, cloud/app architecture, and unrelated consumer devices (watch/band/CGM/ECG/PPG).

| Quality Goal | What to Measure | Pass/Fail Evidence (Device-Side) | Typical Root-Cause Buckets |
| --- | --- | --- | --- |
| Weight repeatability (same user, same condition) | Repeatability, settling time, short-term noise | AFE output noise stats + measurement window timing | AFE noise / 1/f, DC-DC ripple coupling, display refresh coupling |
| Stability over time (standing still / long dwell) | Drift over 10–60 s, creep/hysteresis signature | Time-series drift + temperature tag + mechanical test record | Mechanical creep/hysteresis, AFE drift, excitation/reference drift |
| BIA robustness (consumer contact conditions) | Magnitude/phase stability + contact pass rate | Quality score, electrode open/contact flags, I/Q sanity checks | Contact impedance variability, phase timing error, demod alignment |
| Battery predictability (no sudden shutdown) | UVLO margin, droop during bursts, average current | Rail min voltage, reset reason, mode durations | UVLO threshold, inrush/load step, rail gating sequence |
| BLE data integrity (no missing/duplicate records) | Atomic record upload, retry behavior, counters | Sequence counter, timestamp, error codes, flash-write status | Non-atomic packet design, reconnect logic gaps, flash endurance |

Minimum evidence set (the first things to collect)

A small, repeatable evidence set prevents “guessing”: rail-min voltage during bursts, AFE ripple/noise snapshot, BIA quality score + phase sanity, mode timeline (sleep/measure/sync/display), and record counters (sequence + timestamp).
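The minimum evidence set above can be captured as one compact record per measurement. A minimal Python sketch, with illustrative field names (rail_min_mv, bia_quality, and friends are assumptions, not a fixed schema):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EvidenceSnapshot:
    """Minimal per-measurement evidence set (field names illustrative)."""
    seq: int                 # monotonic record counter
    t0_ms: int               # measurement window start timestamp
    rail_min_mv: int         # lowest rail voltage seen during bursts
    afe_noise_uv_rms: float  # short-window AFE noise snapshot
    bia_quality: float       # 0..1 contact/phase quality score
    mode_timeline: List[Tuple[str, int]] = field(default_factory=list)  # (mode, duration_ms)

def has_gap(records):
    """Return the sequence numbers missing between consecutive records."""
    seqs = sorted(r.seq for r in records)
    return [s for a, b in zip(seqs, seqs[1:]) for s in range(a + 1, b)]
```

A gap scan over `seq` is the quickest "missing records" triage: any non-empty result points at retry gaps or flash-write issues before anyone blames the app.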

  • Measurable outcomes are stated before implementation details (prevents feature creep).
  • Boundaries explicitly block medical claims and cloud/app expansions (prevents cross-page overlap).
  • Evidence-first mindset is enforced: each symptom must map to capture-able data (waveform/log).
Figure (H2-1): Outcomes are defined in measurable terms and backed by a minimal evidence set (noise/ripple, temperature tags, mode timeline, counters) to keep field debugging deterministic.

H2-2 — System Architecture at a Glance (Signal Chain + Power States)

Purpose: provide a single mental model that explains why smart scales fail in real life: ultra-small analog signals, human-contact variability (BIA), and pulsed digital loads (BLE/display/DC-DC) competing for the same power and ground environment.

Four rails/blocks (and what each one is sensitive to)

Weight chain (load cells → bridge AFE/ADC): sensitive to ripple, ground bounce, and long settling.
BIA chain (electrodes → injection/sense → I/Q demod): sensitive to contact impedance, phase alignment, and coherent timing.
Compute/Comms (MCU/BLE SoC): drives bursts (TX, flash writes) that can corrupt measurement if not scheduled.
Power + Display (battery/PMIC/rails + display): the main source of droop and switching noise; must support quiet windows.

Quiet measurement window (the practical integrity rule)

Measurement integrity improves dramatically when weight/BIA sampling is executed inside a quiet window: BLE TX and display refresh are deferred, and any rail transitions (load-switch or DC-DC mode changes) are avoided until after sampling completes.
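As a policy sketch, the quiet window can be enforced by deferring any BLE TX, display refresh, or rail transition that would land inside the sampling window (timestamps in milliseconds; the event kinds are illustrative names, not a real scheduler API):

```python
# Pulsed-load events that must stay out of the quiet measurement window.
DEFERRABLE = {"ble_tx", "display_refresh", "rail_transition"}

def schedule(events, window):
    """Defer pulsed-load events that would land inside the quiet window.

    events: list of (t_ms, kind) tuples; window: (start_ms, end_ms).
    Deferrable events inside [start, end) are moved to the window end;
    everything else keeps its requested time.
    """
    start, end = window
    out = []
    for t, kind in events:
        if kind in DEFERRABLE and start <= t < end:
            t = end  # resume pulsed loads only after sampling completes
        out.append((t, kind))
    out.sort()
    return out
```

The same rule generalizes to real firmware: the measurement task owns the window, and every burst source checks against it before firing.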

| Mode | Active Blocks | Peak-Current Drivers | Measurement Risk | Evidence to Capture First |
| --- | --- | --- | --- | --- |
| Deep Sleep | AON rail only (RTC / wake logic) | Almost none | None (baseline) | Sleep current, wake reason |
| Advertising | BLE RF + MCU | Duty-cycled TX bursts | Ground bounce / droop if sharing rails with analog | Rail-min voltage, TX timing vs sampling |
| Measurement Burst | Analog AFE + ADC (+ BIA injection/demod) | AFE + injection current | Noise/ripple shows directly as jitter/drift | AFE noise snapshot, ripple amplitude, window boundaries |
| Sync/Upload | MCU + BLE + storage writes | TX bursts + flash writes | Can corrupt the next measurement if overlapped | Sequence counter, retry stats, flash write status |
| Display Update | Display driver + MCU | Panel surge / refresh pulses | Couples into analog rails; schedule outside quiet window | Display update timestamp vs measurement; rail ripple |

  • Interfaces are explicit: I²C/SPI/GPIO/ADC clocks are planned at block level to prevent “integration surprises.”
  • Coherent timing is protected: BIA demod is treated as a timing problem (phase alignment + stable sampling relationship).
  • States are debuggable: every mode includes “what runs,” “what spikes current,” and “what evidence proves the cause.”
Figure (H2-2): The architecture is built around power states and a quiet measurement window. Measurement blocks are scheduled to avoid BLE TX, display refresh, and rail transitions that cause ripple/ground bounce.

H2-3 — Weight Measurement Chain (Load Cell → Wheatstone Bridge → Bridge AFE/ADC)

Vertical depth focus: accuracy physics + AFE realities. The goal is not a generic sensor introduction; it is a practical error-budget view of why a smart scale drifts, jitters, or disagrees between corners—and how to design measurement windows that remain stable in real power/noise environments.

Bridge basics that matter (µV–mV reality)

The bridge output is a tiny differential signal riding on a common-mode level set by excitation and loading. Any excitation ripple, ground bounce, leakage, or input bias currents can translate into apparent weight changes. The measurement chain must be designed around this reality.

AFE architecture options (choose by error distribution)

Instrumentation amplifier + ADC provides flexibility but shifts more responsibility to reference integrity, layout, and filtering.
Dedicated bridge ADC (ΣΔ + PGA) often provides strong 50/60 Hz rejection and a stable digital filter, but introduces latency and requires disciplined timing and windowing.
Ratiometric designs reduce sensitivity to excitation drift by tying the ADC reference to excitation, but only if the full path avoids extra drift/leakage mechanisms.
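A toy numeric model shows why the ratiometric option cancels excitation drift: with the ADC reference tied to VEX, the output code depends only on the bridge ratio. The sensitivity and scale values below are arbitrary illustrations, not from a specific part:

```python
def bridge_code(vex_mv, sens_mv_per_v, load_frac, vref_mv, full_scale=1 << 23):
    """Ideal ADC output code for a bridge channel (illustrative model).

    Bridge output voltage = VEX * sensitivity * load fraction; the ADC
    digitizes it against vref_mv across full_scale codes.
    """
    v_out_mv = (vex_mv / 1000.0) * sens_mv_per_v * load_frac
    return v_out_mv / vref_mv * full_scale

# Ratiometric: vref follows VEX, so a 5 % excitation drift cancels.
nominal     = bridge_code(3300, 2.0, 0.5, vref_mv=3300)
ratiometric = bridge_code(3465, 2.0, 0.5, vref_mv=3465)
# Absolute reference: the same drift appears as a 5 % gain error.
absolute    = bridge_code(3465, 2.0, 0.5, vref_mv=3300)
```

The caveat from the text still applies: the cancellation only holds if the reference path tracks excitation without adding its own drift or leakage.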

| Error Source | How It Shows Up | How to Verify (Evidence) | Typical Mitigation | Calibratable? |
| --- | --- | --- | --- | --- |
| Offset + drift (AFE + ADC + leakage) | Slow drift with time/temperature; unstable zero | Zero-load time series + temperature tag; short-window AFE input snapshot | Low-drift AFE/ADC, guarding/clean layout, stable excitation/reference | Partly |
| Gain error + gain drift | Scale reads consistently high/low; span changes with temperature | Known load points across temperature; compare ratiometric vs absolute paths | Ratiometric reference, stable PGA/gain path, temperature compensation table | Yes |
| Noise density (wideband) | Jitter; poor repeatability | RMS noise over a fixed window; PSD around the measurement bandwidth | Bandwidth control, averaging, low-noise front end, supply filtering | No |
| 1/f noise (low-frequency) | Slow wander; long-settling "floaty" readings | Variance vs averaging length; low-frequency PSD / Allan-like behavior | Choose low-1/f AFE, reduce impedance/leakage, stabilize temperature gradients | No |
| Input bias currents | Apparent offset, especially with high source impedance/leakage paths | Offset shift with humidity/contamination; compare guarded vs unguarded nodes | Guard rings, clean/short inputs, low-IBIAS AFE, input protection strategy | No |
| Reference/excitation drift (non-ratiometric paths) | Weight changes correlated with VEX or reference movement | Log VEX/reference; correlate with reading drift | Ratiometric architecture; low-noise excitation; correct routing/decoupling | Partly |
| Creep/hysteresis (mechanical) | Time-dependent drift under constant load; up/down mismatch | Time series under constant load; load/unload loops | Mechanical design + compensation model; defined settling time before reporting | Partly |
| Corner load (mechanical) | Same load reads differently by placement | Corner matrix test; compare per-foot sensitivity | Mechanical symmetry; multi-point calibration; mapping compensation | Yes |
| Temperature gradients | Drift during warm-up or ambient change | Thermal sweep + multiple sensor tags; correlate drift vs gradient | Temperature sensing + compensation; keep heat sources away from AFE/bridge nodes | Yes |

Design decisions that determine stability

Excitation stability and bandwidth planning define the floor for repeatability. Scheduling (quiet window) defines whether that floor survives real pulsed loads (BLE TX, display refresh, DC-DC transitions). Shield/guard strategy determines whether leakage and humidity become “hidden drift sources.”

Design Checks

  • A quiet measurement window is defined and enforced: no BLE TX, no display refresh, no rail transitions during sampling.
  • Error budget is explicit: each major term has a verification method and a mitigation action.
  • Mechanical vs electrical signatures are separable using time-series evidence, temperature tags, and corner matrix tests.
Figure (H2-3): Treat the bridge chain as an error-budget system. Ripple, drift, 1/f noise, leakage, and ground bounce are “injection paths” that must be measured and controlled.

H2-4 — Bio-Impedance (BIA) Chain (Electrodes → Injection → Sense → Demod → Magnitude/Phase)

Vertical depth focus: why consumer BIA fails and how to make it robust—without medical claims. BIA is treated as a coherent measurement chain: contact quality, phase/timing alignment, and repeatability gating determine whether results are trustworthy.

Block explanation (device-side, measurement-only)

A current source injects a controlled excitation through electrodes; a sense amplifier captures the response; synchronous demodulation (I/Q) extracts magnitude and phase; and an ADC converts the demodulated signals. The chain behaves like a timing-and-phase system—not a generic ADC sample.
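The I/Q step above can be sketched in a few lines: multiply the sensed waveform by in-phase and quadrature references at the excitation frequency and average over the window, which acts as the low-pass filter. This is a minimal model, ignoring ADC quantization and front-end filtering:

```python
import math

def iq_demod(samples, fs_hz, f_exc_hz):
    """Synchronous I/Q demodulation of the sensed excitation response.

    Multiplies the sampled waveform by in-phase (cos) and quadrature
    (sin) references at the excitation frequency and averages over the
    window (the averaging is the low-pass filter). Returns
    (magnitude, phase_rad), where phase is the lag of the sensed signal
    relative to the excitation reference. Use an integer number of
    excitation periods per window for clean rejection.
    """
    n = len(samples)
    i_acc = q_acc = 0.0
    for k, x in enumerate(samples):
        w = 2.0 * math.pi * f_exc_hz * k / fs_hz
        i_acc += x * math.cos(w)
        q_acc += x * math.sin(w)
    i_out, q_out = 2.0 * i_acc / n, 2.0 * q_acc / n
    return math.hypot(i_out, q_out), math.atan2(q_out, i_out)
```

Why this is a timing problem: any skew between the reference phase and the actual injection directly shifts the recovered phase, which is exactly the "demod misalignment" failure mode.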

| Pain Point | Typical Symptom | Evidence Hook (Device-Side) | Robustness Tactic |
| --- | --- | --- | --- |
| Contact impedance variability (dry skin / pressure / sweat) | Large jumps between consecutive readings; unstable magnitude | Contact-detect flag; quality score distribution; retry count | Contact gate + quality metric; accept/reject; controlled retry/backoff |
| Electrode polarization | Slow biasing; phase shifts over short time; warm-up anomalies | Phase drift vs time at fixed posture; I/Q imbalance checks | Stabilization delay; phase calibration; repeated-burst consistency check |
| Motion artifacts | Bursty outliers correlated with movement; repeatability collapses | Short-window variance spike; outlier signature in magnitude/phase | Repeatability scoring; reject outliers; prompt a re-measure window |
| Phase/timing error (analog phase shift / skew / jitter) | Phase noise; frequency-dependent discontinuities; unit-to-unit mismatch | I/Q orthogonality test; phase stability vs temperature; clock tag | Coherent timing discipline; calibration with known R/C networks |

Parameter planning (engineering tradeoffs)

Excitation frequency and amplitude are selected to balance SNR, contact sensitivity, and user comfort. Calibration must cover both magnitude and phase, and the measurement window must avoid rail transitions and pulsed digital activity that inject phase error or demod instability.

Robustness Rules

  • Quality gating is mandatory: weak contact or unstable phase is rejected instead of being reported as a “confident number.”
  • Repeatability is measured: consecutive bursts must agree within a defined tolerance before acceptance.
  • Calibration is phase-aware: known impedance fixtures (R/C) verify magnitude and phase continuity across units.
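The repeatability rule can be sketched as a gate over consecutive bursts, assuming magnitudes and phases are already demodulated. The tolerance values are placeholders for calibrated thresholds (contact_thr), not recommendations:

```python
def gate_bursts(mags, phases, mag_tol_pct=2.0, phase_tol_rad=0.05):
    """Accept a BIA reading only if consecutive bursts agree.

    Returns ("accept", mean_magnitude) when the spread across bursts is
    within tolerance, otherwise ("retry", None). Thresholds are
    illustrative and would come from contact-quality calibration.
    """
    if len(mags) < 2:
        return ("retry", None)       # one burst is never "repeatable"
    m_lo, m_hi = min(mags), max(mags)
    spread_pct = (m_hi - m_lo) / m_lo * 100.0
    phase_spread = max(phases) - min(phases)
    if spread_pct <= mag_tol_pct and phase_spread <= phase_tol_rad:
        return ("accept", sum(mags) / len(mags))
    return ("retry", None)
```

A real gate would also consult the contact-detect flag and cap retries before escalating to a reject with a logged reason.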
Figure (H2-4): Consumer BIA becomes robust when it is treated as a gated measurement system. Contact and phase stability determine accept/retry/reject rather than reporting unstable values.

H2-5 — Compute + BLE SoC Partitioning (Data Integrity First, Not App Features)

Boundary: device-side only. The objective is to deliver measurement records that remain traceable, verifiable, and replay-safe across resets, packet loss, and reconnections—without producing ghost measurements or silent gaps.

Partition map (who owns correctness)

The partition is defined by data ownership, not by features. If the AFE integrates a digital interface, it should expose status/fault bits and stable sample framing; the MCU/SoC owns the state machine, metadata stamping, and atomic persistence; the BLE stack owns transport, ACK, and resume logic.

  • AFE/ADC: samples + status
  • MCU/Core: state machine + record builder
  • NVM: atomic commit + replay safety
  • BLE: ACK by seq + resume
  • Logs: reset/brownout/quality
| Field | Purpose (Integrity) | Where It Is Set | Validation / Debug Use |
| --- | --- | --- | --- |
| seq (monotonic) | Detect duplicates, gaps, and out-of-order delivery | MCU/Core | Gap scan; "no ghost measurements" proof |
| t0 + sample_count | Reconstruct the measurement window and sampling density | MCU/Core (from AFE framing) | Verify windowing; correlate with rail events |
| cal_ver (or coeff hash) | Trace results to the exact calibration set | MCU/Core (factory / field) | Unit-to-unit variance; recall root-cause |
| temp_tag | Make drift interpretable and comparable across sessions | MCU/Core | Thermal correlation; compensation auditing |
| quality_flags | Declare whether the record is valid (BIA contact/phase gate, weight window integrity) | MCU/Core (from AFE + algorithms) | Reject reasons; repeatability scoring |
| reset_epoch | Prevent silent continuity across resets | MCU/Core | Explain missing streaks; isolate reboot-induced artifacts |
| rail_min_mv (optional) | Prove or refute power integrity as the root cause | PMIC/ADC + MCU/Core | "Dies at 30%" diagnosis; sync reset correlation |
| error_code (optional) | Make failures actionable, not anecdotal | AFE + MCU/Core | AFE fault isolation; RMA triage |

Atomic upload rules (no ghost measurements)

Upload is performed per record. Each record is segmented (if needed) but only considered delivered when the receiver ACKs the record’s seq. After reconnection, the device resumes from last_acked_seq + 1. Partial segments never become “valid data.”
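The ACK-by-seq and resume behavior can be sketched as a small uplink model (simplified: real firmware would persist last_acked_seq and the record ring buffer in NVM so the invariant survives resets):

```python
class RecordUplink:
    """Per-record upload with ACK-by-seq and replay-safe resume (sketch)."""

    def __init__(self, records):
        # records: dict of seq -> payload, each committed atomically
        self.records = dict(records)
        self.last_acked_seq = 0

    def pending(self):
        """Records still owed to the receiver, in sequence order."""
        return sorted(s for s in self.records if s > self.last_acked_seq)

    def on_ack(self, seq):
        # Advance only on the next expected seq; out-of-order ACKs are
        # ignored, so a reconnect simply resumes from last_acked_seq + 1.
        if seq == self.last_acked_seq + 1:
            self.last_acked_seq = seq
```

Because delivery is only recognized in sequence, a dropped ACK or reconnection can cause a re-send but never a gap, and partial segments never count as data.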

Debug Hooks

  • Ring buffer logs: reset reason, brownout/UVLO events, AFE fault flags, BIA quality score / reject reason.
  • Transport counters: reconnect count, record retry count, buffer watermarks, last_acked_seq snapshots.
  • Integrity audit: seq continuity check and “gap marker” behavior is defined and testable.
Figure (H2-5): Define a minimal record schema, commit records atomically, and acknowledge delivery by sequence number. Resumption after reconnection uses last_acked_seq to prevent duplicates and ghost records.

H2-6 — Power Tree + Ultra-Low Power Strategy (Battery → PMIC → Rails → State Machine)

Core: battery life + measurement integrity. The power tree is treated as a system that must support pulsed loads (BLE + display) while preserving quiet windows for precision measurement. Failures such as “dies at 30%” or “resets during sync” are mapped to rail droop, UVLO thresholds, and inrush.

Power domains (design for isolation + scheduling)

A practical partition uses four domains: AON (wake/RTC), ANALOG (bridge/BIA), RF (BLE bursts), and DISPLAY (refresh). Domain control enables both low power and reduced noise injection into sensitive analog paths.

| State | What Is On | Peak Load Risk | Measurement Integrity Rule | Evidence to Log |
| --- | --- | --- | --- | --- |
| Deep Sleep | AON only | None | Define wake criteria (load / timer) | Sleep entry/exit count |
| Wake-on-load | AON + minimal core | Inrush if rails are gated | Staged rail enable; settle before sampling | Wake timeline, rail_min_mv |
| BLE Advertise | RF bursts + core | Pulsed current, ground bounce | Do not overlap with analog sampling windows | TX burst count, reconnect stats |
| Measurement Burst | ANALOG + core | DC-DC mode-change noise | Lock or schedule mode changes outside the window | Quality flags, rail_min_mv |
| Display Refresh | DISPLAY + core | Inrush current, rail ripple | Refresh after sampling; avoid rails shared with analog | Refresh timestamp, rail events |
| Sync/Upload | RF + NVM writes | Peak stacking (RF + flash) | Sequence the upload; throttle; separate from display | Reset reason, last_acked_seq |

PMIC architecture choices (tradeoffs that matter)

Buck/boost improves efficiency but can inject switching ripple or create mode-change events; LDO-only can be quiet but may lose efficiency and margin under pulsed loads. Load switches enable aggressive gating but require disciplined soft-start, sequencing, and inrush control to avoid resets and drift.

Battery measurement (why “dies at 30%” happens)

Voltage-only estimation can be misleading when internal resistance rises (cold, aging, high pulse loads). A fuel gauge improves predictability, but still requires correct sampling windows and filtering. Logging rail minimum during RF and display events quickly separates SOC mis-estimation from true UVLO margin issues.
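The arithmetic behind "dies at 30%" is just Ohm's law across the cell's internal resistance. The numbers below are illustrative, not from a specific cell or datasheet:

```python
def loaded_voltage_mv(ocv_mv, r_int_mohm, pulse_ma):
    """Terminal voltage during a current pulse: V = OCV - I * R_internal."""
    return ocv_mv - pulse_ma * r_int_mohm / 1000.0

def trips_uvlo(ocv_mv, r_int_mohm, pulse_ma, uvlo_mv):
    """True if the pulse drags the rail below the UVLO threshold."""
    return loaded_voltage_mv(ocv_mv, r_int_mohm, pulse_ma) < uvlo_mv

# At rest the gauge reads 3.35 V, which a voltage-only estimate maps to
# roughly mid/low SOC. A cold, aged cell at 800 mOhm sags 200 mV under a
# 250 mA TX + flash burst and crosses a 3.2 V UVLO: "dies at 30 %".
```

Logging rail_min_mv during the burst, alongside the resting voltage, is what separates SOC mis-estimation from a genuine UVLO margin problem.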

Common Failures → Root Cause Mapping

  • “Dies at 30%” → battery sag under pulses, conservative UVLO, or measurement-point mismatch; validate with rail_min_mv + reset reason.
  • “Resets during sync” → RF TX + NVM write + display stacking; fix by time-slicing and limiting peak concurrency.
  • “Slow wake” → overly conservative sequencing or unstable PMIC mode; validate with wake timeline and rail settle markers.
Figure (H2-6): Partition rails by domain, then time-slice high-current events away from analog sampling windows. Log rail minimum and reset reasons to close the loop on “dies at 30%” and “resets during sync.”

H2-7 — Display & UI Hardware Interfaces (Segment LCD / E-ink / Small OLED)

Boundary: hardware interface and power only. The display is treated as a pulsed load and a noise/bus contender. The objective is to keep weight and BIA measurement windows quiet while maintaining predictable refresh behavior.

Display options vs power (what matters for measurement integrity)

Segment LCD offers ultra-low average power; E-ink concentrates energy into burst updates; small OLED tends to raise average power and refresh activity. These differences translate directly into rail droop risk, inrush events, and noise coupling into bridge/BIA paths.

| Display Type | Power Profile | Typical Driver/Interface | Main Hardware Risk | Best Practice |
| --- | --- | --- | --- | --- |
| Segment LCD | Very low average; steady drive bias | Segment driver (I²C/SPI or dedicated) | Charge-pump ripple; shared return paths | Separate analog return; schedule bus use outside measurement |
| E-ink | Near-zero idle; burst updates (high peak) | SPI/I²C driver with update waveforms | Long burst + peak stacking (inrush + ripple) | Update only outside quiet windows; add settle time before sampling |
| Small OLED | Higher average; frequent refresh activity | SPI/I²C display driver | Continuous load; ripple correlated with jitter | Dedicated display rail/filtering; strict time-slicing with analog |

Interfaces and bus contention (I²C/SPI sharing rules)

When the display shares buses with AFEs or sensors, bus occupancy and digital switching can leak into sensitive nodes. A robust design enforces a no-display-transaction policy during measurement windows and uses explicit arbitration (priority + “quiet window” locks) to prevent collisions and latency spikes.

Scheduling Rules

  • Quiet window: no display refresh and no display bus transactions during weight/BIA sampling.
  • Burst separation: avoid stacking display refresh with BLE TX and NVM writes (peak stacking → resets).
  • Post-refresh settle: enforce a short settle interval before analog sampling resumes.
  • Evidence tags: timestamp each refresh event to correlate with weight jitter or BIA quality drops.
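The quiet-window lock in the first rule can be sketched as a guard the display driver must consult before any bus transaction. This is a policy sketch: a `threading.Event` stands in for an RTOS flag/mutex, and `try_display_txn` is an invented illustrative name:

```python
import threading

class QuietWindow:
    """Arbitration guard: display bus transactions are refused while an
    analog sampling window holds the lock (policy sketch)."""

    def __init__(self):
        self._measuring = threading.Event()

    def __enter__(self):
        # Entered by the measurement task at the start of sampling.
        self._measuring.set()
        return self

    def __exit__(self, *exc):
        self._measuring.clear()

    def try_display_txn(self):
        """Display driver calls this before any I2C/SPI transaction.

        False means 'defer the refresh until the window closes'.
        """
        return not self._measuring.is_set()
```

The same guard can also gate NVM writes and BLE TX, which covers the burst-separation rule with one mechanism.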
Figure (H2-7): Treat the display as a pulsed load and a bus contender. Enforce “no refresh / no bus traffic” during analog sampling windows, and time-slice bursts to prevent jitter and resets.

H2-8 — Noise, EMI/ESD, and Layout Constraints (Bridge µV + Human Electrodes)

This is the stability layer: bridge outputs live at µV–mV scale, and BIA electrodes are direct user-contact entry points. The objective is to control injection paths, protect high-impedance nodes, and make EMI/ESD issues diagnosable with evidence (timestamps, rail minimums, jitter correlation).

Victim nodes (what must remain quiet)

Focus on bridge differential inputs and references, BIA sense paths (high impedance), ADC reference/excitation nodes (Vref/VEX), and analog return paths. These nodes should not share noisy return currents from BLE bursts, display updates, or DC-DC switching loops.

Grounding and layout discipline (practical rules)

Use controlled analog/digital return coupling (single-point or controlled impedance), keep high-impedance traces short, and implement guard rings around BIA sense nodes. Place switching power loops away from bridge/BIA inputs and prevent display/RF return currents from crossing the analog front-end region.

| ESD Entry Point | Typical Coupling Path | Most Common Outcome | Interface-Level Protection | Layout Rule |
| --- | --- | --- | --- | --- |
| Electrodes | Direct to high-Z sense / injection path | Offset shift, noise rise, intermittent quality drops | Series R + RC, low-leakage TVS at the port | Shortest discharge loop; guard/spacing |
| Metal feet / chassis | Capacitive to ground and bridge wiring | Weight jitter bursts, rare latch/reset | Chassis-to-ground strategy, controlled bleed path | Keep return currents out of the analog region |
| User touch points | Coupling into digital rails + ground | Random resets, BLE drops | TVS at exposed pads, series R to GPIO | Route to protection first, then to the core |
| Service / USB pads | Direct to connector shield/ground | Port damage or transient brownout | TVS at the connector, filtering for VBUS sense | Shortest protection loop, solid ground reference |

EMI Symptoms (correlate before redesign)

  • Weight jitter that aligns with BLE TX bursts → ground bounce / RF current return crossing analog region.
  • Jitter spikes aligned with display refresh → refresh ripple or inrush stacking into shared rails.
  • BIA quality drops aligned with DC-DC mode changes → ripple/jitter degrading demod accuracy.
  • Best evidence: event timestamps + rail_min_mv + quality_flags plotted for correlation (device-side logs).
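The correlation step can start very simply: partition jitter samples by whether they fall within a guard interval of a logged event, then compare the two means. A sketch (real analysis would also overlay rail_min_mv and quality_flags):

```python
def jitter_near_events(samples, events, guard_ms=5):
    """Split jitter samples into 'near an event' vs 'quiet' populations.

    samples: list of (t_ms, jitter_value); events: list of event
    timestamps (TX burst, refresh, DC-DC mode change). Returns
    (mean_near, mean_quiet); a large ratio implicates the event source
    before any redesign effort is spent.
    """
    near, quiet = [], []
    for t, j in samples:
        if any(abs(t - e) <= guard_ms for e in events):
            near.append(j)
        else:
            quiet.append(j)
    mean = lambda v: sum(v) / len(v) if v else 0.0
    return mean(near), mean(quiet)
```

If mean_near and mean_quiet are indistinguishable, the suspected source is exonerated and the search moves to the next coupling path.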
Mitigation playbook (layered)

First enforce measurement windowing (quiet windows). Next choose switching frequencies and spread-spectrum options to avoid beating into sampling/demod bands. Then apply RC/LC filtering and domain splits for RF/display rails. Finally, use shielding/guard rings and port-level TVS with short discharge loops.

Figure (H2-8): Make EMI/ESD problems diagnosable by mapping sources to coupling paths and victim nodes, then applying layered mitigations. Correlate jitter/quality drops with events using timestamps and rail minimum logs.

H2-9 — Calibration, Self-Test, and Production Trim (Make It Manufacturable)

Goal: stable and traceable results across units, time, and temperature. Calibration must produce versioned artifacts (tables/coefficients/thresholds) and self-test must expose fault evidence via enumerated codes and quality flags.

What “manufacturable” means for a smart scale / BIA gadget

The same fixture and the same test flow should yield predictable distributions (not surprises). Every shipped unit should carry a calibration version and coefficients hash, and every measurement record should tag cal_ver, temp_tag, and quality_flags for field traceability.

Calibration Pillars

  • Weight: tare/zero, span points, corner-load compensation, temperature compensation tables.
  • BIA: known impedance fixtures (R / R||C), phase alignment, channel matching, contact-quality thresholds.
  • Self-test: open/short detection, electrode open detection, excitation integrity, reference sanity checks.
  • Versioning: cal_ver and coeff_hash are treated as part of the measurement data model.

Weight calibration flow (tare → span → corner → temperature)

A robust production flow separates mechanical artifacts (corner load, creep, hysteresis) from electrical errors (offset, gain, drift). Tare and span define baseline linear behavior; corner-load compensation addresses platform geometry; temperature tables bind the correction to explicit temperature tags.

  • Tare / zero: require a stable window (no motion) and store zero offset with timestamp and temp_tag.
  • Span points: calibrate gain using controlled loads; consider multiple points if non-linearity is observed.
  • Corner-load matrix: measure center + corners; fit correction terms that are independent from tare/span.
  • Temperature compensation: build a small table across temperature points; interpolate at runtime with temp_tag.
Stored artifacts: zero_offset, gain_coeff, corner_terms, temp_table, cal_ver, coeff_hash.
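Runtime use of the temperature table reduces to a small piecewise-linear interpolation keyed by temp_tag. A sketch; the table values below are illustrative gain corrections, not calibration data:

```python
def temp_compensate(raw, temp_c, temp_table):
    """Apply a piecewise-linear correction from a small temp_table.

    temp_table: list of (temp_c, gain_correction) calibration points.
    Interpolates between the two nearest points and clamps at the ends
    so out-of-range temperatures never extrapolate wildly.
    """
    pts = sorted(temp_table)
    if temp_c <= pts[0][0]:
        return raw * pts[0][1]
    if temp_c >= pts[-1][0]:
        return raw * pts[-1][1]
    for (t0, g0), (t1, g1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            frac = (temp_c - t0) / (t1 - t0)
            return raw * (g0 + frac * (g1 - g0))
```

Clamping at the table edges is a deliberate choice: a reading at an uncalibrated temperature gets the nearest known correction plus a drift_flags annotation, rather than an extrapolated guess.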

BIA calibration flow (fixtures → magnitude/phase → thresholds)

The goal is not “body results” in isolation; it is magnitude/phase repeatability under controlled proxy impedances. Known R and R||C fixture networks provide a reproducible reference for alignment, channel matching, and contact-quality thresholds.

  • Fixture impedances: use known networks to cover the intended operating region (not a single point).
  • Phase alignment: correct demod timing/phase offsets so I/Q maps to consistent magnitude/phase.
  • Channel matching: calibrate gain/phase mismatches across channels/paths if applicable.
  • Contact-quality thresholds: set acceptance/reject limits tied to measurable quality metrics.
Stored artifacts: mag_coeff, phase_offset, chan_match, contact_thr, cal_ver.
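Phase alignment against a purely resistive fixture rests on one observation: a known resistor has zero true phase, so any measured phase is the chain's timing/phase error. A minimal sketch, applied per excitation frequency (function names are illustrative):

```python
import math

def phase_offset_from_r_fixture(i_meas, q_meas):
    """Derive the demod phase_offset from a purely resistive fixture.

    The fixture's true phase is zero, so the measured angle IS the
    chain's error; store it (per frequency) as phase_offset.
    """
    return math.atan2(q_meas, i_meas)

def apply_phase_cal(i_meas, q_meas, phase_offset):
    """Rotate measured I/Q by -phase_offset to recover the true phase."""
    c, s = math.cos(-phase_offset), math.sin(-phase_offset)
    return (i_meas * c - q_meas * s, i_meas * s + q_meas * c)
```

R∥C fixtures then verify the correction off the real axis: after rotation, their measured phase should match the network's computable phase across units.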

Self-test hooks (make failures explicit and loggable)

Self-test should target the highest-value failure modes: AFE input open/short, electrode open, excitation integrity, and reference sanity. Each check should emit enumerated error_code and composable quality_flags so that field reports become diagnosable evidence, not anecdotes.

  • AFE short/open detect: verify plausible input range and detect wiring/connector issues.
  • Electrode open detect: detect missing contact or electrode discontinuity before demod.
  • Excitation integrity: validate that excitation/injection is present and within sanity bounds.
  • Reference sanity: validate Vref/VEX health; latch failures for post-mortem analysis.
| Artifact | Input Asset | Output Stored On-Device | Version Tag | Recommended Record Fields |
| --- | --- | --- | --- | --- |
| Weight: zero/span | Cal load(s), stable window | zero_offset, gain_coeff | cal_ver, coeff_hash | seq, t0, temp_tag, cal_ver |
| Weight: corner | Corner-load matrix | corner_terms | cal_ver | position_tag, temp_tag, cal_ver |
| Weight: temperature | Thermal point sweep | temp_table | cal_ver | temp_tag, cal_ver, drift_flags |
| BIA: mag/phase | Known R / R∥C fixtures | mag_coeff, phase_offset | cal_ver, coeff_hash | freq_tag, fixture_id, cal_ver |
| Contact thresholds | Proxy contact sweeps | contact_thr | cal_ver | quality_score, reject_reason, cal_ver |

Figure (H2-9): Calibration creates versioned artifacts (coefficients/tables/thresholds) tied to cal_ver/coeff_hash. Self-test emits explicit error_code and quality_flags for field traceability.

H2-10 — Validation Test Plan (Bench + Line + Regression)

SOP-like validation is measurable and repeatable: each test defines purpose, setup, stimulus, metrics, and pass/fail gates. Results should be comparable across builds using the same evidence fields (timestamps, rail_min, reset_reason, cal_ver).

Validation structure (three layers)

Bench tests establish analog/mechanical baselines, line tests protect manufacturing drift, and regression tests prevent firmware/build changes from reintroducing noise, drift, or brownout behaviors.

| Domain | Bench (Engineering) | Line (Manufacturing Guard) | Regression (Build-to-Build) | Evidence Fields |
| --- | --- | --- | --- | --- |
| Weight (electrical) | Noise PSD, settling, mains rejection, thermal drift sweep | Quick span check, zero stability, corner sanity spot-check | PSD baseline compare, settling window compare | seq, t0, temp_tag, cal_ver, quality_flags |
| Mechanical | Repeatability, hysteresis, creep, corner-load matrix | Sampling-based repeatability gate, creep quick screen | Corner matrix deltas vs baseline | position_tag, load_id, temp_tag, cal_ver |
| BIA | Fixture sweeps, mag/phase stability, proxy contact loads | Fixture spot-check at key points, contact threshold gate | Mag/phase drift compare, reject + retry rate compare | freq_tag, fixture_id, quality_score, reject_reason |
| Power | BLE TX + display peak stacking, UVLO margin, cold battery | Brownout screen under scripted load burst | Reset rate compare, rail_min compare | rail_min_mv, reset_reason, event_ts |
| EMI/ESD robustness | Correlation tests: jitter vs TX/refresh/switching, port stress | Basic ESD spot stress on key ports (screen only) | Jitter correlation signature compare | event_ts, jitter_metric, quality_flags |

Bench tests (measurable analog baselines)

Bench validation should produce a baseline report that regression can compare against. Keep the measurement bandwidth and sampling strategy consistent with production settings to avoid “passing the lab” but failing the field.

  • Noise PSD: measure spectral density and integrated noise over defined bands; store baseline signatures.
  • Step/settling: apply load steps and capture time-to-stable; verify quiet-window scheduling assumptions.
  • Mains rejection: test 50/60 Hz coupling; validate filtering and sampling coherence strategies.
  • Thermal drift sweep: sweep temperature; validate temp_table effectiveness using temp_tag alignment.
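
The thermal-sweep result typically lands in a small lookup table. Below is a minimal sketch of a piecewise-linear temp_table lookup; the breakpoints, ppm values, and names are illustrative, not from a specific product.

```c
#include <assert.h>

/* Hypothetical piecewise-linear temperature table: each entry maps a
 * temperature tag (degC) to a span correction in ppm. Values are
 * illustrative only. */
typedef struct { int temp_c; int ppm; } temp_entry_t;

static const temp_entry_t temp_table[] = {
    { -10, -120 }, { 10, -30 }, { 25, 0 }, { 40, 45 },
};
#define TEMP_TABLE_LEN (sizeof temp_table / sizeof temp_table[0])

/* Linear interpolation between table points; clamps outside the swept
 * range so an out-of-range temp_tag cannot extrapolate wildly. */
int temp_comp_ppm(int temp_c)
{
    if (temp_c <= temp_table[0].temp_c) return temp_table[0].ppm;
    if (temp_c >= temp_table[TEMP_TABLE_LEN - 1].temp_c)
        return temp_table[TEMP_TABLE_LEN - 1].ppm;
    for (unsigned i = 1; i < TEMP_TABLE_LEN; i++) {
        if (temp_c <= temp_table[i].temp_c) {
            int t0 = temp_table[i - 1].temp_c, t1 = temp_table[i].temp_c;
            int p0 = temp_table[i - 1].ppm,    p1 = temp_table[i].ppm;
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0);
        }
    }
    return 0; /* unreachable */
}
```

Clamping at the sweep endpoints matters in validation: confirm compensated error at each breakpoint, not only at room temperature.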

Mechanical tests (scale-specific)

Mechanical behavior must be treated as first-class validation. Repeatability, hysteresis, creep, and corner-load matrices define the envelope that calibration can realistically correct.

  • Repeatability: repeated measures at the same load and position; track distribution stability.
  • Hysteresis: compare up/down load paths; quantify delta and ensure it remains within an acceptable gate.
  • Creep: hold a constant load and log drift vs time; use fixed time points for comparability.
  • Corner-load matrix: center + corners; validate correction terms and detect platform asymmetry drift.

BIA tests (fixtures + proxy loads + artifact detection)

Use reproducible fixtures and proxy loads to evaluate magnitude/phase stability and contact-quality gating. Avoid human-dependent variability for core validation; focus on controlled impedance and motion proxies.

  • Contact impedance sweeps: step fixture impedances and observe quality_score and stability behavior.
  • Dry/wet proxy loads: emulate contact variability with controlled networks; validate reject+retry logic.
  • Motion artifact tests: introduce controlled disturbance and verify the quality gate detects and tags events.

Power tests (peak stacking and brownout evidence)

The most revealing scenario intentionally stacks bursts: BLE TX + display update + logging. Validate UVLO margin, cold-battery sag, and wake behavior. Require evidence fields to make failures actionable.

  • Battery droop under bursts: quantify rail_min_mv and correlate to event_ts.
  • UVLO margin: validate thresholds and hysteresis; confirm reset_reason reporting.
  • Cold battery behavior: test with increased internal resistance; verify no measurement corruption.
  • Slow wake: validate sequencing and post-wake settle before sampling resumes.
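
Capturing rail_min_mv together with its event_ts can be as small as a running minimum updated on every rail ADC sample. A hypothetical sketch (struct and function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Running rail-minimum tracker: keeps the worst droop AND when it
 * happened, so a brownout report is correlatable with TX/display events. */
typedef struct {
    uint16_t rail_min_mv;
    uint32_t event_ts;   /* timestamp of the minimum sample */
} rail_min_t;

void rail_min_reset(rail_min_t *r)
{
    r->rail_min_mv = UINT16_MAX;
    r->event_ts = 0;
}

void rail_min_update(rail_min_t *r, uint16_t rail_mv, uint32_t ts)
{
    if (rail_mv < r->rail_min_mv) {
        r->rail_min_mv = rail_mv;
        r->event_ts = ts;
    }
}
```
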
Regression gating (what must never drift)

Freeze baseline signatures for PSD, settling time, corner matrix deltas, BIA mag/phase stability, brownout rate, and correlation signatures (jitter vs TX/refresh). Gate releases when deltas exceed configured thresholds.
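
The gating rule above can be expressed directly as baseline-plus-allowed-delta records. A minimal sketch (metric names and thresholds are placeholders):

```c
#include <assert.h>

/* One frozen baseline per tracked metric, plus its allowed drift. */
typedef struct {
    const char *name;
    double baseline;
    double max_delta;   /* absolute allowed drift vs baseline */
} regression_gate_t;

/* A release passes only if every metric stays within its delta. */
int regression_pass(const regression_gate_t *g, int n, const double *measured)
{
    for (int i = 0; i < n; i++) {
        double d = measured[i] - g[i].baseline;
        if (d < 0) d = -d;
        if (d > g[i].max_delta) return 0; /* gate the release */
    }
    return 1;
}
```
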

Figure (H2-10): A validation plan is a matrix: bench baselines, line guards, and regression gates. Require consistent evidence fields so failures remain traceable across builds.
H2-11 — Field Debug Playbook (Symptoms → Evidence → Root Cause)

Debug mindset: “two captures, then decide”

Smart scale failures often look random because multiple subsystems share time and power: bridge sampling windows, BLE transmit bursts, DC-DC switching, and display updates. The fastest triage is to standardize on two captures: one analog (what the rails/sense nodes did) and one digital (what the firmware believed happened).

Capture A: rail minimum + ripple. Capture B: timestamped event/log record. Goal: correlate the two to isolate a single class of root cause.

Minimum “evidence payload” (store locally, upload later)

Keep logs tiny but forensic. Every measurement record should be self-describing so missing BLE packets cannot create “ghost” readings or silent gaps.

Field | Type | Why it matters
record_id + seq | u32 + u16 | Detects missing/duplicated uploads; enables atomic record sync.
rtc_ts | u32 seconds (or ms) | Correlates with BLE TX, display update, and sleep/wake transitions.
mode | enum | Deep sleep / measure / advertise / sync / display refresh; helps isolate scheduling bugs.
weight_raw + weight_filt | s32 | Separates analog drift/noise from DSP/filter decisions.
bia_mag + bia_phase | s16/s16 | Shows whether instability is magnitude-only (contact) or phase-related (timing/jitter).
quality_flags | bitfield | Contact quality, electrode open, ADC saturation, out-of-range, retry events.
vbat_min + rail_min | mV | "Dies at 30%" and random resets often trace to the rail minimum during burst loads.
reset_reason | enum | Separates brownout, watchdog, hard fault, and external reset.
cal_version | u16 | Prevents mixing data across different trims/compensation tables.
temp_tag | °C (s8/s16) | Links drift and compensation behavior to temperature.
Recommended atomic record: { record_id, seq, rtc_ts, mode, weight_raw, weight_filt, bia_mag, bia_phase, quality_flags, vbat_min, rail_min, reset_reason, cal_version, temp_tag }
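
As a sketch, the record above maps onto a packed C struct. Field widths follow the table; the trailing CRC is an addition (not in the original field list) that makes a torn flash write detectable, and __attribute__((packed)) assumes GCC/Clang.

```c
#include <assert.h>
#include <stdint.h>

/* Packed evidence record: 36 bytes, CRC over all preceding bytes. */
typedef struct __attribute__((packed)) {
    uint32_t record_id;
    uint16_t seq;
    uint32_t rtc_ts;
    uint8_t  mode;
    int32_t  weight_raw;
    int32_t  weight_filt;
    int16_t  bia_mag;
    int16_t  bia_phase;
    uint16_t quality_flags;
    uint16_t vbat_min_mv;
    uint16_t rail_min_mv;
    uint8_t  reset_reason;
    uint16_t cal_version;
    int16_t  temp_tag;      /* e.g. 0.01 degC steps */
    uint16_t crc16;         /* CRC-16/CCITT-FALSE over the bytes above */
} measure_record_t;

/* CRC-16/CCITT-FALSE: init 0xFFFF, poly 0x1021, no reflection. */
uint16_t crc16_ccitt(const uint8_t *p, uint32_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

A fixed, versioned layout like this is what lets the host detect gaps (seq), reject mixed calibrations (cal_version), and discard torn writes (crc16) without guessing.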

Symptom A: Weight jitter happens “only sometimes”

  • Power Integrity
  • Scheduling
  • Bridge AFE
  • DC-DC Ripple

First 2 captures

Capture | What to capture | Pass/Fail cue
A | DC-DC output ripple + analog reference/excitation rail during the sampling window | Fail if ripple/steps line up with sample timing, especially near BLE TX or display refresh edges
B | Timestamped event log: BLE TX start, display update, ADC conversion start/end | Fail if jitter peaks repeat at the same offset after TX/update events

Root-cause shortlist

  1. Sampling window overlaps switching noise (DC-DC, display, RF burst) → reschedule “quiet window”.
  2. Ratiometric path broken (excitation and ADC reference not coupled) → weight scales with supply drift.
  3. Bridge input headroom/CM issue → occasional saturation, then filter “rings” back.

Fix + verify

  • Enforce a strict state machine: quiet → measure → compute → RF/display.
  • Log “measure_start_ts” and “measure_end_ts”; confirm no overlaps in field traces.
  • Verify with an injected ripple test (known ripple amplitude vs measured jitter slope).
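
The strict quiet → measure → compute → RF/display ordering can be enforced with a one-successor state table, so a TX or display request arriving mid-measurement is rejected (deferred) instead of overlapping the sampling window. A minimal sketch with assumed state names:

```c
#include <assert.h>

/* Scheduler states for the measurement cycle. */
typedef enum { ST_QUIET, ST_MEASURE, ST_COMPUTE, ST_RF_DISPLAY } sched_state_t;

/* Advances only along the single legal successor; returns 0 (defer)
 * for every other requested transition. */
int sched_advance(sched_state_t *s, sched_state_t next)
{
    static const sched_state_t legal[] = {
        [ST_QUIET]      = ST_MEASURE,
        [ST_MEASURE]    = ST_COMPUTE,
        [ST_COMPUTE]    = ST_RF_DISPLAY,
        [ST_RF_DISPLAY] = ST_QUIET,
    };
    if (legal[*s] != next) return 0;  /* illegal transition: defer it */
    *s = next;
    return 1;
}
```

Logging every rejected transition (with rtc_ts) is cheap and directly produces the "no overlaps" evidence the field traces need.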

Symptom B: Weight reads low/high after temperature change

  • Drift
  • Excitation/Reference
  • Mechanical Creep
  • Temp Compensation

First 2 captures

Capture | What to capture | Pass/Fail cue
A | Excitation/reference drift vs temperature (measure VEX/REF and ADC codes together) | Fail if weight error tracks VEX/REF change → ratiometric/reference coupling issue
B | Short thermal step test + settle trace (weight_raw vs time) | Fail if the error decays slowly → mechanical creep/hysteresis dominates

Root-cause shortlist

  1. Mechanical creep/hysteresis → requires time-based settle policy and calibration strategy.
  2. AFE offset and 1/f drift → mitigated by auto-zero, lower bandwidth, or better drift specs.
  3. Temperature gradient across load cells → causes corner-load mismatch (shows up as direction-dependent error).

Fix + verify

  • Store temp_tag per record; apply compensation tables only when temperature is stable.
  • Use “settle gates”: do not publish a stable weight until noise + drift slope are below threshold.
  • Run a corner-load matrix at hot/cold; confirm compensation reduces worst-case, not just average.
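
A settle gate can be a pair of cheap checks over the last N raw samples: peak-to-peak spread as the noise bound, and a first-half vs second-half mean difference as a drift-slope proxy. A sketch with illustrative thresholds (codes, not grams):

```c
#include <assert.h>
#include <stdint.h>

/* Publish a weight only when, over the last n samples, (a) peak-to-peak
 * spread is within max_pp and (b) the half-window mean delta (a cheap
 * drift-slope proxy) is within max_half_delta. */
int settle_gate_pass(const int32_t *w, int n,
                     int32_t max_pp, int32_t max_half_delta)
{
    int32_t mn = w[0], mx = w[0];
    int64_t sum_a = 0, sum_b = 0;
    int half = n / 2;
    for (int i = 0; i < n; i++) {
        if (w[i] < mn) mn = w[i];
        if (w[i] > mx) mx = w[i];
        if (i < half) sum_a += w[i]; else sum_b += w[i];
    }
    int64_t delta = sum_b / (n - half) - sum_a / half;
    if (delta < 0) delta = -delta;
    return (mx - mn) <= max_pp && delta <= max_half_delta;
}
```

A creeping load passes the noise check but fails the half-window delta, which is exactly the "error decays slowly" signature from Capture B.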

Symptom C: BIA magnitude/phase “jumps” between repeats

  • Contact Quality
  • Demod Timing
  • Motion Artifact
  • ADC Saturation

First 2 captures

Capture | What to capture | Pass/Fail cue
A | Contact metric(s): electrode open detect, contact impedance proxy, saturation flags | Fail if jumps correlate with contact metric dips/spikes → contact variability dominates
B | Demod phase stability: I/Q raw (or phase accumulator) across repeats | Fail if phase drifts without contact change → timing/jitter/skew in demod path

Root-cause shortlist

  1. Contact variability (pressure, dry/wet, micro-motion) → should trigger reject+retry, not silent publish.
  2. Analog path phase error (filter group delay mismatch, skew) → shows as phase drift while magnitude is stable.
  3. Sampling jitter / clock coherence → demod loses coherence under power-state transitions.

Fix + verify

  • Compute a repeatability score (N repeats, variance bound); publish only if score passes.
  • Window BIA acquisition away from BLE TX and display updates.
  • Use fixture impedance networks (R/C) to validate magnitude+phase across temperature.
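
The repeatability score from the first bullet can be as simple as a sample-variance bound over N quick repeats. A sketch in doubles for clarity (a real build would use fixed point, and the bound would come from fixture calibration):

```c
#include <assert.h>

/* Publish BIA only if the sample variance of N repeated magnitude
 * readings is inside the configured bound; otherwise reject + retry. */
int bia_repeats_pass(const double *mag, int n, double max_variance)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += mag[i];
    mean /= n;
    for (int i = 0; i < n; i++)
        var += (mag[i] - mean) * (mag[i] - mean);
    var /= (n - 1);                 /* sample (unbiased) variance */
    return var <= max_variance;
}
```

The same gate applied to phase (not just magnitude) separates contact variability from demod timing problems, matching the Capture A/B split above.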

Symptom D: Random resets / reboots

  • Brownout
  • Inrush
  • UVLO
  • Flash Writes

First 2 captures

Capture | What to capture | Pass/Fail cue
A | vbat_min and rail_min during: BLE TX burst, display update, DC-DC enable, AFE start | Fail if rail dips below MCU BOR/UVLO threshold or buck-boost current limit hits
B | reset_reason + last 32 log events (ring buffer) saved on next boot | Fail if resets cluster around a specific state transition (e.g., sync start)

Root-cause shortlist

  1. Inrush + weak battery → transient droop at enable or display burst.
  2. Current limit interactions (buck-boost) → rail collapses under pulsed loads.
  3. Flash write brownout → partial record; causes loops and repeated resets if not handled atomically.

Fix + verify

  • Stagger loads: RF bursts and display updates must not coincide with AFE start or flash commits.
  • Write logs with two-phase commit (header → payload → commit flag).
  • Cold battery sweep + burst test; verify minimum rail stays above BOR with margin.
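
The header → payload → commit-flag sequence from the list above can be sketched over a simulated flash slot: a record interrupted before the commit byte reads back as invalid instead of as a partial "ghost" record. Names, magic values, and the 16-byte payload size are illustrative.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define REC_MAGIC  0xA5
#define REC_COMMIT 0x5A

/* One log slot in (simulated) flash; commit is written last. */
typedef struct {
    uint8_t magic;       /* phase 1: header */
    uint8_t len;
    uint8_t payload[16]; /* phase 2: payload */
    uint8_t commit;      /* phase 3: commit flag */
} log_slot_t;

void log_write(log_slot_t *slot, const uint8_t *data, uint8_t len)
{
    if (len > sizeof slot->payload) len = sizeof slot->payload;
    slot->magic = REC_MAGIC;          /* phase 1 */
    slot->len = len;
    memcpy(slot->payload, data, len); /* phase 2 */
    slot->commit = REC_COMMIT;        /* phase 3: record becomes valid */
}

int log_valid(const log_slot_t *slot)
{
    return slot->magic == REC_MAGIC && slot->commit == REC_COMMIT;
}
```

On real NOR flash the same idea works because each phase only programs bits; the boot-time scan simply skips any slot without a valid commit flag.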

Symptom E: BLE sync succeeds but records are missing

  • Data Integrity
  • Retry Logic
  • Flash Endurance
  • Sequence Counters

First 2 captures

Capture | What to capture | Pass/Fail cue
A | seq counters: last_sent_seq, last_acked_seq, flash_head/tail pointers | Fail if ack advances without durable write, or pointers jump after reset
B | Packet loss stats + reconnect timeline (connect/disconnect timestamps) | Fail if reconnect happens mid-batch without resuming from last_acked_seq

Root-cause shortlist

  1. Non-atomic uploads → “best effort” sending causes silent gaps under packet loss.
  2. Ring buffer corruption → incomplete writes, power loss during erase/program.
  3. Timestamp drift → sorting errors that look like “missing” when actually misordered.

Fix + verify

  • Implement atomic record upload: send [seq..seq+N], require ack range, retry gaps only.
  • Store cal_version and schema_version; reject incompatible uploads rather than mixing data.
  • Regression test: forced packet drop patterns + random resets during sync.
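
The seq/ack bookkeeping behind atomic upload reduces to two counters: ack only advances contiguously within what was sent, and resend always resumes from the first unacked sequence number. Field and function names here are assumptions:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint16_t last_sent_seq;
    uint16_t last_acked_seq;
} sync_state_t;

/* Host acks a contiguous range [.. acked_through]; stale or
 * impossible acks (beyond what was sent) are ignored. */
void sync_on_ack(sync_state_t *s, uint16_t acked_through)
{
    if (acked_through > s->last_acked_seq &&
        acked_through <= s->last_sent_seq)
        s->last_acked_seq = acked_through;
}

/* Next record to (re)send: resume from the first unacked seq, so a
 * reconnect mid-batch can never silently skip records. */
uint16_t sync_next_seq(const sync_state_t *s)
{
    return (uint16_t)(s->last_acked_seq + 1);
}
```

A production version also has to handle uint16 wraparound and persist last_acked_seq durably (two-phase commit) so a reset cannot re-advance the ack past unsent data.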
[Figure: field debug pipeline — symptom card (weight jitter / BIA jumps / resets or missing data) → Capture A, analog (vbat_min/rail_min, ripple + reference/excitation, sample-window alignment) → Capture B, digital (rtc_ts + mode transitions, seq counters + retries, reset_reason + flags) → correlation with TX / DC-DC / display events and data atomicity → one root-cause bucket to pursue first: power droop/inrush/UVLO, timing-window overlap/jitter, AFE saturation/drift/noise, electrode contact/pressure, or BLE/data seq/ack/flash integrity]
Figure F11 — A standardized “two-capture” workflow that turns intermittent symptoms into a single, testable root-cause hypothesis.
H2-12 — IC Selection Cheatsheet (with Concrete MPN Examples)

Rules before part numbers

The selection target is not peak specs on paper; it is stable, repeatable measurements under shared power, shared buses, and real contact variability. Use the checklists to eliminate classes of failure, then use the MPN examples to anchor BOM planning and second-source searches.

No medical claims: the BIA section focuses on measurement integrity (magnitude/phase repeatability, contact detection, and demod stability), not diagnosis.

1) Weight chain — Bridge AFE / Bridge ADC

Must-have checklist

  • Low noise + low 1/f at the selected data rate (weight is a near-DC measurement).
  • PGA range that covers µV–mV bridge outputs without saturating across temperature/CM range.
  • Ratiometric-friendly: excitation/reference strategy that prevents supply drift from becoming weight drift.
  • 50/60 Hz rejection modes that fit the measurement time budget.
  • Input diagnostics: saturation flags, open/short detection if available, and predictable settling behavior.

Red flags

  • “Great resolution” but unstable zero due to 1/f noise and drift.
  • Sampling overlaps DC-DC / display refresh without any quiet-window control.

Concrete MPN examples (bridge-focused 24-bit ΔΣ)

  • TI ADS1232 / ADS1234 — 2/4-channel ΔΣ ADCs designed for bridge sensors (weigh scales).
  • Nuvoton NAU7802 — low-power 24-bit ADC with PGA and 2-wire control interface.
  • Avia HX711 — widely-used 24-bit ADC interface for weigh scales (cost-focused designs).
  • ADI AD7124-4 / AD7124-8 — precision 24-bit Σ-Δ ADC front end with multiple inputs (higher integration).
  • TI ADS1220 — 24-bit ΔΣ ADC with PGA and strong 50/60 Hz rejection modes (often used in precision sensing).
  • ADI AD7799 — 24-bit Σ-Δ ADC with PGA and reference options (precision sensor front ends).

2) BIA chain — Bio-impedance AFE / Impedance Converter

Must-have checklist

  • Stable excitation (current source accuracy + compliance) over expected contact impedance range.
  • Coherent demod (I/Q or synchronous detection) with controlled phase error.
  • Contact diagnostics: electrode open detect or a robust quality proxy that supports reject+retry.
  • Timing integrity: clocking that survives power-state changes without phase discontinuities.
  • Calibration hooks for magnitude and phase using known R/C fixtures.

Evidence-driven acceptance

  • Define a quality score that predicts repeatability (not just “signal present”).
  • Publish BIA only if N repeats meet variance bounds; otherwise retry and log reason codes.

Concrete MPN examples (impedance-capable AFEs)

  • ADI AD5940 / AD5941 — low-power AFE capable of impedance measurement; commonly used for skin/body impedance measurement setups.
  • ADI AD5933 — impedance converter / network analyzer IC for magnitude/phase extraction (system-level design needed).
  • TI AFE4300 — AFE targeting body-composition scales; integrates weight (bridge) and BIA-style measurement paths in one device family.

3) Compute + BLE SoC (device-side integrity first)

Must-have checklist

  • Low-power advertising + predictable wake latencies (measurement scheduling depends on it).
  • RTC accuracy and timestamp strategy that keeps records ordered across reconnects.
  • Enough NVM endurance for logs/calibration (or external FRAM/flash strategy).
  • ADC / SPI / I²C bandwidth to service both weight and BIA without bus contention surprises.
  • Device-side OTA safety hooks (rollback, image integrity) without expanding into cloud/app architecture.

Concrete MPN examples (BLE SoCs used in low-power devices)

  • Nordic nRF52832 — Cortex-M4 BLE SoC widely used in low-power products.
  • TI CC2640R2F — SimpleLink BLE low-energy wireless MCU family.
  • Silicon Labs EFR32BG22 — Series 2 BLE SoC with strong low-power focus.

4) Power — ULP PMIC / DC-DC + Charger + Fuel Gauge (as needed)

Must-have checklist

  • Quiescent current aligned with the real standby budget (not only “efficiency at 100 mA”).
  • UVLO + brownout behavior that matches the MCU BOR thresholds and flash write safety.
  • Load switch/gating for display and AFEs to create quiet measurement windows.
  • Transient handling for BLE TX + display bursts (rail_min evidence should remain above BOR).
  • Battery reporting: voltage-only vs fuel gauge; avoid “dies at 30%” user experience by design.

Concrete MPN examples (mix-and-match by architecture)

  • TI TPS62743 — ultra-low IQ buck converter (good for always-on, low-load efficiency).
  • TI TPS63901 — ultra-low IQ buck-boost (helps when battery spans below/above rail target).
  • TI BQ25120A — integrated low-IQ charger + power path + buck/LDO (compact power front-end).
  • ADI LTC3335 — nanopower buck-boost with integrated coulomb counter (long-life battery systems).
  • Maxim/ADI MAX17048 — micropower 1-cell fuel gauge (ModelGauge) for Li-ion packs.

Validation proof points

  • Measure vbat_min and rail_min during worst-case burst events at cold battery.
  • Confirm flash commits are protected: no partial records after forced power cuts.
  • Verify switching frequency + harmonics do not land inside the measurement bandwidth.

5) Display driver (segment LCD / e-paper / small OLED)

Must-have checklist

  • Standby current consistent with battery targets (segment LCD usually wins for ultra-low).
  • Update profile: burst current vs continuous current; schedule updates away from measurement windows.
  • Interface robustness: I²C/SPI contention plan (especially if sharing buses with AFEs).
  • Noise coupling control: ensure refresh edges do not inject into µV bridge/BIA nodes.

Concrete MPN examples (common driver/controller ICs)

  • Holtek HT1621 — LCD segment driver/controller (simple MCU interface, common in small displays).
  • NXP PCF8576D — universal LCD driver for low multiplex rates (segment LCD).
  • Solomon Systech SSD1306 — OLED driver/controller for small dot-matrix displays.
  • Solomon Systech SSD1680 — e-paper (EPD) display driver/controller (burst update style).
[Figure: IC selection map — Weight AFE (noise + drift, ratiometric, 50/60 Hz reject: ADS1232/34, NAU7802, HX711) · BIA AFE (excitation loop, I/Q demod, contact diagnostics: AD5940/41, AD5933, AFE4300) · BLE SoC (timestamps + logs, seq/ack integrity, low-power states: nRF52832, CC2640R2F, EFR32BG22) · Power (IQ + UVLO, quiet windows, burst droop: TPS62743, TPS63901, BQ25120A, MAX17048) · Display (standby current, update bursts, noise coupling: HT1621, PCF8576D, SSD1306, SSD1680). Acceptance = evidence: rail_min margin, phase repeatability, contact-quality gating, and atomic BLE record sync.]
Figure F12 — A block-level selection map that keeps BOM decisions tied to failure prevention: power/timing/coupling first, then specs.

H2-13 — FAQs (Mapped to H2-2 ~ H2-12, device-side only)

Each answer stays inside the smart scale + consumer BIA gadget boundary: bridge/BIA signal chains, BLE reliability, power states, display coupling, calibration, validation, and field evidence. No app/cloud architecture and no medical claims.

Evidence fields: seq/record_id · rtc_ts/mode · vbat_min/rail_min · reset_reason · quality_flags · cal_version/temp_tag

1) Why does weight “drift” after standing still for 10–20 seconds?

Mapped to: H2-3 / H2-10

A slow drift after “no motion” usually comes from a time-constant effect, not random noise: mechanical creep/hysteresis, temperature gradients, or filter/averaging convergence that keeps integrating low-frequency (1/f) behavior. Capture a 20 s settle trace (weight_raw → weight_filt) with temp_tag, and verify whether drift slope changes with load level or temperature.

See: H2-3 error budget + H2-10 step/settling & thermal sweep.

2) Why are corner-load readings inconsistent even if the center is accurate?

Mapped to: H2-3 / H2-10

Center accuracy can pass while corners fail because corner loads stress different mechanical paths and amplify mismatch (mounting, stiffness, load-cell placement, or assembly stress). A single-point span cannot correct a 2D corner-load matrix. Run a center + 4-corner matrix, then compare repeatability/hysteresis per position; if corner error is direction-dependent, the dominant term is mechanical coupling, not ADC resolution.

See: H2-10 corner-load matrix + repeatability/hysteresis tests.

3) Why does enabling BLE sync make weight noise worse?

Mapped to: H2-5 / H2-8

BLE sync adds pulsed RF current and bus activity that can modulate µV-level bridge readings through rail ripple, ground bounce, or scheduling overlap (TX burst or flash writes landing inside the sampling window). The fastest proof is correlation: log rtc_ts, mode, and a TX marker, then measure rail ripple/rail_min during sampling. If noise peaks align with TX or state transitions, enforce a “quiet window” for measurements.

See: H2-5 data integrity markers + H2-8 coupling/EMI windowing.

4) Why does the scale shut down during a measurement even with “battery remaining”?

Mapped to: H2-6 / H2-11

“Battery remaining” is an estimate; shutdowns happen when vbat_min or rail_min dips below UVLO/BOR during a burst (BLE TX + display update + DC-DC mode shift + flash commit). Capture reset_reason plus vbat_min/rail_min at the event. If brownout is confirmed, stagger loads, add inrush control, or adjust UVLO hysteresis so transient droops do not trip the system.

See: H2-6 power states & burst stacking + H2-11 reset evidence workflow.

5) What’s the fastest way to tell if the error is mechanical vs AFE noise?

Mapped to: H2-3 / H2-11

Use a two-axis A/B test. Mechanical issues change with position and time constants: center vs corners and load-on/off settling will show different signatures. Electrical/AFE issues correlate with rails and digital events: toggle BLE/display activity and check whether jitter aligns with TX/display markers or ripple. The decision is evidence-based: position dependence → mechanical; event/ripple correlation → AFE/power/scheduling.

See: H2-11 “two captures” method (analog + digital correlation).

6) Why does BIA jump wildly between consecutive readings?

Mapped to: H2-4 / H2-11

Wild BIA jumps usually mean the input conditions changed but the system still published a result. The two dominant causes are contact variability (pressure, micro-motion, electrode polarization) and demod coherence loss (phase/timing instability). Log a contact quality score/flags (quality_flags) and compare them against I/Q or phase stability across repeats. If quality is low, reject + retry rather than averaging bad data into a “stable-looking” number.

See: H2-4 contact + demod chain, and H2-11 BIA evidence workflow.

7) How do I detect poor electrode contact reliably (dry skin / pressure)?

Mapped to: H2-4 / H2-9

Reliable contact detection needs more than “signal present.” Combine (1) open/saturation flags, (2) an impedance proxy derived from excitation vs sense behavior, and (3) a repeatability score across N quick repeats. Set thresholds using known R/C fixture impedances during calibration so the quality score predicts repeatability, not just amplitude. Store the result as quality_flags and gate publishing on it.

See: H2-9 calibration fixtures + contact thresholds.

8) Which matters more for BIA accuracy: amplitude stability or phase stability?

Mapped to: H2-4 / H2-9

Phase stability is usually the prerequisite: if phase is not repeatable, complex impedance decompositions and compensation become unstable even when amplitude looks consistent. Amplitude can often be gain-calibrated, but phase errors from path mismatch, timing skew, or sampling jitter are harder to “fix later.” Use R/C fixtures to separate the two: check whether phase dispersion or magnitude dispersion breaks first across repeats and temperature.

See: H2-4 I/Q demod and H2-9 phase alignment + trim strategy.

9) Why does display refresh cause measurement glitches?

Mapped to: H2-7 / H2-8

Display refresh injects burst current and high edge-rate bus toggling that can couple into bridge/BIA sensing through rails and ground return paths. If the display shares I²C/SPI, bus contention can also delay AFE reads and shift sampling timing. The fix is deterministic scheduling: measure in a quiet window, then refresh. Confirm by logging a display_marker and showing alignment with rail ripple or sampling jitter.

See: H2-7 interface timing + H2-8 coupling and windowing.

10) What production calibration steps give the biggest yield improvement?

Mapped to: H2-9 / H2-10

Highest-yield steps are the ones that collapse unit-to-unit spread and tail failures: for weight, do stable zero/tare, span points, and a corner-load matrix (plus temperature compensation if drift is a yield limiter). For BIA, prioritize phase alignment and contact-quality threshold calibration using known R/C fixtures. Add self-test hooks (open/short/diagnostics) to screen assembly faults. Validate by comparing before/after distributions, not single averages.

See: H2-9 trim/self-test + H2-10 measurable regression plan.

11) What two logs/waveforms should I capture first for random resets?

Mapped to: H2-6 / H2-11

First capture the power truth: vbat_min and rail_min around the failure window (especially during BLE TX, display update, and flash commits). Second capture the firmware truth: reset_reason plus a small ring buffer of the last state transitions (mode changes with rtc_ts). Those two pieces usually classify the reset quickly: brownout/UVLO, watchdog timing, or unsafe flash-write under droop. Everything else is secondary until these are known.

See: H2-11 reset playbook and the “two captures” standard.

12) If I must simplify BOM, which block should not be downgraded first?

Mapped to: H2-12

Do not downgrade the blocks that preserve measurement integrity and evidence: (1) a low-drift, low-1/f bridge front end (or bridge ADC), (2) power delivery with predictable UVLO/BOR margin under burst stacking, and (3) BIA contact diagnostics plus coherent demod/phase stability. These determine whether drift/jitter/reset issues are controllable and debuggable. Less critical blocks (e.g., display class) can be simplified if scheduling guarantees quiet measurement windows and logs remain atomic.

See: H2-12 selection rules (Specs → Risks → Proof Tests).
[Figure: FAQ coverage map — questions (weight drift after 10–20 s, BLE sync makes noise worse, BIA jumps between repeats, random resets, display refresh glitches) → minimum evidence set (seq/record_id, rtc_ts/mode markers, vbat_min/rail_min, reset_reason, quality_flags, cal_version/temp_tag) → chapter anchors (H2-3 weight chain, H2-4 BIA chain, H2-6 power tree, H2-8 EMI/layout, H2-10 validation, H2-11 field debug, H2-12 IC selection)]
Figure F13 — FAQs are “fast answers” only when every claim is anchored to evidence fields and mapped back to chapters.