
Low-Power for CAN/LIN/FlexRay: Sleep Iq & Wake Attribution


Low-power design for in-vehicle fieldbuses is not about the lowest datasheet Iq; it is a repeatable system baseline: comparable sleep conditions, controlled wake sensitivity, and provable wake-source attribution. The goal is to meet a budgeted sleep Iq while keeping the false-wake rate and wake latency within measurable limits across the full {VBAT × temperature × harness} matrix.

H2-1 Scope guard: definitions + comparable measurement envelope (no PHY waveform details)

Definition: What “Low-Power” means in automotive fieldbuses

“Low-Power” is only comparable when power state, powered domains, and wake monitoring are defined consistently. Otherwise, different “Sleep Iq” numbers describe different systems and cannot be ranked.

This section answers
  • Which power states exist (Sleep / Standby / Wake-ready) and what remains powered in each state.
  • How to convert a datasheet “Sleep Iq” into a comparable spec envelope (conditions + boundaries).
  • How OEM “static budget” thinking decomposes current from vehicle level down to always-on blocks.

1) Power-state vocabulary (use as the page dictionary)

Standby: wake monitoring active; the system is not fully asleep
  • Powered: VBAT path + always-on domain (wake logic, minimal comparators, flag latches).
  • Listening: bus/local/timed monitoring may be enabled depending on design policy.
  • Comparable Iq boundary: define whether the number is component-only or system-at-VBAT.
Sleep: deepest allowed state that still preserves the required wake path
  • Powered: minimal always-on domain only (policy-dependent).
  • Listening: only the required wake source(s) remain enabled (e.g., bus wake or local wake).
  • Risk: “lowest Iq” is meaningless if it disables the wake sources required by the vehicle.
Wake-ready (partial wake-ready): low-power operation with selective monitoring enabled
  • Powered: always-on domain + selective-wake logic + filters/threshold blocks.
  • Listening: bus activity is monitored with filters to reduce false wakes.
  • Trade-off: lower false-wake risk vs higher standby current vs added latency.

2) Comparable “Sleep Iq” envelope (turn datasheets into apples-to-apples)

A comparable low-power number is defined by conditions + boundaries. A missing condition means the number is not comparable.

Conditions (must be stated)
  • VBAT: nominal + min/max range used for validation.
  • Temperature: at least cold/room/hot corners (numbers drift strongly with temp).
  • Wake monitoring: which sources are enabled (bus/local/timed) and which filters are active.
  • External loads: terminations, pull-ups, leakage paths, indicator networks, sensing dividers.
Boundaries (must be chosen)
  • Component-only: current into the device pins under a specified mode.
  • System-at-VBAT: current at the ECU battery input (includes external networks + leaks).
  • Entry/exit criteria: what qualifies as “in sleep” (timers/flags/rails settled).
Output template (copy into a spec table)
  • State: Sleep / Standby / Wake-ready
  • VBAT & Temp: VBAT = X (min/max), Temp = {cold/room/hot}
  • Wake enabled: {bus, local, timed} + filter mode name
  • Boundary: component-only OR system-at-VBAT
  • Must-log: sleep-entry flag, wake-reason flags, vbat_min, temp, timestamp

3) Static current budget decomposition (method, not numbers)

  1. Vehicle key-off budget is allocated across domains (body/comfort, powertrain, gateway, telematics).
  2. ECU-level budget is split into always-on vs switched domains (only always-on counts in sleep).
  3. Always-on budget is assigned to wake monitors + retention + leakage paths + protection networks.
  4. Verification ownership ties each wake source to a measurable evidence trail (flags/counters/log fields).
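The decomposition method above can be sketched as nested allocations whose sums are checked against the parent budget. This is a minimal host-side sketch in Python; all numbers, block names, and the `check_budget` helper are hypothetical placeholders, not OEM figures.

```python
# Hypothetical key-off budget decomposition (all values are placeholders, in µA).
# The only rule enforced here: child allocations must not exceed the parent budget.

def check_budget(parent_budget_ua, allocations):
    """Return (total, ok) where ok means the allocations fit the parent budget."""
    total = sum(allocations.values())
    return total, total <= parent_budget_ua

# ECU-level sleep budget: only the always-on domain counts in sleep.
ecu_sleep_budget_ua = 100  # placeholder
always_on = {
    "wake_monitors": 40,   # bus/local/timed monitoring
    "retention": 15,       # minimal state retention
    "leakage_paths": 25,   # pull networks, dividers, protection
    "margin": 10,          # reserved headroom
}

total, ok = check_budget(ecu_sleep_budget_ua, always_on)
print(f"always-on total = {total} µA, fits budget: {ok}")
```

The same check applies one level up (ECU budgets against the vehicle key-off budget), which is what ties each always-on block to a named owner.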
Diagram · ECU power domains and what stays alive in sleep
[Diagram summary: VBAT splits into an always-on rail (wake logic, flags/latches, filter/threshold, minimal retention, bus PHY/SBC sleep/standby control and wake-reason flags) and a switched rail (MCU domain, off in sleep, on after wake); the bus harness (CAN/LIN/FlexRay), local wake pins/sensors, and timed wake (RTC/schedule) feed the wake request.]

Reading tip: always compare low-power numbers only after confirming state, enabled wake sources, and boundary (component-only vs system-at-VBAT).

Not here (avoid scope overlap)
  • Selective-wake filter-table/protocol details → use the “Selective Wake / Partial Networking” page.
  • CAN FD/SIC/XL waveform, bit timing, loop delay → use each PHY-specific page.
  • Termination/CMC/TVS/layout for EMC → use the “EMC / Protection & Co-Design” page.
H2-2 Wake-source attribution: paths → observation points → decision logic → log schema

Power-state taxonomy & wake paths (bus / local / timed)

Reliable low-power design requires wake-source attribution: every wake event must be classified as bus/local/timed with evidence, so false wakes can be reduced and service diagnostics remain credible.

This section answers
  • How to define bus/local/timed wake paths in a way that survives real harness noise and power disturbances.
  • Which observation points are mandatory (pins, flags, counters, power telemetry) to avoid mis-attribution.
  • A practical attribution order and evidence rules that produce a serviceable “wake black-box”.

1) Wake path taxonomy (minimal definitions)

Bus wake: triggered by bus activity or wake frames
  • Evidence type: bus activity counters, bus-wake flags, transceiver wake status.
  • Typical confusion: noise-induced activity vs true wake frames; boundary requires evidence + timing.
Local wake: triggered by ECU-local pins or sensors
  • Evidence type: pin snapshot, interrupt reason, debounce-window hit.
  • Typical confusion: floating pins, leakage paths, or mis-wired inputs masquerading as wake signals.
Timed wake: triggered by the RTC or a scheduled policy
  • Evidence type: RTC alarm flag, scheduler record, known periodic tasks.
  • Typical confusion: timed wake coinciding with bus activity; attribution must prioritize evidence ordering.

2) Observation points (mandatory signals to log)

Mis-attribution is a common root cause of “unfixable” false-wake problems. Evidence must come from four layers: pins, flags, counters, and power telemetry.

Pins (hardware evidence)
  • WAKE / INT: capture logic level and edge timing (pre/post wake window).
  • EN / INH: record state transitions that could self-trigger wake.
Flags (reason + state)
  • Sleep entry/exit: confirm the system truly reached the target state.
  • Wake reason: bus/local/timed (or ambiguous) classification flags.
Counters (activity + rate)
  • Bus activity counter: proves whether bus toggling preceded the wake.
  • Wake count: supports false-wake rate statistics over a fixed time window.
Power telemetry (disturbance context)
  • VBAT min / dip: helps separate “power disturbance” from “true wake request”.
  • Temp snapshot: prevents corner-case drift from being misread as random behavior.

3) Attribution decision order (evidence-first logic)

  1. Check timed wake first: if an RTC alarm/scheduler record exists in the wake window, classify as timed.
  2. Then verify local wake: if a stable pin/interrupt reason is present and matches debounce rules, classify as local.
  3. Then validate bus wake: require bus activity evidence that precedes the wake flag and exceeds the activity threshold.
  4. If multiple sources appear: classify as ambiguous and preserve evidence fields; do not force a single cause.
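The four-step decision order above can be sketched as a small classifier. This is an illustrative Python model, not firmware; the evidence field names (`rtc_alarm_in_window`, `pin_stable_through_debounce`, and so on) are hypothetical stand-ins for the flags and counters a real black-box would log. Note that all sources are evaluated so that step 4 (ambiguous) can override the single-source ordering.

```python
# Sketch of the evidence-first attribution order (field names are illustrative).

def attribute_wake(evidence):
    """Classify a wake event as 'timed', 'local', 'bus', 'ambiguous', or 'unknown'."""
    hits = []
    if evidence.get("rtc_alarm_in_window"):           # step 1: timed evidence
        hits.append("timed")
    if evidence.get("pin_stable_through_debounce"):   # step 2: local evidence
        hits.append("local")
    # step 3: bus wake needs activity that precedes the wake flag AND exceeds threshold
    if (evidence.get("bus_activity_count", 0) > evidence.get("activity_threshold", 0)
            and evidence.get("activity_precedes_wake_flag")):
        hits.append("bus")
    if len(hits) == 1:
        return hits[0]
    if len(hits) > 1:
        return "ambiguous"    # step 4: never force a single cause
    return "unknown"

print(attribute_wake({"rtc_alarm_in_window": True}))  # -> timed
print(attribute_wake({"pin_stable_through_debounce": True,
                      "bus_activity_count": 12, "activity_threshold": 5,
                      "activity_precedes_wake_flag": True}))  # -> ambiguous
```

The key design choice mirrors the text: "unknown" and "ambiguous" are valid outputs, so incomplete evidence is preserved instead of being forced into a plausible-looking cause.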
Wake path checklist (minimum record set)
Bus wake
  • Must record: bus activity counter + bus-wake flag + timestamp.
  • Also record: VBAT min around wake window.
  • Pass criteria: activity evidence precedes wake and exceeds threshold X (placeholder).
Local wake
  • Must record: WAKE/INT pin snapshot + debounce result + timestamp.
  • Also record: input configuration state (pull-ups/threshold mode).
  • Pass criteria: pin evidence stable through debounce window Y (placeholder).
Timed wake
  • Must record: RTC alarm flag / scheduler record + timestamp.
  • Also record: policy identifier (which periodic task triggered).
  • Pass criteria: timed evidence exists within window Z (placeholder).
Diagram · Wake-source attribution flow (bus/local/timed) and log fields
[Diagram summary: a wake-event time window feeds three evidence lanes (timed: RTC flag / scheduler record; local: pin snapshot / debounce / INT reason; bus: activity counter / wake-flag ordering) into a decision step (Timed → Local → Bus, else Ambiguous) and a log record (SRC, TS, EVD, VBAT).]

Reading tip: classify by evidence ordering (what happened first) and preserve “ambiguous” cases to prevent misleading conclusions.

Not here (avoid scope overlap)
  • Selective-wake filter-table and protocol configuration details → use the “Selective Wake / Partial Networking” page.
  • Noise/EMC network design for reducing wake susceptibility → use the “EMC / Protection & Co-Design” page.
H2-3 Scope guard: decode conditions → comparable fields (no brands/models)

Spec decoding: Sleep Iq and standby current (how to compare apples-to-apples)

Low-power numbers are only meaningful when the test envelope is known: VBAT, temperature, enabled wake sources, mode, external loads, and whether the current is measured component-only or system-at-VBAT.

This section answers
  • What a datasheet “Sleep Iq” actually includes (and what it can quietly exclude).
  • Which condition dimensions dominate the number (and which ones are most often missing).
  • How to convert a datasheet line into a comparable record using a fixed field template.

1) Why “Sleep Iq” is frequently not comparable

  • Different states: “Sleep”, “Standby”, and “Wake-ready” can keep different always-on blocks alive.
  • Different wake monitoring: enabling bus/local/timed monitoring often changes current more than expected.
  • Different boundaries: device-pin current vs ECU-at-VBAT current can differ by external networks and leakage paths.
  • Different external loads: termination, pull-ups, protection, and dividers may dominate system standby current.

2) Condition dimensions that must be stated

Core conditions
  • VBAT: nominal + min/max range used for validation.
  • Temperature: at least cold/room/hot corners (low-power drift is strongly temp-dependent).
  • Mode/state: Sleep / Standby / Wake-ready, plus any sub-mode naming used by the part.
  • Wake enabled: bus/local/timed monitoring on/off and filter mode name.
Boundary + external reality
  • Boundary: component-only OR system-at-VBAT (choose explicitly; do not mix).
  • Pins snapshot: EN/INH/WAKE/INT/TxD/RxD static levels and pull configuration.
  • Loads: termination, pull-ups, protection networks, sensing dividers, indicator paths.
  • Bus state: idle / dominant / recessive / unknown (only for comparability and evidence).

3) Min / Typ / Max: what matters for low-power decisions

  • Max is the budget-driving number: it protects key-off battery drain at VBAT and temperature corners.
  • Typ is suitable for early estimation only; it must not be used as a pass criterion for vehicle-level budgets.
  • Min is often a best-case under limited monitoring; verify that required wake sources remain enabled.
  • Any Min/Typ/Max must be attached to the full envelope (VBAT, temp, mode, wake enabled, loads, boundary).

4) “Sleep Iq comparison template” (copy-ready field list)

Use a fixed template to convert any datasheet statement into a comparable record. Missing fields mean the record is incomplete.

Identity
  • State: Sleep / Standby / Wake-ready
  • Mode name: vendor sub-mode label (if any)
  • Boundary: component-only OR system-at-VBAT
Conditions
  • VBAT: X (min/max)
  • Temp: cold / room / hot
  • Wake enabled: bus / local / timed + filter mode name
  • Bus state: idle / dominant / recessive
Pins & loads
  • Pins: EN / INH / WAKE / INT / TxD / RxD snapshot
  • Loads: terminations, pull-ups, protection, dividers, leakage paths
  • Entry criteria: flag/timer/rails settled definition
Must-log
  • Sleep entry/exit: state confirmation evidence
  • Wake reason flags: if wake occurs during measurement window
  • VBAT min & temp: snapshots around the window
  • Timestamp: to align with wake-source attribution
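As a minimal sketch of the "missing fields mean the record is incomplete" rule, the template can be treated as a required-field set and every datasheet line checked against it. Python model; the field names follow the template above, and the example values are hypothetical.

```python
# Sketch: treat every low-power number as a record and flag incomplete ones.

REQUIRED_FIELDS = {
    "state", "boundary", "vbat", "temp", "wake_enabled", "bus_state", "loads",
}

def missing_fields(record):
    """Return the set of required fields absent from a record."""
    return REQUIRED_FIELDS - record.keys()

record_a = {  # fully specified -> comparable
    "state": "Sleep", "boundary": "system-at-VBAT", "vbat": "12 V (9-16 V)",
    "temp": "cold/room/hot", "wake_enabled": ["bus"], "bus_state": "idle",
    "loads": "termination + pull-ups",
}
record_b = {"state": "Sleep", "vbat": "12 V"}  # typical datasheet shorthand

print(missing_fields(record_a))           # empty set -> comparable
print(sorted(missing_fields(record_b)))   # incomplete -> not comparable
```

In practice this check belongs in whatever tooling aggregates the comparison table, so an incomplete record can never be ranked against complete ones.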
Diagram · Datasheet → Comparable Spec (field extraction map)
[Diagram summary: a datasheet Sleep Iq statement (VBAT/temp, mode/wake) passes through a field extractor (state, boundary, VBAT, temp, mode, wake, pins, loads) and is normalized into a comparable record.]

Reading tip: treat every low-power number as a record (state + boundary + conditions). Records with missing fields are not comparable.

Not here (avoid scope overlap)
  • Brand/model comparisons and part recommendations.
  • Protocol filter-table details for selective wake → use the “Selective Wake / Partial Networking” page.
H2-4 Quantified trade-off: sensitivity ↔ false-wake rate (no ISO 11898-6 table details)

Selective-wake metrics: sensitivity vs false-wake rate

Selective wake quality is not a single feature. It is a set of measurable metrics that balance wake sensitivity, false-wake rate, and wake latency under real harness noise and power disturbances.

This section answers
  • How to define sensitivity and false-wake rate in a measurable, reportable way.
  • Which knobs (debounce, windows, filter strictness/capacity, latency) shape the trade-off.
  • A metric card format that supports apples-to-apples comparisons across designs.

1) Metric map (what “good selective wake” really means)

  • Sensitivity: the weakest eligible activity that reliably triggers wake under the defined envelope.
  • False-wake rate: unintended wakes per fixed time window (rate statistics), under a defined noise/disturbance profile.
  • Wake latency: time from eligible trigger to “system-ready” boundary (must meet system constraints).
  • Stability knobs: debounce rules, detection windowing, match strictness/capacity, and evidence ordering.

2) Control knobs (why improving one metric can degrade another)

Debounce
  • Increase debounce → lower false-wake rate, but higher wake latency.
  • Decrease debounce → higher sensitivity, but more noise-triggered wakes.
Detection window
  • Narrow window → fewer random hits, but weaker events may be missed.
  • Wider window → better capture, but increased accidental matches.
Filter strictness / capacity
  • Stricter match → lower false-wake rate, but reduced sensitivity.
  • More capacity → better targeting, but may increase always-on complexity and current.
Wake latency
  • Latency is a hard constraint: it must remain within system limits even when filters are strengthened.
  • Report latency to a clear boundary (trigger → system-ready), not a vague “wake time”.
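The debounce knob's trade-off can be shown with a toy model: a trigger is accepted only if it stays stable for at least the debounce time, so a longer debounce rejects more short noise pulses but adds exactly that much decision latency. Python sketch; pulse durations and debounce values are illustrative placeholders.

```python
# Toy model of the debounce trade-off (illustrative numbers only).

def accepted(pulse_ms, debounce_ms):
    """A pulse is accepted only if it remains stable for the full debounce window."""
    return pulse_ms >= debounce_ms

def added_latency_ms(debounce_ms):
    """The decision cannot be taken before the debounce window closes."""
    return debounce_ms

noise_pulses = [0.2, 0.5, 1.0]   # short transients (ms)
valid_pulse = 5.0                # a real wake event (ms)

for debounce in (0.1, 2.0):
    rejected = sum(not accepted(p, debounce) for p in noise_pulses)
    print(f"debounce={debounce} ms: noise rejected={rejected}/3, "
          f"valid accepted={accepted(valid_pulse, debounce)}, "
          f"latency +{added_latency_ms(debounce)} ms")
```

With 0.1 ms debounce everything passes (high sensitivity, noise-triggered wakes); with 2.0 ms all three noise pulses are rejected while the valid event still wakes, at the cost of 2 ms added latency.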

3) Metric definition cards (how to measure / report / compare)

Use a consistent card format per metric. The goal is comparability: identical envelope, identical rate units, and preserved evidence.

Metric · Sensitivity
  • Definition: weakest eligible trigger that wakes reliably under the defined envelope.
  • How to measure: sweep trigger strength while holding envelope constant (VBAT, temp, mode, monitoring).
  • How to report: threshold at which wake succeeds ≥ X% over N trials (placeholders).
  • Decision use: must meet the weakest required wake scenario without excessive false wakes.
Metric · False-wake rate
  • Definition: unintended wakes per fixed time window under a defined disturbance profile.
  • How to measure: long-run logging in target states with known noise/disturbance conditions.
  • How to report: wakes/hour or wakes/day + percentile bands; preserve ambiguous attribution.
  • Decision use: battery-protection metric; usually a hard constraint in key-off budget planning.
Metric · Debounce rule
  • Definition: minimum stability requirement before a trigger is accepted as eligible.
  • How to measure: inject short transients and confirm rejection; inject valid events and confirm acceptance.
  • How to report: debounce time/logic + acceptance/rejection rates under fixed envelope.
  • Decision use: primary lever to reduce noise-induced wake at the cost of added latency.
Metric · Detection window
  • Definition: observation window during which evidence is accumulated for a wake decision.
  • How to measure: vary window size and evaluate sensitivity and false-wake rate under the same envelope.
  • How to report: window parameter + sensitivity threshold + false-wake rate in one record.
  • Decision use: balances capturing weak valid events vs rejecting random disturbances.
Metric · Match strictness / capacity
  • Definition: how selective the eligible trigger definition is, and how many target patterns are supported.
  • How to measure: evaluate sensitivity and false-wake rate as strictness and capacity change.
  • How to report: strictness label + capacity label + sensitivity + false-wake rate (same envelope).
  • Decision use: targets specific wake needs while controlling unintended wakes and always-on cost.
Metric · Wake latency
  • Definition: trigger accepted → system-ready boundary.
  • How to measure: timestamp trigger acceptance and system-ready assertion across envelope corners.
  • How to report: max/percentiles + boundary definition (do not report vague averages).
  • Decision use: ensures filtering does not violate response requirements for the application.
Diagram · Sensitivity ↔ False-wake trade-off (concept map)
[Diagram summary: sensitivity plotted against false-wake rate, with an acceptable region (low false wakes, high sensitivity) reached via the control knobs: debounce, window, strictness, latency.]

Reading tip: treat false-wake rate as a rate statistic (fixed window, fixed envelope). Use knobs to reach the acceptable region while meeting latency constraints.

Not here (avoid scope overlap)
  • ISO 11898-6 selective-wake filter-table entries and configuration details → use the “Selective Wake / Partial Networking” page.
  • EMC network/layout methods to reduce wake susceptibility → use the “EMC / Protection & Co-Design” page.
H2-5 Goal: answer “who woke it up?” with an audit-ready evidence chain

Wake-event attribution: logging fields and “black-box” design

Wake attribution is an evidence chain, not a single flag. A minimal black-box schema should capture time, wake reason, bus activity, pin states, VBAT dips, and temperature so service teams can answer who woke the ECU with repeatable proof.

This section delivers
  • A fixed attribution taxonomy: bus / local / timed / unknown (unknown is a valid, controlled outcome).
  • A minimal black-box field list that supports service-level root attribution.
  • Logging frequency and retention guidance that respects NVM endurance.

1) Attribution taxonomy (controlled categories)

Bus wake

Evidence must include bus activity counters and a time-aligned snapshot around the wake edge.

Local wake

Evidence must include a pin snapshot (WAKE/INH/EN/INT and key GPIOs) captured as early as possible.

Timed wake

Evidence must include an RTC/timer marker and a windowed record confirming schedule-driven wake.

Unknown (allowed)

Used when evidence is incomplete (reset-before-capture, VBAT brownout, overwritten buffer). It prevents false certainty.

2) Wake attribution schema (field list + how to log)

The schema is designed to be auditable. Every wake record must include a minimum set of fields so attribution is reproducible.

MUST · Minimum black-box fields
  • tstamp: monotonic time (and optional wall-clock) for ordering and correlation.
  • wake_src: bus / local / timed / unknown + subcode enum.
  • bus_cnt: bus activity counters (and optional error counters) for the sleep window.
  • pin_snap: WAKE / INH / EN / INT / TxD / RxD + key GPIO level snapshot.
  • vbat_min: minimum VBAT observed around the window (captures dips/brownouts).
  • temp: temperature snapshot for corner explanation and drift correlation.
SHOULD · Fields that increase confidence
  • rst_reason: reset/brownout reason register for evidence-chain breaks.
  • sleep_entry_ok: proof of sleep entry (state machine marker + rails settled).
  • lastN: compact history of last N wake records to detect patterns.
  • gw_hint: gateway forwarded hint (upstream port/tag) for remote-wake correlation.
When to log
  • On wake ISR: capture pin snapshot + wake flags immediately (“early capture”).
  • On sleep entry: store entry marker and baseline counters.
  • Periodic in sleep: update VBAT min and counter deltas at a low rate (endurance-aware).
  • On reset: store reset reason and last known state if possible.
Retention & storage strategy
  • Ring buffer first: store events in RAM (depth N) to avoid NVM write storms.
  • Batch commit: write to NVM on controlled triggers (M events / T time / critical class).
  • Cooldown: avoid duplicate writes from repeated wake edges during the same episode.
  • Service export: define a minimal readout format (record list + schema version).
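The buffer-before-NVM strategy can be sketched as a small host-side model. This is Python for illustration only; on target this logic lives in C in the wake path, and the `committed` list stands in for an NVM commit. Depth, batch size, and cooldown are placeholders.

```python
# Sketch of ring buffer + cooldown + batch commit (endurance-aware storage model).
from collections import deque

class WakeBlackBox:
    def __init__(self, depth=8, batch=4, cooldown=2.0):
        self.ring = deque(maxlen=depth)  # RAM ring buffer, depth N
        self.batch = batch               # commit every M events
        self.cooldown = cooldown         # seconds; merges repeated wake edges
        self.committed = []              # stands in for durable NVM storage
        self._last_t = None

    def record(self, t, wake_src):
        """Record one wake event; suppress duplicates within the cooldown."""
        if self._last_t is not None and (t - self._last_t) < self.cooldown:
            return False                 # same episode: no extra write
        self._last_t = t
        self.ring.append({"tstamp": t, "wake_src": wake_src})
        if len(self.ring) >= self.batch:
            self.committed.extend(self.ring)  # controlled batch commit
            self.ring.clear()
        return True

bb = WakeBlackBox()
for t, src in [(0.0, "bus"), (0.5, "bus"),      # 0.5 s apart -> one episode
               (3.0, "local"), (6.0, "timed"), (9.0, "bus")]:
    bb.record(t, src)
print(len(bb.committed), len(bb.ring))
```

The cooldown absorbs repeated interrupts from one episode before they ever reach the buffer, and the batch threshold is what keeps NVM write counts bounded.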
Diagram · Event recording pipeline (flags → ISR → buffer → NVM → service readout)
[Diagram summary: PHY/SBC wake flags → MCU ISR early capture (pin snapshot) → RAM ring buffer (N events) → batched NVM commit → service readout; noted risks: reset before capture, NVM write limits.]

Implementation rule: the first capture point must happen in the ISR, and the first durable storage must be endurance-aware (buffer before NVM).

Not here (avoid scope overlap)
  • Security gateway policy and authorization flows (only log fields are covered here).
  • Selective-wake filter table configuration details (covered in the Partial Networking page).
H2-6 Goal: reduce always-on domain size (system-level), not only device Iq

System architecture patterns for ultra-low Iq

Ultra-low key-off current is achieved by minimizing the always-on domain and designing a clear role split between SBC/PHY and the MCU domain. A lower-Iq transceiver rarely fixes a system that keeps unnecessary rails alive.

This section delivers
  • A design lens: always-on minimization before “lowest Iq part”.
  • Three architecture patterns: PHY-only, SBC-based, Gateway-managed.
  • A selection checklist: when SBC is needed, when PHY+PMIC is enough, and when a gateway becomes mandatory.

1) Always-on domain minimization (the first-order driver)

  • Define always-on: rails and blocks that remain powered during key-off sleep.
  • Separate necessary vs accidental: wake monitoring is necessary; residual rails and hidden loads are accidental.
  • Track external burden: pull networks, dividers, indicator paths, and leakage can dominate system sleep current.
  • Proof-based design: every always-on block must justify its role in wake and be visible in wake logs.

2) Role split and pin strategy (INH / EN / WAKE)

SBC / PHY side (always-on)
  • Maintain wake monitoring (bus/local/timed) with the smallest required footprint.
  • Expose wake reason flags and bus activity evidence to the MCU at wake entry.
  • Control INH and EN as policy outputs (avoid half-awake states).
MCU domain (switched)
  • Stay off or in deep sleep until a qualified wake event is raised.
  • On wake, capture pin snapshot and evidence immediately (aligns with the attribution schema).
  • After handling the event, return to the smallest stable state (avoid unplanned rail retention).

3) Architecture selection checklist (system-level decision rules)

PHY + PMIC is often enough when
  • wake sources are simple and attribution does not require upstream correlation.
  • the ECU can keep always-on blocks minimal without complex policy logic.
  • service needs are satisfied by local black-box logs only.
SBC-based design is preferred when
  • power policy, watchdog/reset behavior, and wake aggregation must be standardized.
  • INH/EN sequencing must be robust across faults and brownouts.
  • the always-on domain requires a central controller to avoid drift and leakage growth.
Gateway-managed design is needed when
  • multiple buses and domains require coordinated wake and attribution (remote vs local).
  • upstream event hints must be preserved for service correlation.
  • power partitioning is mandatory across domains to meet fleet key-off budgets.
Diagram · Three architecture patterns (always-on highlighted)
[Diagram summary: Pattern A, PHY-only (always-on PHY monitor; switched MCU and ECU loads, off in sleep); Pattern B, SBC-based (always-on SBC policy + PHY monitor driving INH/EN to a switched MCU domain); Pattern C, gateway-managed (always-on gateway attribution across multi-domain bus links).]

Design rule: choose the pattern that minimizes always-on blocks while preserving required wake monitoring and service-grade attribution evidence.

Not here (avoid scope overlap)
  • Device-internal SBC block diagrams and vendor-specific power trees.
  • Security gateway policy; only the architecture role split and evidence preservation are covered.
H2-7 Goal: turn “low-power” into repeatable measurements and defensible statistics

Measurement & validation: measure µA and prove false-wake rate

Low-power claims must be backed by a repeatable test chain: a µA measurement setup that does not perturb the DUT, a strict sleep-entry qualification, controlled wake injections, and a statistics plan that defines windows, deduplication, and environmental variables.

This section delivers
  • A µA measurement chain checklist (static accuracy + dynamic capture without hidden perturbations).
  • A three-layer sleep-entry qualification method (logic + power + current evidence).
  • A false-wake rate experiment design (windowing, event dedup, sample size, and variable logging).
  • A validation plan template: Test matrix and Pass criteria placeholders.

1) µA measurement chain (accuracy without perturbation)

µA-level current measurement fails most often due to range switching artifacts, shunt burden voltage, and insufficient capture of entry transients. The chain must be specified as an experiment, not a screenshot.

Core chain decisions
  • Sensing method: shunt-based vs power analyzer mode (choose by static µA vs dynamic waveform needs).
  • Burden control: ensure the sensing element does not change DUT behavior (entry to sleep must remain valid).
  • Sampling strategy: capture both steady-state sleep and the sleep-entry transient window.
  • Ground discipline: avoid unintended return loops that bias µA readings (single-point return for measurement).
Common failure modes to guard
  • Auto-ranging steps create false “current spikes” or hide short events.
  • Shunt drop shifts VBAT and changes sleep qualification or wake sensitivity.
  • Over-averaging smooths out periodic wake attempts and underestimates false-wake behavior.
  • Cable/contact drift adds slow baseline movement that looks like temperature dependence.

2) Sleep-entry qualification (logic + power + current)

A sleep current number is valid only after a strict sleep-entry confirmation. Use three layers of evidence so “not actually asleep” cannot slip into the dataset.

Evidence Layer A · Logic

Record the sleep-state marker (state machine / status flag) and the timestamp used as the window start.

Evidence Layer B · Power

Confirm the intended rails are switched off (or reach the expected low state). Avoid “half-awake” rails.

Evidence Layer C · Current

Accept the Iq sample only after the current settles into a stable band for a defined dwell time (pass window placeholder).

Acceptance rule

If any evidence layer is missing, the segment is excluded from comparison and cannot be used for “spec compliance”.
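The three-layer acceptance rule can be sketched as a single predicate: layers A and B are boolean evidence, and layer C requires the current trace to settle inside a band for a dwell time. Python sketch; the band, dwell length, and trace values are placeholders for the real pass window.

```python
# Sketch of the A/B/C sleep-entry acceptance rule (thresholds are placeholders).

def sleep_entry_ok(logic_flag, rails_off, current_trace_ua,
                   band_ua=(0, 100), dwell_samples=5):
    """Accept a sleep Iq segment only if logic, power, and current all pass."""
    if not (logic_flag and rails_off):       # layer A (logic) and layer B (power)
        return False
    lo, hi = band_ua
    stable = 0
    for sample in current_trace_ua:          # layer C: settled band + dwell time
        stable = stable + 1 if lo <= sample <= hi else 0
        if stable >= dwell_samples:
            return True
    return False

# Entry transient decays into the band and stays there long enough:
trace = [850, 400, 120, 90, 80, 78, 77, 77, 76]
print(sleep_entry_ok(True, True, trace))     # True: all three layers satisfied
print(sleep_entry_ok(True, False, trace))    # False: a rail is still alive
```

Segments that return False are excluded from the dataset entirely, which is what keeps "not actually asleep" samples out of spec-compliance claims.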

3) Proving false-wake rate (windowing, dedup, and variables)

False-wake rate is a statistical metric. It requires a defined observation window, event deduplication rules, and explicit environment and configuration logging.

Define the observation window
  • Start time: only after sleep-entry confirmation (Layer A/B/C).
  • End time: fixed window per run or until a controlled wake injection occurs.
  • Exclude periods with known maintenance or scripted wakes.
Deduplicate wake episodes
  • Create an event-id to merge repeated interrupts from the same wake episode.
  • Apply a cooldown time before counting the next event (placeholder).
  • Store the chosen rule in the report to keep datasets comparable.
Log variables every run
  • VBAT (and VBAT min), Temp, software/hardware version, harness setup.
  • Mode and wake-source configuration, bus activity baseline.
  • Export the same black-box fields used for attribution (wake_src / pin_snap / bus_cnt).
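The windowing and deduplication rules above can be sketched in a few lines: raw interrupts closer together than the cooldown merge into one episode, and the episode count is normalized to a per-24 h rate. Python sketch; timestamps, cooldown, and the window length are placeholders.

```python
# Sketch: merge raw wake interrupts into episodes, then report a per-24 h rate.

def dedup_episodes(timestamps_s, cooldown_s=10.0):
    """Merge interrupts closer than `cooldown_s` to the episode start into one episode."""
    episodes = []
    for t in sorted(timestamps_s):
        if not episodes or t - episodes[-1] >= cooldown_s:
            episodes.append(t)
    return episodes

def rate_per_24h(n_events, window_s):
    """Normalize an event count in a window to events per 24 hours."""
    return n_events * 86400.0 / window_s

# Three bursts of interrupts observed during a 12 h window (seconds):
raw = [100.0, 100.2, 100.9, 5000.0, 5003.0, 40000.0]
episodes = dedup_episodes(raw)
print(len(episodes), rate_per_24h(len(episodes), 12 * 3600))
```

Storing the chosen cooldown in the report, as the text requires, is what keeps two datasets with different dedup rules from being compared as if they were the same metric.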

4) Validation plan template (Test matrix + Pass criteria placeholders)

Test matrix (template)
  • VBAT × Temp × Mode × Wake source
  • Fixed run metadata: software version, hardware revision, harness configuration, bus load state.
  • Window definition: sleep-entry dwell + observation time (placeholders).
Pass criteria (placeholders)
  • Sleep Iq window: ≤ X µA (placeholder) under defined VBAT/Temp/Mode.
  • Sleep-entry time: ≤ X s (placeholder) with evidence A/B/C satisfied.
  • False-wake rate: ≤ X per 24 h (placeholder) under defined environment variables.
  • Attribution coverage: ≥ X% (placeholder) wake events classified (bus/local/timed) with evidence retained.
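The test matrix in the template is a plain Cartesian product, so every run can carry explicit coordinates. Minimal Python sketch; the axis labels are placeholders for the project's actual VBAT/temp/mode/wake values.

```python
# Sketch: generate VBAT × Temp × Mode × Wake-source matrix coordinates.
from itertools import product

VBAT = ["min", "nom", "max"]
TEMP = ["cold", "room", "hot"]
MODE = ["sleep", "standby"]
WAKE = ["bus", "local", "timed"]

matrix = [
    {"vbat": v, "temp": t, "mode": m, "wake": w}
    for v, t, m, w in product(VBAT, TEMP, MODE, WAKE)
]
print(len(matrix))   # 3 * 3 * 2 * 3 = 54 runs
print(matrix[0])
```

Attaching these coordinates (plus the fixed run metadata) to every dataset is what makes the pass criteria auditable run by run.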
Diagram · Test bench block diagram (VBAT → measurement → DUT → bus stimulus → logger)
[Diagram summary: VBAT source → current measurement (option A/B) → DUT (SBC/PHY/MCU), with bus stimulus and a temperature chamber, scripted from a control PC and exported to a logger; repeatable chain: supply → measure → qualify sleep → inject wakes → export evidence.]

Reporting rule: every dataset must include the test matrix coordinates and the sleep-entry evidence (logic, power, current).

Not here (avoid scope overlap)
  • EMC instrument selection and chamber setup; only low-power measurement and statistics definitions are covered.
  • Selective-wake filter configuration details; only controlled wake injection and proof metrics are covered.
H2-8 Goal: carry low-power behavior from lab to production with consistency gates

Bring-up & production gates (engineering checklist)

Low-power must be validated through three gates: Design, Bring-up, and Production. Each checklist item must specify verification method, required log fields, and a pass criteria placeholder so results remain consistent across teams and builds.

Gate structure (fixed)
  • Every item defines: entry conditions → verification method → log fields → pass criteria (X placeholders).
  • Main threads in every gate: sleep entry proof, Iq window, attribution consistency, false-wake sampling.

1) Design gate (architecture-level traps blocked early)

Checklist items
Always-on domain minimized
  • Method: rail/domain review + sleep-state mapping
  • Log fields: sleep_state, rail_state, pin_snap
  • Pass: ≤ X always-on blocks (placeholder)
INH/EN/WAKE policy avoids half-awake states
  • Method: state-machine review + fault-injection plan
  • Log fields: pin_snap, wake_src, rst_reason
  • Pass: no undefined intermediate state (placeholder)
Attribution schema implemented (service-grade)
  • Method: schema review + export-format validation
  • Log fields: tstamp, wake_src, bus_cnt, vbat_min, temp
  • Pass: ≥ X% events classified (placeholder)

2) Bring-up gate (sleep, wake, and attribution proven on real hardware)

Sleep-entry qualification is strict
  • Method: verify logic + power + current evidence in every run
  • Log fields: sleep_state, rail_state, iq_window, tstamp
  • Pass: entry time ≤ X s; stable band ≥ X s (placeholders)
Controlled wake injection (bus/local/timed)
  • Method: scripted injections with traceable markers
  • Log fields: wake_src, bus_cnt, pin_snap, injection_tag
  • Pass: 100% of injected wakes correctly attributed (placeholder)
Iq window and false-wake baseline recorded
  • Method: run observation windows with event-dedup rules
  • Log fields: event_id, window_start, window_end, vbat_min, temp
  • Pass: Iq ≤ X µA; false-wake ≤ X/24 h (placeholders)

3) Production gate (sampling, regression, and drift control)

Fixed-condition Iq sampling
Method: standardized setup, same matrix coordinates per build
Log fields: vbatt, temp, mode, iq_window, version
Pass: within window X µA; drift ≤ X% (placeholders)
False-wake spot checks with defined window
Method: short-run windowing with dedup + environment logging
Log fields: event_id, wake_src, bus_cnt, vbat_min, temp
Pass: ≤ X per window; unknown rate ≤ X% (placeholders)
Schema/version control prevents regression
Method: export schema versioned; scripts and thresholds tracked
Log fields: schema_ver, sw_ver, hw_rev, fixture_rev
Pass: traceability complete for every batch (placeholder)
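The drift rule above ("within window X µA; drift ≤ X%") reduces to a small check that each build's script can run. A minimal sketch, assuming placeholder thresholds and a per-build list of Iq samples in µA; the median is used here to resist single-sample outliers, but that choice should be fixed in the gate record:

```python
# Hypothetical production drift check: compare a build's sample median
# against the reference baseline; thresholds are X-placeholders.
def iq_drift_ok(baseline_ua, samples_ua, max_drift_pct):
    """Return (ok, drift_pct): ok is True when the median is within budget."""
    s = sorted(samples_ua)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    drift_pct = abs(median - baseline_ua) / baseline_ua * 100.0
    return drift_pct <= max_drift_pct, drift_pct
```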
Diagram · Gate flow (Design → Bring-up → Production) with inputs/evidence/outputs
Gate rule: every item must define method + log fields + pass criteria (X placeholders); each gate lists its inputs, evidence fields, and outputs, with feedback from Production back to Design.

Consistency rule: the same evidence fields and schema version must be used in lab, bring-up, and production sampling.

Not here (avoid scope overlap)
  • Material part numbers; this checklist is process- and evidence-driven.
  • EMC/ESD test hardware and chamber procedures; only low-power and false-wake consistency gates are covered.
H2-9 Strict low-power lens: map real in-vehicle scenes to wake sources and architecture classes

Application patterns (strictly low-power lens)

“Low-power” becomes actionable only after placing it back into real vehicle scenes. Each domain differs in node count, wake frequency, false-wake risk, and attribution complexity. This section maps scenes to the right metrics, typical wake sources, and recommended architecture classes (concept-level).

This section delivers
  • A scene mapping table: Scene → Key metrics → Typical wake sources → Recommended architecture.
  • A 4-quadrant scene map: wake frequency vs attribution complexity.
  • Practical do/don’t boundaries: no bus electrical details and no cross-domain topics.

Scene → metrics → wake sources → recommended architecture (concept-level)

Body / Comfort Many nodes • frequent wakes • high false-wake cost
Key metrics (priority)

False-wake rate (window-defined) • wake-source attribution coverage • sleep Iq (comparable conditions)

Typical wake sources

BUS (gateway activity / forwarded events) • LOCAL (switches / handles) • TIMED (health checks)

Recommended architecture class

Gateway-managed wake policy + node-level ultra-low Iq, with black-box attribution fields retained.

Powertrain / HV Isolation domains • correctness-first wake
Key metrics (priority)

Wake correctness & attribution reliability • isolation domain consistency • sleep Iq (defined mode)

Typical wake sources

LOCAL (safety chain / contactor events) • BUS (domain wake) • TIMED (periodic monitoring)

Recommended architecture class

Isolated transceiver class with clear wake/flag hooks; minimize always-on blocks across HV domains.

TCU / Diagnostics Remote wakes • service-grade attribution
Key metrics (priority)

Attribution coverage & traceability • wake latency consistency • false-wake rate baseline

Typical wake sources

BUS (remote wake forwarded by gateway) • LOCAL (service pin / ignition) • TIMED (scheduled check-in)

Recommended architecture class

Black-box logging pipeline retained through sleep; schema versioned for service readout.

Sensor / Actuator nodes Ultra-low Iq • occasional wake • tight budgets
Key metrics (priority)

Sleep Iq (comparable conditions) • wake correctness • spot-check false-wake rate

Typical wake sources

LOCAL (sensor event / pin) • TIMED (rare polling) • BUS (only if policy requires)

Recommended architecture class

Minimum always-on footprint; clear local wake path; validation windows defined for false-wake sampling.

Diagram · 4-quadrant scene map (wake frequency vs attribution complexity)
Axes: wake frequency (low → high) vs attribution complexity (low → high); quadrants hold Powertrain/HV, TCU/Diagnostics, Sensor/Actuator, and Body/Comfort. Indicators: node dots, a target-Iq (µA) tag, and wake chips (BUS/LOCAL/TIMED) highlighting each quadrant's typical sources.

Use the map to choose which metrics dominate: high wake frequency increases false-wake cost; high attribution complexity demands stronger logging hooks.

Not here (scope guard)
  • No bus electrical waveform design, termination details, or EMC hardware procedures.
  • No selective-wake filter table configuration; only wake-source categories and measurable outcomes.
  • No cross-domain topics (e.g., Ethernet/DoIP) and no part numbers.
H2-10 Decision tree + scorecard (no part numbers): PHY vs SBC vs isolated vs PN-capable

IC selection logic (PHY / SBC / isolated / PN-capable) — decision tree

Selection should not start from a typical “Sleep Iq” number. It should start from hard constraints (isolation, wake policy), then converge via attribution hooks and system power integration. Use the decision tree to select a device category, then use the scorecard fields to compare candidates under comparable conditions.

This section delivers
  • A one-page decision tree (If/Then) to pick the category (not a model).
  • A scorecard field list to compare apples-to-apples: Iq conditions, false-wake form, wake latency, flags, INH, temperature.

Scorecard fields (compare apples-to-apples)

A
Sleep Iq comparability (required)
  • Mode definition: Sleep / Standby / PN-ready (must be stated).
  • VBAT & Temp: test points and range corner placeholders.
  • Wake-enabled set: which sources are armed during measurement.
  • Bus state: idle / biased / active baseline.
  • Pin states: INH/EN/WAKE and any strap pins captured.
  • Loads: termination/pull elements present or not (concept-only).
B
Selective-wake outcome fields
  • Sensitivity form: describe weakest bus activity/frame condition that wakes (template form).
  • False-wake form: X per time-window (window definition required).
  • Wake latency: defined from wake event to system-usable state (placeholder).
  • Debounce/windowing: support for stable outcomes (do not list filter rules here).
C
Wake attribution hooks
  • Wake source flags: bus/local/timed distinguishable or not.
  • Bus activity counters: counters/timestamps available for correlation.
  • Pin snapshot support: capture key wake-related pins at event time.
  • Timestamp availability: RTC/monotonic time retention across sleep.
  • Export readiness: ISR → buffer → NVM → service readout (concept-level).
D
System integration & constraints
  • Isolation requirement: domain separation needed (yes/no) and why (concept).
  • SBC integration: LDO / watchdog / reset / wake policy integration needed (yes/no).
  • INH capability: system power enable/hold management required (yes/no).
  • Temperature class: required range and corners for validation plan placeholders.
  • Recovery behavior: predictable wake/recovery state expectations (template form).
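The scorecard's comparability requirement can be enforced mechanically: refuse to rank two "Sleep Iq" numbers until every condition field matches. A minimal sketch with illustrative field names (the real template should carry the full condition set from group A):

```python
# Hypothetical comparability guard: two Iq numbers are rankable only when
# all measurement-condition fields are identical. Field names illustrative.
COMPARABILITY_FIELDS = ("mode", "vbat_v", "temp_c", "wake_enabled",
                        "bus_state", "pin_states")

def mismatched_fields(a, b):
    """a, b: dicts of measurement conditions; returns differing field names."""
    return [f for f in COMPARABILITY_FIELDS if a.get(f) != b.get(f)]

def rank_iq(a, b):
    """Return 'a' or 'b' (lower Iq wins), or raise if not comparable."""
    diff = mismatched_fields(a, b)
    if diff:
        raise ValueError(f"not comparable, differing fields: {diff}")
    return "a" if a["iq_ua"] < b["iq_ua"] else "b"
```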
Diagram · One-page decision tree (goal → constraints → recommended category)
Start from the goal (Iq budget • wake policy • attribution needs), then branch on hard constraints: Isolation required? PN wake needed? Need SBC? Strong attribution? The YES/NO branches land on PHY-only, SBC-based, Isolated, or PN-capable; then compare candidates with the scorecard fields.

Rule: do not compare “Sleep Iq” unless the measurement mode, VBAT/Temp, wake-enabled set, bus state, and pin states are all stated.

Not here (scope guard)
  • No device model recommendations and no part numbers.
  • No cross-domain content (e.g., Ethernet) and no PN filter table configuration details.
  • No bus electrical design or EMC hardware procedures; only category logic and measurable comparison fields.
H2-11 Engineering-style pitfalls: symptom → first check → fix → pass criteria

Design hooks & pitfalls (within low-power boundary)

This section closes the most common low-power failure modes using a repeatable troubleshooting structure. The focus stays on sleep entry, Iq stability, false-wake control, and wake-source attribution integrity. If noise/EMC is suspected, treat it as a possible trigger and hand off detailed mitigation to the EMC co-design page.

Scope guard (kept intentionally narrow)
  • No bus waveform/termination/CMC/TVS layout tutorial here; only how such factors show up as symptoms.
  • No selective-wake filter table configuration details; only measurable outcomes and logging hooks.
  • Part numbers are listed as example BOM items; equivalents are acceptable and grade/qualification must be verified.

Pitfalls checklist (fixed 4-line troubleshooting format)

Sleep entry Policy / flags / pending IRQ

Symptom: Sleep request is issued, but the system never reaches the intended sleep state.

First check: Confirm “sleep entry conditions” are met: pending IRQ/flags cleared, wake-enabled set consistent, bus baseline truly idle (counter window defined).

Fix: Create a single “sleep gate” function that snapshots pin states + flags, clears/acknowledges required flags, and only then transitions state.

Pass criteria: Sleep entry success ≥ X% within Y seconds across (VBAT, Temp, mode template) with logs proving “no pending flag/IRQ”.
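The "sleep gate" fix can be sketched as a single entry point. This is a hypothetical Python model of the sequence (the HAL accessors `read_pins`, `pending_flags`, `ack_flag`, `enter_sleep` are illustrative stand-ins, not a real API): snapshot first, acknowledge flags, re-check for late arrivals, and only then transition.

```python
# Hypothetical single "sleep gate": snapshot pins + flags, clear/ack flags,
# re-check pending state, and only then transition to sleep.
def sleep_gate(hal, log):
    snapshot = {"pin_snap": hal.read_pins(), "flags": sorted(hal.pending_flags())}
    log.append({"event": "sleep_request", **snapshot})
    for flag in snapshot["flags"]:
        hal.ack_flag(flag)                  # clear/acknowledge before transition
    if hal.pending_flags():                 # re-check: a new flag may have landed
        log.append({"event": "sleep_abort",
                    "flags": sorted(hal.pending_flags())})
        return False
    hal.enter_sleep()
    log.append({"event": "sleep_enter", "pin_snap": hal.read_pins()})
    return True
```

The log entries produced here are exactly the evidence the pass criteria asks for: proof that no pending flag/IRQ existed at the moment of transition.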

Sleep entry Floating pins / undefined states

Symptom: Sleep entry is intermittent (works sometimes, fails sometimes) without a clear software pattern.

First check: Check for undefined pin states (WAKE/EN/INH/strap pins) and peripheral rails not fully off; verify pulls exist and are not “floating by design”.

Fix: Add explicit pull resistors and define a safe default pin state; gate external peripherals via a dedicated load switch.

Pass criteria: Sleep entry becomes deterministic: no pin state changes at rest; entry success ≥ X% over N cycles.

Example BOM
  • Pull resistors: Yageo RC0603FR-0710KL, Vishay CRCW060310K0FKEA
  • Load switch (peripheral rail gating): TI TPS22918, TI TPS22965
Sleep entry Vehicle-only failure / bus not actually idle

Symptom: Sleep entry works on bench, but fails on real harness or in-vehicle integration.

First check: Verify bus baseline: define an “idle window” and confirm counters show no periodic traffic (gateway heartbeats/timed events can keep the node awake).

Fix: Align system sleep policy with gateway wake policy; separate “bus-keepalive” from “node awake” so the always-on domain remains minimal.

Pass criteria: Bus baseline is documented and repeatable; node sleep entry meets X µA budget with bus idle confirmed over Y minutes.

Iq stability Periodic wake / timers still active

Symptom: Sleep Iq shows periodic spikes at a regular interval.

First check: Correlate spikes with timed-wake sources (RTC, periodic diagnostics, watchdog servicing); verify time base and windowing are consistent.

Fix: Remove or re-schedule periodic activities into an explicit “timed wake budget”; ensure the system returns to a documented baseline after each event.

Pass criteria: Iq time series stays within (baseline ± X) for Y hours, with timed-wake events explicitly accounted for.

Example BOM
  • RTC crystal (if used): Epson FC-135, Abracon ABS07
  • Supervisor/reset (avoid reset/wake loops): TI TPS3839, Maxim MAX16054
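The "timed wake budget" idea turns into a concrete check: exclude samples inside declared timed-wake windows, then verify the remainder stays inside the baseline band. A minimal sketch with placeholder units (seconds, µA):

```python
# Hypothetical Iq-band check: samples inside declared timed-wake windows are
# budgeted events; everything else must stay within (baseline ± band).
def iq_band_violations(trace, baseline_ua, band_ua, timed_windows):
    """trace: list of (t_s, iq_ua); timed_windows: list of (start_s, end_s)."""
    def in_timed_window(t):
        return any(s <= t <= e for s, e in timed_windows)
    return [(t, iq) for t, iq in trace
            if not in_timed_window(t) and abs(iq - baseline_ua) > band_ua]
```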
Iq stability Does not return to baseline after wake

Symptom: After a wake event, the node returns “functionally” but sleep Iq remains elevated.

First check: Confirm wake flags are cleared and peripheral rails are actually shut off; audit INH/EN policy for unintended hold states.

Fix: Implement a “return-to-sleep baseline” sequence: clear flags → disable peripherals → gate rails → snapshot final state.

Pass criteria: After any wake source, Iq returns to baseline within X seconds and stays within (baseline ± Y) for Z minutes.
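The fix's value lies in making the order a single documented sequence rather than scattered calls. A hypothetical sketch (step names are illustrative HAL stand-ins) that fails loudly if any step is missing:

```python
# Hypothetical fixed "return-to-sleep baseline" sequence; the order itself
# is the contract, recorded in the log for later audit.
RETURN_TO_SLEEP_ORDER = ("clear_flags", "disable_peripherals",
                         "gate_rails", "snapshot_state")

def return_to_sleep(hal, log):
    for step in RETURN_TO_SLEEP_ORDER:
        getattr(hal, step)()   # AttributeError if a required step is missing
        log.append(step)
    return log[-len(RETURN_TO_SLEEP_ORDER):] == list(RETURN_TO_SLEEP_ORDER)
```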

False wake Threshold / debounce / environment

Symptom: False wakes increase under certain environments (temperature, dry air, harness state).

First check: Confirm the false-wake metric definition: window length, denominator, and logged environment fields (VBAT min, Temp, bus counters).

Fix: Define a stable debounce/windowing policy and validate it statistically; if noise is suspected as a trigger, reference the EMC co-design page for mitigation.

Pass criteria: False-wake ≤ X per Y hours across the full environment matrix, with wake-source attribution coverage ≥ Z%.

Example BOM
  • Debounce RC caps (examples): Murata GRM155R71H104KE14D, Murata GRM188R71C104KA01D
  • Series resistor (examples): Yageo RC0402FR-07100KL, Vishay CRCW0402100KFKEA
Low-power budget External leakage dominates sleep current

Symptom: Measured sleep Iq is far above expectation even when the IC’s internal mode appears correct.

First check: Partition the budget: isolate leakage contributors (external clamps, indicator loads, pull networks, sensor modules) by powering/gating segments one at a time.

Fix: Replace high-leakage protection or loads with low-leak options and ensure peripheral rails are truly off during sleep.

Pass criteria: Sleep Iq ≤ X µA under the comparable-condition template, and each “external leakage bucket” is ≤ Y µA.

Example BOM
  • CAN ESD/TVS examples (verify leakage/grade): Nexperia PESD1CAN, Nexperia PESD2CANFD, Littelfuse SM24CANB
  • Rail gating to eliminate module leakage: TI TPS22918, TI TPS22965
Attribution BUS vs LOCAL confusion (sampling point)

Symptom: Logs report BUS wake, but field evidence suggests LOCAL wake (or vice versa).

First check: Check the sampling point order: which is captured first—pin snapshot, bus counter, or status flags? Mis-ordered sampling often flips attribution.

Fix: Standardize an “event capture contract”: snapshot pins + flags + counters in one ISR path with a single timestamp source.

Pass criteria: Attribution matches injected ground-truth wakes ≥ X% across N tests, with consistent event ordering in logs.
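The "event capture contract" can be modeled as one function that samples everything in a single pass under one timestamp, so attribution cannot flip with sampling order. A hypothetical sketch (HAL accessors and the BUS/LOCAL/TIMED decision rule are illustrative, not a real device API):

```python
# Hypothetical event capture contract: one pass, one timestamp, versioned.
CONTRACT_VERSION = 1

def capture_wake_event(hal, now_fn):
    tstamp = now_fn()                      # single canonical timestamp, taken first
    record = {
        "contract_ver": CONTRACT_VERSION,  # tag the contract version in the log
        "tstamp": tstamp,
        "pin_snap": hal.read_pins(),       # pins before any flag clearing
        "flags": sorted(hal.pending_flags()),
        "bus_cnt": hal.bus_activity_count(),
    }
    # illustrative attribution rule: bus activity wins, then local pin, else timed
    record["wake_src"] = ("BUS" if record["bus_cnt"] > 0
                          else "LOCAL" if record["pin_snap"].get("WAKE") else "TIMED")
    return record
```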

Attribution Timestamp domain mismatch

Symptom: Two tools/vehicles disagree on “who woke the ECU” even though both read the same event logs.

First check: Validate time domain: RTC vs monotonic, unit scaling (ms/us), and whether counters are latched at the same event boundary.

Fix: Use one canonical timestamp in the schema; include “time-base id” and define counter reset/latch timing.

Pass criteria: Cross-tool replay reproduces identical attribution on the same dataset ≥ X% with explicit time-base tagging.

Example BOM
  • Event log retention memory (examples): Infineon CY15B104Q, Fujitsu MB85RC256V
  • SPI NOR (if used; verify qualification): Winbond W25Q64JV, Micron MT25QL128
Logging Missing wake source in service readout

Symptom: A wake occurs, but service readout shows “unknown wake source” or missing event record.

First check: Check ring buffer overflow and commit policy: power-loss window, NVM write frequency, and whether events are dropped under burst wakes.

Fix: Add a two-tier pipeline (fast RAM ring + durable checkpoint) and version the schema so decoding cannot silently fail.

Pass criteria: Event capture loss ≤ X per Y wakes under stress injection; “unknown wake source” rate ≤ Z%.
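The two-tier pipeline is easy to get subtly wrong: the fast tier must drop oldest entries deterministically and count the drops, or the loss metric above cannot be computed. A minimal sketch with placeholder sizes and a list standing in for NVM:

```python
# Hypothetical two-tier event pipeline: bounded RAM ring (drops oldest,
# counts drops) plus a checkpoint that drains to durable storage.
from collections import deque

class EventPipeline:
    def __init__(self, ring_size, nvm):
        self.ring = deque(maxlen=ring_size)  # fast tier: overwrites oldest
        self.nvm = nvm                       # durable tier (list stand-in)
        self.dropped = 0

    def record(self, event):
        if len(self.ring) == self.ring.maxlen:
            self.dropped += 1                # make silent overwrites countable
        self.ring.append(event)

    def checkpoint(self):
        while self.ring:
            self.nvm.append(self.ring.popleft())
```

The `dropped` counter is what feeds the "event loss ≤ X per Y wakes" pass criterion under stress injection.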

Measurement hook µA measurement lies (resolution / burden)

Symptom: Sleep Iq appears inconsistent across benches, or changes after adding measurement equipment.

First check: Validate measurement burden and sampling: shunt value, instrument range, integration time; ensure sleep entry has completed before recording.

Fix: Add a dedicated measurement footprint (Kelvin shunt + test points) and define a standard measurement recipe used by all benches.

Pass criteria: Bench-to-bench correlation within X% for the same DUT and condition template; no observable measurement-induced mode changes.

Example BOM
  • Shunt resistor (Kelvin; examples): Vishay WSL2512R1000FEA, Bourns CSS2H-2512R-0L100F
  • Current-sense monitor (examples): TI INA226, TI INA219
Consolidated example BOM (equivalents OK)

These are common reference parts used to implement low-power hooks (pulls, gating, logging, measurement). Final selection must verify qualification, temperature range, and leakage budgets.

Pulls / bias
  • Yageo RC0603FR-0710KL (10 kΩ)
  • Vishay CRCW060310K0FKEA (10 kΩ)
  • Yageo RC0402FR-07100KL (100 kΩ)
Debounce / small RC
  • Murata GRM155R71H104KE14D (0.1 µF)
  • Murata GRM188R71C104KA01D (0.1 µF)
  • Vishay CRCW0402100KFKEA (100 kΩ)
Rail gating / always-on minimization
  • TI TPS22918 (load switch)
  • TI TPS22965 (load switch)
  • TI LM66100 (ideal diode / ORing, if needed)
Log retention / service readout
  • Infineon CY15B104Q (F-RAM)
  • Fujitsu MB85RC256V (F-RAM)
  • Winbond W25Q64JV (SPI NOR, if used)
Measurement hooks
  • Vishay WSL2512R1000FEA (0.1 Ω shunt)
  • Bourns CSS2H-2512R-0L100F (0.1 Ω shunt)
  • TI INA226 / TI INA219 (current monitor)
Protection items (leakage-aware)
  • Nexperia PESD1CAN (ESD)
  • Nexperia PESD2CANFD (ESD)
  • Littelfuse SM24CANB (TVS)
Diagram · Symptom → attribution checkpoints → action (kept simple, fault-tree-like)
Flow: Symptom → Attribution checkpoint → Action. Examples: "sleep entry fails / stuck in standby" → flags pending? → define sleep gate (clear + snapshot); "Iq spikes, periodic or sticky-high" → window + baseline valid? → partition the budget and isolate leakage; "false-wake rate increases" → snapshot valid? → validate debounce policy statistically; "attribution mismatch BUS vs LOCAL" → timestamp aligned? → enforce the schema contract with a single time base. Keywords to enforce: window • baseline • snapshot • timestamp.

The flow intentionally enforces four invariants (window/baseline/snapshot/timestamp) before any root-cause conclusion is accepted.

H2-12 FAQs · Long-tail troubleshooting only (no scope expansion)

FAQs (Sleep Iq · False wake · Attribution · Wake latency · Measurement traps)

These FAQs intentionally close troubleshooting long tails within the low-power boundary. Each answer is a fixed, data-oriented 4-line structure with measurable pass criteria placeholders. For EMC mitigation details, reference the EMC co-design page.

Scope guard
  • Only: Sleep Iq comparability, false-wake metrics, wake-source attribution, wake latency definition, measurement artifacts.
  • Not covered here: detailed EMC networks/layout or selective-wake filter table configuration.
  • Pass criteria uses placeholders (X, Y, Z, N) and must be filled with program-specific thresholds and test matrices.
Datasheet sleep Iq looks low, but measured Iq is high — definition mismatch or not truly in sleep?

Likely cause: Non-comparable conditions (VBAT/Temp/mode/wake-enabled/bus state) or the system never reaches the documented sleep baseline.

Quick check: Fill a “Sleep Iq template” and verify: (1) sleep-entry complete flag set, (2) wake sources armed as intended, (3) bus idle window defined and truly idle.

Fix: Standardize one comparable-condition recipe and gate measurement only after sleep baseline is confirmed; partition external leakage buckets (pulls/clamps/modules) one-by-one.

Pass criteria: Sleep Iq ≤ X µA under template {VBAT, Temp, mode, wake-enabled, bus state}; bench-to-bench delta ≤ Y% over N repeats.

Sleep Iq is fine on bench, but higher in-vehicle — what is the first accounting check?

Likely cause: Always-on domain held by system policy (gateway keepalive/timed wakes) or additional harness-connected leakage not present on bench.

Quick check: Compare “bus baseline” counters and timed-wake events between bench and vehicle using the same time window; log VBAT-min and temperature for context.

Fix: Separate “network activity allowed” from “node awake”; minimize always-on blocks and explicitly budget timed wakes; isolate added leakage by power-gating segments.

Pass criteria: In-vehicle sleep Iq meets budget ≤ X µA with documented bus-idle window ≥ Y minutes and timed-wake count ≤ Z per hour.

Sleep entry is intermittent — floating WAKE/EN/INH pins or undefined defaults?

Likely cause: Undefined pin states or peripheral rails not truly off cause occasional wake/standby holds or prevent consistent sleep gating.

Quick check: Capture a pin snapshot at sleep request and at “expected sleep”: WAKE/EN/INH/strap pins must be stable; verify no unexpected IRQ/flags remain pending.

Fix: Add explicit pull networks and define a single sleep gate sequence (snapshot → clear/ack flags → disable peripherals → gate rails → verify baseline).

Pass criteria: Sleep entry success ≥ X% across N cycles with no pin state flips at rest; residual pending-flag count = 0 at sleep baseline.

Lowering wake threshold reduces missed wakes but increases false wakes — redefine the window or change the threshold first?

Likely cause: The false-wake metric is not normalized (window/denominator differs), making threshold changes look worse than they are.

Quick check: Freeze a single metric definition: false-wake count per fixed Y hours, with identical environment fields and bus baseline windowing.

Fix: Standardize window/denominator first, then sweep threshold and debounce as one controlled experiment; track wake latency and missed-wake rate concurrently.

Pass criteria: False-wake ≤ X per Y hours and missed-wake ≤ Z per Y hours across {VBAT×Temp×harness}; latency ≤ L ms for the same configuration.
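Freezing the denominator means normalizing every run to one reference window before comparing threshold settings. A minimal sketch, assuming a placeholder 24-hour reference window:

```python
# Hypothetical normalization: false wakes per fixed reference window, so
# runs of different lengths share one denominator before comparison.
REF_WINDOW_H = 24.0  # placeholder reference window, hours

def false_wake_rate(count, observed_hours):
    """False wakes per REF_WINDOW_H, from a run of arbitrary length."""
    if observed_hours <= 0:
        raise ValueError("observation window must be positive")
    return count * REF_WINDOW_H / observed_hours
```

With this in place, a threshold sweep compares like with like: 3 events in a 12-hour run and 6 events in a 24-hour run are the same rate.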

False wakes rise in dry/cold conditions — missing environment fields or a real trigger?

Likely cause: Environmental correlation is not logged (VBAT-min/Temp), or the debounce/window policy is too permissive under shifted noise/baseline conditions.

Quick check: Verify each wake record includes {timestamp, Temp, VBAT-min, bus activity counter, pin snapshot}; re-compute false-wake per fixed window.

Fix: Add missing environment fields to the schema and validate false-wake statistically across the environment matrix; for suspected noise triggers, reference EMC co-design for mitigation.

Pass criteria: False-wake ≤ X per Y hours at each {Temp bin, VBAT bin}; attribution coverage ≥ Z% with complete fields (no missing Temp/VBAT-min).

Logs say BUS wake but field evidence suggests LOCAL — sampling order or counter-window mismatch?

Likely cause: Attribution flips due to mis-ordered sampling (pins/flags/counters captured at different boundaries) or inconsistent counter windows.

Quick check: Compare event capture ordering and window definitions across firmware builds; verify whether pin snapshot is taken before or after bus activity latching.

Fix: Define an “event capture contract”: snapshot pins + latch flags + read counters + stamp time in one ISR path; tag the contract version in the log.

Pass criteria: Attribution matches injected ground truth ≥ X% over N wakes; event ordering is identical across builds (contract version constant).

Two tools disagree on the same dataset — timestamp domain/unit mismatch?

Likely cause: Mixed time bases (RTC vs monotonic) or inconsistent unit scaling (ms/us) causes different ordering or windowing during replay.

Quick check: Confirm each event record includes {time-base id, unit, timestamp}; verify counters are latched/reset at the same boundary used by both tools.

Fix: Use one canonical timestamp for attribution; version the schema and provide explicit unit conversions in the service decoder.

Pass criteria: Cross-tool replay yields identical attribution ≥ X% on N logs; zero unit/time-base ambiguity in decoded reports.

Service readout shows “unknown wake source” — buffer loss or schema/version mismatch?

Likely cause: Wake events are dropped (ring buffer overflow/commit window) or the decoder cannot parse the event schema version used in the field.

Quick check: Stress-inject M wakes and verify event count continuity; confirm schema version field is present and matches the service decoder’s supported versions.

Fix: Implement a 2-tier pipeline (fast RAM ring + durable checkpoint) and enforce schema versioning with backward-compatible decoding.

Pass criteria: Event loss ≤ X per Y injected wakes; “unknown wake source” rate ≤ Z% over N field logs; schema decode success = 100%.

Wake latency numbers don’t match across teams — t0/t1/t2 definition not standardized?

Likely cause: Different latency definitions are being compared (e.g., event detected t0, MCU ISR t1, application-ready t2).

Quick check: Require each latency report to state (t0, t1, t2) and time base; split by wake path (bus/local/timed) before comparing.

Fix: Adopt a single latency schema: {t0 detected, t1 ISR, t2 ready} with instrumentation points and uniform timestamp domain.

Pass criteria: For each path, latency percentile P99 ≤ X ms and mean ≤ Y ms across N trials, with explicit (t0,t1,t2) reporting.
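Once every sample carries explicit (t0, t1, t2) and a path tag, per-path percentiles are mechanical. A minimal sketch using the nearest-rank method (a placeholder choice; the program should fix one percentile definition in the schema):

```python
# Hypothetical per-path latency percentile on t2 - t0, nearest-rank method.
import math

def latency_percentile(samples, path, p):
    """samples: list of dicts {path, t0, t1, t2}; p in (0, 100]."""
    lat = sorted(s["t2"] - s["t0"] for s in samples if s["path"] == path)
    if not lat:
        raise ValueError(f"no samples for path {path}")
    rank = min(len(lat), max(1, math.ceil(p / 100.0 * len(lat))))
    return lat[rank - 1]
```

Splitting by path before computing (as the quick check requires) prevents a fast BUS population from masking a slow LOCAL tail.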

BUS wake is fast but LOCAL wake is slow — ISR path issue or power-domain resume order?

Likely cause: Local wake path traverses additional rails/peripherals or uses a different interrupt chain, delaying t1/t2 even if t0 is timely.

Quick check: Instrument t0/t1/t2 for both paths; identify which segment dominates (ISR latency vs rail ramp vs initialization).

Fix: Optimize the dominant segment only: shorten ISR chain, defer non-critical initialization, and ensure the always-on domain contains only required wake logic.

Pass criteria: LOCAL wake P99(t2−t0) ≤ X ms and BUS wake P99(t2−t0) ≤ Y ms over N trials, with segment breakdown logged.

Switching instruments changes measured Iq — burden/range/integration artifact?

Likely cause: Measurement burden voltage or auto-ranging/integration settings perturb the DUT or distort µA-level readings.

Quick check: Record burden voltage and sampling/integration time; repeat using the same sleep-entry confirmation point and a fixed measurement recipe.

Fix: Standardize a measurement setup (Kelvin shunt footprint + defined ranges) and forbid recipe changes without correlation tests.

Pass criteria: Tool-to-tool correlation within X% under identical template conditions; burden voltage ≤ Y mV at sleep baseline.

Iq “spikes” appear but no wake is captured — is the sleep-entry detection criterion wrong, or is the logging window too short?

Likely cause: The system is not actually in the sleep baseline when sampling starts, or the logging window misses short wake/flag pulses.

Quick check: Add a “sleep-baseline reached” marker and extend the capture window; compare current trace timestamps to ISR/event timestamps.

Fix: Gate measurement on the baseline marker, and implement a fast in-RAM event trace with periodic durable checkpoints to catch brief events.

Pass criteria: Spike attribution coverage ≥ X% over N spikes; no “unattributed spike” remains after window extension and baseline gating.