
SBC Integrating LIN: LIN + Watchdog/Reset + Low-Power Policy


A LIN SBC is the “policy engine” for body/comfort ECUs: it enforces low-power modes, reliable wake attribution, and deterministic watchdog/reset behavior as one evidence chain.

The goal is simple: reduce overnight drain without losing events—every wake and reset must be attributable, logged, and verifiable against clear pass criteria.

Definition & Where a LIN SBC Fits

Intent: Make it immediately clear that a LIN SBC is a system policy component: LIN connectivity plus power, wake, reset, and watchdog control.

Scope guard: This page focuses on low-power policy, wake attribution, reset/watchdog strategy, and diagnostics hooks. Electrical waveform/EMC details are intentionally not expanded here.

One-line definition: A LIN System Basis Chip (LIN SBC) combines a LIN interface with power-mode policy, wake management, and reset/watchdog supervision—so the ECU’s power and recovery behavior becomes repeatable and diagnosable.

Boundary vs. a standalone LIN transceiver: A standalone LIN transceiver mainly provides physical-layer connectivity. A LIN SBC is responsible for system-level guarantees: sleep modes, wake-source control, reset causes, and watchdog enforcement—the pieces that determine whether a body ECU is stable after months in the field.

Typical fit: Best suited for body/comfort domains with many small ECUs (doors, lighting, seats, HVAC actuators), where the dominant risks are quiescent current leakage, false wakes, and unclear reset causes—not bus bandwidth.

Practical takeaways

  • Think “policy enforcer.” The SBC makes power and wake behavior deterministic and testable.
  • Think “serviceability.” Wake history, fault counters, and reset reasons reduce “no trouble found.”
  • Think “scale.” Many small ECUs stay consistent when the same low-power and recovery policy is reused.
Figure: Body domain topology (LIN SBC nodes). Focus: sleep current • wake source • reset reason • logs.
System view: a LIN SBC anchors low-power policy, wake management, and recovery behavior inside each small body ECU node.

System Job-to-be-done: What Problems It Must Solve

Intent: Convert “why a LIN SBC is needed” into measurable requirements that drive selection and verification.

The highest field risk in body/comfort nodes is rarely LIN throughput. It is usually a combination of low-power leakage, false-wake patterns, and unclear reset/recovery causes.

1) Sleep current target (Iq budget)

  • Requirement: ECU sleep current ≤ X µA, measured after a stable window of ≥ Y seconds.
  • Why it fails in practice: fixture leakage, port protection leakage, and “not really asleep” periodic wake cycles.
  • What the SBC must provide: explicit mode entry/exit, wake gating, and a way to prove the node stayed in sleep.

2) Wake attribution (bus / local / timed)

  • Requirement: every wake must be attributed to a source: bus, local, or timed.
  • Field reality: “false wakes” drain the battery and are hard to reproduce without attribution and event history.
  • What the SBC must provide: latched wake reason + debounced wake inputs + rate limiting to reduce re-wake loops.

3) Reset chain integrity (brownout / watchdog / thermal)

  • Requirement: reset causes must be distinguishable (POR/BOD/WD/thermal/communication fault), not a single “generic reset.”
  • Why it matters: without cause, recovery becomes guesswork; intermittent faults become “no trouble found.”
  • What the SBC must provide: watchdog enforcement + reset output behavior + reset-reason registers that survive transitions.

4) Serviceability (history & counters)

  • Requirement: retain an event history of the last N wakes/resets plus fault counters (LIN errors, undervoltage hits).
  • Why it saves time: service can distinguish “environmental/installation” vs. “system design” vs. “software timing.”
  • What the SBC must provide: readable counters/log hooks that software can export to diagnostics without heavy overhead.

Requirement checklist to lock before selecting parts

  • Sleep budget: Iq ≤ X µA after Y seconds; define fixture/leakage control rules.
  • False wakes: define the metric (per hour/day), and require wake attribution for every event.
  • Reset causes: require readable reset reasons and define the “safe state” for key loads after reset.
  • Logs: define N-event history depth, which counters are mandatory, and how logs are exported.
Problem | What to measure | SBC hooks
Sleep current (Iq) | stable window; leakage µA over Y seconds; mode confirmation | sleep modes; wake gating
Wake attribution | bus / local / timed; event source + rate; false wakes / day | wake reason latch; debounce / filters
Reset chain | BOD / WD / thermal; reset reason stable; safe-state defaults | watchdog + reset out; reason registers
Serviceability | history + counters; N-event log depth; export path defined | fault counters; wake/reset history
Turn “why use a LIN SBC” into measurable requirements: define metrics, log paths, and which SBC hooks enforce policy.

Internal Blocks: Reference Architecture of a LIN SBC

Intent: Break down a LIN SBC into modules and signal paths so each block is understood by its system meaning: policy, wake control, recovery, and diagnostics.

Scope guard: This section focuses on functional paths (power, wake/policy, diagnostics). PHY waveform/EMC component details are not expanded here.

Three backbone paths (how the chip “runs the ECU”)

  • Power path: VBAT → pre-reg/LDO/Tracker → rails/loads, with mode gating and undervoltage supervision.
  • Wake/policy path: bus/local/timer wake sources → wake latch → mode controller → rail enable and sequencing.
  • Diagnostics path: monitors/counters → status/interrupt → host interface, enabling event history and serviceability.

Module roles (what each block must guarantee)

  • LDO/Tracker: stable rails and predictable brownout behavior across sleep ↔ wake transitions.
  • Wake logic: source attribution (bus/local/timed) and false-wake mitigation through latches and filters.
  • WD/Reset/Fail-safe: deterministic recovery and safe-state outputs after faults or software stalls.
  • Monitors/Counters: measurable evidence (events, hits, flags) to avoid “no trouble found.”

Key blocks and the system questions they answer

LDO / Tracker / Pre-reg

Ensures rails behave consistently during mode changes; exposes brownout reasons and rail-good evidence. The primary coupling point with low-power policy is mode gating and sequencing.

Wake inputs + Wake reason latch

Separates who woke the ECU (bus/local/timed) from what the software does next. A latched reason and a read/clear discipline are the foundation of “no lost events.”

Watchdog + Reset + Fail-safe outputs

Converts “software stuck” into a deterministic recovery path. Reset reason registers plus fail-safe defaults prevent ambiguous field behavior after resets.

LIN PHY (system-facing view)

The LIN pin is not only a communications port; it is also a wake channel and a potential false-wake source. The system focus is sleep ↔ wake behavior, not waveform tuning.

Host interface (SPI/I²C/Config)

Determines whether policy is consistent across variants. Readback, lock/commit, and configuration signatures reduce “same software, different behavior.”

ADC / Monitors / Counters (optional)

Adds evidence: undervoltage hits, thermal events, LIN error counters, wake history depth. These hooks are the fastest route to serviceability and reproducibility.

NVM / OTP options (if present)

Stabilizes default policy and traceability (configuration signature/version). Useful when production consistency matters more than runtime flexibility.

Bring-up mini checklist (architecture-level)

  • Config readback: read and log mode/wake/WD policy registers after power-up.
  • Mode loop test: run sleep → wake → sleep for X cycles; require stable Iq and consistent wake attribution.
  • Reset cause sanity: induce WD and undervoltage; ensure reset reasons are distinct and retained long enough to be logged.
  • Wake latch discipline: verify read/clear sequencing does not lose multi-source wake events.
  • Counter usefulness: confirm that error/wake counters move with injected faults and are exportable.
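
The first checklist item (config readback) can be sketched as a small verification routine. This is a minimal sketch under stated assumptions: the register addresses, masks, and the simulated register file are placeholders for illustration, not any specific SBC's register map or driver API.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical policy-register list to verify after power-up. Addresses,
 * expected values, and masks are placeholders, not a real register map. */
typedef struct {
    uint8_t addr;      /* register address                        */
    uint8_t expected;  /* value the policy config should contain  */
    uint8_t mask;      /* policy bits only (ignore reserved bits) */
} policy_reg_t;

/* Read hook: on real hardware this would be an SPI/I2C register read. */
typedef uint8_t (*reg_read_fn)(uint8_t addr);

/* Returns the number of mismatching registers so bring-up can log them. */
int verify_policy_config(const policy_reg_t *regs, size_t n, reg_read_fn rd)
{
    int mismatches = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t v = rd(regs[i].addr);
        if ((v & regs[i].mask) != (regs[i].expected & regs[i].mask))
            mismatches++;
    }
    return mismatches;
}

/* Tiny simulated register file so the sketch is self-contained. */
static uint8_t sim_regs[256];
static uint8_t sim_read(uint8_t addr) { return sim_regs[addr]; }
```

Logging the full readback (not just the mismatch count) at power-up is what makes “same software, different behavior” debuggable later.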
Figure: LIN SBC reference architecture (policy paths highlighted; power, wake/policy, and diagnostics backbones).
A LIN SBC should be read as three backbones: power delivery, wake/policy enforcement, and diagnostics evidence.

Power & Low-Power Policy: Sleep, Standby, and Mode Transitions

Intent: Define low-power behavior as a verifiable state machine: which modes exist, who can wake the ECU, and how events are retained.

The goal is not only low Iq, but predictable wake attribution, false-wake control, and no lost events across repeated sleep↔wake cycles.

Mode taxonomy (treat modes as a system contract)

  • Normal: communications and loads active; full logging and counters enabled.
  • Standby: partial rails kept for fast resume; only essential wake sources enabled.
  • Sleep: lowest Iq target; wake sources are strictly gated and debounced.
  • Forced-sleep: an extra-restrictive low-power state (e.g., transport/long-park or fault containment).
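
The mode taxonomy above can be captured as a data-driven whitelist rather than scattered conditionals. A minimal sketch, assuming illustrative mode and wake-source names; which sources each mode whitelists is a project policy decision, not a fixed rule:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative mode and wake-source encodings (not a device's API). */
typedef enum { SBC_NORMAL, SBC_STANDBY, SBC_SLEEP, SBC_FORCED_SLEEP,
               SBC_MODE_COUNT } sbc_mode_t;
enum { WAKE_BUS = 0x01, WAKE_LOCAL = 0x02, WAKE_TIMED = 0x04 };

/* Per-mode whitelist: which sources may bring the ECU out of each mode. */
static const uint8_t wake_whitelist[SBC_MODE_COUNT] = {
    [SBC_NORMAL]       = WAKE_BUS | WAKE_LOCAL | WAKE_TIMED, /* all on      */
    [SBC_STANDBY]      = WAKE_BUS | WAKE_LOCAL | WAKE_TIMED, /* essentials  */
    [SBC_SLEEP]        = WAKE_BUS | WAKE_LOCAL,              /* gated       */
    [SBC_FORCED_SLEEP] = WAKE_LOCAL,                         /* extra strict */
};

bool wake_allowed(sbc_mode_t m, uint8_t src)
{
    return (wake_whitelist[m] & src) != 0;
}
```

Keeping the whitelist as one table makes the mode contract reviewable and versionable, which is the point of treating modes as a system contract.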

Entry / exit triggers (define priorities explicitly)

  • Ignition/ACC: the primary mode authority for many body ECUs (sleep entry vs resume).
  • Bus activity: LIN wake patterns and valid frames (avoid treating noise as wake).
  • Timer: periodic self-check or gateway-coordinated timed wake windows.
  • Local wake pin: door handle/switch/sensor interrupts, with debounce and rate limiting.

False-wake control and “no lost events” discipline

False-wake control (layered)

  • Filter/pattern: require a valid bus wake pattern, not any edge.
  • Debounce: gate local wakes with time and level stability checks.
  • Rate limit/backoff: suppress repeated wakes in a short window to avoid re-wake loops.
  • Attribution + counters: record wake source and rate (X/day) to expose root causes.
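
The rate-limit/backoff layer can be sketched as a simple windowed counter. This is an illustrative sketch; the window length and per-window budget stand in for the X-per-window placeholders above:

```c
#include <stdint.h>
#include <stdbool.h>

/* Windowed rate limiter: suppress wakes that repeat too often in a short
 * window, breaking re-wake loops (thresholds are placeholders). */
typedef struct {
    uint32_t window_ms;     /* observation window            */
    uint32_t max_in_window; /* wakes allowed per window      */
    uint32_t window_start;  /* timestamp of current window   */
    uint32_t count;         /* wakes seen in current window  */
} rate_limit_t;

/* Returns true if the wake should be honored, false if suppressed.
 * Suppressed wakes should still be counted for attribution. */
bool rate_limit_accept(rate_limit_t *rl, uint32_t now_ms)
{
    if (now_ms - rl->window_start >= rl->window_ms) {
        rl->window_start = now_ms;   /* start a new window */
        rl->count = 0;
    }
    rl->count++;
    return rl->count <= rl->max_in_window;
}
```

Note the design choice: suppression stops the re-wake loop, but the counter still increments, so the false-wake rate remains visible in the evidence.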

No lost events (latch → log → clear)

  • Latch: wake reason must be latched in hardware across the transition.
  • Log: software must read and persist the reason (history depth N).
  • Clear: clear only after persistence; handle multi-source races deterministically.
  • Pass criteria: repeated sleep↔wake cycles maintain consistent attribution and complete history.
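
The latch → log → clear discipline can be sketched as one ordered routine. A minimal sketch, assuming a hypothetical latch register accessed through read/clear hooks and an N-deep ring buffer (the names and schema fields are illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

#define WAKE_HISTORY_N 8    /* history depth N (placeholder) */

typedef struct {
    uint32_t seq;
    uint8_t  source;        /* bus / local / timed bitmask from the latch */
    uint8_t  mode_before;
    uint8_t  mode_after;
} wake_event_t;

typedef struct {
    wake_event_t ring[WAKE_HISTORY_N];
    uint32_t next_seq;
    uint32_t head;          /* next slot to write */
} wake_log_t;

/* Latch -> log -> clear, in that order: the latch is cleared only after
 * the record is persisted, so a reset mid-sequence cannot lose the reason. */
bool log_wake(wake_log_t *lg, uint8_t (*read_latch)(void),
              void (*clear_latch)(void),
              uint8_t mode_before, uint8_t mode_after)
{
    uint8_t reason = read_latch();          /* 1. read latched reason   */
    if (reason == 0)
        return false;                       /* no pending event         */
    wake_event_t *e = &lg->ring[lg->head];  /* 2. persist record        */
    e->seq = lg->next_seq++;
    e->source = reason;
    e->mode_before = mode_before;
    e->mode_after = mode_after;
    lg->head = (lg->head + 1) % WAKE_HISTORY_N;
    clear_latch();                          /* 3. clear only after log  */
    return true;
}

/* Simulated latch so the sketch is self-contained. */
static uint8_t sim_latch;
static uint8_t sim_read_latch(void) { return sim_latch; }
static void sim_clear_latch(void)   { sim_latch = 0; }
```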

Verification gates (define before implementation)

  • Iq gate: sleep current ≤ X µA after Y seconds; measurement setup leakage controlled.
  • False wake gate: false wake rate ≤ X/day, with each wake attributed (bus/local/timed).
  • Event retention gate: wake reason and reset reasons persist long enough to be logged and exported.
  • Mode loop gate: sleep→wake→sleep repeated X cycles with stable distributions and no missing entries.
Figure: Low-power policy as a verifiable state machine (Normal / Standby / Sleep / Forced-sleep, with entry/exit triggers and the latch → log → clear retention path).
The low-power plan should be expressed as modes + triggers + retention discipline, so behavior remains consistent and measurable across repeated cycles.

Wake Sources & Attribution: Bus Wake, Local Wake, Timed Wake

Intent: Make the trade-off explicit: lower battery drain versus availability and serviceability. A wake policy is complete only when each wake is qualified, attributed, and logged without losing evidence.

Scope guard: This section covers system policy paths (filter/debounce/rate-limit, latch→log→clear). It does not expand PHY waveform tuning or EMC component details.

The wake “evidence chain” (policy, not edges)

  1. Trigger (bus/local/timed) →
  2. Qualify (filter, debounce, minimum window) →
  3. Latch reason (source attribution survives the transition) →
  4. Read → log → clear (persistent history, then safe clear discipline).

Bus wake (LIN activity / wake pattern)

  • Problem: noise and valid activity can look similar; treating any edge as wake inflates false-wake rate.
  • Qualify: require a valid pattern/header + minimum activity window (X ms) before declaring wake.
  • Stability: apply backoff/rate-limit after repeated triggers in a short window.
  • Service hook: counters for bus-wake hits and bus-error correlation; capture “last bus-wake” timestamp/sequence.

Local wake (switch / door handle / sensor interrupt)

  • Problem: bounce, sensor glitches, and aging can create sporadic triggers that drain the battery.
  • Qualify: debounce (time + level stability), optional threshold/hysteresis for analog monitors.
  • Containment: rate-limit per source (per pin/sensor) to prevent re-wake loops.
  • Service hook: per-source attribution (which pin/sensor), counts, and distribution over time.
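
The time-plus-level-stability debounce for local wakes can be sketched as follows. This is an illustrative sketch; the stable-time threshold stands in for a project-specific budget:

```c
#include <stdint.h>
#include <stdbool.h>

/* Debounce: a local wake pin must hold a stable asserted level for
 * t_stable_ms before it is qualified as a wake trigger (placeholder). */
typedef struct {
    uint32_t t_stable_ms;   /* required stable time    */
    uint32_t t_edge_ms;     /* timestamp of last edge  */
    bool     last_level;
} debounce_t;

/* Call on every sample; returns true once the asserted level has been
 * stable for the full window. Any edge restarts the window. */
bool debounce_qualified(debounce_t *d, bool level, uint32_t now_ms)
{
    if (level != d->last_level) {
        d->last_level = level;
        d->t_edge_ms = now_ms;
        return false;
    }
    return level && (now_ms - d->t_edge_ms >= d->t_stable_ms);
}
```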

Timed wake (periodic self-check / keep-alive / scheduled windows)

  • Problem: planned wake is easy to underestimate; the wrong interval silently dominates battery drain.
  • Qualify: define a wake window and “deferred work” rules (heavy tasks only inside windows).
  • Race rules: define attribution when timed wake coincides with bus/local triggers.
  • Service hook: planned-vs-actual execution counters and skip/fail reasons (policy traceability).

Attribution discipline (wake reason that can be trusted)

Read → log → clear (never reorder)

  • Latch survives transition: the reason must persist through the wake process.
  • Log before clear: persist reason + sequence + mode to history, then clear.
  • Multi-source race: define priority (configurable) or allow multi-tag logging.
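
The multi-source race rule can be made deterministic with a configurable priority table. A minimal sketch, using the example order Local > Bus > Timed; the raw bitmask can still be logged as multi-tag evidence alongside the primary attribution:

```c
#include <stdint.h>

/* Illustrative wake-source bits (same encoding as a latched reason). */
enum { WAKE_BUS = 0x01, WAKE_LOCAL = 0x02, WAKE_TIMED = 0x04,
       WAKE_NONE = 0x00 };

/* Resolve a multi-source latch into one primary attribution using a
 * configurable priority order; first match wins, so the outcome is
 * deterministic for any combination of simultaneous sources. */
uint8_t primary_wake_source(uint8_t latched, const uint8_t *priority, int n)
{
    for (int i = 0; i < n; i++)
        if (latched & priority[i])
            return priority[i];
    return WAKE_NONE;
}
```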

Policy KPIs (make trade-offs measurable)

  • Attribution completeness: ≥ X% of wakes have a valid source (bus/local/timed).
  • False-wake rate: ≤ X/day, tracked per source.
  • Lost-event count: ≤ X/day (events that woke the ECU but never appeared in history).

Verification gates (policy-level, repeatable)

  • Mode-loop: sleep↔wake repeated X cycles; wake source distribution stays stable.
  • Noise/provocation: stimulate bus/local glitches; filter + debounce + backoff prevents repeated wakes.
  • Race test: timed wake overlaps bus/local; attribution rules remain deterministic.
  • Pass criteria: Iq ≤ X µA after Y s, false-wake ≤ X/day, completeness ≥ X%.
Figure: Wake sources → qualify → latch → MCU log → clear (priority configurable; example: Local > Bus > Timed; Sleep/Standby enable only whitelisted sources).
Wake strategy is “source + qualification + attribution + persistence.” Without the latch→log→clear discipline, serviceability collapses even if Iq is low.

Watchdog & Reset Strategy: Window WD, Reset Causes, Fail-Safe Outputs

Intent: Turn watchdog/reset from “features” into a reliability contract: detect software stalls, reset deterministically, drive loads to safe states, and preserve reset evidence for service.

Window watchdog vs simple watchdog (selection logic)

  • Simple WD: detects “missing kick” only. Suitable when scheduling and clocking are stable and the main concern is total stalls.
  • Window WD: detects early, late, and missing kicks. Better for catching timing anomalies and “fake alive” loops that still kick.
  • Risk to manage: mode transitions and clock changes can shift kick timing; define a safe service window after wake or reset.
  • Pass criteria: no unintended resets across repeated sleep↔wake cycles under worst-case scheduling jitter.
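
The early/late/missing semantics of a window watchdog can be modeled in a few lines. This is a behavioral sketch (window times are placeholders for a real budget), not a specific device's watchdog interface:

```c
#include <stdint.h>

/* Window watchdog model: a kick is valid only inside [open, close] after
 * the last accepted service point; early, late, and missing kicks are
 * distinct faults with distinct evidence. */
typedef enum { WD_OK, WD_EARLY, WD_LATE } wd_verdict_t;

typedef struct {
    uint32_t open_ms;    /* window opens this long after last kick  */
    uint32_t close_ms;   /* window closes (timeout) after last kick */
    uint32_t last_kick;  /* timestamp of last accepted service      */
} window_wd_t;

wd_verdict_t wd_kick(window_wd_t *wd, uint32_t now_ms)
{
    uint32_t dt = now_ms - wd->last_kick;
    if (dt < wd->open_ms)  return WD_EARLY; /* "fake alive" fast loop    */
    if (dt > wd->close_ms) return WD_LATE;  /* stall / jitter violation  */
    wd->last_kick = now_ms;                 /* accepted: window restarts */
    return WD_OK;
}

/* Missing kick is detected by the supervisor side, not the kicker. */
int wd_timed_out(const window_wd_t *wd, uint32_t now_ms)
{
    return (now_ms - wd->last_kick) > wd->close_ms;
}
```

Note that an early or late kick does not move `last_kick`; only accepted services restart the window, which is what lets a too-fast “fake alive” loop accumulate into a fault.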

Reset causes (evidence that enables fast diagnosis)

  • POR: power-on reset path and initialization sequence control.
  • BOD/UV: undervoltage/brownout indicates supply/rail instability or load transients.
  • WD reset: software stall, scheduling violation, or policy misconfiguration (window too tight).
  • Thermal: overheating or overload conditions requiring containment and safe recovery.
  • LIN-fault recovery (typical): system-defined degradation/reset strategy triggered by repeated fault patterns (policy-level).
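
Distinguishable reset causes imply a decode step at startup. A minimal sketch with a hypothetical flag layout (bit assignments are placeholders, not a specific device's register); when several flags are latched, the most fundamental cause is reported first:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical latched reset-cause flags (placeholder bit layout). */
enum {
    RST_POR     = 0x01,
    RST_BOD     = 0x02,
    RST_WD      = 0x04,
    RST_THERMAL = 0x08,
    RST_LINFLT  = 0x10,
};

/* Decode into a distinct, loggable cause instead of a "generic reset". */
const char *reset_cause_name(uint8_t flags)
{
    if (flags & RST_POR)     return "power-on";
    if (flags & RST_BOD)     return "brownout/undervoltage";
    if (flags & RST_THERMAL) return "thermal";
    if (flags & RST_WD)      return "watchdog";
    if (flags & RST_LINFLT)  return "lin-fault-recovery";
    return "unknown";
}
```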

Fail-safe outputs (keep loads safe during reset and recovery)

Define default load states

  • During reset: critical loads must be forced to a known safe state (off/hold/tri-state per system design).
  • After reset: recovery sequencing should wait for rail stability and policy re-application.
  • Service hook: log both the reset reason and the post-reset mode so repeated patterns are visible.

Verification gates (fault injection)

  • WD injection: early/late/missing kick → distinct reset evidence.
  • UV injection: controlled VBAT dip → BOD evidence + safe outputs asserted.
  • Thermal injection: trigger containment → safe state + defined recovery behavior.
  • Pass criteria: reset reason read success ≥ X%, recovery time ≤ X ms, unintended reset rate ≤ X/day.
Figure: Window watchdog timing (allowed kick window open → close; early / late / missing kick → reset, reset reason latch, fail-safe load states).
Window watchdogs add timing semantics (early/late) to detect “fake alive” behavior, while reset reasons and fail-safe outputs preserve both safety and diagnosability.

LIN Integration Specifics (Kept On-Scope for SBC Integration)

Intent: Focus on system-level LIN behaviors unique to integrated SBCs: how LIN interacts with low-power modes, how the MCU controls/observes it (pins vs register control), and how fail-safe receive and bus-fault policy prevent “silent dead” ECUs.

Scope guard: PHY electrical tuning (slew rate, waveform margins, EMC component details) stays in the LIN Transceiver page. This section only covers policy paths and control/diagnostic interactions.

Control plane vs data plane (why LIN looks different inside an SBC)

  • Data plane: TX/RX traffic between MCU and LIN bus (payload and scheduling).
  • Control plane: wake detection, mode gating, fail-safe behavior, and status/IRQ evidence.
  • Key implication: in low-power modes, the control plane often remains active while the data plane is constrained by policy.

LIN vs low-power modes (policy table mindset)

  • Sleep: keep only whitelisted wake detection + reason latch; constrain active TX/RX to avoid battery drain and retry storms.
  • Standby: allow faster recovery and more monitoring while respecting an Iq budget (≤ X µA after Y s).
  • Normal: full communication allowed, but bus-fault policy must still protect against “stuck dominant” or repeated error loops.
  • Verification hook: repeated sleep↔wake cycles must preserve attribution and never lose the first post-wake evidence.

MCU interaction: TXD/RXD pins vs register-controlled LIN

  • Pin-driven (TXD/RXD): simpler bring-up, but mode transitions are sensitive to MCU pin states during reset/sleep entry.
  • Register-driven (SPI/I²C control): richer policy (filters, counters, locks), but requires strict initialization order.
  • Init discipline: read evidence (wake/reset) → apply policy config → enable data plane, preventing “awake but unmanaged” states.
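
The init discipline above is fundamentally an ordering constraint, which can be expressed as one routine. A minimal sketch, assuming hypothetical driver hooks (`read_wake_reason`, `apply_policy`, etc. are illustrative names, not a real SBC driver API):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hooks standing in for real SBC driver calls. */
typedef struct {
    uint8_t (*read_wake_reason)(void);
    uint8_t (*read_reset_cause)(void);
    bool    (*apply_policy)(void);      /* filters, counters, WD config  */
    void    (*enable_data_plane)(void); /* only after policy is in place */
} sbc_ops_t;

/* Evidence first, policy second, data plane last: capturing wake/reset
 * evidence before anything else means a failed policy step cannot lose
 * it, and enabling comms last prevents "awake but unmanaged" states. */
bool sbc_init(const sbc_ops_t *ops, uint8_t *wake_out, uint8_t *reset_out)
{
    *wake_out  = ops->read_wake_reason();   /* 1. capture evidence   */
    *reset_out = ops->read_reset_cause();
    if (!ops->apply_policy())               /* 2. configure policy   */
        return false;
    ops->enable_data_plane();               /* 3. data plane last    */
    return true;
}

/* Simulated hooks so the sketch is self-contained; sim_step records
 * the order in which the phases ran. */
static int sim_step;
static uint8_t sim_wake(void)  { return 0x01; }
static uint8_t sim_reset(void) { return 0x04; }
static bool sim_policy(void)   { sim_step = 1; return true; }
static void sim_enable(void)   { sim_step = 2; }
```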

Fail-safe receive and bus-fault policy (avoid “silent dead” ECUs)

Fail-safe receive (minimum visibility)

  • Goal: preserve wake detection and critical status even when MCU is resetting or stalled.
  • Evidence: wake reason latch and status flags must remain readable after recovery.
  • Containment: prevent uncontrolled TX behavior during reset or degraded states.

Bus fault response (system-level, policy-driven)

  • Symptoms to contain: retry storms, repeated wakes, and Iq oscillation.
  • Policy actions: inhibit TX, enter restricted mode, latch fault evidence, and notify MCU via status/IRQ.
  • Serviceability: counters/timeouts must persist long enough to be exported to DTC snapshots.

See also: PHY electrical layer details (slew control, waveform margins, EMC sensitivity) belong in the LIN Transceiver subpage. Keep this page focused on policy + evidence + recovery behavior.

Figure: LIN inside an SBC (data plane: MCU TX/RX ↔ LIN bus; control plane: wake latch, mode gating, fail-safe RX, status/IRQ, counters via SPI/I²C).
Integrated SBCs add a control plane: wake latching, mode gating, and fail-safe receive. This is where low-power policy and service evidence are enforced.

Diagnostics Hooks: Counters, Black-Box Logging, Serviceability

Intent: Provide a minimum service dataset that survives real-world intermittency: wake history, LIN error evidence, power/thermal events, and export hooks into DTC snapshots. This converts “sporadic” into reproducible evidence.

Scope guard: Only the data flow and mapping concept are covered here. Diagnostic protocol details are intentionally excluded.

Minimum Service Dataset (MSD): the fields that unlock fast root-cause

  • Wake event log: last N wakes (source + timestamp + mode before/after + sequence).
  • LIN counters/timeouts: error counts and link-health evidence aligned with mode.
  • Power/thermal evidence: brownout occurrences, thermal events, and reset-cause histogram.
  • DTC mapping hooks: which fields become DTC summary vs extended snapshot (no protocol expansion).

Wake event log (black-box ring buffer)

  • Schema: seq, timestamp, source (bus/local/timed), mode_before, mode_after.
  • Discipline: read wake reason latch → persist record → clear latch (no reordering).
  • Audit: completeness ≥ X% (every wake has a source) and depth N sufficient for field reproduction.

LIN error evidence (counters + timeouts, aligned to mode)

  • Counters: RX/TX errors, timeouts, and bus-fault flags (short/open indications).
  • Mode association: record whether evidence occurred in Sleep/Standby/Normal to prevent misinterpretation.
  • Snapshot rule: export counters at wake and before sleep entry to detect trends.
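
The snapshot rule reduces to diffing monotonic counters across a sleep cycle. A minimal sketch with illustrative counter fields (not a specific device's counter set):

```c
#include <stdint.h>

/* Illustrative LIN counter snapshot, taken at wake and at sleep entry. */
typedef struct {
    uint32_t rx_err;
    uint32_t tx_err;
    uint32_t timeouts;
    uint8_t  mode;      /* mode context the snapshot was taken in */
} lin_counters_t;

/* Counters are monotonic, so (at_wake - previous sleep-entry snapshot)
 * yields the error activity that accumulated across the sleep cycle. */
lin_counters_t lin_counters_delta(lin_counters_t at_wake,
                                  lin_counters_t at_sleep)
{
    lin_counters_t d = {
        .rx_err   = at_wake.rx_err   - at_sleep.rx_err,
        .tx_err   = at_wake.tx_err   - at_sleep.tx_err,
        .timeouts = at_wake.timeouts - at_sleep.timeouts,
        .mode     = at_wake.mode,
    };
    return d;
}
```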

Brownout and thermal evidence (power events that mimic software faults)

  • Brownout occurrences: count + last occurrence timestamp; correlate with reset causes.
  • Thermal events: count + containment entry/exit markers (policy-level).
  • Reset histogram: last M reset causes distribution for fast triage.

Export to DTC/UDS (mapping concept only)

  • DTC summary: reset cause + wake summary + top-level counter flags.
  • Extended snapshot: wake log tail (N), counter snapshot, and mode context.
  • Service readout: ensure evidence is accessible in workshop mode without requiring long reproduction runs.

Verification gates (service evidence must survive intermittency)

  • Lost-evidence test: repeated sleep↔wake with forced resets; wake reason must still be recoverable.
  • Counter integrity: counter monotonicity and snapshot consistency across mode changes.
  • Export check: DTC summary and extended snapshot contain the MSD fields; readout succeeds ≥ X%.
Figure: Evidence pipeline (SBC wake log / LIN counters / BOD-thermal / reset causes → MCU ring buffer and DTC mapping → DTC store → service tool readout of the MSD).
Treat diagnostics as an evidence pipeline. When wake and reset policies are measurable, field “sporadic” issues become reproducible and actionable.

EMC / Protection Co-Design for LIN SBC (Hooks Only)

Intent: Provide layout hooks that matter specifically for low-power + wake-sensitive LIN SBC designs: parasitics that distort edges and trigger false wakes, placements that worsen common-mode radiation, and leakage/ground-bounce paths that break sleep Iq budgets.

Scope guard: This section avoids full component selection and EMC test methodology. For detailed device parameters and sizing, use the EMC/Protection subpage.

Hook 1 — TVS/ESD array parasitics: edges, thresholds, and false triggers (principles only)

  • Edge impact: added capacitance and long stubs slow or reshape edge crossings, shrinking noise margin and increasing mis-detect risk.
  • Wake sensitivity: during Sleep/Standby, wake detectors can interpret threshold “jitter crossings” as activity if parasitics create ringing near the decision level.
  • Quick check: if false-wake rate increases after adding/changing TVS, first inspect stub length, cumulative protection capacitance (multi-point stacking), and the closest return path.

Hook 2 — CMC and return paths: wrong placement can be worse than none

  • Core principle: a CMC reduces common-mode current only when the reference/return path is continuous and the current loop remains small.
  • Failure pattern: placing the CMC far from the connector or across a broken reference plane forces common-mode currents into larger loops, increasing radiation.
  • Quick check: verify plane continuity under the CMC, keep it near the port, and ensure return is not rerouted through long detours.

Hook 3 — Low-power leakage and ground bounce: the silent killers of Iq and wake integrity

  • Leakage stacking: multiple protection parts and contaminated interfaces can create a non-obvious sleep-current overrun.
  • Ground bounce: poor return planning can move local ground reference, creating repeated near-threshold crossings that appear as activity.
  • Quick check: measure stable sleep Iq after Y seconds and correlate with wake counters/flags to detect “current + false activity” coupling.

Hook checklist (layout review, on-scope)

  • TVS near the port: minimize stub and keep return short (≤ X mm placeholder).
  • Avoid stacked capacitance: do not scatter multiple “helpful” protectors along the LIN line without a budget.
  • CMC placement: close to connector, with continuous reference plane under the part.
  • Return discipline: prevent long detours; keep common-mode loops small.
  • Sleep leakage audit: sum leakage paths (port + protection + contamination risk) against Iq budget.
  • Wake integrity: confirm wake detect path has stable reference in Sleep/Standby and does not float through noisy returns.

See also: Detailed component sizing, parameter trade-offs, and EMC test workflows belong in the EMC/Protection subpage. Keep this page focused on hooks that affect wake + Iq.

Figure: Port protection placement, correct vs wrong (correct: TVS at connector, CMC near port, small loop; wrong: long stub, reference-plane gap under CMC, large loop).
Hooks focus on wake integrity and sleep Iq: minimize stubs, preserve return continuity, and avoid placements that enlarge common-mode loops.

Engineering Checklist: Design → Bring-up → Production Gates

Intent: Convert strategy into gates: design inputs, bring-up evidence, and production stability. Each item is structured as Check → How to verify → Pass criteria (placeholders).

Design gate (policy and safety become design inputs)

Power tree and rails

Check: rail grouping and dependency map are defined.
Verify: power-up/down sequence documented and reviewed.
Pass: no undefined rail coupling; sequence reproducible within X ms.

Mode taxonomy

Check: Normal/Standby/Sleep allowed actions table exists.
Verify: each mode lists enabled wake sources and evidence retention.
Pass: policy table covers all transitions and veto rules.

Wake sources + priority

Check: bus/local/timed whitelist and priority defined.
Verify: multi-source race rules documented (deterministic).
Pass: attribution rules yield a single consistent outcome.

False-wake controls

Check: filter/debounce/rate-limit/backoff policy exists per source.
Verify: control parameters budgeted and versioned.
Pass: expected false-wake ≤ X/day (target set).

Watchdog and reset evidence

Check: WD type and window budget defined; reset causes enumerated.
Verify: read→log→clear discipline is specified.
Pass: reset-cause recoverability ≥ X%.

Fail-safe outputs

Check: reset-time safe states for critical loads defined.
Verify: default state and recovery sequence reviewed.
Pass: fault injection drives outputs to the defined safe state.

Minimum Service Dataset (MSD)

Check: wake log N + counters + power/thermal evidence defined.
Verify: export mapping (summary vs snapshot) planned.
Pass: evidence supports field triage within X minutes.

EMC hooks enforced

Check: TVS/CMC placement and return rules applied to layout review.
Verify: stub length and plane continuity inspected.
Pass: no long stubs; return continuity maintained (hook compliance).

Bring-up gate (close the evidence loop)

Sleep Iq measurement discipline

Check: measure stable Iq after Y seconds from sleep entry.
Verify: correlate with wake flags/counters to detect “current + activity” coupling.
Pass: Iq ≤ X µA and activity flags remain stable.

Wake attribution sanity

Check: trigger bus/local/timed wakes independently.
Verify: reason latch and log show correct source every time.
Pass: attribution correctness ≥ X%.

Wake history integrity

Check: generate ≥ N wake events and confirm ordering.
Verify: ring buffer wrap behavior and readout path.
Pass: lost-event count ≤ X.

Mode transition loop

Check: sleep↔wake loop executed X cycles.
Verify: consistent behavior, no “awake but unmanaged” states.
Pass: zero hang; consistent evidence retained across cycles.

Watchdog injection

Check: early/late/missing kick scenarios injected.
Verify: reset causes are distinguishable and logged.
Pass: cause decoding success ≥ X%.

Brownout and thermal containment

Check: controlled VBAT dips and thermal triggers (policy-level).
Verify: containment entry + evidence logging works.
Pass: recovery is deterministic; evidence persisted.

LIN fault policy behavior

Check: disconnect/short fault injection at system level.
Verify: restricted mode + counters + IRQ evidence.
Pass: no retry storm; evidence exportable.

Export path check

Check: export MSD fields into DTC summary + snapshot.
Verify: readout success across resets and power cycles.
Pass: readout success ≥ X%.

Production gate (stability, drift, and service readiness)

False-wake rate

Check: measure false-wake per source over X hours/day.
Verify: distribution stable and within budget.
Pass: ≤ X/day per vehicle (by source).

Evidence completeness

Check: audit wake log, counters, reset histogram in sampling.
Verify: fields present and consistent after resets.
Pass: completeness ≥ X%.

Temperature drift

Check: sweep temperature and re-measure Iq + false-wake rate.
Verify: drift remains within policy budget.
Pass: drift ≤ X% (or ≤ X µA).

Aging and contamination sensitivity

Check: evaluate leakage-sensitive conditions (policy-level).
Verify: Iq does not escalate; wake integrity remains stable.
Pass: no sustained Iq overrun beyond X.

Station-to-station correlation

Check: ensure evidence readouts match across test stations.
Verify: key counters and IDs consistent within tolerance.
Pass: delta ≤ X.

Policy regression suite

Check: re-run gates after software or configuration changes.
Verify: no new false-wake or reset oscillation introduced.
Pass: gate pass rate ≥ X%.

Service playbook readiness

Check: readout order defined (reset cause → wake log → counters).
Verify: workshop mode readout succeeds without long reproduction.
Pass: triage completion ≤ X minutes.

Evidence retention across resets

Check: verify retention rules for critical evidence fields.
Verify: power-cycle and reset tests preserve evidence as defined.
Pass: retention success ≥ X%.

Figure: Project Bible gates, Design → Bring-up → Production. Design gate: mode, wake, WD, fail-safe, MSD, reset, EMC hooks. Bring-up gate: Iq, attribution, loop, inject, wake log, BOD, fault policy. Production gate: false-wake, completeness, drift, pass X/Y.
Gate structure prevents “feature lists” from slipping into production. Evidence (wake/reset/log/counters) is verified early and audited through manufacturing.

Verification & measurement: logs, probes, and pass criteria

What must be proven (repeatably)

  • Low-power: Sleep/standby current meets the target after the mode is stable (Iq ≤ X µA placeholder).
  • Availability: Wake works for each intended source, with correct attribution (false-wake ≤ X/day placeholder, bucketed by source).
  • Reliability: Reset causes remain consistent and explainable across repeats (Top-1 cause share ≥ X% placeholder).
  • Evidence chain: Each event produces retrievable records (reason latch/log + counters snapshot) for production and service.

Iq measurement pitfalls (fixture, leakage, range)

  • Fixture leakage dominates: humid/dirty fixtures, probe adapters, and cable insulation can add µA–mA-level leakage at 12 V.
  • Protection leakage is real: TVS/ESD parts and contamination can elevate standby current; compare with/without the protection populated.
  • Auto-ranging artifacts: range switching can be mistaken for mode transitions; lock the range when possible.
  • Stabilization time: measure only after the SBC declares the target mode stable and after a fixed wait (Y s placeholder).
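The stabilization rule above can be sketched as a small filter: report a mode current only from samples taken after the settle window, and only if those samples are flat. This is a minimal illustration with a hypothetical `stable_iq` helper and made-up sample values, not a measurement procedure.

```python
def stable_iq(samples_uA, settle_n, tol_uA=1.0):
    """Report a mode current only from post-settle samples, and only if flat."""
    tail = samples_uA[settle_n:]
    if max(tail) - min(tail) > tol_uA:
        return None                    # still drifting: not a valid mode current
    return sum(tail) / len(tail)

# Decaying trace right after sleep entry; early samples are the transition, not Iq.
trace = [500.0, 120.0, 40.0, 12.0, 11.5, 11.0, 12.0]
print(stable_iq(trace, settle_n=3))   # 11.625 µA
print(stable_iq(trace, settle_n=1))   # None: window includes the transition
```

Returning `None` instead of an average forces the test station to treat "not yet stable" as a distinct outcome rather than a low-looking number.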

Concrete parts that can contribute to leakage (examples)

  • LIN ESD/TVS: Nexperia PESD1LIN,115 (note lifecycle/NRND status may apply), Littelfuse SM24-02HTG (AEC-Q101 TVS array).
  • Rule: treat standby current as a system number; evaluate protection leakage at temperature corners and VBAT corners.

False-wake rate: define the metric before tuning

  • Denominator: fixed observation window (hours/day), ignition-off state, and temperature band.
  • Numerator: wake events validated by reason latch/log (exclude deliberate test wakes).
  • Bucket: bus / local / timed; track each independently to avoid “mixed” conclusions.
  • Optional noise indicator: “vetoed wakes” (attempted but filtered) per day to quantify environment stress.
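The definitions above (denominator, numerator, buckets) can be encoded directly, which makes the metric auditable. A minimal sketch, assuming a hypothetical `false_wake_rate` helper and a simple event-record shape; field names are illustrative, not from any SBC API.

```python
from collections import Counter

def false_wake_rate(events, window_hours, sources=("bus", "local", "timed")):
    """Count latch-validated, non-deliberate wakes per source, normalized to /day.

    events: records like {"source": "bus", "validated": True, "test": False}
    window_hours: length of the ignition-off observation window.
    """
    per_day = 24.0 / window_hours
    counts = Counter(e["source"] for e in events
                     if e["validated"] and not e["test"])
    return {s: counts.get(s, 0) * per_day for s in sources}

events = [
    {"source": "bus",   "validated": True,  "test": False},
    {"source": "bus",   "validated": True,  "test": True},   # deliberate test wake: excluded
    {"source": "local", "validated": False, "test": False},  # not confirmed by latch/log: excluded
    {"source": "timed", "validated": True,  "test": False},
]
print(false_wake_rate(events, window_hours=12))
# {'bus': 2.0, 'local': 0.0, 'timed': 2.0}
```

Keeping the buckets separate in the return value is what prevents the "mixed" conclusions the text warns about.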

Reset cause consistency: read → log → clear discipline

  • Read early: capture reset reason immediately after boot, before mode/policy changes overwrite context.
  • Log with conditions: store VBAT-min, temperature, and mode-before/mode-after together with the cause.
  • Repeatability check: over K resets, histogram the causes; drift implies measurement or policy instability.
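The read → log → clear discipline can be sketched as a boot-time routine plus a cause histogram over repeated injections. `boot_capture` and the condition fields are hypothetical names for illustration; the point is the ordering, not any specific register interface.

```python
from collections import Counter

def boot_capture(latch, conditions, history):
    """First boot action: read reset cause, log it with conditions, then clear."""
    cause = latch["cause"]            # read before anything can overwrite it
    history.append({"cause": cause, **conditions})
    latch["cause"] = None             # clear only after the record is committed
    return cause

history = []
for injected in ["WD", "WD", "BOD", "WD"]:        # K = 4 simulated reset injections
    latch = {"cause": injected}
    boot_capture(latch, {"vbat_min_mV": 9800, "temp_C": 25, "mode": "standby"}, history)

hist = Counter(rec["cause"] for rec in history)
print(hist.most_common(1))            # [('WD', 3)] -- top-1 cause share = 3/4
```

If the histogram drifts across identical injections, suspect the capture path (too-late read, double reset) before suspecting the hardware.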

What to probe & what to log (minimum evidence set)

  • Probe: VBAT, VREG output(s), RESET/NRES, WAKE pins, LIN pin activity (presence/absence, not waveform deep-dive).
  • Log: mode transitions, wake source (bus/local/timed), wake log depth N, LIN error counters/timeouts, brownout/thermal counters.
  • Snapshot rule: on every wake/reset, store “counters + reason + conditions” as one atomic record.
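The snapshot rule — "counters + reason + conditions" as one atomic record — can be sketched as a commit-or-reject function. `snapshot` and its fields are illustrative assumptions; a real implementation would write to nonvolatile storage.

```python
REQUIRED = ("reason", "counters", "vbat_mV", "temp_C", "mode")

def snapshot(reason, counters, vbat_mV, temp_C, mode, store):
    """Commit 'counters + reason + conditions' as one record, or reject it whole."""
    rec = {"reason": reason, "counters": dict(counters),
           "vbat_mV": vbat_mV, "temp_C": temp_C, "mode": mode}
    if any(rec[k] is None for k in REQUIRED):
        return False                   # partial data never reaches the store
    store.append(rec)
    return True

store = []
ok  = snapshot("wake:bus", {"lin_timeouts": 0}, 12100, 24, "standby", store)
bad = snapshot("reset:WD", {"lin_timeouts": 1}, None, 24, "run", store)  # VBAT missing
print(ok, bad, len(store))             # True False 1
```

Rejecting partial records keeps the "evidence completeness" metric honest: an incomplete snapshot counts as a miss rather than polluting the store.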

Pass criteria matrix (placeholders)

  • Sleep Iq: ≤ X µA @ (Temp band) and (VBAT band) after Y s stabilization.
  • False wake: ≤ X/day overall, plus per-source limits (bus/local/timed).
  • Reset causes: Top-1 share ≥ X% across K repeats; anomalies require correlated evidence fields.
  • Evidence completeness: ≥ X% of wakes/resets produce a complete snapshot record.
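The four pass criteria can be evaluated as independent gates so a failure stays attributable to its metric. A minimal sketch with a hypothetical `verification_gate` function and placeholder limits standing in for the X values above.

```python
def verification_gate(measured, limits):
    """Return one boolean per gate so a failure stays attributable to its metric."""
    return {
        "iq":           measured["iq_uA"] <= limits["iq_uA"],
        "false_wake":   measured["false_wake_per_day"] <= limits["false_wake_per_day"],
        "reset_top1":   measured["top1_cause_share"] >= limits["top1_cause_share"],
        "completeness": measured["snapshot_completeness"] >= limits["snapshot_completeness"],
    }

limits   = {"iq_uA": 50, "false_wake_per_day": 2,
            "top1_cause_share": 0.90, "snapshot_completeness": 0.99}
measured = {"iq_uA": 38, "false_wake_per_day": 3,
            "top1_cause_share": 0.95, "snapshot_completeness": 0.995}
result = verification_gate(measured, limits)
print(result)  # only the false-wake gate fails in this example
```

A single pass/fail boolean would hide which leg of the evidence chain broke; the per-gate dictionary is the point.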
Verification matrix (conditions → metrics → pass gates)
  • Conditions: mode (sleep/standby), temperature (cold/room/hot), VBAT (nominal/dip), window (hours/day).
  • Metrics: Iq stable after Y s; false wakes per day by source; reset-cause histogram; evidence snapshot completeness.
  • Pass gates: Iq ≤ X µA; false wake ≤ X/day; Top-1 cause share ≥ X%; completeness ≥ X%.
Rule: treat Iq + wake + reset as one evidence chain; snapshot every wake/reset with conditions.

Applications + IC selection logic (near end, before FAQ)

Where LIN SBCs fit best (body/comfort small ECUs)

  • Many small nodes: long ignition-off time, aggressive Iq budgets, and frequent wake/sleep transitions.
  • Complex wake sources: bus + door handle/switch + sensor IRQ + timed health checks.
  • Serviceability matters: wake history + reset cause + counters enable reproducible diagnosis of intermittent issues.

Selection decision tree (requirements → key specs)

  1. Iq target strict? If yes, prioritize sleep/standby taxonomy + stable mode transitions + cyclic sensing support (if used).
  2. Wake attribution required? If yes, require reason latch + configurable wake filters + wake log depth N.
  3. Watchdog type needed? If scheduling jitter is possible, prefer window WD with controlled startup grace behavior.
  4. Regulator & thermal headroom? Confirm LDO/buck capability and the ability to record/flag brownout or thermal events.
  5. Diagnostics hooks needed? Require counters + export path (MCU snapshot → DTC/service), not just a raw status pin.

Common failure patterns that selection must prevent

  • Sleep current “mystery high”: policy not fully entering sleep, cyclic sensing misconfigured, or protection leakage not accounted for.
  • Wakes but no evidence: reason latch not read early, log not retained, or counters not snapshotted per event.
  • Reset storms: watchdog choice mismatched to software timing, or brownout events not handled as first-class evidence.

Concrete material numbers (examples — verify package/suffix/availability)

LIN-focused mini SBCs (LIN + VREG + supervision)

  • NXP TJA1128 (LIN mini SBC; LDO + WAKE + window watchdog option).
  • Texas Instruments TLIN1431-Q1 (LIN SBC; watchdog + high-side switch; pin/SPI control variants exist).
  • Microchip ATA663431/ATA663454 family (LIN SBC; VREG + watchdog + high-side switch variants).
  • Microchip ATA663231 (LIN SBC family variant; VREG + NRES output options by device).
  • NXP MC33689 (legacy LIN SBC with watchdog/reset behavior; validate lifecycle and sourcing).

Body-domain SBCs that also integrate LIN (often with CAN and richer power)

  • STMicroelectronics L99DZ200G (door-zone IC with LIN + HS CAN and standby modes; SPI control/diagnostics).
  • Infineon TLE9271QX-V33 (SBC family including LIN transceiver + watchdog/reset/fail-safe features; SPI control).
  • Infineon TLE9266-2QX (SBC family including LIN transceiver + watchdog/reset; validate exact variant set).
  • Texas Instruments TCAN2847-Q1 / TCAN2857-Q1 (CAN FD + LIN SBC variants; confirm which variant matches LIN requirement).
  • Microchip ATA6586 (CAN-LIN SBC family; confirm regulator/current and wake features per variant).

LIN line protection examples (can impact standby/leakage)

  • Nexperia PESD1LIN,115 (LIN ESD protection diode; validate lifecycle status and leakage at temperature).
  • Littelfuse SM24-02HTG (AEC-Q101 TVS diode array example; validate capacitance/leakage vs waveform and Iq).
Selection decision tree (Yes/No) → key specs
Figure: selection decision tree — Iq target strict? → sleep/standby policy + stable entry. Need wake attribution? → reason latch + log depth N. Need window watchdog? → startup grace + jitter tolerance. Need service hooks? → counters + export snapshot. Output: key specs = modes + wake latch/log + WD + VREG/thermal + counters/export.


FAQs: troubleshooting (on-scope, evidence-driven)

Scope: only system-level debugging for LIN SBC low-power policy, wake attribution, watchdog/reset evidence, and service logs. LIN electrical waveform details and component-level EMC sizing are intentionally excluded.

Sleep Iq is low, but the vehicle still over-discharges overnight—first check “who woke it” or “leakage”?

Likely cause: wake events (bus/local/timed) are occurring, or the measurement setup hides a leakage path that only appears over long time.

Quick check: read wake reason latch + wake log (last N events) and correlate with time window; repeat Iq measurement with a known-clean fixture and record VBAT/Temp.

Fix: tighten wake filters/debounce and rate-limit wake retries; add “snapshot-on-wake” logging; validate protection/fixture leakage by A/B build (with/without suspect parts).

Pass criteria: Sleep Iq ≤ X µA after Y s stabilization AND false-wake ≤ X/day (bucketed) AND wake log shows ≤ X unintended events overnight.

False wakes cluster on rainy/humid days—first check wake-pin debounce or harness leakage paths?

Likely cause: humidity-driven leakage/contamination shifts thresholds or injects micro-currents; debounce/filter settings are too permissive for that environment.

Quick check: compare false-wake/day vs humidity; inspect whether “vetoed wakes” increase; read wake source bucket (local vs bus) to identify the entry point.

Fix: raise debounce time / add multi-sample qualification; harden wake input policy (rate-limit + backoff); add contamination/leakage screening in service or production gates.

Pass criteria: false-wake ≤ X/day across humidity band AND vetoed-wake ≤ X/day AND wake attribution remains consistent (≥ X% correct bucket).

LIN bus looks “quiet,” but the ECU still wakes periodically—timed wake or header-noise trigger?

Likely cause: timed wake/health-check policy is active, or bus-wake detection is too sensitive and interprets sporadic activity as a wake pattern.

Quick check: read wake attribution (timed vs bus) from latch/log; temporarily disable timed wake and observe whether wake cadence disappears over a full window (hours).

Fix: correct timed-wake schedule and retention rules; tighten bus-wake qualification (pattern + debounce) and ensure logs snapshot counters per wake.

Pass criteria: timed wake occurs only at defined intervals (±X%) OR bus-wake rate ≤ X/day; wake log shows consistent source for ≥ X% of events.

Watchdog resets happen occasionally, but application logs look clean—window config or kick-timing jitter?

Likely cause: window watchdog margins are too tight for real scheduling jitter (especially after wake), or the startup grace period is not aligned with boot timing.

Quick check: log early/late/missed-kick counters (or equivalent status) around resets; measure kick interval histogram and compare against window bounds.

Fix: widen the watchdog window or move kick to a deterministic scheduler slot; add post-wake grace and “read-reset-cause-first” discipline before policy changes.

Pass criteria: WD resets ≤ X per Y hours under worst-case load AND kick jitter stays inside [window_min, window_max] with ≥ X% margin.
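The kick-interval histogram from the quick check above can be sketched as a classifier against the window bounds. `classify_kicks` and the numbers are illustrative; real bounds come from the SBC's configured window watchdog.

```python
def classify_kicks(intervals_ms, window_ms):
    """Bucket each kick interval as early / ok / late against the watchdog window."""
    lo, hi = window_ms
    buckets = {"early": 0, "ok": 0, "late": 0}
    for t in intervals_ms:
        if t < lo:
            buckets["early"] += 1
        elif t > hi:
            buckets["late"] += 1
        else:
            buckets["ok"] += 1
    return buckets

# Measured kick intervals (ms) vs a [15, 30] ms window.
print(classify_kicks([12, 20, 25, 33], (15, 30)))  # {'early': 1, 'ok': 2, 'late': 1}
```

Non-zero early/late buckets under worst-case load mean the window or the scheduler slot needs to move, not the watchdog itself.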

Reset reason is always reported as “POR”—latch not retained or register read timing wrong?

Likely cause: reset-cause latch is cleared too early, overwritten by a second reset, or read occurs after the system reconfigures the SBC state.

Quick check: enforce a boot rule: read reset cause as the first operation, log it with VBAT/Temp, then clear; compare results across repeated reset injections.

Fix: reorder initialization (read→log→clear before mode transitions); add “double-reset detection” (count resets within T seconds) to avoid mislabeling as POR.

Pass criteria: reset cause matches injected fault (POR/BOD/WD/thermal) with ≥ X% accuracy across K repeats; “POR-only” rate ≤ X%.
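The “double-reset detection” fix above can be sketched as a check on boot timestamps: any boot that follows the previous one within T seconds is flagged as a candidate for latch overwrite. `double_reset` and the 5 s window are illustrative assumptions.

```python
def double_reset(boot_times_s, window_s=5):
    """Flag boots that follow the previous one within window_s (possible latch overwrite)."""
    flags = [False]                    # first boot has no predecessor
    for prev, cur in zip(boot_times_s, boot_times_s[1:]):
        flags.append(cur - prev < window_s)
    return flags

# Boot timestamps in seconds; the 122 s boot arrives 2 s after the previous one.
print(double_reset([0, 120, 122, 300]))  # [False, False, True, False]
```

A flagged boot should be labeled “cause possibly overwritten” in the log rather than trusted as a clean POR.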

After waking from sleep, LIN communication occasionally times out—mode transition delay or MCU init order?

Likely cause: LIN interface is not fully enabled when the MCU starts messaging, or the MCU config sequence enables traffic before applying policy and clearing stale status.

Quick check: timestamp mode transitions and first LIN activity; verify a deterministic sequence: read status → configure policy → enable LIN → start traffic.

Fix: add a post-wake settle delay (T ms placeholder) before bus activity; gate application traffic on “LIN-ready” status; snapshot counters on every timeout.

Pass criteria: wake-to-first-valid-LIN ≤ X ms with 0 timeouts over Y wake cycles; timeout counter growth ≤ X per day.
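The deterministic post-wake sequence (read status → configure policy → enable LIN → start traffic) can be sketched as a gate object that refuses traffic until the handshake completes. `LinGate` and its callbacks are hypothetical; a real driver would poll an SBC ready flag instead of sleeping.

```python
import time

class LinGate:
    """Gate application traffic behind an explicit post-wake 'LIN-ready' handshake."""
    def __init__(self, settle_ms=10):          # T ms settle placeholder from the text
        self.settle_ms = settle_ms
        self.ready = False

    def wake_sequence(self, read_status, apply_policy, enable_lin):
        read_status()                          # clear stale status first
        apply_policy()                         # configure before any traffic
        enable_lin()
        time.sleep(self.settle_ms / 1000.0)    # post-wake settle delay
        self.ready = True

    def send(self, frame):
        if not self.ready:
            raise RuntimeError("LIN not ready; traffic gated")
        return frame

gate = LinGate(settle_ms=1)
try:
    gate.send(b"\x55")                         # too early: rejected
except RuntimeError as exc:
    print("gated:", exc)
gate.wake_sequence(lambda: None, lambda: None, lambda: None)
gate.send(b"\x55")                             # accepted after the sequence completes
```

Making “ready” an explicit state (instead of a timing assumption) is what turns the occasional timeout into a deterministic 0-timeout pass.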

At cold temperature, Iq rises and false wakes increase—how to prove which pin/module drifted?

Likely cause: temperature changes leakage paths, wake-input thresholds, or mode stability; a specific wake source becomes more sensitive and triggers extra wake cycles.

Quick check: run a cold vs room A/B: record Iq, wake source histogram, and vetoed-wake counts; disable one wake source at a time to isolate the trigger.

Fix: tune debounce/filters for cold; enforce stricter wake qualification; add per-source backoff; verify mode entry/retention timing at cold VBAT.

Pass criteria: cold Iq ≤ X µA and false-wake ≤ X/day AND wake histogram remains stable (±X%) across cold/room corners.

Production tests pass, but the customer sees intermittent “hangs”—which three event classes must the black box record first?

Likely cause: the field issue is a rare interaction among wake events, resets, and bus faults that production windows did not cover.

Quick check: ensure the system records (1) wake events with attribution, (2) reset cause histogram, (3) LIN counters/timeouts snapshots—each with VBAT/Temp/mode tags.

Fix: implement “snapshot-on-wake/reset” atomic records; export to DTC/service readout; add a watchdog-safe-mode path for recovery.

Pass criteria: evidence completeness ≥ X% over Y days AND field reproduction yields a consistent root-cause bucket within N events.

Adding a TVS makes false wakes worse—is it leakage or threshold-crossing jitter?

Likely cause: added leakage (temperature/humidity dependent) elevates standby current and shifts wake detection, or added parasitics increase sensitivity near thresholds.

Quick check: A/B compare with/without TVS: (a) Sleep Iq delta, (b) wake histogram by source, (c) humidity/temperature correlation of wake attempts (vetoed-wake).

Fix: tighten wake qualification/filters; move policy to prefer local wake validation; validate leakage across corners and update the production gate to screen it.

Pass criteria: TVS-added Iq delta ≤ X µA AND false-wake delta ≤ X/day across corners; wake attribution remains stable (≥ X%).

Wake attribution is inconsistent (bus/local mixed)—latch clear strategy or interrupt race?

Likely cause: multi-source wake arrives close in time and software clears/reads latches in the wrong order; concurrent interrupts overwrite a single “last-cause” field.

Quick check: enforce “read-all-sources then clear” and log timestamps; verify that latch read happens before enabling secondary wake sources post-boot.

Fix: implement a deterministic priority rule (bus/local/timed) and store a bitmask of sources; clear latches only after snapshot is committed.

Pass criteria: attribution accuracy ≥ X% across Y injected multi-source tests; “unknown/mixed” bucket ≤ X%.
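The “read-all-sources then clear” fix with a deterministic priority rule can be sketched as follows. `attribute_wake`, the latch dictionary, and the priority order are illustrative; a real driver reads hardware latch bits.

```python
PRIORITY = ("bus", "local", "timed")   # deterministic tie-break order

def attribute_wake(latches):
    """Read every source latch first, store the full set, then pick primary by priority."""
    sources = frozenset(s for s, is_set in latches.items() if is_set)
    primary = next((s for s in PRIORITY if s in sources), "unknown")
    for s in latches:                  # clear only after the full set is captured
        latches[s] = False
    return primary, sources

# Two sources fire close together; both are recorded, one is primary.
latches = {"bus": True, "local": True, "timed": False}
primary, mask = attribute_wake(latches)
print(primary, sorted(mask))           # bus ['bus', 'local']
```

Storing the full source set alongside the primary keeps near-simultaneous wakes out of the “unknown/mixed” bucket instead of overwriting a single last-cause field.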

After ECU reset, a critical load state is wrong—how to check fail-safe default and power-up sequencing?

Likely cause: fail-safe output default state does not match the safety assumption, or the load is enabled before MCU policy config completes after reset.

Quick check: capture reset cause + timestamp, then probe reset pin and load-enable signal ordering; confirm which state outputs assume during reset and early boot.

Fix: enforce safe defaults in hardware and lock the load until policy and diagnostics are configured; add a post-reset sequencing checklist and regression test.

Pass criteria: for K resets, load state always enters defined safe state within X ms and transitions only after policy-ready (0 violations).

Same software, different SBC model: false wakes differ a lot—what 3 mode-related configs must be aligned first?

Likely cause: default policy differs by SBC: wake filters, latch/clear behavior, and mode entry/retention differ even if the LIN function appears compatible.

Quick check: compare (1) mode taxonomy/entry conditions, (2) wake qualification/filter/debounce, (3) latch/log clear + snapshot timing; validate with identical test window.

Fix: standardize a policy profile across SBCs; enforce read→log→clear order; gate wake enablement until policy is applied; add a cross-SBC regression suite.

Pass criteria: false-wake difference between SBC variants ≤ X/day under the same conditions; attribution accuracy ≥ X%; evidence completeness ≥ X%.