
Sensor/Actuator Nodes: Compact LIN/CAN with Selective Wake


A sensor/actuator node succeeds when it can sleep with truly low Iq, wake only for intended reasons (with attribution), and leave diagnosable evidence that survives real harness noise and protection parasitics.

The practical recipe is to design explicit power domains and wake tables, verify false-wake rates and bus recovery on the worst harness buckets, and ship a minimal black-box log that makes field failures explainable.

H2-1 · Overview: What a “Sensor/Actuator Node” Means (and what it doesn’t)

A sensor/actuator node is a remote micro-ECU on a vehicle harness: it combines a bus interface (LIN/CAN), local logic/state control, I/O or power drivers, and a disciplined sleep/wake strategy. The engineering goal is not a single part choice — it is a system that sleeps low, wakes correctly, and stays diagnosable.

Scope Guard
Node integration only
In scope (this page)
  • Node boundary, power domains (always-on vs switched), and sleep architecture.
  • Selective wake strategy at node level: filtering, false-wake control, wake attribution.
  • Minimum protection/EMC for small nodes (placement + parasitics impact).
  • Diagnostics and serviceability: counters, snapshots, and a small black-box log.
  • Bring-up and production gates: what to measure and what “pass” looks like.
Out of scope (link out)
  • PHY waveform details and protocol deep dives (Classic/FD/SIC/XL timing formulas).
  • Full ISO 11898-6 rule text and complete compliance interpretation.
  • General EMC theory; this page only covers node-minimum placement constraints.
Node KPIs (measurable targets)
Sleep current (Iq) — measured, not assumed

Track Iq by mode × temperature × VBAT. Include not only IC standby currents but also leakage from protection parts, pull-ups, dividers, sensor standby, and GPIO clamp paths.

False-wake rate — controlled and attributed

Define a KPI such as X wakes/hour under specified harness noise, EMC stress, and environmental conditions. Always log wake cause (bus/local/timer) to prevent “mystery wakes”.

Recovery behavior — stable, not a reboot storm

After bus faults, undervoltage, or thermal events, the node must recover with bounded retries and clear state transitions. Avoid oscillation between wake/sleep due to brownout or racing state machines.

Serviceability — minimal black box, maximum clarity

Maintain a small ring buffer of event snapshots: wake cause, counters, VBAT, temperature, and key driver/sensor flags. Without this, field returns become “cannot reproduce”.

Typical node-level failure modes (what this page prevents)
  • “Datasheet Iq looks great, system Iq is high” — leakage, GPIO clamps, protection parasitics, or sensor standby dominates.
  • “Wakes up randomly” — filter tables too permissive, noise coupling on harness, or poor return path triggers false wakes.
  • “Wakes but never comes online” — brownout chatter, incorrect power-domain sequencing, or missing state-machine debouncing.
  • “Field failure with no evidence” — no attribution log, no counters, no event snapshot at the moment of failure.
Diagram · Node System Boundary (domains, wake path, and logging)

The diagram separates always-on blocks (wake filtering + PHY) from switched blocks (MCU/driver/sensor) and shows where wake attribution and event logging must tap.

H2-2 · Node Archetypes & Topologies (LIN-only / CAN / Dual-bus / Isolated)

Node architectures are easiest to control when the type is identified first. Each archetype has a different dominant risk: leakage-driven Iq, false-wake sensitivity, fault recovery, or ground potential differences. The sections below define a must-have set for every archetype: wake source, Iq budget, minimum protection, and minimum logging.

Must-have set (applies to all node types)
  • Wake source: bus wake (filter-based), local wake (sensor/trigger), and optional timed wake.
  • Iq budget: component-by-component accounting across modes and temperatures.
  • Minimum protection: node-specific placement and parasitic control (avoid protection-induced false wakes).
  • Minimum logging: wake attribution + counters + event snapshot (field-service ready).
Four common archetypes (node-level focus)
LIN Sensor Node
Use when: many low-bandwidth sensors, ultra-low standby, simple service readout.
Dominant risks: leakage-driven Iq, wake threshold sensitivity, missing attribution logs.
Minimum tests: Iq by mode/temperature, false-wake rate under harness noise, wake-cause consistency.
LIN Actuator Node
Use when: local power switching (HSS/LSS), actuator diagnostics and protection required.
Dominant risks: driver fault recovery, thermal/overcurrent events, log gaps at the moment of fault.
Minimum tests: protection trips + recovery, event snapshot coverage, stable sleep after fault clearing.
CAN / CAN FD Smart Node
Use when: higher diagnostics density, event-driven reporting, more bandwidth headroom.
Dominant risks: bus-off handling, retry storms, excessive utilization and priority mis-design.
Minimum tests: fault injection + bounded recovery, counter integrity, utilization limits under worst-case traffic.
Isolated CAN Node
Use when: large ground potential differences (HV domains, e-drive boundaries).
Dominant risks: return-path surprises, isolation power sequencing, disturbance-induced false wakes.
Minimum tests: disturbance/burst tolerance at system level, wake attribution under stress, stable domain power-up order.
Deliverable · “Node Type → Required Modules” (field-ready checklist)
  • Bus layer: LIN or CAN interface + wake-qualified receive path.
  • Power domains: always-on wake/PHY domain + switched compute/IO domain with deterministic sequencing.
  • Protection minimum: connector-adjacent parts placed to preserve edges and avoid leakage dominance.
  • Logging minimum: wake cause + counters + event snapshot (VBAT/T/flags) stored before heavy software runs.
  • Validation minimum: Iq matrix + false-wake statistics + recovery boundedness under injected faults.
Diagram · Four Archetypes (consistent style, different blocks)

Each archetype keeps the same power-domain structure. Differences are confined to the switched blocks (sensor vs driver vs diagnostics) and, when required, an isolation barrier to tolerate large ground potential differences.

H2-3 · Power Domains & Sleep Architecture (why nodes fail low-Iq)

Low sleep current is a system accounting problem. A node reaches low-Iq only when the power tree is split into always-on and switched domains, sleep modes are defined with deterministic sequencing, and every microamp has an owner in an Iq budget row.

Power tree & domain split (node viewpoint)
  • VBAT → protection → pre-reg/LDO → loads. The domain boundary defines the achievable Iq.
  • Always-on domain keeps wake-qualified receive and minimal logging alive.
  • Switched domain powers MCU/driver/sensor only when needed, then returns to a stable sleep state.
  • Leakage paths (TVS, dividers, pull-ups, GPIO clamps, driver backfeed) often dominate at microamp targets.
Sleep mode layering
Node-level only
Standby

Always-on keeps wake-qualified receive. Switched domain is off. Wake cause is logged first, then the node powers up deterministically.

Deep Sleep

The always-on set is reduced to the smallest reliable wake path. This mode requires strict sequencing and guardrails against oscillation (wake → brownout → sleep → wake).

Shipping Mode

The node enters a near-off state. Entry and exit conditions must be explicit. Wake sources are intentionally limited, and recovery behavior is validated as a production gate.

Common traps (high-probability order)
  1. TVS / protection leakage: temperature-dependent leakage silently dominates microamp budgets.
  2. GPIO floating / wrong bias: undefined pins create clamp currents or keep blocks partially alive.
  3. Pull-ups / dividers: “small” static resistors become the largest term at low-Iq targets.
  4. MCU pin clamps: external voltage on an I/O while rails are off backfeeds the domain.
  5. Driver backfeed: actuator loads inject energy through ESD diodes or body paths.
Deliverable · Iq Budget Row (copy-ready fields)
  • VBAT: voltage point (min/nom/max).
  • Temperature: test temperature bucket.
  • Mode: Standby / Deep sleep / Shipping.
  • Measurement point: where current is measured (supply node / domain).
  • Contributors: IC Iq, leakage, pull-ups/dividers, sensor standby, backfeed paths.
  • Pass criteria: Iq < X (and stable over Y minutes) with defined test setup.

Each row assigns ownership to every current contributor. Any “unknown” term is treated as a defect until localized.
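As a sketch, a budget row can be a plain C struct whose contributor total is checked against the mode's limit, with the "unknown" term bounded separately; field names and limit values here are illustrative, not from any standard.

```c
#include <assert.h>
#include <stdbool.h>

/* One Iq budget row: every contributor in microamps has an owner.
 * Field names are illustrative, not from any standard. */
typedef struct {
    float ic_standby_ua;      /* transceiver + regulator quiescent currents */
    float tvs_leak_ua;        /* protection leakage at this temperature */
    float pullup_divider_ua;  /* static resistor paths */
    float sensor_standby_ua;  /* sensor sleep current */
    float backfeed_ua;        /* clamp / body-diode paths (should be ~0) */
    float unknown_ua;         /* unexplained remainder: treated as a defect */
} iq_budget_row_t;

static float iq_budget_total_ua(const iq_budget_row_t *r)
{
    return r->ic_standby_ua + r->tvs_leak_ua + r->pullup_divider_ua +
           r->sensor_standby_ua + r->backfeed_ua + r->unknown_ua;
}

/* Pass only if the total meets the target AND nothing is unexplained. */
static bool iq_budget_pass(const iq_budget_row_t *r, float limit_ua,
                           float max_unknown_ua)
{
    return iq_budget_total_ua(r) <= limit_ua &&
           r->unknown_ua <= max_unknown_ua;
}
```

Bounding the unknown term in the pass criterion is what enforces "any unknown term is a defect until localized".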

Diagram · Power Domain Partition (Always-on vs Switched + leakage paths)

The key design control is the domain boundary. The key debug control is the Iq budget row with explicit leakage owners.

H2-4 · Selective Wake Strategy for Nodes (filter, false-wake, attribution)

Selective wake is a closed loop: define wake sources, implement a deterministic filter table, control false wakes under stress, and log attribution so the strategy can be tightened without losing required events.

Wake sources (node-level classification)
  • Bus wake: wake only when a rule matches (ID/mask + payload condition + window/debounce).
  • Local wake: physical trigger (button, threshold, edge) with debouncing and rate limiting.
  • Timed wake: periodic maintenance checks with bounded duration and explicit return-to-sleep.
Deliverable · Wake Table structure (implementation-friendly fields)
  • Rule ID: stable identifier for attribution logs.
  • CAN/LIN ID + Mask: match scope (exact / range-like by mask).
  • Payload condition: byte/bit match for the minimum semantic trigger.
  • Window: N-of-M counters and time bounds.
  • Debounce / Min interval: block repeats and burst noise triggers.
  • Action: wake / partial wake / ignore / log-only.
  • Priority: resolve multi-hit deterministically.
  • Log policy: which snapshot fields must be captured on hit.
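The rule fields above can be sketched as a struct plus a match function, assuming a single payload byte carries the semantic trigger and the N-of-M window is expressed as a time bound; all names and widths are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { ACT_IGNORE, ACT_LOG_ONLY, ACT_PARTIAL_WAKE, ACT_WAKE } wake_action_t;

/* One wake-table rule (illustrative fields, mirroring the deliverable). */
typedef struct {
    uint8_t  rule_id;
    uint32_t id, id_mask;        /* match: (frame_id & id_mask) == id */
    uint8_t  payload_byte;       /* byte index of the semantic trigger */
    uint8_t  payload_value, payload_mask;
    uint8_t  hits_needed;        /* the N of the N-of-M window */
    uint32_t window_ms;          /* M expressed as a time bound */
    uint32_t min_interval_ms;    /* debounce between accepted wakes */
    wake_action_t action;
} wake_rule_t;

/* Per-rule runtime state; last_wake_ms == 0 means "no accepted wake yet". */
typedef struct {
    uint8_t  hits;
    uint32_t window_start_ms;
    uint32_t last_wake_ms;
} wake_rule_state_t;

/* Returns the action if this frame completes the rule, ACT_IGNORE otherwise. */
static wake_action_t wake_rule_feed(const wake_rule_t *r, wake_rule_state_t *s,
                                    uint32_t frame_id, const uint8_t *data,
                                    uint32_t now_ms)
{
    if ((frame_id & r->id_mask) != r->id) return ACT_IGNORE;
    if ((data[r->payload_byte] & r->payload_mask) != r->payload_value)
        return ACT_IGNORE;
    if (now_ms - s->window_start_ms > r->window_ms) { /* window expired */
        s->hits = 0;
        s->window_start_ms = now_ms;
    }
    if (++s->hits < r->hits_needed) return ACT_IGNORE;
    s->hits = 0;
    if (s->last_wake_ms != 0 && now_ms - s->last_wake_ms < r->min_interval_ms)
        return ACT_LOG_ONLY;     /* debounced repeat: log, do not wake */
    s->last_wake_ms = now_ms;
    return r->action;
}
```

Downgrading a debounced repeat to log-only (rather than dropping it) keeps the attribution statistics intact for later table tightening.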
False-wake control (define → bucket → tighten)
  • Stress drivers: harness coupling, EMC events, ground bounce, supply disturbance.
  • Bucket dimensions: temperature, VBAT, harness variant, operating state (park/drive), event class.
  • Primary lever: move from permissive rules to minimal semantic triggers (payload conditions + windows).
Deliverable · False-wake rate KPI (explicit and testable)

Define X wakes/hour (or X wakes/parking cycle) under specified buckets (temperature/VBAT/harness/state). Split statistics by wake source and require bus-wake attribution to include rule ID and frame summary.

Wake attribution (log first, then do work)
  • Record wake source: bus / local / timer.
  • If bus wake: record rule ID + minimal frame summary (ID, DLC, key bytes).
  • Snapshot: VBAT, temperature, error counters, last sleep mode.
  • Purpose: convert “mystery wakes” into actionable buckets for table tightening.
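The attribution record can be sketched as follows, assuming it is filled before any peripheral initialization disturbs the evidence; field names and widths are illustrative, not a log schema from any standard.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef enum { WAKE_BUS, WAKE_LOCAL, WAKE_TIMER } wake_source_t;

/* Wake-attribution record, captured first out of sleep. */
typedef struct {
    wake_source_t source;
    uint8_t  rule_id;        /* valid only for WAKE_BUS */
    uint32_t frame_id;       /* minimal frame summary */
    uint8_t  dlc;
    uint8_t  key_bytes[2];
    uint16_t vbat_mv_div10;  /* VBAT snapshot */
    int8_t   temp_c;
    uint16_t err_counter;
    uint8_t  last_sleep_mode;
} wake_record_t;

/* Fill the record from raw wake context; called before heavy work runs. */
static void wake_attribute(wake_record_t *rec, wake_source_t src,
                           uint8_t rule_id, uint32_t frame_id, uint8_t dlc,
                           const uint8_t *payload)
{
    memset(rec, 0, sizeof *rec);
    rec->source = src;
    if (src == WAKE_BUS) {
        rec->rule_id  = rule_id;
        rec->frame_id = frame_id;
        rec->dlc      = dlc;
        rec->key_bytes[0] = payload[0];
        rec->key_bytes[1] = (dlc > 1) ? payload[1] : 0;
    }
    /* VBAT / temperature / counters are sampled here in real firmware. */
}
```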
Diagram · Wake Decision Flow (Sleep → Monitor → Filter → Wake → Attribution → Update)

The loop prevents two extremes: waking on everything (battery drain) or waking on nothing (missed events). Attribution is the key to safe tightening.

H2-5 · LIN Sensor/Actuator Node Design (node context)

LIN fits low-speed, high-node-count, cost-sensitive networks when the node is engineered for low-Iq sleep, noise resilience, and serviceable logs. This section focuses on node-level usage and verification, not a PHY deep dive.

Where LIN is a strong fit for nodes
  • Low speed + many nodes: sensors and simple actuators that benefit from predictable scheduling.
  • Cost/size constrained: minimal harness and low component count at the node.
  • Serviceability required: node must explain wake reasons, actuator faults, and recovery outcomes.
Key node parameters (how to use)
Node usage
Sleep / Wake

Define entry criteria (quiet time + stable supply), and define wake handling as a deterministic sequence: log wake cause first, then power the switched domain, then validate bus state. Add debounce and minimum intervals to avoid oscillation under harness noise.

Auto-baud

Treat auto-baud as a controlled acquisition phase after wake: reject unstable edges, require repeated valid headers, and log acquisition failures with counters. Avoid waking the full node on ambiguous activity.

Slew rate

Select slew settings by node-level verification: emission vs error counters vs false-wake rate. Faster edges can increase susceptibility and coupling; slower edges can reduce margin if timing is tight. The pass criterion is a stable error profile on the real harness, not a bench-only waveform.

Actuator nodes: driver + diagnostics + reporting discipline
  • HSS/LSS driver controls: define over-current, short-to-battery/ground, thermal behaviors as explicit states.
  • Fault-to-report mapping: every driver fault class must map to a minimal report field set (class + timestamp + counters).
  • Recovery rules: avoid “reset storms” by bounding retries and enforcing cool-down windows.
  • Service snapshot: capture VBAT/temperature + fault flags + last wake reason before any heavy recovery action.
Low-power tactics (node boundary rules)
  • Main domain off: keep MCU/driver/sensor in the switched domain and hard-off during sleep.
  • Keep wake domain alive: LIN wake-qualified path and minimal log storage remain powered.
  • Respect master-side pull-up/termination: avoid redundant static pull-ups at the node that burn Iq.
  • Pin-state policy: forbid floating pins in sleep; define bias and clamp avoidance explicitly.
  • Backfeed prevention: ensure actuator/load paths cannot inject current into an unpowered domain.
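The pin-state policy can be made auditable as a table that review or build tooling checks for completeness, so no pin ships with an undefined sleep state; the pin names and states below are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sleep pin-state policy: every pin gets an explicit state, so "floating
 * input" is forbidden by construction. Names are hypothetical. */
typedef enum {
    PIN_OUT_LOW, PIN_OUT_HIGH, PIN_IN_PULLUP, PIN_IN_PULLDOWN, PIN_ANALOG_OFF,
    PIN_UNDEFINED            /* placeholder that must never survive review */
} pin_sleep_state_t;

typedef struct {
    const char *name;
    pin_sleep_state_t sleep_state;
} pin_policy_t;

/* True only when every pin in the table has a defined sleep state. */
static bool pin_policy_complete(const pin_policy_t *tab, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (tab[i].sleep_state == PIN_UNDEFINED) return false;
    return true;
}
```

Failing the build (or the bring-up gate) on an incomplete table converts the "no floating pins" rule from a convention into a checked artifact.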

Scope guard: physical-layer specifications and device-family comparisons belong to the LIN Transceiver and LIN SBC subpages.

Diagram · Typical LIN Actuator Node (wake domain + switched domain + driver diagnostics)

The diagram highlights a wake-qualified always-on path plus a switched compute/driver domain to meet low-Iq targets and serviceability.

H2-6 · CAN/CAN FD Node Design (margin, bus-off recovery, payload discipline)

Node-side CAN reliability is built on verified timing margin on the real harness, disciplined bus-off handling to prevent restart storms, and message rules that keep utilization bounded so diagnostics remain meaningful. This section avoids protocol encyclopedias and PHY deep-dives.

Node CAN reliability objectives (testable)
  • Harness margin: stable operation across harness length, stubs, temperature and VBAT buckets.
  • Error-state stability: counters remain bounded; state transitions are logged and explainable.
  • Controlled recovery: bus-off recovery is bounded in time and retries; no reset storms.
  • Payload discipline: utilization is capped so errors are not self-inflicted by overload.
Timing margin (concept + verification, node viewpoint)

Treat timing margin as a measured property of the full system: harness length and stubs, ground offsets, temperature drift, and disturbance events. The node-level practice is to verify margin by correlating error counters and state transitions under defined buckets, rather than relying on bench-only waveforms.

Verification buckets (examples)
  • Harness: length + stub variant + connector variant.
  • Environment: temperature buckets + VBAT min/nom/max.
  • Stress: disturbance events that typically trigger false transitions (noise, load steps).
  • Outputs: error counters, state timeline, and recovery outcomes.
Bus-off / error counters handling (avoid restart storms)
  • Backoff windows: impose cool-down before rejoin; reduce collision with persistent faults.
  • Bounded retries: cap recovery attempts, then enter a defined degraded/service mode.
  • No blind resets: capture a snapshot (counters + last frames + VBAT/T) before heavy actions.
  • Rejoin discipline: validate bus stability before enabling high-rate event traffic.
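The backoff-plus-cap policy above can be sketched as follows; the retry count and cool-down values are placeholders, not recommendations.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bus-off recovery with exponential backoff and a hard retry cap. */
typedef struct {
    uint8_t  retries;
    uint8_t  max_retries;
    uint32_t base_cooldown_ms;
    bool     degraded;        /* entered once the retry budget is spent */
} busoff_policy_t;

/* Called on each bus-off event; returns the cool-down before rejoin,
 * or 0 when the node must stop retrying and enter degraded mode. */
static uint32_t busoff_next_cooldown(busoff_policy_t *p)
{
    if (p->retries >= p->max_retries) {
        p->degraded = true;   /* snapshot must already be captured */
        return 0;
    }
    /* exponential backoff: base, 2*base, 4*base, ... */
    return p->base_cooldown_ms << p->retries++;
}
```

The bounded retry count plus the explicit degraded state is what prevents a persistent harness fault from turning into a restart storm.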
Message discipline (keep utilization bounded)
  • Periodic vs event: enforce minimum intervals for events; avoid burst floods under oscillating sensors.
  • Priority rules: safety/control messages have fixed priority; diagnostics are throttled.
  • Utilization cap: define a node maximum (X% placeholder) and validate under worst-case state.
  • Diagnostic fidelity: ensure overload cannot masquerade as timing/PHY faults.
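One way to enforce both a minimum inter-event interval and a burst budget per window, sketched under the assumption of a millisecond tick; all limits are placeholders.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Event-message guard: minimum interval per event class plus a burst
 * budget per window, so an oscillating sensor cannot flood the bus. */
typedef struct {
    uint32_t min_interval_ms;
    uint32_t window_ms;
    uint8_t  burst_budget;    /* max events per window */
    uint32_t last_tx_ms;
    uint32_t window_start_ms;
    uint8_t  sent_in_window;
} event_guard_t;

static bool event_guard_allow(event_guard_t *g, uint32_t now_ms)
{
    if (now_ms - g->window_start_ms >= g->window_ms) {
        g->window_start_ms = now_ms;          /* new window */
        g->sent_in_window = 0;
    }
    if (g->sent_in_window > 0 && now_ms - g->last_tx_ms < g->min_interval_ms)
        return false;                         /* too soon after last event */
    if (g->sent_in_window >= g->burst_budget)
        return false;                         /* burst budget spent */
    g->last_tx_ms = now_ms;
    g->sent_in_window++;
    return true;
}
```

Suppressed events should still increment a counter, so the log can show that overload, not the PHY, caused missing traffic.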

Scope guard: CAN FD/SIC/XL waveform specifics and PHY performance metrics belong to the corresponding transceiver/PHY subpages.

CAN FD node notes (risk focus, not PHY details)

Higher edge rates can increase coupling and error sensitivity under disturbance, which can also increase false wakes. The node-side countermeasure is stricter message discipline (bounded bursts), conservative recovery policies, and bucketed verification on the real harness across temperature/VBAT ranges.

Diagram · Node Error State Machine (with log taps)

The node should log state transitions and counters to differentiate real margin issues from self-inflicted overload and to avoid restart storms.

H2-7 · Minimal EMC/Protection for Small Nodes (node-specific only)

For compact nodes, protection parts can look “correct” yet create failures by breaking return paths, upsetting symmetry, increasing leakage, or reshaping edges. The node-first priority is return path and placement, then minimal components, then verification.

Protection priority (small-node reality)
  1. Return path + ground plan: define where the surge/ESD current returns, short and wide.
  2. Placement distance: connector → protection → PHY must be tight and unambiguous.
  3. Minimal parts: low-C TVS, small series damping, optional CMC only when needed.
  4. Node verification: confirm no low-Iq regression and no false-wake / counter drift.
Minimal node-side protection set
Keep it small
Low-C TVS (matched)

Place at the connector with a short return path. Keep symmetry to avoid differential imbalance that reshapes edges and increases false triggers.

Series damping

A small series element helps control ringing and edge stress, but the benefit depends on location. Verify by counter stability and false-wake rate under disturbance.

CMC (only if needed)

Use only when required for system-level radiation/immunity. Keep the pair symmetric and close to the port. A poorly placed CMC can increase imbalance and degrade node margin.

Protection side effects (common failure modes on small nodes)
Leakage → low-Iq regression

Protection leakage often rises with temperature. Treat leakage as part of the Iq budget and verify across VBAT/temperature buckets.

Parasitic C → edge reshaping

Extra capacitance can distort edges, shift thresholds, and increase false wake or error counters. Matched parts and symmetric placement reduce differential imbalance.

Return injection → ground bounce

A long or ambiguous return path injects disturbance into the PHY/MCU reference, creating intermittent state transitions and mis-attribution.

Scope guard: IEC coupling models and system-level EMC methodology belong to the EMC/Protection & Co-Design page.

Diagram · Port protection placement (connector → protection → PHY) + return path

Keep the TVS close to the connector and define a short, wide return path to prevent ground injection into the PHY reference.

H2-8 · Diagnostics & Serviceability (node black box, counters, event snapshots)

A node must be serviceable: it should attribute wake, capture snapshots around failures, and expose a compact event log remotely. The goal is to turn intermittent field issues into bucketed, explainable evidence.

Minimal diagnostics set (node baseline)
  • Bus counters: error counters + bus-off events + recovery outcomes.
  • Wake cause: bus / local / timer plus a compact frame/rule summary when applicable.
  • Supply health: brownout/undervoltage markers with VBAT snapshots.
  • Thermal: overtemp entry/exit markers with temperature snapshots.
  • Actuator safety: overcurrent/short flags and driver state.
  • Post-disturbance anomaly: a marker for abnormal counter drift after disturbance events.
Node “black box” strategy
Ring log
  • Ring buffer: keep the last N records (N as a project parameter).
  • Event snapshot: each record is structured (type + sequence + key variables).
  • Capture first: snapshot before any heavy recovery action (reset, power-cycle, retries).
  • Bucket-ready: include VBAT/T + counters + state + cause so field issues can be grouped.
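The ring strategy can be sketched as a fixed-depth buffer with a monotonic sequence number; the record layout and depth are project parameters, shown here with illustrative values.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal black-box ring log: the last LOG_DEPTH records survive,
 * oldest overwritten first. Record layout is illustrative. */
#define LOG_DEPTH 8

typedef struct {
    uint8_t  type;           /* wake / bus-off / overcurrent / brownout */
    uint16_t seq;            /* monotonically increasing sequence */
    uint16_t vbat_mv_div10;
    int8_t   temp_c;
    uint16_t err_counter;
} log_rec_t;

typedef struct {
    log_rec_t rec[LOG_DEPTH];
    uint16_t  next_seq;
    uint8_t   head;          /* next write slot */
    uint8_t   count;         /* valid records, saturates at LOG_DEPTH */
} ring_log_t;

static void ring_log_push(ring_log_t *rl, log_rec_t r)
{
    r.seq = rl->next_seq++;
    rl->rec[rl->head] = r;
    rl->head = (uint8_t)((rl->head + 1) % LOG_DEPTH);
    if (rl->count < LOG_DEPTH) rl->count++;
}
```

The sequence number survives wrap-around, so a reader can reconstruct event order and detect how many records were lost.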
Deliverables (node-facing)
Log field list
  • Identity: record type, sequence, version.
  • Cause: wake source, rule/ID summary, last frame digest (if bus-related).
  • Environment: VBAT, temperature, mode.
  • State: error counters, bus state, driver flags.
  • Outcome: recovery decision, retry count, time window bucket.
Trigger condition list
  • Wake: bus/local/timer with debounce + minimum interval.
  • Bus-off: include counters + last frames + recovery plan.
  • Overcurrent/short: include driver state + load status.
  • Brownout/UV: include VBAT snapshot and mode transition.
  • Overtemp: include temperature and throttle/disable action.
  • Post-disturbance anomaly: include drift marker and follow-up counters.
Remote diagnostic readout (node-side data structure)

Expose the event log through the bus using a compact object model: a header (version, record count, wrap indicator) plus paged entries (fixed fields + optional payload). Keep readout deterministic: index-based paging and simple integrity checks.
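A minimal sketch of the header and a per-page integrity check, assuming an 8-bit additive checksum; the layout is illustrative, not taken from any diagnostic standard.

```c
#include <assert.h>
#include <stdint.h>

/* Readout header: version + record count + wrap indicator. */
typedef struct {
    uint8_t version;
    uint8_t record_count;
    uint8_t wrapped;         /* 1 if the ring overwrote old records */
} log_header_t;

/* 8-bit additive checksum so the tester can detect a torn page read:
 * page bytes plus the checksum byte always sum to 0xFF. */
static uint8_t page_checksum(const uint8_t *page, uint8_t len)
{
    uint8_t sum = 0;
    for (uint8_t i = 0; i < len; i++) sum = (uint8_t)(sum + page[i]);
    return (uint8_t)(0xFF - sum);
}
```

Index-based paging plus this kind of cheap integrity check keeps the readout deterministic even over a lossy diagnostic session.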

Diagram · Event capture timeline (sleep → wake → snapshot → report → update stats)

Capture the snapshot before recovery actions to preserve evidence, then expose a deterministic paged readout for field diagnostics.

H2-9 · Validation & Bring-up Plan (what to measure, how to avoid measurement lies)

The validation goal is not “more tests”, but repeatable evidence: trustworthy Iq measurements, statistically stable false-wake rates, and bus consistency on worst-case harness conditions. Use two gates (bring-up vs production) with clear pass criteria placeholders.

Measurement truth chain (repeatable, node-first)
  1. Define metrics: what is counted, the sampling window, and the denominator.
  2. Stress by buckets: temperature / VBAT / harness / load.
  3. Decide by statistics: rates and distributions, not single runs.
  4. Gate with evidence: logs + counters + matrix coverage.
Iq measurement (avoid “lies”)
Trust first
Range + integration artifacts

Auto-ranging and long integration can hide mode-switch spikes. Verify using a fixed range and a consistent capture window per mode.

Fixture leakage + contamination

Leakage can come from the jig, cables, residue, or protection paths, often worsening at high temperature. A/B isolate suspected paths during bring-up.

Mode transition timing

Sleep → wake → sleep sequences can create misleading averages. Measure steady-state per mode and separately measure transition energy.
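One way to keep transition spikes out of the steady-state figure, sketched assuming a sampled current trace with a known settle window; this is an illustration, not a metrology procedure.

```c
#include <stddef.h>

/* Steady-state Iq from a sampled trace: discard the first settle_n
 * samples (mode transition) and average the rest, so wake spikes do
 * not pollute the sleep figure. Returns -1 if there is not enough
 * steady-state data. */
static float iq_steady_state_ua(const float *samples_ua, size_t n,
                                size_t settle_n)
{
    if (settle_n >= n) return -1.0f;
    float sum = 0.0f;
    for (size_t i = settle_n; i < n; i++) sum += samples_ua[i];
    return sum / (float)(n - settle_n);
}
```

Transition energy is then reported separately (integrate the discarded region), so averages never mix the two regimes.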

False-wake validation (noise / ESD / VBAT disturbance)
Define the wake event

Count wake events with a consistent denominator and window. Require wake attribution (bus/local/timer) to prevent ambiguous statistics.

Stress by buckets

Run bucketed campaigns (temperature / VBAT / harness / load). Report false-wake rate as X per hour (X is a project placeholder).

Post-disturbance drift check

After disturbance, verify counters and wake attribution remain stable. A “pass once” run is insufficient if fragility increases later.
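The per-bucket rate computation with an explicit denominator can be sketched as follows; treating timer wakes up to the expected count as "by design" is an assumption about what counts as false, to be aligned with the project's KPI definition.

```c
#include <assert.h>
#include <stdint.h>

/* False-wake KPI for one bucket (temperature / VBAT / harness / state),
 * with the observation time as an explicit denominator. */
typedef struct {
    uint32_t bus_wakes, local_wakes, timer_wakes;
    uint32_t unattributed_wakes;  /* must stay zero for valid statistics */
    uint32_t observation_s;       /* the denominator, in seconds */
} wake_bucket_t;

static float false_wake_rate_per_hour(const wake_bucket_t *b,
                                      uint32_t expected_timer_wakes)
{
    /* In an unstimulated soak, bus/local/unattributed wakes are false;
     * timer wakes beyond the expected count are also false. */
    uint32_t false_wakes =
        b->bus_wakes + b->local_wakes + b->unattributed_wakes;
    if (b->timer_wakes > expected_timer_wakes)
        false_wakes += b->timer_wakes - expected_timer_wakes;
    if (b->observation_s == 0) return -1.0f; /* undefined denominator */
    return (float)false_wakes * 3600.0f / (float)b->observation_s;
}
```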

Bus consistency + reliability (node-level)
  • Worst harness + worst stub: re-run consistency checks on realistic wiring, not only bench cables.
  • Cold start + brownout: ensure no oscillation between wake and sleep under VBAT edges.
  • Thermal cycling: confirm counters and false-wake statistics do not drift across temperature buckets.
  • Long sleep stability: validate wake cause and recovery behavior remain consistent after long idle periods.
Deliverables · Bring-up Gate vs Production Gate (pass criteria placeholders)
Bring-up Gate
Evidence-ready
  • Preconditions: fixture sanity + mode definitions + clean setup.
  • Measures: steady-state Iq per mode + short-run false-wake stats + counter dumps.
  • Evidence: snapshots, attribution logs, worst-harness spot checks.
  • Pass criteria: X (project-defined).
Production Gate
Bucketed stats
  • Coverage: matrix buckets across temperature / VBAT / harness / load.
  • Statistics: false-wake rate, bus-off rate, brownout rate per bucket.
  • Stability: long-sleep wake consistency + drift checks after disturbance.
  • Pass criteria: X (project-defined).
Diagram · Validation matrix (tests × buckets)

Use bucketed coverage to prevent “one good run” from masking temperature, VBAT, harness, or load-driven failures.

H2-10 · Design Hooks & Pitfalls (node-only checklist, not generic bus guide)

This checklist stays node-only: false-wake traps, low-Iq “black holes”, brownout oscillations, and diagnostics definition errors. The fastest path is to map root causes to symptoms and an immediate first check.

Node-only checklist (fastest high-yield checks)
  • False-wake: harness noise coupling / protection parasitics / return injection.
  • Low-Iq black holes: TVS leakage / GPIO state errors / sensor standby current.
  • Reset + power drop: brownout chatter causing wake-sleep loops.
  • Diagnostics pitfalls: mismatched counter definitions and wrong windows/denominators.
Top-6 failure tree
Root → Symptom → First check
False-wake: harness noise

Symptom: wake spikes in specific wiring conditions. First check: wake attribution buckets (bus/local/timer) and time correlation.

False-wake: protection parasitics

Symptom: edge “looks smoother” but wake/counters worsen. First check: temperature sweep of Iq and false-wake rate together.

False-wake: return injection

Symptom: intermittent state jumps after disturbance. First check: TVS clamp return loop area and reference continuity near PHY/MCU.

Iq black hole: TVS leakage

Symptom: room temp OK, hot Iq explodes. First check: isolate protection paths and compare bucketed Iq deltas.

Iq black hole: GPIO state

Symptom: same hardware, different firmware gives large Iq spread. First check: define all sleep pin-states (no floating inputs).

Brownout chatter loop

Symptom: repeated wake-sleep cycles on VBAT edges. First check: correlate brownout events with wake events by timestamp/sequence.

Diagnostics definition warning: counters must share the same window and denominator across firmware and test tooling to avoid false conclusions.

Diagram · Top-6 fault tree (root cause → symptom → first check)

Use the first-check column to drive fast triage. Keep counters and windows consistent across firmware and tooling to avoid false conclusions.

H2-11 · Engineering Checklist (Design → Bring-up → Production)

Goal: turn “low-Iq + selective wake + serviceability” into auditable artifacts, measured evidence, and production thresholds.

1) Design Checklist (artifacts that must exist before layout freeze)

  • Power-domain map: Always-on vs switched rails; wake domain isolation; “who stays alive” is explicit.
  • Iq budget row: mode / VBAT / temperature / measurement point / contributors / pass criteria (X placeholder).
  • Wake table schema: ID/mask + payload conditions + window + debounce + priority + action; include false-wake KPI bucket plan.
  • Port protection placement rule: connector → protection → PHY distance; return path and symmetry are drawn, not implied.
  • Recovery policy: bus-off backoff + cool-down + “no reboot storm” rule; log points are defined.
  • Node black box: ring-log size N + event triggers + snapshot fields + readout command mapping (LIN/CAN diag session).
  • Bring-up & production gates: test matrix + pass/fail thresholds (X placeholder) + re-test conditions.
Reference BOM building blocks (examples). Use as starting points; verify grade/ASIL needs, bus speed, and node power targets.
  • Always-on LDO: TPS7B82-Q1 (example OPN: TPS7B8250EPWPRQ1)
  • LIN transceiver: TLIN1029-Q1, TJA1021T
  • CAN FD transceiver: TCAN1044A-Q1, MCP2562FD-H/MF, TJA1445
  • CAN SBC (regulator + watchdog + CAN): UJA1169ATK, TLE9471-3ES
  • Port ESD/TVS (CAN/LIN class): ESD2CAN24-Q1
  • Smart high-side switch (actuator power): TPS1H200A-Q1, BTS50015-1TAD
  • Smart low-side switch (multi-channel): TLE9104SH
  • Isolated CAN (HV domain / GPD): ISO1042BQDWRQ1
  • Isolated power driver (for isolated CAN rails): SN6505DQDBVTQ1
  • LIN motor/actuator SoC (tight integration option): TLE9854QX

2) Bring-up Checklist (measurements that must match the design intent)

  • Iq truth test: multiple meter ranges + fixture leakage control + temperature points; confirm each sleep mode transitions cleanly.
  • Wake attribution sanity: wake cause is logged within the first firmware window (before peripherals disturb evidence).
  • False-wake rate: count wakes/hour under noise/ESD/brownout stimuli; bucket by VBAT and temperature.
  • Bus robustness: worst harness + worst stub + worst load; confirm error counters remain bounded and bus-off recovery is stable.
  • Protection side-effects: verify ESD/TVS parasitics do not deform edges into mis-detection or selective-wake false triggers.
  • Black-box usefulness: event snapshot reproduces a fault narrative (wake → symptom → counters → recovery).

3) Production Checklist (station scripts + thresholds + traceability)

  • End-of-line Iq screen: fixed mode + fixed VBAT + fixed soak time; fail bins are linked to contributor hypotheses (TVS leakage / GPIO / sensor standby).
  • Wake filter regression: run the same wake-table vectors across batches; record false-wake KPI and drift.
  • Diag payload contract: counters/log schema versions are frozen; mismatched definitions are blocked at build time.
  • Field trace: station ID + firmware hash + calibration + pass thresholds are stored alongside node identity.
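The "mismatched definitions are blocked at build time" rule can be enforced with a compile-time size lock on the frozen diag payload. A minimal sketch in C11; the field set, sizes, and version number are hypothetical placeholders for a real diag contract.

```c
#include <assert.h>
#include <stdint.h>

/* Frozen diag snapshot layout (hypothetical fields). Bump the version
 * whenever the layout changes; firmware and service tools must agree
 * on both the version number and the wire size. */
#define DIAG_SCHEMA_VERSION 3u

typedef struct {
    uint8_t  schema_version;
    uint8_t  wake_cause;
    uint16_t vbat_min_mv;
    int8_t   temp_c;
    uint8_t  busoff_count;
    uint16_t false_wake_count;
} diag_snapshot_v3_t;

/* Build-time lock: if a field change alters the wire size, the build
 * fails here instead of shipping a mismatched payload definition. */
_Static_assert(sizeof(diag_snapshot_v3_t) == 8,
               "diag snapshot layout changed: bump DIAG_SCHEMA_VERSION");

uint8_t diag_schema_version(void) { return DIAG_SCHEMA_VERSION; }
```

A size check does not catch every reordering, so pairing it with a reviewed version bump (as the production checklist requires) is what actually freezes the contract.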
Diagram: 3-stage gate flow — required outputs per stage (tables / logs / thresholds).
Engineering gates for sensor/actuator nodes — artifacts → evidence → thresholds:
  • Design gate: power-domain map; wake table schema; protection placement rule.
  • Bring-up gate: Iq evidence (temp/VBAT); false-wake statistics; bus-off recovery proof.
  • Production gate: EOL script + limits (X); diag contract locked; traceability record.

H2-12 · Applications (Node Patterns & Bundles)

Node bundles express “what gets built” without expanding into other sub-pages. Each bundle lists a minimal bill of materials with concrete example part numbers.

Pattern A

Ultra-Low-Power LIN Sensor Node (periodic sample + local threshold wake)

  • Target: lowest quiescent current while keeping deterministic wake causes.
  • Wake sources: local threshold / timed wake / LIN bus wake (if required by system policy).
  • Low-power lever: switched main domain; always-on keeps only wake + LIN interface + minimal sensing.
  • Minimum serviceability: wake-cause + VBAT/temperature snapshot + last N wake histogram.
Example BOM (concrete part numbers)
  • Always-on LDO: TPS7B82-Q1 (e.g., TPS7B8250EPWPRQ1)
  • LIN transceiver: TLIN1029-Q1 (alt: TJA1021T)
  • ESD/TVS (bus/IO class): ESD2CAN24-Q1 (a low-capacitance ESD/TVS building block for in-vehicle nets)
  • Optional smart load switch for sensor rail: TPS1H200A-Q1 (when rail isolation + diagnostics are needed)
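Pattern A's "lowest quiescent current" target only holds if every always-on contributor is measured and summed, not read from datasheet typicals. A minimal bookkeeping sketch in C; the contributor list and values are placeholders for bench measurements per temperature/VBAT bucket.

```c
#include <assert.h>
#include <stdint.h>

/* Always-on rail leakage contributors, in nA, for one temp/VBAT bucket.
 * Values must come from bench measurement, not datasheet typicals. */
typedef struct {
    uint32_t ldo_iq_na;       /* e.g. TPS7B82-Q1 quiescent current */
    uint32_t lin_phy_na;      /* LIN transceiver sleep current */
    uint32_t tvs_leak_na;     /* ESD/TVS leakage at this temperature */
    uint32_t pullups_na;      /* pull-ups / dividers on the always-on rail */
    uint32_t sensor_stby_na;  /* sensor standby current */
} iq_budget_na_t;

uint32_t iq_total_na(const iq_budget_na_t *b)
{
    return b->ldo_iq_na + b->lin_phy_na + b->tvs_leak_na
         + b->pullups_na + b->sensor_stby_na;
}

/* Gate check: does the measured sum fit the node's sleep target (µA)? */
int iq_within_target(const iq_budget_na_t *b, uint32_t target_ua)
{
    return iq_total_na(b) <= target_ua * 1000u;
}
```

Keeping the budget per contributor (rather than one lumped number) is what makes the end-of-line fail bins attributable later.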
Pattern B

LIN Actuator Node (HSS/LSS + overcurrent diagnostics + sleep policy)

  • Target: robust actuation with consistent diagnostics and controlled wake behavior.
  • Wake sources: LIN bus wake / local input wake / timed wake for maintenance pulses.
  • Key rule: actuator power path must not back-feed the node in sleep (define clamp/backfeed checks).
  • Minimum serviceability: OC/OT counters + last fault timestamp + last N actuator command snapshots.
Example BOM (concrete part numbers)
  • Integrated LIN actuator SoC option: TLE9854QX (tight integration for actuator nodes)
  • LIN transceiver (discrete option): TLIN1029-Q1 (alt: TJA1021T)
  • Smart high-side switch (actuator supply): TPS1H200A-Q1 (alt high-current class: BTS50015-1TAD)
  • Smart low-side switch (multi-channel loads): TLE9104SH
  • ESD/TVS: ESD2CAN24-Q1
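The "last N actuator command snapshots" serviceability item in Pattern B is just a small ring buffer read back over diagnostics. A sketch under assumed field names; the snapshot contents would be whatever the diag contract freezes.

```c
#include <assert.h>
#include <stdint.h>

#define CMD_RING_N 8  /* last N actuator commands kept for service readout */

typedef struct {
    uint32_t timestamp_ms;
    uint8_t  channel;
    uint8_t  duty_pct;
    uint8_t  fault_flags;  /* OC/OT bits latched at command time */
} actuator_cmd_t;

typedef struct {
    actuator_cmd_t slot[CMD_RING_N];
    uint32_t count;  /* total commands ever logged */
} cmd_ring_t;

void cmd_ring_push(cmd_ring_t *r, actuator_cmd_t c)
{
    r->slot[r->count % CMD_RING_N] = c;
    r->count++;
}

/* Read back the i-th most recent command (0 = newest). Returns 0 if
 * fewer than i+1 commands are available in the ring. */
int cmd_ring_recent(const cmd_ring_t *r, uint32_t i, actuator_cmd_t *out)
{
    if (i >= r->count || i >= CMD_RING_N) return 0;
    *out = r->slot[(r->count - 1u - i) % CMD_RING_N];
    return 1;
}
```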
Pattern C

CAN FD Smart Sensor Node (event report + black box + payload discipline)

  • Target: maintain timing/EMC margin on real harness while keeping diagnostic truth.
  • Wake sources: selective bus wake (if used), local threshold wake, timed wake for calibration.
  • Discipline: cap bus utilization per node; avoid “event storms” that turn into network-wide misbehavior.
  • Minimum serviceability: error counters + bus-off snapshots + wake cause + last message schedule summary.
Example BOM (concrete part numbers)
  • CAN FD transceiver: TCAN1044A-Q1 (alt: MCP2562FD-H/MF, TJA1445)
  • CAN SBC option (regulator + watchdog + CAN): UJA1169ATK or TLE9471-3ES
  • ESD/TVS: ESD2CAN24-Q1
  • Always-on LDO (if not using SBC rail): TPS7B82-Q1
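The payload-discipline rule in Pattern C ("cap bus utilization per node; avoid event storms") is commonly enforced with a token bucket in front of event transmission. A minimal integer-math sketch; the rate and burst numbers are placeholders for the network budget.

```c
#include <assert.h>
#include <stdint.h>

/* Token bucket: refill `rate_per_s` tokens per second up to `burst`.
 * An event frame may be sent only if a whole token is available, which
 * caps per-node bus utilization and prevents event storms. */
typedef struct {
    uint32_t tokens_x1000;   /* tokens scaled by 1000 for integer math */
    uint32_t rate_per_s;
    uint32_t burst;
    uint32_t last_ms;
} tbucket_t;

void tb_init(tbucket_t *b, uint32_t rate_per_s, uint32_t burst, uint32_t now_ms)
{
    b->tokens_x1000 = burst * 1000u;
    b->rate_per_s = rate_per_s;
    b->burst = burst;
    b->last_ms = now_ms;
}

/* Returns 1 if the event may be transmitted now, 0 if it must be dropped
 * or aggregated into the next periodic report. */
int tb_allow(tbucket_t *b, uint32_t now_ms)
{
    uint32_t elapsed_ms = now_ms - b->last_ms;
    b->last_ms = now_ms;
    uint64_t add = (uint64_t)elapsed_ms * b->rate_per_s; /* milli-tokens */
    uint64_t t = b->tokens_x1000 + add;
    uint64_t cap = (uint64_t)b->burst * 1000u;
    b->tokens_x1000 = (uint32_t)(t > cap ? cap : t);
    if (b->tokens_x1000 >= 1000u) {
        b->tokens_x1000 -= 1000u;
        return 1;
    }
    return 0;
}
```

Dropped events should still increment a counter that the black box reports, so the limiter itself never hides diagnostic truth.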
Pattern D

Isolated CAN Actuator Node (HV domain / large GPD + safety fallback)

  • Target: tolerate ground potential differences; prevent ground-shift from turning into false-wake or bus faults.
  • Wake sources: bus wake across isolation boundary + local safe wake; log “pre-fault” context before recovery.
  • Fallback: define actuator safe state on bus-loss and repeated bus-off; prove no reboot storm.
  • Minimum serviceability: isolation-side fault counters + bus-off timeline + supply brownout + temperature snapshots.
Example BOM (concrete part numbers)
  • Isolated CAN transceiver: ISO1042BQDWRQ1
  • Isolated power (transformer driver): SN6505DQDBVTQ1
  • Smart high-side switch: TPS1H200A-Q1 (alt high-current class: BTS50015-1TAD)
  • Smart low-side switch: TLE9104SH
  • ESD/TVS: ESD2CAN24-Q1
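Pattern D's "prove no reboot storm" requirement implies a bounded, observable recovery policy. A sketch of capped exponential backoff with a cool-down reset; all timing constants are placeholders to be set against the network's recovery budget.

```c
#include <assert.h>
#include <stdint.h>

/* Bus-off recovery policy: exponential backoff capped at BACKOFF_MAX_MS,
 * with the attempt counter reset only after a quiet cool-down period.
 * This bounds retry traffic and prevents a reboot/retry storm. */
#define BACKOFF_BASE_MS  100u
#define BACKOFF_MAX_MS   6400u
#define COOLDOWN_MS      30000u  /* healthy time before attempts reset */

typedef struct {
    uint32_t attempts;
    uint32_t last_ok_ms;
} busoff_policy_t;

/* Returns how long to wait before the next recovery attempt. */
uint32_t busoff_next_delay_ms(busoff_policy_t *p, uint32_t now_ms)
{
    /* Quiet long enough? Start over from the base delay. */
    if (now_ms - p->last_ok_ms >= COOLDOWN_MS)
        p->attempts = 0;
    uint32_t d = BACKOFF_BASE_MS << (p->attempts > 6u ? 6u : p->attempts);
    if (d > BACKOFF_MAX_MS) d = BACKOFF_MAX_MS;
    p->attempts++;
    return d;
}

/* Call when the node has been error-free on the bus. */
void busoff_mark_healthy(busoff_policy_t *p, uint32_t now_ms)
{
    p->last_ok_ms = now_ms;
}
```

Logging each `attempts` transition is what makes the backoff observable, per the recovery-policy gate item above.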
Diagram: 4 application bundles — each bundle shows bus, domains, and the “must-have” blocks.
Node bundles (A–D): bus + power domains + serviceability:
  • A · LIN sensor: always-on LDO + LIN PHY; switched MCU/sensor domain; wake: local/timer/bus; min log: wake-cause + VBAT.
  • B · LIN actuator: LIN PHY + driver (HSS/LSS); OC/OT diagnostics; sleep policy; no backfeed in sleep.
  • C · CAN FD: CAN FD PHY; payload discipline; black box; EMC margin on worst-case harness.
  • D · Isolated: isolated CAN + isolated power; actuator driver + safe fallback; GPD/HV-domain ready.


H2-13 · FAQs (Node Troubleshooting, 4-line Answers)

Scope: only node-level troubleshooting for low-Iq, selective wake, protection side-effects, diagnostics, and bring-up evidence. No new domains.

Sleep Iq is low in the datasheet, but the vehicle/system Iq is much higher — which 3 leakage sources come first?

Likely cause: always-on rail leakage from (1) TVS/ESD devices, (2) pull-ups/dividers, (3) GPIO clamp/backfeed via external loads.

Quick check: split Iq by domain: measure the VBAT rail current with the main domain OFF vs ON; then temporarily lift/disable pull-ups and isolate protection footprints to rank contributors.

Fix: enforce “always-on only” policy; remove floating GPIO; cap pull-up current; select lower-leakage protection and validate leakage at hot.

Pass criteria: Deep-sleep Iq ≤ X µA @ VBAT=Y V, T=Z °C; top-3 leakage contributors each ≤ X µA (bucketed by temperature).

The node wakes up sporadically, but the log shows no wake frame — check the filter table first or suspect EMC events?

Likely cause: wake attribution is missing/late, so a non-frame wake (local pin, brownout, transient) looks like “no frame.”

Quick check: log wake-cause within the first firmware window (before stacks/peripherals run): {bus/local/timer/reset} + minimal snapshot (VBAT min, reset reason).

Fix: implement deterministic wake attribution; add a “pre-stack” capture path; then validate filter entries only after attribution confirms “bus wake.”

Pass criteria: 100% of wakes include wake-cause + snapshot; false-wake rate ≤ X wakes/hour in bucket (T/VBAT/harness).

After waking, the node immediately falls back to sleep — is it brownout chatter or a state-machine race?

Likely cause: VBAT droop/brownout causes repeated reset, or wake/sleep transitions have a race (flag cleared too early, debounce window wrong).

Quick check: correlate wake timeline with reset-reason + VBAT min capture; look for repeating “wake → reset → sleep” signatures within a short window.

Fix: add hysteresis/hold-up time for wake; gate sleep entry until “system stable” condition; harden debounce and ordering of wake flags.

Pass criteria: wake completes to “online” state within X ms without extra resets; no wake-sleep oscillation over Y cycles (T/VBAT buckets).
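The "gate sleep entry until system stable" fix can be expressed as a hold-up window: sleep is permitted only after VBAT and the wake flags have been quiet for a full window. A minimal sketch; the threshold and hold-up values are placeholders.

```c
#include <assert.h>
#include <stdint.h>

/* Sleep entry is allowed only after VBAT has stayed above threshold and
 * no wake flag has re-fired for a full hold-up window. This breaks the
 * wake -> reset -> sleep oscillation. */
#define VBAT_OK_MV  9000u
#define HOLDUP_MS   200u

typedef struct {
    uint32_t stable_since_ms;
    int      stable;
} sleep_gate_t;

/* Feed one sample per tick; returns 1 when sleep entry is permitted. */
int sleep_gate_update(sleep_gate_t *g, uint32_t now_ms,
                      uint16_t vbat_mv, int wake_flag_pending)
{
    int ok = (vbat_mv >= VBAT_OK_MV) && !wake_flag_pending;
    if (!ok) {
        g->stable = 0;     /* any disturbance restarts the window */
        return 0;
    }
    if (!g->stable) {
        g->stable = 1;
        g->stable_since_ms = now_ms;
    }
    return (now_ms - g->stable_since_ms) >= HOLDUP_MS;
}
```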

After bus-off recovery, the network “gets slower” — is it the backoff strategy or counter/KPI definition?

Likely cause: recovery backoff/cool-down reduces traffic intentionally, or the KPI window/denominator changed after recovery (making it look slower).

Quick check: compare utilization and message latency with identical measurement window and denominator before/after bus-off; log backoff state transitions.

Fix: cap backoff duration and make it observable; standardize KPI definitions (window, denominator, bucket) across firmware and service tools.

Pass criteria: bus-off ≤ X events per Y hours; recovery time ≤ Z ms; post-recovery utilization and latency within X% of baseline under same KPI window.
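The "same measurement window and denominator" fix amounts to computing the KPI from fixed, non-overlapping windows on both sides of the comparison. A sketch of one such utilization KPI; the window size and permille scaling are illustrative choices.

```c
#include <assert.h>
#include <stdint.h>

/* KPI contract: utilization is always busy_ms / KPI_WINDOW_MS over a
 * fixed window, reported in permille. Comparing pre/post bus-off values
 * is only valid when both sides use this same window and denominator. */
#define KPI_WINDOW_MS 1000u

typedef struct {
    uint32_t window_start_ms;
    uint32_t busy_ms;
} util_kpi_t;

/* Accumulate bus-busy time; returns utilization in permille (0..1000)
 * once the window closes, or -1 while the window is still open. */
int util_kpi_update(util_kpi_t *k, uint32_t now_ms, uint32_t busy_delta_ms)
{
    k->busy_ms += busy_delta_ms;
    if (now_ms - k->window_start_ms < KPI_WINDOW_MS)
        return -1;
    int permille = (int)((k->busy_ms * 1000u) / KPI_WINDOW_MS);
    k->window_start_ms = now_ms;  /* fixed-size, non-overlapping windows */
    k->busy_ms = 0;
    return permille;
}
```

Freezing this definition in both firmware and service tools (with a version tag) is what prevents the post-recovery "it looks slower" artifact.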

After changing the TVS, false wakes increase — check leakage first or edge-shape change from parasitic C?

Likely cause: hot leakage shifts the input threshold/bias, or added capacitance/imbalance deforms edges into mis-detection windows.

Quick check: measure TVS leakage vs temperature on the bench; then compare edge rise/fall and symmetry with the old part on the same harness.

Fix: select matched low-cap, low-leakage automotive ESD/TVS (e.g., ESD2CAN24-Q1 as a reference family) and place it to preserve return path symmetry.

Pass criteria: false-wake ≤ X wakes/hour across temperature buckets; leakage at hot ≤ X µA; edge metrics stay within X% of baseline in worst harness.

Low temperature is OK, but false wakes rise at high temperature — which drift path should be checked first?

Likely cause: temperature increases leakage (protection, pull-ups, sensor standby), shrinking noise margin and widening mis-detection windows.

Quick check: bucket false-wake rate by temperature and correlate with Iq contributors (always-on rail current) and wake-cause distribution.

Fix: reduce hot leakage contributors; tighten wake debounce/window; add hysteresis to local thresholds; confirm wake attribution is captured early.

Pass criteria: false-wake ≤ X/hour at hot bucket; always-on leakage increase ≤ X µA vs room; wake-cause distribution matches intended policy.

A LIN node occasionally does not respond — is it auto-baud or sleep/wake timing?

Likely cause: the node is not fully awake/clock-ready when the header arrives, or auto-baud capture window is missed after wake.

Quick check: log “wake-to-ready” time and compare to master’s first header timing; inspect whether the first frame after wake is consistently decoded.

Fix: add a deterministic wake-ready handshake window; delay first header or extend wake-ready margin; validate transceiver wake behavior (e.g., TLIN1029-Q1 / TJA1021T class).

Pass criteria: 0 missed responses over N wake cycles; wake-to-ready ≤ X ms; first-frame decode success ≥ (100% − X ppm) across T/VBAT buckets.

The actuator node triggers overcurrent protection, but the event log is missing — is it sampling window or interrupt priority?

Likely cause: the fault happens faster than the logging path (snapshot taken too late), or the ISR/log write is preempted/blocked during protection action.

Quick check: add a “pre-fault latch” (first-fault flag + timestamp) and compare it to the ring-log sequence; verify ISR latency under load.

Fix: move minimal snapshot capture into the highest-priority path; separate “capture” from “serialize/log write”; ensure protection IC status is read deterministically (e.g., TPS1H200A-Q1 / TLE9104SH class).

Pass criteria: ≥ 99.9% of OC events produce a snapshot with required fields; snapshot timestamp error ≤ X ms; no missed capture under worst load.
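The "pre-fault latch" and the capture/serialize split can be sketched as follows: the highest-priority path performs only a few stores, and the ring-log write happens later in task context. Field names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* First-fault latch: the ISR only stores a few words (capture);
 * formatting and ring-log writes (serialize) happen later in task
 * context, so a fast OC event can never outrun the logger. */
typedef struct {
    volatile uint8_t  latched;     /* set once, cleared by the drain task */
    volatile uint8_t  fault_flags; /* raw driver status at trip time */
    volatile uint32_t timestamp_ms;
} first_fault_t;

/* Called from the protection ISR: a handful of stores, nothing else. */
void fault_capture(first_fault_t *f, uint8_t flags, uint32_t now_ms)
{
    if (f->latched) return;  /* keep the FIRST fault, not the last */
    f->fault_flags = flags;
    f->timestamp_ms = now_ms;
    f->latched = 1;
}

/* Called from task context: drain into the ring log, then re-arm.
 * Returns 1 if a latched fault was drained. */
int fault_drain(first_fault_t *f, uint8_t *flags, uint32_t *ts_ms)
{
    if (!f->latched) return 0;
    *flags = f->fault_flags;
    *ts_ms = f->timestamp_ms;
    f->latched = 0;
    return 1;
}
```

On a real MCU the drain path would also need an interrupt-safe handover (e.g. disabling the protection IRQ around the read); that detail is omitted here.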

The same node behaves differently on different harnesses — what is the first harness bucketing and re-test step?

Likely cause: harness length/stubs/loads change edge shape and noise coupling, shifting selective-wake and error margins.

Quick check: bucket harnesses by {length, stub, node count/load, shielding/return path}; re-test worst bucket first with identical firmware and KPI window.

Fix: tune wake debounce/window and confirm protection placement symmetry; update validation matrix to always include worst harness bucket as a gate item.

Pass criteria: all buckets meet false-wake ≤ X/hour and bus errors ≤ X per Y hours; worst bucket margin is documented and stable across temperature.

The node “woke up but does not join the bus” — check wake attribution first or bus error state first?

Likely cause: the node woke due to a non-bus source (local/brownout), or it entered error-passive/bus-off during early startup on a noisy harness.

Quick check: read wake-cause + startup counters within the first window; check if TEC/REC rise immediately and whether bus-off state is reached.

Fix: ensure deterministic wake attribution; add startup quiet window; harden bus-off recovery (e.g., CAN FD PHY class like TCAN1044A-Q1) and avoid reboot storms.

Pass criteria: 100% of wakes have attribution; node reaches “online” state ≤ X ms; no bus-off during first Y seconds in worst harness bucket.

After a long sleep, the first wake-up fails — which retention / register-loss class should be checked first?

Likely cause: a required retention bit/clock domain is not preserved, or the wake domain powers up in the wrong order causing missed first-frame readiness.

Quick check: compare “first wake” vs “second wake” register snapshots; verify wake-to-ready timing and whether any init step assumes a warm state.

Fix: make long-sleep wake path idempotent; reinitialize all critical registers; validate power-domain sequencing and guard the first-frame window.

Pass criteria: first wake success ≥ (100% − X ppm) over N long-sleep cycles; wake-to-ready ≤ X ms; no “warm-state dependence” in logs.

Service readout looks normal, but customers still report faults — which KPI definition must be aligned first?

Likely cause: KPI mismatch (window, denominator, bucket, trigger) makes “normal” in tools differ from “fault” in the field.

Quick check: re-compute the KPI using the same time window and buckets as the customer scenario; compare raw counters and snapshot triggers.

Fix: freeze KPI contract (definitions + units + windowing); store tool version and station metadata; keep a minimal black-box snapshot for disputed cases.

Pass criteria: KPI agreement within X% across tools and field logs; disputed cases always have snapshot evidence with consistent bucket labels.