Sensors / Actuators Nodes: Compact LIN/CAN with Selective Wake
A sensor/actuator node succeeds when it can sleep with truly low Iq, wake only for intended reasons (with attribution), and leave diagnosable evidence that survives real harness noise and protection parasitics.
The practical recipe is: design explicit power domains and wake tables, verify false-wake and bus recovery on worst harness buckets, and ship a minimal black-box log that makes field failures explainable.
H2-1 · Overview: What a “Sensor/Actuator Node” Means (and what it doesn’t)
A sensor/actuator node is a remote micro-ECU on a vehicle harness: it combines a bus interface (LIN/CAN), local logic/state control, I/O or power drivers, and a disciplined sleep/wake strategy. The engineering goal is not a single part choice — it is a system that sleeps low, wakes correctly, and stays diagnosable.
- Node boundary, power domains (always-on vs switched), and sleep architecture.
- Selective wake strategy at node level: filtering, false-wake control, wake attribution.
- Minimum protection/EMC for small nodes (placement + parasitics impact).
- Diagnostics and serviceability: counters, snapshots, and a small black-box log.
- Bring-up and production gates: what to measure and what “pass” looks like.
Not covered on this page:
- PHY waveform details and protocol deep dives (Classic/FD/SIC/XL timing formulas).
- Full ISO 11898-6 rule text and complete compliance interpretation.
- General EMC theory; this page only covers node-minimum placement constraints.
Track Iq by mode × temperature × VBAT. Include not only IC standby currents but also leakage from protection parts, pull-ups, dividers, sensor standby, and GPIO clamp paths.
Define a KPI such as X wakes/hour under specified harness noise, EMC stress, and environmental conditions. Always log wake cause (bus/local/timer) to prevent “mystery wakes”.
After bus faults, undervoltage, or thermal events, the node must recover with bounded retries and clear state transitions. Avoid oscillation between wake/sleep due to brownout or racing state machines.
Maintain a small ring buffer of event snapshots: wake cause, counters, VBAT, temperature, and key driver/sensor flags. Without this, field returns become “cannot reproduce”.
- “Datasheet Iq looks great, system Iq is high” — leakage, GPIO clamps, protection parasitics, or sensor standby dominates.
- “Wakes up randomly” — filter tables too permissive, noise coupling on harness, or poor return path triggers false wakes.
- “Wakes but never comes online” — brownout chatter, incorrect power-domain sequencing, or missing state-machine debouncing.
- “Field failure with no evidence” — no attribution log, no counters, no event snapshot at the moment of failure.
The diagram separates always-on blocks (wake filtering + PHY) from switched blocks (MCU/driver/sensor) and shows where wake attribution and event logging must tap.
H2-2 · Node Archetypes & Topologies (LIN-only / CAN / Dual-bus / Isolated)
Node architectures are easiest to control when the type is identified first. Each archetype has a different dominant risk: leakage-driven Iq, false-wake sensitivity, fault recovery, or ground potential differences. The sections below define a must-have set for every archetype: wake source, Iq budget, minimum protection, and minimum logging.
- Wake source: bus wake (filter-based), local wake (sensor/trigger), and optional timed wake.
- Iq budget: component-by-component accounting across modes and temperatures.
- Minimum protection: node-specific placement and parasitic control (avoid protection-induced false wakes).
- Minimum logging: wake attribution + counters + event snapshot (field-service ready).
Dominant risks: leakage-driven Iq, wake threshold sensitivity, missing attribution logs.
Minimum tests: Iq by mode/temperature, false-wake rate under harness noise, wake-cause consistency.
Dominant risks: driver fault recovery, thermal/overcurrent events, log gaps at the moment of fault.
Minimum tests: protection trips + recovery, event snapshot coverage, stable sleep after fault clearing.
Dominant risks: bus-off handling, retry storms, excessive utilization and priority mis-design.
Minimum tests: fault injection + bounded recovery, counter integrity, utilization limits under worst-case traffic.
Dominant risks: return-path surprises, isolation power sequencing, disturbance-induced false wakes.
Minimum tests: disturbance/burst tolerance at system level, wake attribution under stress, stable domain power-up order.
- Bus layer: LIN or CAN interface + wake-qualified receive path.
- Power domains: always-on wake/PHY domain + switched compute/IO domain with deterministic sequencing.
- Protection minimum: connector-adjacent parts placed to preserve edges and avoid leakage dominance.
- Logging minimum: wake cause + counters + event snapshot (VBAT/T/flags) stored before heavy software runs.
- Validation minimum: Iq matrix + false-wake statistics + recovery boundedness under injected faults.
Each archetype keeps the same power-domain structure. Differences are confined to the switched blocks (sensor vs driver vs diagnostics) and, when required, an isolation barrier to tolerate large ground potential differences.
H2-3 · Power Domains & Sleep Architecture (why nodes fail low-Iq)
Low sleep current is a system accounting problem. A node reaches low-Iq only when the power tree is split into always-on and switched domains, sleep modes are defined with deterministic sequencing, and every microamp has an owner in an Iq budget row.
- VBAT → protection → pre-reg/LDO → loads. The domain boundary defines the achievable Iq.
- Always-on domain keeps wake-qualified receive and minimal logging alive.
- Switched domain powers MCU/driver/sensor only when needed, then returns to a stable sleep state.
- Leakage paths (TVS, dividers, pull-ups, GPIO clamps, driver backfeed) often dominate at microamp targets.
Standby: Always-on keeps wake-qualified receive. The switched domain is off. The wake cause is logged first, then the node powers up deterministically.
Deep sleep: The always-on set is reduced to the smallest reliable wake path. This mode requires strict sequencing and guardrails against oscillation (wake → brownout → sleep → wake).
Shipping: The node enters a near-off state. Entry and exit conditions must be explicit. Wake sources are intentionally limited, and recovery behavior is validated as a production gate.
- TVS / protection leakage: temperature-dependent leakage silently dominates microamp budgets.
- GPIO floating / wrong bias: undefined pins create clamp currents or keep blocks partially alive.
- Pull-ups / dividers: “small” static resistors become the largest term at low-Iq targets.
- MCU pin clamps: external voltage on an I/O while rails are off backfeeds the domain.
- Driver backfeed: actuator loads inject energy through ESD diodes or body paths.
- VBAT: voltage point (min/nom/max).
- Temperature: test temperature bucket.
- Mode: Standby / Deep sleep / Shipping.
- Measurement point: where current is measured (supply node / domain).
- Contributors: IC Iq, leakage, pull-ups/dividers, sensor standby, backfeed paths.
- Pass criteria: Iq < X (and stable over Y minutes) with defined test setup.
Each row assigns ownership to every current contributor. Any “unknown” term is treated as a defect until localized.
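One Iq budget row can be checked mechanically: the named contributors must explain the measured total within a residual tolerance, and the total must meet the row's pass limit. The function, example values, and the 10% residual threshold below are illustrative assumptions, not project numbers.

```python
# Hypothetical check of one Iq budget row: any unexplained residual
# beyond the tolerance is treated as a defect until localized.
def check_iq_row(measured_ua, contributors, limit_ua, max_unknown_frac=0.10):
    """contributors: {owner_name: current_uA}; returns (passes, residual_uA)."""
    explained = sum(contributors.values())
    residual = measured_ua - explained
    passes = (measured_ua <= limit_ua) and \
             (abs(residual) <= max_unknown_frac * measured_ua)
    return passes, residual

# Example row (illustrative values) for one mode/VBAT/temperature bucket.
row = {
    "PHY standby": 8.0, "TVS leakage (85C)": 3.5,
    "pull-ups/dividers": 4.0, "sensor standby": 2.0,
}
ok, residual = check_iq_row(measured_ua=18.2, contributors=row, limit_ua=25.0)
```

A row that passes on total current but carries a large residual still fails, which is exactly the "unknown term is a defect" rule.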
The key design control is the domain boundary. The key debug control is the Iq budget row with explicit leakage owners.
H2-4 · Selective Wake Strategy for Nodes (filter, false-wake, attribution)
Selective wake is a closed loop: define wake sources, implement a deterministic filter table, control false wakes under stress, and log attribution so the strategy can be tightened without losing required events.
- Bus wake: wake only when a rule matches (ID/mask + payload condition + window/debounce).
- Local wake: physical trigger (button, threshold, edge) with debouncing and rate limiting.
- Timed wake: periodic maintenance checks with bounded duration and explicit return-to-sleep.
- Rule ID: stable identifier for attribution logs.
- CAN/LIN ID + Mask: match scope (exact / range-like by mask).
- Payload condition: byte/bit match for the minimum semantic trigger.
- Window: N-of-M counters and time bounds.
- Debounce / Min interval: block repeats and burst noise triggers.
- Action: wake / partial wake / ignore / log-only.
- Priority: resolve multi-hit deterministically.
- Log policy: which snapshot fields must be captured on hit.
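The table fields above can be exercised as a deterministic rule evaluation: ID/mask match, a payload byte condition, an N-hits-in-window counter, and a minimum interval that demotes repeat matches to log-only. This is a sketch; the `WakeRule` class, rule content, and window values are illustrative assumptions.

```python
# Hypothetical evaluation of one wake-table rule against incoming frames.
from dataclasses import dataclass, field

@dataclass
class WakeRule:
    rule_id: str
    can_id: int
    id_mask: int                 # ID bits that must match
    byte_index: int              # payload byte to test
    byte_value: int
    hits_required: int = 2       # N hits ...
    window_ms: int = 100         # ... within this window
    min_interval_ms: int = 1000  # debounce between wake actions
    _hits: list = field(default_factory=list)
    _last_wake_ms: int = -10**9

    def offer(self, now_ms: int, frame_id: int, payload: bytes) -> bool:
        """Returns True when the rule fires a wake (not merely a match)."""
        if (frame_id & self.id_mask) != (self.can_id & self.id_mask):
            return False
        if self.byte_index >= len(payload) or \
           payload[self.byte_index] != self.byte_value:
            return False
        # Keep only hits inside the N-of-window counter.
        self._hits = [t for t in self._hits if now_ms - t < self.window_ms]
        self._hits.append(now_ms)
        if len(self._hits) < self.hits_required:
            return False
        if now_ms - self._last_wake_ms < self.min_interval_ms:
            return False         # matched, but debounced: log-only
        self._last_wake_ms = now_ms
        return True
```

In a real table, multiple rules would be evaluated in priority order and every hit (wake or log-only) would be recorded with the rule ID for attribution.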
- Stress drivers: harness coupling, EMC events, ground bounce, supply disturbance.
- Bucket dimensions: temperature, VBAT, harness variant, operating state (park/drive), event class.
- Primary lever: move from permissive rules to minimal semantic triggers (payload conditions + windows).
Define X wakes/hour (or X wakes/parking cycle) under specified buckets (temperature/VBAT/harness/state). Split statistics by wake source and require bus-wake attribution to include rule ID and frame summary.
- Record wake source: bus / local / timer.
- If bus wake: record rule ID + minimal frame summary (ID, DLC, key bytes).
- Snapshot: VBAT, temperature, error counters, last sleep mode.
- Purpose: convert “mystery wakes” into actionable buckets for table tightening.
The loop prevents two extremes: waking on everything (battery drain) or waking on nothing (missed events). Attribution is the key to safe tightening.
H2-5 · LIN Sensor/Actuator Node Design (node context)
LIN fits low-speed, high-node-count, cost-sensitive networks when the node is engineered for low-Iq sleep, noise resilience, and serviceable logs. This section focuses on node-level usage and verification, not PHY deep-dive.
- Low speed + many nodes: sensors and simple actuators that benefit from predictable scheduling.
- Cost/size constrained: minimal harness and low component count at the node.
- Serviceability required: node must explain wake reasons, actuator faults, and recovery outcomes.
Define entry criteria (quiet time + stable supply), and define wake handling as a deterministic sequence: log wake cause first, then power the switched domain, then validate bus state. Add debounce and minimum intervals to avoid oscillation under harness noise.
Treat auto-baud as a controlled acquisition phase after wake: reject unstable edges, require repeated valid headers, and log acquisition failures with counters. Avoid waking the full node on ambiguous activity.
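The controlled acquisition phase can be sketched as: consecutive sync-field bit times must agree within a tolerance, unstable edges restart the capture and are counted as failures, and a rate is only accepted after repeated consistent headers. The function, tolerance, and header count below are illustrative assumptions.

```python
# Hypothetical auto-baud acquisition: reject unstable edges, require
# repeated consistent headers, and count failures for the log.
def acquire_baud(sync_bit_times_us, tol=0.05, headers_required=2):
    """sync_bit_times_us: measured bit times, one per header's sync field.
    Returns (baud_or_None, failure_count) so failures can be logged."""
    good, failures = [], 0
    for bit_us in sync_bit_times_us:
        if good and abs(bit_us - good[-1]) > tol * good[-1]:
            good, failures = [], failures + 1   # unstable edge: restart, count
            continue
        good.append(bit_us)
        if len(good) >= headers_required:
            avg = sum(good) / len(good)
            return round(1e6 / avg), failures
    return None, failures
```

Ambiguous activity (too few consistent headers) returns no rate at all, which is the node-level rule for not waking fully on noise.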
Select slew settings by node-level verification: emission vs error counters vs false-wake rate. Faster edges can increase susceptibility and coupling; slower edges can reduce margin if timing is tight. The pass criterion is a stable error profile on the real harness, not a bench-only waveform.
- HSS/LSS driver controls: define over-current, short-to-battery/ground, thermal behaviors as explicit states.
- Fault-to-report mapping: every driver fault class must map to a minimal report field set (class + timestamp + counters).
- Recovery rules: avoid “reset storms” by bounding retries and enforcing cool-down windows.
- Service snapshot: capture VBAT/temperature + fault flags + last wake reason before any heavy recovery action.
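The recovery rules above amount to a small policy object: each retry must wait out a cool-down, and once the retry cap is hit the node latches a defined degraded state instead of reset-storming. This is a sketch; the `RecoveryPolicy` name and limits are illustrative assumptions.

```python
# Hypothetical bounded-retry recovery policy for driver faults.
from dataclasses import dataclass

@dataclass
class RecoveryPolicy:
    max_retries: int = 3
    cooldown_ms: int = 500
    retries: int = 0
    degraded: bool = False
    _last_try_ms: int = -10**9

    def on_fault(self, now_ms: int) -> str:
        if self.degraded:
            return "stay-degraded"
        if now_ms - self._last_try_ms < self.cooldown_ms:
            return "wait-cooldown"       # no retry inside the cool-down window
        if self.retries >= self.max_retries:
            self.degraded = True
            return "enter-degraded"      # bounded: stop retrying
        self.retries += 1
        self._last_try_ms = now_ms
        return "retry"
```

Each returned decision is exactly the kind of event the service snapshot should capture before the recovery action runs.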
- Main domain off: keep MCU/driver/sensor in the switched domain and hard-off during sleep.
- Keep wake domain alive: LIN wake-qualified path and minimal log storage remain powered.
- Respect master-side pull-up/termination: avoid redundant static pull-ups at the node that burn Iq.
- Pin-state policy: forbid floating pins in sleep; define bias and clamp avoidance explicitly.
- Backfeed prevention: ensure actuator/load paths cannot inject current into an unpowered domain.
Scope guard: physical-layer specifications and device-family comparisons belong to the LIN Transceiver and LIN SBC subpages.
The diagram highlights a wake-qualified always-on path plus a switched compute/driver domain to meet low-Iq targets and serviceability.
H2-6 · CAN/CAN FD Node Design (margin, bus-off recovery, payload discipline)
Node-side CAN reliability is built on verified timing margin on the real harness, disciplined bus-off handling to prevent restart storms, and message rules that keep utilization bounded so diagnostics remain meaningful. This section avoids protocol encyclopedias and PHY deep-dives.
- Harness margin: stable operation across harness length, stubs, temperature and VBAT buckets.
- Error-state stability: counters remain bounded; state transitions are logged and explainable.
- Controlled recovery: bus-off recovery is bounded in time and retries; no reset storms.
- Payload discipline: utilization is capped so errors are not self-inflicted by overload.
Treat timing margin as a measured property of the full system: harness length and stubs, ground offsets, temperature drift, and disturbance events. The node-level practice is to verify margin by correlating error counters and state transitions under defined buckets, rather than relying on bench-only waveforms.
- Harness: length + stub variant + connector variant.
- Environment: temperature buckets + VBAT min/nom/max.
- Stress: disturbance events that typically trigger false transitions (noise, load steps).
- Outputs: error counters, state timeline, and recovery outcomes.
- Backoff windows: impose cool-down before rejoin; reduce collision with persistent faults.
- Bounded retries: cap recovery attempts, then enter a defined degraded/service mode.
- No blind resets: capture a snapshot (counters + last frames + VBAT/T) before heavy actions.
- Rejoin discipline: validate bus stability before enabling high-rate event traffic.
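The four rules above can be sketched as one step function: capture a snapshot before acting, apply an escalating backoff schedule per attempt, and hand over to a service mode once attempts are exhausted. The function, snapshot fields, and backoff schedule are illustrative assumptions.

```python
# Hypothetical bus-off rejoin step: snapshot first, then escalating
# backoff, then a bounded hand-over to service mode.
def bus_off_step(attempt, counters, vbat, log):
    """Returns the cool-down in ms before rejoin attempt `attempt`,
    or None when retries are exhausted."""
    log.append({"event": "bus-off", "attempt": attempt,
                "counters": dict(counters), "vbat": vbat})  # capture before acting
    schedule_ms = [100, 500, 2000]                          # cool-down per attempt
    if attempt >= len(schedule_ms):
        log.append({"event": "service-mode", "attempt": attempt})
        return None
    return schedule_ms[attempt]
```

Because the snapshot is appended before any decision, the evidence survives even if the rejoin itself fails.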
- Periodic vs event: enforce minimum intervals for events; avoid burst floods under oscillating sensors.
- Priority rules: safety/control messages have fixed priority; diagnostics are throttled.
- Utilization cap: define a node maximum (X% placeholder) and validate under worst-case state.
- Diagnostic fidelity: ensure overload cannot masquerade as timing/PHY faults.
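Payload discipline can be enforced with a small transmit throttle: a per-message minimum interval plus a node-wide frames-per-second cap, so an oscillating sensor cannot flood the bus. The `TxThrottle` class, the 1 s window, and the limits are illustrative assumptions.

```python
# Hypothetical node-side transmit throttle: per-ID debounce plus a
# node utilization cap over a sliding 1 s window.
class TxThrottle:
    def __init__(self, min_interval_ms=50, max_frames_per_s=20):
        self.min_interval_ms = min_interval_ms
        self.max_frames_per_s = max_frames_per_s
        self.last_tx = {}    # msg_id -> last send time
        self.window = []     # send times within the last second

    def allow(self, now_ms, msg_id):
        self.window = [t for t in self.window if now_ms - t < 1000]
        if len(self.window) >= self.max_frames_per_s:
            return False                                   # node utilization cap
        if now_ms - self.last_tx.get(msg_id, -10**9) < self.min_interval_ms:
            return False                                   # per-message min interval
        self.last_tx[msg_id] = now_ms
        self.window.append(now_ms)
        return True
```

Rejected sends should be counted, not silently dropped, so overload cannot masquerade as a timing or PHY fault.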
Scope guard: CAN FD/SIC/XL waveform specifics and PHY performance metrics belong to the corresponding transceiver/PHY subpages.
Higher edge rates can increase coupling and error sensitivity under disturbance, which can also increase false wakes. The node-side countermeasure is stricter message discipline (bounded bursts), conservative recovery policies, and bucketed verification on the real harness across temperature/VBAT ranges.
The node should log state transitions and counters to differentiate real margin issues from self-inflicted overload and to avoid restart storms.
H2-7 · Minimal EMC/Protection for Small Nodes (node-specific only)
For compact nodes, protection parts can look “correct” yet create failures by breaking return paths, upsetting symmetry, increasing leakage, or reshaping edges. The node-first priority is return path and placement, then minimal components, then verification.
- Return path + ground plan: define where the surge/ESD current returns, short and wide.
- Placement distance: connector → protection → PHY must be tight and unambiguous.
- Minimal parts: low-C TVS, small series damping, optional CMC only when needed.
- Node verification: confirm no low-Iq regression and no false-wake / counter drift.
Place at the connector with a short return path. Keep symmetry to avoid differential imbalance that reshapes edges and increases false triggers.
A small series element helps control ringing and edge stress, but the benefit depends on location. Verify by counter stability and false-wake rate under disturbance.
Use only when required for system-level radiation/immunity. Keep the pair symmetric and close to the port. A poorly placed CMC can increase imbalance and degrade node margin.
Protection leakage often rises with temperature. Treat leakage as part of the Iq budget and verify across VBAT/temperature buckets.
Extra capacitance can distort edges, shift thresholds, and increase false wake or error counters. Matched parts and symmetric placement reduce differential imbalance.
A long or ambiguous return path injects disturbance into the PHY/MCU reference, creating intermittent state transitions and mis-attribution.
Scope guard: IEC coupling models and system-level EMC methodology belong to the EMC/Protection & Co-Design page.
Keep the TVS close to the connector and define a short, wide return path to prevent ground injection into the PHY reference.
H2-8 · Diagnostics & Serviceability (node black box, counters, event snapshots)
A node must be serviceable: it should attribute wake, capture snapshots around failures, and expose a compact event log remotely. The goal is to turn intermittent field issues into bucketed, explainable evidence.
- Bus counters: error counters + bus-off events + recovery outcomes.
- Wake cause: bus / local / timer plus a compact frame/rule summary when applicable.
- Supply health: brownout/undervoltage markers with VBAT snapshots.
- Thermal: overtemp entry/exit markers with temperature snapshots.
- Actuator safety: overcurrent/short flags and driver state.
- Post-disturbance anomaly: a marker for abnormal counter drift after disturbance events.
- Ring buffer: keep the last N records (N as a project parameter).
- Event snapshot: each record is structured (type + sequence + key variables).
- Capture first: snapshot before any heavy recovery action (reset, power-cycle, retries).
- Bucket-ready: include VBAT/T + counters + state + cause so field issues can be grouped.
- Identity: record type, sequence, version.
- Cause: wake source, rule/ID summary, last frame digest (if bus-related).
- Environment: VBAT, temperature, mode.
- State: error counters, bus state, driver flags.
- Outcome: recovery decision, retry count, time window bucket.
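The record layout above maps naturally onto a fixed-depth ring of structured snapshots written before any recovery action runs. The `BlackBox` class, depth, and field values below are illustrative assumptions, not a frozen log schema.

```python
# Hypothetical black-box ring: last N structured snapshots, oldest dropped.
from collections import deque

class BlackBox:
    def __init__(self, depth=16):
        self.records = deque(maxlen=depth)  # oldest records drop automatically
        self.seq = 0

    def capture(self, rtype, cause, vbat, temp_c, counters, outcome=None):
        """Call BEFORE any heavy recovery action so evidence survives."""
        self.seq += 1
        self.records.append({
            "type": rtype, "seq": self.seq, "version": 1,
            "cause": cause, "vbat": vbat, "temp_c": temp_c,
            "counters": dict(counters), "outcome": outcome,
        })
        return self.seq
```

The monotonic sequence number survives wrap-around, so a service tool can detect how many records were lost.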
- Wake: bus/local/timer with debounce + minimum interval.
- Bus-off: include counters + last frames + recovery plan.
- Overcurrent/short: include driver state + load status.
- Brownout/UV: include VBAT snapshot and mode transition.
- Overtemp: include temperature and throttle/disable action.
- Post-disturbance anomaly: include drift marker and follow-up counters.
Expose the event log through the bus using a compact object model: a header (version, record count, wrap indicator) plus paged entries (fixed fields + optional payload). Keep readout deterministic: index-based paging and simple integrity checks.
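The readout contract can be sketched as two pure functions: a header carrying version, record count, and wrap indicator, and an index-addressed page with a simple additive checksum. The encoding below is an illustrative assumption, not a standardized diagnostic format.

```python
# Hypothetical deterministic paged readout of the event log.
def read_header(records, depth, version=1):
    return {"version": version, "count": len(records),
            "wrapped": len(records) == depth}

def read_page(records, index, page_size=2):
    """Index-based paging: the same index always returns the same page."""
    start = index * page_size
    page = records[start:start + page_size]
    blob = repr(page).encode()
    return {"index": index, "entries": page,
            "checksum": sum(blob) & 0xFFFF}   # simple integrity check
```

Because paging is index-based rather than cursor-based, a field tool can safely re-read any page after a transport error.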
Capture the snapshot before recovery actions to preserve evidence, then expose a deterministic paged readout for field diagnostics.
H2-9 · Validation & Bring-up Plan (what to measure, how to avoid measurement lies)
The validation goal is not “more tests”, but repeatable evidence: trustworthy Iq measurements, statistically stable false-wake rates, and bus consistency on worst-case harness conditions. Use two gates (bring-up vs production) with clear pass criteria placeholders.
- Define metrics: what is counted, the sampling window, and the denominator.
- Stress by buckets: temperature / VBAT / harness / load.
- Decide by statistics: rates and distributions, not single runs.
- Gate with evidence: logs + counters + matrix coverage.
Auto-ranging and long integration can hide mode-switch spikes. Verify using a fixed range and a consistent capture window per mode.
Leakage can come from the jig, cables, residue, or protection paths, often worsening at high temperature. A/B isolate suspected paths during bring-up.
Sleep → wake → sleep sequences can create misleading averages. Measure steady-state per mode and separately measure transition energy.
Count wake events with a consistent denominator and window. Require wake attribution (bus/local/timer) to prevent ambiguous statistics.
Run bucketed campaigns (temperature / VBAT / harness / load). Report false-wake rate as X per hour (X is a project placeholder).
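A bucketed rate report needs an explicit denominator per bucket; otherwise runs of different length are not comparable. The sketch below assumes each wake event already carries its attribution and an intended/unintended flag; bucket names and hours are illustrative.

```python
# Hypothetical bucketed false-wake rate with an explicit denominator.
from collections import defaultdict

def false_wake_rates(events, hours_per_bucket):
    """events: [(bucket, cause, intended)]; returns bucket -> wakes/hour,
    counting only unintended wakes over the bucket's observation time."""
    counts = defaultdict(int)
    for bucket, cause, intended in events:
        if not intended:
            counts[bucket] += 1
    return {b: counts[b] / hours_per_bucket[b] for b in hours_per_bucket}
```

Splitting the same events by cause (bus/local/timer) instead of by bucket gives the attribution distribution required by the KPI.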
After disturbance, verify counters and wake attribution remain stable. A “pass once” run is insufficient if fragility increases later.
- Worst harness + worst stub: re-run consistency checks on realistic wiring, not only bench cables.
- Cold start + brownout: ensure no oscillation between wake and sleep under VBAT edges.
- Thermal cycling: confirm counters and false-wake statistics do not drift across temperature buckets.
- Long sleep stability: validate wake cause and recovery behavior remain consistent after long idle periods.
- Preconditions: fixture sanity + mode definitions + clean setup.
- Measures: steady-state Iq per mode + short-run false-wake stats + counter dumps.
- Evidence: snapshots, attribution logs, worst-harness spot checks.
- Pass criteria: X (project-defined).
- Coverage: matrix buckets across temperature / VBAT / harness / load.
- Statistics: false-wake rate, bus-off rate, brownout rate per bucket.
- Stability: long-sleep wake consistency + drift checks after disturbance.
- Pass criteria: X (project-defined).
Use bucketed coverage to prevent “one good run” from masking temperature, VBAT, harness, or load-driven failures.
H2-10 · Design Hooks & Pitfalls (node-only checklist, not generic bus guide)
This checklist stays node-only: false-wake traps, low-Iq “black holes”, brownout oscillations, and diagnostics definition errors. The fastest path is to map root causes to symptoms and an immediate first check.
- False-wake: harness noise coupling / protection parasitics / return injection.
- Low-Iq black holes: TVS leakage / GPIO state errors / sensor standby current.
- Reset + power drop: brownout chatter causing wake-sleep loops.
- Diagnostics pitfalls: mismatched counter definitions and wrong windows/denominators.
Symptom: wake spikes in specific wiring conditions. First check: wake attribution buckets (bus/local/timer) and time correlation.
Symptom: edge “looks smoother” but wake/counters worsen. First check: temperature sweep of Iq and false-wake rate together.
Symptom: intermittent state jumps after disturbance. First check: TVS clamp return loop area and reference continuity near PHY/MCU.
Symptom: Iq is fine at room temperature but explodes when hot. First check: isolate protection paths and compare bucketed Iq deltas.
Symptom: same hardware, different firmware gives large Iq spread. First check: define all sleep pin-states (no floating inputs).
Symptom: repeated wake-sleep cycles on VBAT edges. First check: correlate brownout events with wake events by timestamp/sequence.
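The first check for brownout chatter is a timestamp correlation: flag every wake that follows a brownout within a short window. The pairing window below is an illustrative assumption.

```python
# Hypothetical brownout/wake correlation to expose wake -> reset -> sleep loops.
def correlate(wakes_ms, brownouts_ms, window_ms=50):
    """Returns wake timestamps that follow a brownout within window_ms."""
    return [w for w in wakes_ms
            if any(0 <= w - b <= window_ms for b in brownouts_ms)]
```

A high fraction of correlated wakes points at supply hold-up or sleep-entry gating rather than the wake filter table.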
Diagnostics definition warning: counters must share the same window and denominator across firmware and test tooling to avoid false conclusions.
Use the first-check column to drive fast triage. Keep counters and windows consistent across firmware and tooling to avoid false conclusions.
H2-11 · Engineering Checklist (Design → Bring-up → Production)
Goal: turn “low-Iq + selective wake + serviceability” into auditable artifacts, measured evidence, and production thresholds.
1) Design Checklist (artifacts that must exist before layout freeze)
- Power-domain map: Always-on vs switched rails; wake domain isolation; “who stays alive” is explicit.
- Iq budget row: mode / VBAT / temperature / measurement point / contributors / pass criteria (X placeholder).
- Wake table schema: ID/mask + payload conditions + window + debounce + priority + action; include false-wake KPI bucket plan.
- Port protection placement rule: connector → protection → PHY distance; return path and symmetry are drawn, not implied.
- Recovery policy: bus-off backoff + cool-down + “no reboot storm” rule; log points are defined.
- Node black box: ring-log size N + event triggers + snapshot fields + readout command mapping (LIN/CAN diag session).
- Bring-up & production gates: test matrix + pass/fail thresholds (X placeholder) + re-test conditions.
Example parts:
- Always-on LDO: TPS7B82-Q1 (example OPN: TPS7B8250EPWPRQ1)
- LIN transceiver: TLIN1029-Q1, TJA1021T
- CAN FD transceiver: TCAN1044A-Q1, MCP2562FD-H/MF, TJA1445
- CAN SBC (regulator + watchdog + CAN): UJA1169ATK, TLE9471-3ES
- Port ESD/TVS (CAN/LIN class): ESD2CAN24-Q1
- Smart high-side switch (actuator power): TPS1H200A-Q1, BTS50015-1TAD
- Smart low-side switch (multi-channel): TLE9104SH
- Isolated CAN (HV domain / GPD): ISO1042BQDWRQ1
- Isolated power driver (for isolated CAN rails): SN6505DQDBVTQ1
- LIN motor/actuator SoC (tight integration option): TLE9854QX
2) Bring-up Checklist (measurements that must match the design intent)
- Iq truth test: multiple meter ranges + fixture leakage control + temperature points; confirm each sleep mode transitions cleanly.
- Wake attribution sanity: wake cause is logged within the first firmware window (before peripherals disturb evidence).
- False-wake rate: count wakes/hour under noise/ESD/brownout stimuli; bucket by VBAT and temperature.
- Bus robustness: worst harness + worst stub + worst load; confirm error counters remain bounded and bus-off recovery is stable.
- Protection side-effects: verify ESD/TVS parasitics do not deform edges into mis-detection or selective-wake false triggers.
- Black-box usefulness: event snapshot reproduces a fault narrative (wake → symptom → counters → recovery).
3) Production Checklist (station scripts + thresholds + traceability)
- End-of-line Iq screen: fixed mode + fixed VBAT + fixed soak time; fail bins are linked to contributor hypotheses (TVS leakage / GPIO / sensor standby).
- Wake filter regression: run the same wake-table vectors across batches; record false-wake KPI and drift.
- Diag payload contract: counters/log schema versions are frozen; mismatched definitions are blocked at build time.
- Field trace: station ID + firmware hash + calibration + pass thresholds are stored alongside node identity.
H2-12 · Applications (Node Patterns & Bundles)
Node bundles express “what gets built” without expanding into other sub-pages. Each bundle lists a minimal bill of materials with concrete example part numbers.
Ultra-Low-Power LIN Sensor Node (periodic sample + local threshold wake)
- Target: lowest quiescent current while keeping deterministic wake causes.
- Wake sources: local threshold / timed wake / LIN bus wake (if required by system policy).
- Low-power lever: switched main domain; always-on keeps only wake + LIN interface + minimal sensing.
- Minimum serviceability: wake-cause + VBAT/temperature snapshot + last N wake histogram.
Example parts:
- Always-on LDO: TPS7B82-Q1 (e.g., TPS7B8250EPWPRQ1)
- LIN transceiver: TLIN1029-Q1 (alt: TJA1021T)
- ESD/TVS (bus/IO class): ESD2CAN24-Q1 (used as a low-cap ESD/TVS building-block family for in-vehicle nets)
- Optional smart load switch for sensor rail: TPS1H200A-Q1 (when rail isolation + diagnostics are needed)
LIN Actuator Node (HSS/LSS + overcurrent diagnostics + sleep policy)
- Target: robust actuation with consistent diagnostics and controlled wake behavior.
- Wake sources: LIN bus wake / local input wake / timed wake for maintenance pulses.
- Key rule: actuator power path must not back-feed the node in sleep (define clamp/backfeed checks).
- Minimum serviceability: OC/OT counters + last fault timestamp + last N actuator command snapshots.
Example parts:
- Integrated LIN actuator SoC option: TLE9854QX (tight integration for actuator nodes)
- LIN transceiver (discrete option): TLIN1029-Q1 (alt: TJA1021T)
- Smart high-side switch (actuator supply): TPS1H200A-Q1 (alt high-current class: BTS50015-1TAD)
- Smart low-side switch (multi-channel loads): TLE9104SH
- ESD/TVS: ESD2CAN24-Q1
CAN FD Smart Sensor Node (event report + black box + payload discipline)
- Target: maintain timing/EMC margin on real harness while keeping diagnostic truth.
- Wake sources: selective bus wake (if used), local threshold wake, timed wake for calibration.
- Discipline: cap bus utilization per node; avoid “event storms” that turn into network-wide misbehavior.
- Minimum serviceability: error counters + bus-off snapshots + wake cause + last message schedule summary.
Example parts:
- CAN FD transceiver: TCAN1044A-Q1 (alt: MCP2562FD-H/MF, TJA1445)
- CAN SBC option (regulator + watchdog + CAN): UJA1169ATK or TLE9471-3ES
- ESD/TVS: ESD2CAN24-Q1
- Always-on LDO (if not using SBC rail): TPS7B82-Q1
Isolated CAN Actuator Node (HV domain / large GPD + safety fallback)
- Target: tolerate ground potential differences; prevent ground-shift from turning into false-wake or bus faults.
- Wake sources: bus wake across isolation boundary + local safe wake; log “pre-fault” context before recovery.
- Fallback: define actuator safe state on bus-loss and repeated bus-off; prove no reboot storm.
- Minimum serviceability: isolation-side fault counters + bus-off timeline + supply brownout + temperature snapshots.
Example parts:
- Isolated CAN transceiver: ISO1042BQDWRQ1
- Isolated power (transformer driver): SN6505DQDBVTQ1
- Smart high-side switch: TPS1H200A-Q1 (alt high-current class: BTS50015-1TAD)
- Smart low-side switch: TLE9104SH
- ESD/TVS: ESD2CAN24-Q1
H2-13 · FAQs (Node Troubleshooting, 4-line Answers)
Scope: only node-level troubleshooting for low-Iq, selective wake, protection side-effects, diagnostics, and bring-up evidence. No new domains.
Sleep Iq is low in the datasheet, but the vehicle/system Iq is much higher — which 3 leakage sources come first?
Likely cause: always-on rail leakage from (1) TVS/ESD devices, (2) pull-ups/dividers, (3) GPIO clamp/backfeed via external loads.
Quick check: split Iq by domains: measure VBAT with main domain OFF/ON; then temporarily lift/disable pull-ups and isolate protection footprints to rank contributors.
Fix: enforce “always-on only” policy; remove floating GPIO; cap pull-up current; select lower-leakage protection and validate leakage at hot.
Pass criteria: Deep-sleep Iq ≤ X µA @ VBAT=Y V, T=Z °C; top-3 leakage contributors each ≤ X µA (bucketed by temperature).
The node wakes up sporadically, but the log shows no wake frame — check the filter table first or suspect EMC events?
Likely cause: wake attribution is missing/late, so a non-frame wake (local pin, brownout, transient) looks like “no frame.”
Quick check: log wake-cause within the first firmware window (before stacks/peripherals run): {bus/local/timer/reset} + minimal snapshot (VBAT min, reset reason).
Fix: implement deterministic wake attribution; add a “pre-stack” capture path; then validate filter entries only after attribution confirms “bus wake.”
Pass criteria: 100% of wakes include wake-cause + snapshot; false-wake rate ≤ X wakes/hour in bucket (T/VBAT/harness).
After waking, the node immediately falls back to sleep — is it brownout chatter or a state-machine race?
Likely cause: VBAT droop/brownout causes repeated reset, or wake/sleep transitions have a race (flag cleared too early, debounce window wrong).
Quick check: correlate wake timeline with reset-reason + VBAT min capture; look for repeating “wake → reset → sleep” signatures within a short window.
Fix: add hysteresis/hold-up time for wake; gate sleep entry until “system stable” condition; harden debounce and ordering of wake flags.
Pass criteria: wake completes to “online” state within X ms without extra resets; no wake-sleep oscillation over Y cycles (T/VBAT buckets).
After bus-off recovery, the network “gets slower” — is it the backoff strategy or counter/KPI definition?
Likely cause: recovery backoff/cool-down reduces traffic intentionally, or the KPI window/denominator changed after recovery (making it look slower).
Quick check: compare utilization and message latency with identical measurement window and denominator before/after bus-off; log backoff state transitions.
Fix: cap backoff duration and make it observable; standardize KPI definitions (window, denominator, bucket) across firmware and service tools.
Pass criteria: bus-off ≤ X events per Y hours; recovery time ≤ Z ms; post-recovery utilization and latency within X% of baseline under same KPI window.
After changing the TVS, false wakes increase — check leakage first or edge-shape change from parasitic C?
Likely cause: hot leakage shifts the input threshold/bias, or added capacitance/imbalance deforms edges into mis-detection windows.
Quick check: measure TVS leakage vs temperature on the bench; then compare edge rise/fall and symmetry with the old part on the same harness.
Fix: select matched low-cap, low-leakage automotive ESD/TVS (e.g., ESD2CAN24-Q1 as a reference family) and place it to preserve return path symmetry.
Pass criteria: false-wake ≤ X wakes/hour across temperature buckets; leakage at hot ≤ X µA; edge metrics stay within X% of baseline in worst harness.
Low temperature is OK, but false wakes rise at high temperature — which drift path should be checked first?
Likely cause: temperature increases leakage (protection, pull-ups, sensor standby), shrinking noise margin and widening mis-detection windows.
Quick check: bucket false-wake rate by temperature and correlate with Iq contributors (always-on rail current) and wake-cause distribution.
Fix: reduce hot leakage contributors; tighten wake debounce/window; add hysteresis to local thresholds; confirm wake attribution is captured early.
Pass criteria: false-wake ≤ X/hour at hot bucket; always-on leakage increase ≤ X µA vs room; wake-cause distribution matches intended policy.
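Bucketing false wakes by temperature, as the quick check suggests, needs only a few counters. The bucket boundaries (0 °C / 60 °C) and the integer milli-rate representation below are assumptions for illustration; a real node would align buckets with its validation matrix.

```c
#include <stdint.h>

/* Hypothetical temperature buckets for false-wake attribution. */
enum temp_bucket { TB_COLD, TB_ROOM, TB_HOT, TB_COUNT };

static uint32_t g_false_wakes[TB_COUNT];
static uint32_t g_hours[TB_COUNT];

enum temp_bucket temp_to_bucket(int16_t temp_c)
{
    if (temp_c < 0)  return TB_COLD;
    if (temp_c < 60) return TB_ROOM;
    return TB_HOT;
}

void record_false_wake(int16_t temp_c)
{
    g_false_wakes[temp_to_bucket(temp_c)]++;
}

void record_bucket_hour(enum temp_bucket b)
{
    g_hours[b]++;
}

/* wakes/hour x1000 (integer-only, to avoid float on a small MCU). */
uint32_t false_wake_rate_milli(enum temp_bucket b)
{
    return g_hours[b] ? (g_false_wakes[b] * 1000u) / g_hours[b] : 0u;
}
```

Comparing `false_wake_rate_milli(TB_HOT)` against `TB_ROOM` directly answers "which drift path first": a hot-only rise points at leakage contributors before firmware debounce.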
A LIN node occasionally does not respond — is it auto-baud or sleep/wake timing?
Likely cause: the node is not fully awake/clock-ready when the header arrives, or auto-baud capture window is missed after wake.
Quick check: log “wake-to-ready” time and compare to master’s first header timing; inspect whether the first frame after wake is consistently decoded.
Fix: add a deterministic wake-ready handshake window; delay first header or extend wake-ready margin; validate transceiver wake behavior (e.g., TLIN1029-Q1 / TJA1021T class).
Pass criteria: 0 missed responses over N wake cycles; wake-to-ready ≤ X ms; first-frame decode success ≥ (100% − X ppm) across T/VBAT buckets.
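The "deterministic wake-ready handshake window" can be gated in one place: the slave simply refuses to answer a header until its clock is confirmed ready and a fixed margin has elapsed since wake, relying on the master's normal retry. The margin value is a hypothetical placeholder, not a figure from the LIN specification.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical wake-ready margin; tune against measured wake-to-ready. */
#define WAKE_READY_MARGIN_MS 20u

/* A header that arrives before the oscillator/auto-baud path has
 * settled is deliberately ignored rather than half-decoded; the
 * master's schedule retry picks it up once the node is ready. */
bool lin_may_answer_header(uint32_t now_ms, uint32_t wake_ms,
                           bool clock_ready)
{
    if (!clock_ready)
        return false;
    return (now_ms - wake_ms) >= WAKE_READY_MARGIN_MS;
}
```

Logging every refused header alongside wake-to-ready time makes "auto-baud vs sleep/wake timing" separable in the field data.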
The actuator node triggers overcurrent protection, but the event log is missing — is it sampling window or interrupt priority?
Likely cause: the fault happens faster than the logging path (snapshot taken too late), or the ISR/log write is preempted/blocked during protection action.
Quick check: add a “pre-fault latch” (first-fault flag + timestamp) and compare it to the ring-log sequence; verify ISR latency under load.
Fix: move minimal snapshot capture into the highest-priority path; separate “capture” from “serialize/log write”; ensure protection IC status is read deterministically (e.g., TPS1H200A-Q1 / TLE9104SH class).
Pass criteria: ≥ 99.9% of OC events produce a snapshot with required fields; snapshot timestamp error ≤ X ms; no missed capture under worst load.
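The "pre-fault latch" split between capture and serialization can be sketched as below: the ISR-side call only latches cause bits and a timestamp (cheap and deterministic), and only the first event wins; the ring-log write happens later at task level from the latch. The struct layout and field names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical first-fault latch, written from the highest-priority
 * path; serialization into the ring log is done elsewhere, later. */
typedef struct {
    volatile bool     armed;  /* true until the first fault is captured */
    volatile uint8_t  cause;  /* e.g. raw status bits from the driver IC */
    volatile uint32_t t_ms;   /* capture timestamp                       */
} fault_latch_t;

void latch_init(fault_latch_t *l)
{
    l->cause = 0u;
    l->t_ms  = 0u;
    l->armed = true;
}

/* Call from the overcurrent ISR: only the FIRST event takes the latch,
 * so a fault burst cannot overwrite the root-cause snapshot. */
bool latch_capture(fault_latch_t *l, uint8_t cause, uint32_t now_ms)
{
    if (!l->armed)
        return false;          /* latch already holds the first fault */
    l->cause = cause;
    l->t_ms  = now_ms;
    l->armed = false;
    return true;
}
```

With this split, a preempted or blocked log writer can no longer lose the event: the evidence already exists in the latch before any serialization starts.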
The same node behaves differently on different harnesses — what is the first harness bucketing and re-test step?
Likely cause: harness length/stubs/loads change edge shape and noise coupling, shifting selective-wake and error margins.
Quick check: bucket harnesses by {length, stub, node count/load, shielding/return path}; re-test worst bucket first with identical firmware and KPI window.
Fix: tune wake debounce/window and confirm protection placement symmetry; update validation matrix to always include worst harness bucket as a gate item.
Pass criteria: all buckets meet false-wake ≤ X/hour and bus errors ≤ X per Y hours; worst bucket margin is documented and stable across temperature.
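Harness bucketing by {length, stub, node count/load, shielding} can be made mechanical with a descriptor and a coarse severity rank, so "worst bucket first" is a deterministic sort rather than a judgment call. The fields and weights below are illustrative assumptions, not a published scoring scheme.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical harness bucket descriptor for the validation matrix. */
typedef struct {
    uint16_t length_m;   /* total harness length        */
    uint16_t stub_cm;    /* longest unterminated stub   */
    uint8_t  node_count; /* bus load                    */
    bool     shielded;   /* shield / return path intact */
} harness_bucket_t;

/* Coarse severity score: longer, stubbier, heavier-loaded, unshielded
 * buckets rank higher and get re-tested first. Weights are illustrative
 * and should be calibrated against measured edge/noise margins. */
uint32_t bucket_severity(const harness_bucket_t *b)
{
    return (uint32_t)b->length_m   * 4u
         + (uint32_t)b->stub_cm    * 2u
         + (uint32_t)b->node_count * 8u
         + (b->shielded ? 0u : 100u);
}
```

Freezing the score in the validation matrix also documents the worst-bucket margin the pass criteria require.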
The node “woke up but does not join the bus” — check wake attribution first or bus error state first?
Likely cause: the node woke due to a non-bus source (local/brownout), or it entered error-passive/bus-off during early startup on a noisy harness.
Quick check: read wake-cause + startup counters within the first window; check if TEC/REC rise immediately and whether bus-off state is reached.
Fix: ensure deterministic wake attribution; add startup quiet window; harden bus-off recovery (e.g., CAN FD PHY class like TCAN1044A-Q1) and avoid reboot storms.
Pass criteria: 100% of wakes have attribution; node reaches “online” state ≤ X ms; no bus-off during first Y seconds in worst harness bucket.
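Deterministic wake attribution plus a startup quiet window can be sketched together. Two assumptions here: the first attribution recorded wins and is never overwritten, and bus TX is gated until the quiet window elapses and the controller is not error-passive. The cause enum and window length are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical wake-cause set; captured as early as possible in boot. */
enum wake_cause { WC_UNKNOWN, WC_BUS, WC_LOCAL, WC_TIMER, WC_BROWNOUT };

#define STARTUP_QUIET_MS 50u /* illustrative quiet window */

static enum wake_cause g_cause = WC_UNKNOWN;

/* First attribution wins: later, less trustworthy guesses (e.g. from
 * application code) cannot overwrite the early capture. */
void wake_attribute(enum wake_cause c)
{
    if (g_cause == WC_UNKNOWN)
        g_cause = c;
}

enum wake_cause wake_cause_get(void) { return g_cause; }

/* TX stays gated through the quiet window so a node waking into a
 * noisy harness observes its error state before transmitting, instead
 * of driving straight toward error-passive/bus-off. */
bool bus_tx_allowed(uint32_t now_ms, uint32_t wake_ms, bool error_passive)
{
    if (error_passive)
        return false;
    return (now_ms - wake_ms) >= STARTUP_QUIET_MS;
}
```

The "first attribution wins" rule is what makes the 100%-attribution pass criterion checkable: every wake record carries exactly one cause, set before any race can occur.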
After a long sleep, the first wake-up fails — which retention / register-loss class should be checked first?
Likely cause: a required retention bit/clock domain is not preserved, or the wake domain powers up in the wrong order causing missed first-frame readiness.
Quick check: compare “first wake” vs “second wake” register snapshots; verify wake-to-ready timing and whether any init step assumes a warm state.
Fix: make long-sleep wake path idempotent; reinitialize all critical registers; validate power-domain sequencing and guard the first-frame window.
Pass criteria: first wake success ≥ (100% − X ppm) over N long-sleep cycles; wake-to-ready ≤ X ms; no “warm-state dependence” in logs.
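"Make the long-sleep wake path idempotent" usually reduces to one pattern: every critical register is rewritten from a const table on every wake, unconditionally, so the first wake after a deep sleep behaves exactly like the second. The table type below is a generic sketch, not a particular MCU's HAL.

```c
#include <stdint.h>

/* Hypothetical register-init table entry: target address + value. */
typedef struct {
    volatile uint32_t *reg;
    uint32_t           value;
} reg_init_t;

/* Idempotent wake init: unconditional writes, no "is it already set?"
 * checks, so no init step can silently depend on warm retained state. */
void wake_reinit(const reg_init_t *table, uint32_t count)
{
    for (uint32_t i = 0; i < count; ++i)
        *table[i].reg = table[i].value;
}
```

The same table doubles as the reference for the "first wake vs second wake snapshot" quick check: any register not in the table is, by construction, a candidate warm-state dependence.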
Service readout looks normal, but customers still report faults — which KPI definition must be aligned first?
Likely cause: KPI mismatch (window, denominator, bucket, trigger) makes “normal” in tools differ from “fault” in the field.
Quick check: re-compute the KPI using the same time window and buckets as the customer scenario; compare raw counters and snapshot triggers.
Fix: freeze KPI contract (definitions + units + windowing); store tool version and station metadata; keep a minimal black-box snapshot for disputed cases.
Pass criteria: KPI agreement within X% across tools and field logs; disputed cases always have snapshot evidence with consistent bucket labels.
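Freezing the KPI contract can mean, concretely, that the window, denominator, and bucket label travel with every computed value, so a service tool and a field log can only be compared when those fields match. The record layout and ppm representation below are illustrative assumptions.

```c
#include <stdint.h>

/* Hypothetical frozen KPI record: the contract fields (window,
 * denominator, bucket) are stored WITH the value, never implied. */
typedef struct {
    uint32_t window_s;     /* measurement window, seconds           */
    uint32_t denom_events; /* denominator, e.g. total wake count    */
    uint8_t  bucket_id;    /* T/VBAT/harness bucket label           */
    uint32_t rate_ppm;     /* events per million denominator events */
} kpi_record_t;

kpi_record_t kpi_compute(uint32_t events, uint32_t denom,
                         uint32_t window_s, uint8_t bucket_id)
{
    kpi_record_t r = { window_s, denom, bucket_id, 0u };
    if (denom != 0u)
        r.rate_ppm = (uint32_t)(((uint64_t)events * 1000000u) / denom);
    return r;
}
```

A disputed case then fails loudly (mismatched `window_s` or `bucket_id`) instead of silently comparing a tool KPI against a field KPI with a different denominator.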