Body & Comfort ECUs: LIN Clusters, CAN-FD Gateways, Low Power
Body & comfort fieldbuses succeed when sleep and wake form a closed, attributable loop, and when large LIN clusters are contained and shaped so gateways can publish to CAN-FD reliably under event storms. This page provides a system-level blueprint for planning clusters, budgets, recovery, and service logs that keep comfort features responsive without draining the battery.
H2-1. Definition & Boundary of “Body & Comfort Fieldbus”
This page covers the system-level network shape of body & comfort ECUs: many LIN node clusters aggregated by a gateway ECU (often with an SBC) into a CAN-FD backbone. It intentionally avoids electrical-layer deep dives to prevent overlap with sibling pages.
What “Body & Comfort” typically includes
Body/comfort networks connect dense, cost-sensitive nodes where most signals are low-rate states and peak load comes from event bursts (unlock, door open, courtesy lights, seat motion).
Node roles (useful for planning clusters later)
- HMI / inputs: door handle/button, window switch, HVAC panel, key/entry triggers.
- Sensors: position, temperature, light/rain, occupancy, latch status.
- Actuators: motors (window/seat), valves/flaps, lighting drivers, locks.
- Module ECUs: door module / seat module combining multiple roles in one harness segment.
Why LIN is the dominant edge bus (system view)
- Node density & BOM pressure: many small ECUs/sensors must be cheap and repeatable.
- Harness manufacturability: clustering reduces connector pins and simplifies routing in doors/roof/trunk.
- Bandwidth fit: most signals are state-based; bursts are handled by scheduling and gateway shaping.
- Fault containment: a cluster can degrade without taking down the whole domain.
- Low-power reality: sleep/wake behavior becomes the first-order system constraint.
Practical takeaway: treat a LIN cluster as the smallest design unit (power, wake, scheduling, diagnostics), then decide how to aggregate it upward.
Why a CAN-FD gateway becomes mandatory
Three layers the gateway must provide
- Aggregation layer: compress many low-rate signals into a stable, few-frame uplink.
- Policy layer: sleep/wake responsibility chain, burst shaping, and state reconstruction after resets.
- Service layer: diagnostics/logging, OTA/service access boundaries, and domain-to-backbone interoperability.
In body/comfort systems, the “hard part” is rarely raw bandwidth. It is wake attribution, false-wake suppression, and event burst control without breaking user-perceived latency.
Scope guard (to prevent topic overlap)
- Owned here: system topology (LIN clusters → gateway → CAN-FD), wake/power responsibility chain, and gateway policy logic.
- Mention-only: electrical-layer tuning (slew, waveform, timing knobs). Use one sentence + link.
- See also: LIN Transceiver (ISO 17987 / J2602) and CAN-FD Transceiver.
Diagram · Body Domain Map (concept)
Many small LIN edge nodes form clusters (doors/cabin/rear). A gateway ECU (often with an SBC) aggregates them into a CAN-FD backbone toward BCM/domain control.
H2-2. Topology Planning: LIN Clusters + CAN-FD Backbone
Topology planning answers three practical questions: how to partition clusters, how large each cluster may grow, and where the gateway should live (BCM-centric vs zonal vs domain control). This section stays at the architecture and budgeting level.
Partition dimensions (cluster design levers)
- Physical zone: doors / cabin / rear-trunk to minimize harness length and connector complexity.
- Harness routing: align clusters with the harness trunk and avoid long cross-zone branches.
- Fault containment: a single cluster fault should not disable unrelated comfort functions.
- Wake domain: group nodes that share wake sources to avoid “wake diffusion” and simplify attribution.
The wake domain is the most commonly missed lever. It determines whether low-power targets can be met without false wakes, and whether service logs can confidently explain why the vehicle woke up.
Cluster sizing: define N / T / L with usable metrics
Use a consistent cluster “label” to keep topology decisions comparable across platforms: N (node count), T (schedule cycle), and L (load). Avoid vague definitions; the goal is to prevent hidden growth.
- N (node count): include module-internal sub-nodes if they consume schedule slots or diagnostics bandwidth.
- T (schedule cycle): the full main schedule loop period (not a single frame period); define event windows explicitly.
- L (load): record two values: L_avg (steady state) and L_peak (event burst).
Planning rule: keep L_peak visible from day one. Comfort domains fail in the field when peak bursts (unlock + door + courtesy lights) were never budgeted and the gateway has no shaping policy.
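As a sketch of this planning rule (the `ClusterLabel` name and the 60% peak ceiling are illustrative assumptions, not values from any standard), the N/T/L label can be kept as a small record with an explicit L_peak check so hidden growth surfaces early:

```python
from dataclasses import dataclass

@dataclass
class ClusterLabel:
    name: str
    n_nodes: int        # N: include module-internal sub-nodes that use slots
    t_cycle_ms: float   # T: full main-schedule loop period, not one frame
    l_avg: float        # L_avg: steady-state load (0.0-1.0)
    l_peak: float       # L_peak: event-burst load (0.0-1.0)

    def check(self, peak_ceiling: float = 0.60):
        """Return budget violations so L_peak stays visible from day one."""
        issues = []
        if self.l_peak > peak_ceiling:
            issues.append(f"{self.name}: L_peak {self.l_peak:.2f} over ceiling")
        if self.l_peak < self.l_avg:
            issues.append(f"{self.name}: L_peak below L_avg (bad measurement)")
        return issues

# Unlock + courtesy-light burst pushes this door cluster past the ceiling.
door_fl = ClusterLabel("door-front-left", n_nodes=6, t_cycle_ms=50.0,
                       l_avg=0.18, l_peak=0.72)
print(door_fl.check())
```

Keeping both L_avg and L_peak on the same label is what makes topology decisions comparable across platforms.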
Gateway placement: BCM-centric vs Zonal vs Domain control
- BCM-centric: simpler integration; risk of longer harness runs and broader wake impact if poorly partitioned.
- Zonal gateway: shortest harness and scalable clustering; requires stronger policy/logging consistency across zones.
- Domain control: good for unified service boundaries; zone responsiveness depends on backbone behavior and prioritization.
Decision cue: if harness complexity and node growth are the main risks, prefer zonal clustering. If platform scale is small and wiring is stable, BCM-centric can be sufficient.
Electrical-layer CAN-FD timing optimization is out of scope here; see CAN-FD Transceiver.
Diagram · Cluster Partition (N / T / L budgeting placeholders)
Partition by physical zone + wake domain. Label each cluster with N, T, and L_avg/L_peak so growth remains measurable.
H2-3. Low-Power & Wake Strategy (System Level)
Body & comfort networks succeed or fail on sleep stability and wake correctness. The goal is a closed-loop policy that prevents wake storms, reduces false wakes, and produces reliable wake attribution for serviceability.
Power state model (define what is allowed to run)
- IGN OFF: transition window into low-power policy; close open sessions and arm wake monitors.
- Sleep: only minimum monitors remain active; all nonessential buses and tasks are off.
- Partial Wake: a small “confirm set” wakes up to validate the wake source; heavy tasks are blocked.
- Full Wake: domain network and essential services come up; user-facing functions become responsive.
- Run: normal operation plus diagnostics and background services as allowed by policy.
Key rule: Partial Wake must be time-bounded. If confirmation fails (or times out), the system must return to Sleep to prevent repeated wake loops.
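The time-bound on Partial Wake can be expressed as a small transition guard. A minimal sketch, assuming an illustrative 200 ms budget (state names mirror the model above; the threshold is a placeholder, not a normative value):

```python
# Allowed power-state transitions; names mirror the state model above.
TRANSITIONS = {
    "IGN_OFF":      {"SLEEP"},
    "SLEEP":        {"PARTIAL_WAKE"},
    "PARTIAL_WAKE": {"FULL_WAKE", "SLEEP"},   # confirm pass vs fail/timeout
    "FULL_WAKE":    {"RUN", "SLEEP"},
    "RUN":          {"IGN_OFF"},
}

PARTIAL_WAKE_BUDGET_MS = 200  # placeholder time bound

def next_state(state, confirm_ok, elapsed_ms):
    """Time-bounded Partial Wake: fail OR timeout returns to Sleep,
    preventing repeated wake loops."""
    if state != "PARTIAL_WAKE":
        raise ValueError("guard applies to PARTIAL_WAKE only")
    if confirm_ok and elapsed_ms <= PARTIAL_WAKE_BUDGET_MS:
        return "FULL_WAKE"
    return "SLEEP"

print(next_state("PARTIAL_WAKE", confirm_ok=True, elapsed_ms=50))   # escalate
print(next_state("PARTIAL_WAKE", confirm_ok=True, elapsed_ms=500))  # timeout
```

Note that a *passed* confirmation that arrives after the budget still returns to Sleep; that is what makes the bound enforceable rather than advisory.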
Wake sources (classify, then define minimum confirmation)
- Local wake: button/handle/sensor triggers. Risk: noise/ESD bounce → requires debounce + plausibility check.
- Bus wake: CAN remote wake / gateway wake request. Risk: wake diffusion → requires filtering + attribution to a source frame group.
- Timed wake: periodic heartbeats/maintenance tasks. Risk: forgotten timers → requires explicit allow-list and shutdown hygiene.
Each wake source must map to a minimum confirmation action in Partial Wake. Confirmation decides whether to escalate to Full Wake or return to Sleep.
Metrics (use definitions that can be verified)
- Sleep Iq (system): define measurement boundary and steady-state window; track mean and worst-case.
- Wake latency: time from wake trigger edge to “function-ready” (e.g., unlock response, courtesy light on).
- False-wake rate: wakes per time unit (e.g., per 24h) bucketed by wake reason category.
- Wake attribution: a fixed event record that answers “who woke the domain and why”.
Wake event record (recommended fields)
wake_reason · wake_source · confirm_action · confirm_result · duration_ms · next_state
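The recommended fields can be frozen into a fixed record type so every wake path emits the same shape. A sketch using the field names above (example values are illustrative):

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class WakeEventRecord:
    # Field names follow the recommended record above.
    wake_reason: str      # e.g. "door_handle"
    wake_source: str      # "local" | "bus" | "timed"
    confirm_action: str   # e.g. "debounce+plausibility"
    confirm_result: bool
    duration_ms: int
    next_state: str       # "FULL_WAKE" or "SLEEP"

rec = WakeEventRecord("door_handle", "local", "debounce+plausibility",
                      True, 42, "FULL_WAKE")
print(asdict(rec))
```

A frozen record keeps the black box append-only: field values are set once at the transition and never mutated afterward.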
Wake responsibility chain (closed-loop policy)
- Trigger: accept a wake event only from approved sources (local/bus/timed).
- Confirm: execute a minimum confirm action in Partial Wake (debounce / plausibility / allow-list).
- Escalate: only after confirm passes, bring up Full Wake (scope-limited to required clusters).
- Return: if confirm fails or times out, return to Sleep and write a wake event record.
This chain prevents uncontrolled wake propagation and makes field issues diagnosable without guessing.
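The four steps above can be sketched as one closed-loop handler (the allow-list contents, the 200 ms timeout, and the callable-based confirm hook are illustrative assumptions):

```python
APPROVED_SOURCES = {"local", "bus", "timed"}  # trigger allow-list (illustrative)

def handle_wake(source, confirm, timeout_ms=200):
    """Trigger -> Confirm -> Escalate or Return, writing a record on every
    path. `confirm` is a callable returning (passed: bool, elapsed_ms: int)."""
    if source not in APPROVED_SOURCES:
        # Trigger rejected: no Partial Wake is ever entered.
        return {"wake_source": source, "confirm_result": False,
                "duration_ms": 0, "next_state": "SLEEP"}
    passed, elapsed = confirm()  # minimum confirm action in Partial Wake
    return {
        "wake_source": source,
        "confirm_result": passed,
        "duration_ms": elapsed,
        # Escalate only on pass within budget; otherwise return to Sleep.
        "next_state": "FULL_WAKE" if passed and elapsed <= timeout_ms else "SLEEP",
    }

print(handle_wake("local", lambda: (True, 40)))   # escalates
print(handle_wake("bus", lambda: (False, 30)))    # returns to Sleep, logged
```

Because the record is produced on the rejection path too, field logs can distinguish "never woke" from "woke and was turned back".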
Scope guard (what this section will not expand)
ISO 11898-6 selective wake details (filter tables, matching rules) are intentionally not expanded here. See: Selective Wake / Partial Networking (ISO 11898-6).
Diagram · Power State Machine + Wake Attribution
Three wake source types feed a time-bounded Partial Wake. Every transition produces a compact wake event record (wake_reason + confirm action/result).
H2-4. Gateway Architecture: Aggregation, Rate-Shaping, and Mapping
A body/comfort gateway must convert many small LIN signals into a stable CAN-FD uplink without collapsing under burst events. The focus here is policy: aggregation rules, burst shaping, fault containment, and reset recovery.
Aggregation (pack rules that do not explode uplink traffic)
Signal buckets (recommended)
- State: slow-changing values (status, position); uplink at a fixed cadence.
- Event: bursts (unlock/open/lighting); uplink on change with shaping.
- Service: diagnostics/background; never allowed to starve user-visible functions.
Pack policy should favor few stable frames over many micro-updates, and should align update cadences with human-perceived timing needs.
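A minimal sketch of the bucket-aware pack policy (signal names, the tuple encoding, and the 100 ms cadence are illustrative assumptions):

```python
def pack_uplink(signals, now_ms, last_tx_ms, cadence_ms=100):
    """State signals ride a fixed cadence, Event signals publish on change,
    Service signals never enter the user-visible uplink frame."""
    frame = {}
    for name, (bucket, value, changed) in signals.items():
        if bucket == "state" and now_ms - last_tx_ms >= cadence_ms:
            frame[name] = value          # fixed-cadence refresh
        elif bucket == "event" and changed:
            frame[name] = value          # on-change with shaping upstream
        # "service" signals go through a separate low-priority lane
    return frame

signals = {
    "door_pos":   ("state", "closed", False),
    "unlock_req": ("event", 1, True),
    "diag_page":  ("service", 7, True),
}
print(pack_uplink(signals, now_ms=250, last_tx_ms=100))
```

The pack favors one stable frame per cadence over many micro-updates, which is exactly what keeps the uplink frame count flat as clusters grow.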
Rate-shaping (survive burst events without breaking UX)
- Input shaping: merge duplicate toggles, debounce bursts, and collapse rapid sequences into a single event.
- Output shaping: use prioritized CAN-FD transmit queues; allow defer/drop for noncritical frames under load.
- Budget visibility: define normal vs peak behavior so “unlock storms” do not become bus storms.
The objective is stability first: critical UX frames must keep bounded latency, while background traffic yields gracefully.
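One common way to realize "stability first" is a token bucket on the noncritical path, with critical UX frames bypassing it. A sketch under assumed parameters (the rate and burst numbers are placeholders):

```python
class TokenBucketShaper:
    """Illustrative output shaper: critical frames always pass; noncritical
    frames spend tokens and are deferred/dropped when the bucket is empty."""
    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s
        self.tokens = float(burst)
        self.burst = burst
        self.last_ms = 0.0

    def admit(self, now_ms, critical):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now_ms - self.last_ms) / 1000.0 * self.rate)
        self.last_ms = now_ms
        if critical:
            return True            # bounded latency for critical UX frames
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False               # defer/drop noncritical frame under load

shaper = TokenBucketShaper(rate_per_s=10, burst=3)
results = [shaper.admit(t, critical=False) for t in (0, 1, 2, 3, 4)]
print(results)  # burst of 3 admitted, then the storm is shaped off
```

The key design choice is that shaping never applies to the critical class; the budget question is only how much background traffic yields.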
Fault containment (one LIN cluster must not take down the backbone)
- Contain noisy sources: stop repeating fault spam; report health transitions once, then enter a quiet mode.
- Prevent retry storms: cap retries and throttle update frequency during degraded conditions.
- Preserve essential signals: keep a minimal “safe set” alive while isolating the failing cluster.
Reset recovery (state rebuild) + user-perceived latency
Recovery steps (recommended)
- Re-sample: re-collect critical LIN states before emitting uplink deltas.
- Mark cold start: tag the first uplink window so upstream logic does not misread it as a sudden change.
- Gate bursts: apply temporary shaping during recovery to avoid transient spikes.
- Restore priorities: critical UX frames resume first; service traffic follows later.
Human-perceived latency targets (placeholders)
- Unlock response: ≤ X ms
- Courtesy light on: ≤ X ms
- Panel feedback: ≤ X ms
Scope guard (avoid overlap with controller/bridge pages)
Complex gatewaying (TTCAN scheduling deep dive, Ethernet/DoIP bridging, security gateway functions) is not expanded here. See: CAN Controller / Bridge.
Diagram · Gateway Data Path (LIN → Policy → CAN-FD Queues)
The gateway pipeline separates buffering, shaping, and mapping before hitting prioritized CAN-FD transmit queues.
H2-5. LIN Cluster Design: Schedule, Diagnostics, and Event Handling
A LIN cluster should be treated as a small system with explicit time slots, an event spillway, and a service lane. The focus is scheduling policy and resource isolation (not electrical details).
Schedule skeleton (Periodic + Event window + Service lane)
- Periodic frames: stable refresh for core UX states; keep jitter small and predictable.
- Event window: absorb bursts (unlock/open/lighting) without disrupting periodic timing.
- Service lane: diagnostics/maintenance run only in reserved time and must yield under load.
Policy goal: periodic behavior stays stable while event bursts are shaped and service traffic never starves user-visible functions.
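The skeleton above can be written down as a concrete cycle table, which makes the "service yields, periodic never moves" rule checkable. A sketch (frame names, slot widths, and the 50 ms cycle are illustrative assumptions):

```python
# One conceptual main-schedule cycle (T = 50 ms here, illustrative):
# periodic slots first, then an event window, then a service lane.
SCHEDULE = [
    ("periodic", "window_status", 10),  # (segment, frame, slot_ms)
    ("periodic", "lock_status",   10),
    ("event",    "event_window",  20),  # burst spillway
    ("service",  "diag_lane",     10),  # yields under load
]

def build_cycle(load_high):
    """Drop only the service lane under load; periodic timing never moves."""
    return [(seg, frame, ms) for seg, frame, ms in SCHEDULE
            if seg != "service" or not load_high]

print([frame for _, frame, _ in build_cycle(load_high=True)])
```

Because the periodic slots are positionally fixed, their jitter is bounded by construction, independent of event or service pressure.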
Node archetypes (map update patterns to schedule segments)
- Sensors: mostly periodic; rare step changes should be published through the event window.
- Actuators: command/critical feedback belong to periodic (or high-priority event), noncritical telemetry can be slower.
- HMI nodes: user-driven bursts are expected; require debounce and coalescing before uplink.
This mapping prevents “micro-updates” from inflating traffic and keeps response-time expectations consistent.
Diagnostics on LIN (serve without stealing real-time budget)
- Reserve time: run diagnostics only in the service lane or a low-priority window.
- Cap bursts: apply backoff when events peak; split long service transfers into fragments.
- Protect UX: periodic and critical events must have guaranteed access to slots.
Budget placeholders (recommended)
diag_budget (per cycle) · diag_backoff (under load) · diag_fragment_size (service slicing)
Event handling (coalesce, debounce, and throttle)
- Coalesce: multiple toggles in a short window collapse into a final state update.
- Debounce/confirm: reject noise-like triggers before they hit the schedule.
- Throttle: cap event rate during storms; delay or drop low-value event frames first.
Loggable counters (placeholders)
event_bucket · event_coalesce_count · event_drop_count
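The coalesce/throttle policy and its counters can be sketched together (the 50 ms window, the rate cap, and the event encoding are illustrative assumptions; counter names match the placeholders above):

```python
def coalesce(events, window_ms=50, max_rate=3):
    """Collapse rapid toggles inside a window to the final state and
    throttle the result; export the loggable counters above."""
    counters = {"event_coalesce_count": 0, "event_drop_count": 0}
    out, bucket = [], []
    for t, state in events:          # events: list of (timestamp_ms, state)
        if bucket and t - bucket[0][0] > window_ms:
            counters["event_coalesce_count"] += len(bucket) - 1
            out.append(bucket[-1])   # final state wins inside the window
            bucket = []
        bucket.append((t, state))
    if bucket:
        counters["event_coalesce_count"] += len(bucket) - 1
        out.append(bucket[-1])
    if len(out) > max_rate:          # throttle: drop low-value extras first
        counters["event_drop_count"] += len(out) - max_rate
        out = out[:max_rate]
    return out, counters

# Three toggles inside 50 ms collapse to one update; the late event survives.
print(coalesce([(0, 1), (10, 0), (20, 1), (100, 0)]))
```

Exporting the counters alongside the output is what lets service logs explain *why* an event never reached the uplink.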
Error handling & degradation (behavior-first policy)
- Node missing: enter a degraded schedule; publish a single health transition rather than spamming.
- Cluster instability: tighten event throttle and reduce service rate to protect periodic stability.
- Power drop: keep a minimal safe set alive; defer noncritical updates until stable recovery.
A degraded cluster should fail “quietly” and predictably, keeping backbone behavior stable and diagnosable.
Scope guard (no electrical expansion here)
LIN slew control, ESD robustness, and transceiver electrical characteristics are intentionally not expanded. See: LIN Transceiver (ISO 17987 / J2602).
Diagram · LIN Schedule Timeline (concept)
A conceptual cycle shows periodic slots, an event spillway, and a service lane to isolate diagnostics from real-time UX.
H2-6. CAN-FD Backbone Integration: Load Budget & Priority Rules
CAN-FD integration is best treated as a system budget problem. Define normal and peak load envelopes, reserve space for critical UX, and ensure diagnostic bursts cannot crowd out time-sensitive comfort functions.
Load budget model (normal + peak + diagnostic bursts)
- Normal: stable periodic uplink (gateway-packed states).
- Peak: event storms (unlock/open/lighting) shaped by rate limits and coalescing.
- Diagnostic bursts: service traffic constrained by budget and backoff rules.
The budget must be defined from the body-domain perspective (cluster aggregation and gateway uplink behavior), not from PHY tuning.
Peak modeling (storm window and shaping rules)
Placeholders for a testable peak envelope
- Storm window: W = X ms
- Peak events: E = X events within W
- Packed frames: F = X gateway frames per storm
- Backoff: throttle service and low-value events first
Peak stability is achieved via coalescing + rate limiting + priority, not by allowing uncontrolled frame multiplication.
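The W / E / F placeholders combine into a simple, testable envelope. A sketch with assumed numbers (40 events, 100 ms window, 4:1 coalescing, 0.5 ms per packed frame, 30% budget; all illustrative):

```python
def storm_frames(events, window_ms, coalesce_ratio, frame_ms, budget):
    """Estimate gateway frames per storm window and check the peak budget.
    coalesce_ratio = packed frames per raw event (e.g. 0.25 means 4:1)."""
    frames = events * coalesce_ratio
    load = frames * frame_ms / window_ms   # fraction of the window on the bus
    return load, load <= budget

load, ok = storm_frames(events=40, window_ms=100, coalesce_ratio=0.25,
                        frame_ms=0.5, budget=0.30)
print(round(load, 3), ok)
```

Running the same formula without coalescing (ratio 1.0) shows the frame-multiplication failure mode the planning rule warns about.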
Priority rules (align business classes with TX queues)
- Critical UX: bounded latency (unlock response, safety-relevant status) → TX Q0.
- Normal comfort: periodic refresh and noncritical events → TX Q1.
- Service: diagnostics/maintenance/log bursts → TX Q2 with backoff.
ID planning should reflect these classes (strategy only). Numeric ID ranges and controller specifics belong to dedicated controller pages.
Gateway queue policy (congestion behavior must be observable)
- Q0: never starved; maintains bounded latency for critical UX frames.
- Q1: deferrable under storms; may stretch update cadence within limits.
- Q2: backoff and drop allowed first; service traffic must not destabilize the bus.
Queue telemetry placeholders (recommended)
txq_depth_q0 · txq_depth_q1 · txq_depth_q2 · defer_count_q1 · drop_count_q2
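A sketch of the three-queue policy with observable congestion behavior (the class names, the Q2 depth limit, and strict-priority dequeue are illustrative assumptions; telemetry names match the placeholders above):

```python
from collections import deque

class TxQueues:
    """Strict-priority TX policy: Q0 never starved, Q1 deferrable,
    Q2 shed first; defers and drops are exported as telemetry."""
    def __init__(self, q2_limit=4):
        self.q = {0: deque(), 1: deque(), 2: deque()}
        self.telemetry = {"defer_count_q1": 0, "drop_count_q2": 0}
        self.q2_limit = q2_limit

    def enqueue(self, prio, frame):
        if prio == 2 and len(self.q[2]) >= self.q2_limit:
            self.telemetry["drop_count_q2"] += 1   # shed service traffic first
            return
        self.q[prio].append(frame)

    def dequeue(self):
        if self.q[0]:
            if self.q[1]:
                self.telemetry["defer_count_q1"] += 1  # Q1 waits behind Q0
            return self.q[0].popleft()
        for prio in (1, 2):
            if self.q[prio]:
                return self.q[prio].popleft()
        return None

tx = TxQueues()
tx.enqueue(1, "comfort_status")
tx.enqueue(0, "unlock_ack")
print(tx.dequeue())   # critical UX frame goes first, deferral is counted
```

Counting deferrals at the moment they happen (rather than inferring them later) is what makes congestion behavior observable in field logs.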
Scope guard (no CAN-FD PHY timing here)
Sample point, loop delay symmetry, and PHY tuning are intentionally not expanded. See: CAN FD Transceiver (up to 2–8 Mbps).
Diagram · Load Budget Bars (Normal / Peak / Diagnostic burst)
Three envelopes show why storm shaping and service backoff are required to keep critical UX frames within bounded latency.
H2-7. Robustness & Degradation: What Must Still Work When Things Go Wrong
Body/comfort functions can degrade, but must remain predictable and non-chaotic. The goal is fault containment, minimum viable behavior, and controlled recovery without reboot storms.
Failure taxonomy (detect → radius → immediate action)
- Single node fault: heartbeat missing / frozen state → local impact → isolate event bursts, keep cluster timing stable.
- Single cluster fault: error rate spikes / schedule overruns → zone impact → enter degraded schedule, throttle events, suspend service lane.
- Gateway fault: gateway alive missing / reset loops → aggregation impact → keep minimum local behavior, report a single health transition upstream.
- Power anomaly: brownout / transient drop → multi-node impact → freeze unsafe actions, rebuild state before resuming uplink.
- Wake anomaly: false wake / wake propagation → battery & UX impact → raise confirmation threshold, rate-limit wake attempts, enforce return-to-sleep closure.
Recommended observability placeholders
health_state · fault_counter · cluster_error_rate · gateway_reset_count · wake_reason
Minimum viable behavior (degrade without chaos)
- Door lock/unlock: preserve local intent handling and avoid repeated retries; prohibit oscillation.
- Windows/sunroof: disable automation under instability; keep a safe manual path; prohibit risky unintended motion.
- Lighting: clamp event storms into coalesced updates; prohibit flicker loops and battery-drain patterns.
- HMI feedback: prefer “last-known stable state + limited refresh” over noisy rapid toggles.
The objective is a stable user experience even under partial failures: fewer transitions, bounded retries, and explicit safe fallbacks.
Fault containment (keep the blast radius inside zones)
- Node containment: a bad node cannot collapse the schedule or flood the cluster.
- Cluster containment: a bad cluster cannot saturate the CAN-FD backbone or trigger global wake storms.
- Gateway containment: gateway faults produce one health transition, not repeated noisy state churn.
Containment boundaries should be explicit: where throttling is applied, where health is summarized, and where retries are capped.
Watchdog & recovery policy (reset is a tool, not a default)
- When to reset: deadlock, total scheduling loss, or critical health loops that cannot converge.
- Anti-storm guard: enforce cooldown and maximum resets per time window (placeholders).
- Recovery order: rebuild state → enable periodic → enable events → enable service lane last.
Guardrail placeholders
cooldown_ms · max_resets_per_hour · state_rebuild_timeout_ms
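The anti-storm guard can be sketched directly from those placeholders (the 5 s cooldown and 3-per-hour cap are illustrative values):

```python
class ResetGuard:
    """Anti-storm guard: enforce a cooldown and a max reset count per
    window. Threshold names match the guardrail placeholders above."""
    def __init__(self, cooldown_ms=5000, max_resets_per_hour=3):
        self.cooldown_ms = cooldown_ms
        self.max_resets = max_resets_per_hour
        self.history = []          # reset timestamps (ms)

    def allow_reset(self, now_ms):
        hour_ago = now_ms - 3_600_000
        self.history = [t for t in self.history if t >= hour_ago]
        if self.history and now_ms - self.history[-1] < self.cooldown_ms:
            return False           # still in cooldown
        if len(self.history) >= self.max_resets:
            return False           # storm: stay degraded, report health instead
        self.history.append(now_ms)
        return True

g = ResetGuard()
print([g.allow_reset(t) for t in (0, 1000, 10_000, 20_000, 30_000)])
```

When the guard refuses a reset, the correct fallback per the recovery policy is a single health transition upstream, not a retry loop.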
Scope guard (no SBC register-level watchdog here)
Window watchdog registers, SBC reset reason decoding, and chip-specific configuration are intentionally not expanded. See: SBC with CAN FD.
Diagram · Fault Containment Zones
Clusters are contained independently; the gateway applies throttling and health summarization within an explicit containment boundary.
H2-8. EMC / ESD / Harness Reality (Design Constraints, Not Component Deep-Dive)
Body harnesses are long, branched, frequently touched, and often hot-plugged. This section captures system-level constraints: return paths, placement intent, and common pitfalls—without expanding component selection formulas.
Harness realities (what the system must tolerate)
- Long trunk + branches: common-mode conversion and return-path discontinuities drive emissions and fragility.
- Connector hot-plug: transient injection and contact bounce can look like events or wake sources.
- Human touch & ESD: direct discharge at handles/panels produces sharp current paths into the harness.
- Ground offset: imperfect chassis reference creates ground bounce and threshold surprises.
Most “random” field failures map to a small set of physical realities: path length, branch geometry, and where current returns.
System constraints (return path first)
- Return path continuity: plan where current returns (shield/chassis/reference) across connectors and branches.
- Entrance protection intent: place protection where the surge enters, and ensure a short, low-inductance return.
- Reference stability: avoid letting a noisy return path redefine logic thresholds during transients.
A strong component cannot compensate for a long, inductive return path. Placement is a system decision, not a BOM checkbox.
“Do-not-do” pitfalls (symptom-driven)
- Protection too far from the entry: the surge travels on the board first → later clamping does not protect logic.
- Long return to ground: clamp current loops through inductive paths → ground bounce creates false triggers.
- Parasitics treated as “free”: extra C/L distorts edges → intermittent mis-detect and wake anomalies.
The fastest debug move is often geometric: trace where current returns and where the “entry point” truly is.
Verification hooks (prove containment, not just “pass once”)
- Log during ESD/hot-plug: correlate wake_reason, fault_counter, and cluster/backbone error counters.
- Post-event fragility checks: verify the system does not become “more fragile” after a surge.
- A/B path experiments: change the return path geometry and confirm symptom shift before changing components.
Minimal log fields (placeholders)
event_time · injection_point · wake_reason · cluster_error_rate · fault_counter
Scope guard (no TVS/CMC sizing formulas here)
TVS arrays, CMC selection, and split-termination sizing are intentionally not expanded. See: EMC / Protection & Co-Design.
Diagram · Harness + Return Path Concept
A trunk with branches highlights where surges enter and how return paths determine robustness. Protection icons indicate intent, not parameters.
H2-9. Diagnostics & Serviceability: DTC Mapping + Wake Black Box
Serviceability depends on consistent mapping and observability: LIN/cluster faults must become stable CAN-FD-side events and DTCs, while wake anomalies must be attributable with a compact “black box” record.
Fault-to-DTC/event mapping pipeline (source → normalize → aggregate → publish)
- Source: node_missing / schedule_overrun / cluster_error_rate / local_reset (placeholders).
- Normalize: severity, scope, debounce window, confirmation result.
- Aggregate: merge repetitive signals into one stable event per zone/cluster.
- Publish: CAN-FD event + DTC + snapshot + counters, with bounded rate.
A mapping is not “a code”; it is a contract: consistent thresholds, consistent confirmation, and consistent containment scope.
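The source → normalize → aggregate → publish contract can be sketched end to end (the debounce threshold, rate bound, input encoding, and DTC string format are illustrative assumptions):

```python
def publish_events(raw, debounce_min=2, rate_bound=5):
    """Source -> normalize -> aggregate -> publish, as sketched above.
    raw: list of (cluster, fault, count) observations."""
    # Normalize: drop faults below the debounce/confirmation threshold.
    confirmed = [(c, f) for c, f, n in raw if n >= debounce_min]
    # Aggregate: one stable event per (cluster, fault) pair.
    merged = sorted(set(confirmed))
    # Publish with a bounded rate; the remainder waits for the next window.
    return [{"dtc": f"{c}:{f}", "confirmed": True} for c, f in merged[:rate_bound]]

raw = [("door_fl", "node_missing", 3),
       ("door_fl", "node_missing", 3),    # repetitive signal, merged away
       ("rear", "schedule_overrun", 1)]   # below debounce, filtered out
print(publish_events(raw))
```

Because thresholds and the rate bound are parameters of one function, the "contract" stays consistent across every fault source that flows through it.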
DTC taxonomy (service-first prioritization)
- Class A — Battery drain / wake anomalies: prioritize wake attribution and false-wake counters.
- Class B — Functional degraded: stable fallback behavior + cluster health snapshots.
- Class C — Intermittent observation-only: log + counters, avoid aggressive actions.
- Class D — Infrastructure faults: gateway resets, persistent cluster errors, mapping integrity alarms.
DTC categories should align with diagnostic flow: start with attribution and counters, then consult snapshots for containment scope.
Wake black box (must answer: who/when/why/how-long/confirmed?)
Required fields (minimum viable record)
wake_reason · wake_source · timestamp_start · timestamp_end · duration_ms · debounce_ms · confirm_pass
Optional fields (high value for root-cause)
cluster_id · node_id · battery_v · temp · reset_reason · injection_point
Without debounce and confirmation, a wake record cannot separate real triggers from contact bounce or transient noise.
Counter system (small set, high leverage)
- bus_utilization: average/peak window (placeholders) for storm detection.
- error_counters: cluster-level and backbone-level deltas per time window.
- reset_reason_count: track reset causes and recurrence rate.
- false_wake_count: per wake_source and per wake_reason.
- event_drop/defer: queue shaping visibility (why some events were delayed or collapsed).
Counters should support correlation: “wake events vs. error bursts vs. resets” across the same time base.
Diagram · Wake Event Record Card (field view)
A compact black-box record keeps attribution, gating, and timing in one structure. Field names are placeholders.
H2-10. Validation Plan: Bench → Vehicle → Production
Validation should run as a funnel: fast iteration and reproducibility on the bench, realism and statistics in the vehicle, and fast screening with version-locked logging in production.
Test funnel overview (control → realism → throughput)
- Bench: reproducible harness emulation, hot-plug/ESD injection, power-state entry/exit measurements.
- Vehicle: real harness, real concurrency, false-wake statistics across environment boundaries.
- Production: fast screening, configuration/version lock, mandatory log fields exported.
Each stage narrows risk: bench proves mechanisms, vehicle proves behavior under reality, production proves repeatability and traceability.
Bench plan (reproducibility + measurement definitions)
- Harness emulation: trunk/branch variants (placeholders) to reproduce branch-driven fragility.
- Hot-plug + ESD: verify wake attribution, confirmation gating, and post-event fragility checks.
- Sleep current definition: entry conditions, stabilization wait time, and instrument connection intent.
- Black box completeness: ensure required fields are always recorded.
Vehicle plan (statistics + environment + concurrency)
- False-wake rate: track by wake_source and wake_reason (placeholders per day).
- Boundary matrix: temperature/voltage corners and soak transitions.
- Concurrency storms: multi-door actions, lighting bursts, seat adjustments; verify rate shaping and containment.
- Containment proof: one bad cluster cannot destabilize the rest of the domain.
Production plan (fast screen + reporting integrity)
- Fast screening: wake chain sanity, counter readout, reset reason visibility.
- Version/config lock: ensure calibration and mapping tables match the build identity.
- Mandatory exports: wake black box record schema + key counters (same field names).
- Failure tagging: tag by symptom class to support fast rework and yield analytics.
Diagram · Test Funnel (Bench → Vehicle → Production)
A three-stage funnel ties mechanisms to statistics to manufacturing throughput, with KPIs tracked consistently across stages.
H2-11. Engineering Checklist (Design → Bring-up → Production)
This checklist turns a Body/Comfort LIN-at-scale + CAN-FD gateway design into a repeatable, auditable workflow. It focuses on system behavior (sleep/wake closure, fault containment, logging, serviceability) and avoids PHY-level deep dives.
- Sleep is measurable: system Sleep Iq is repeatable and attributable (wake reasons never “unknown”).
- Wakes are intentional: false-wake rate is bounded; wake latency has a defined budget.
- Faults are contained: a single LIN cluster failure does not cascade to the CAN-FD backbone.
- Service is actionable: DTC mapping + wake “black box” fields enable field diagnosis.
Design · A. Architecture & partition
- Cluster boundaries: partition by physical zone, harness branch, wake domain, and fault domain (Front-L / Front-R / Rear / Trunk).
- Gateway placement: BCM vs zonal controller vs domain controller; define failover behavior when the gateway is unavailable.
- Containment contract: what still works locally if CAN-FD backbone is down (minimum safe/usable behaviors).
B. Budgets & policies
- Sleep Iq budget: allocate per block (SBC, MCU, sensors, wake inputs, LIN/CAN standby) → target system Sleep Iq ≤ X mA.
- Wake latency budget: source → confirm → gateway publish → app ready ≤ Y ms (human-perceived comfort bound).
- Event-storm shaping: define rate limits + coalescing rules (e.g., multi-door unlock bursts) to protect CAN-FD load.
C. Logging schema (non-negotiable fields)
- Wake attribution: wake_reason, source(bus/local/timed), debounce_ms, confirm_result, duration_ms.
- Reset attribution: reset_reason, watchdog_fired, brownout, last_mode, last_bus_state.
- Network health: LIN cluster error counters, gateway drop counters, CAN-FD utilization snapshot windows.
Bring-up · A. Measurement points (define “where to probe”)
- Supply segmentation: VBAT_IN, VREG_MCU, VREG_SENS, LIN_VBAT, CAN_VIO (or logic rail).
- Wake lines: local wake pins, bus wake status, gateway wake-out, application wake-ready GPIO.
- Bus observation: cluster-level LIN activity indicator + CAN-FD gateway TX queue depth counters.
B. Core bring-up experiments
- Sleep Iq procedure: enter final sleep state → wait settle time T_settle → record mean/peak over window W.
- Wake-path closure: each wake source triggers correct wake_reason + confirm step; “unconfirmed wake” returns to sleep within X ms.
- Event storm test: simultaneous door/seat/lighting triggers → verify rate-shaping keeps CAN-FD utilization under U_peak%.
- Fault injection (system-level): disable one LIN cluster → verify containment and minimum function behavior.
C. Pass criteria (placeholders)
- System Sleep Iq ≤ X mA at Temp=25°C and ≤ Y mA across min/max temp.
- False-wake rate ≤ Z / 24h under defined harness/ESD user interactions.
- Wake attribution completeness ≥ 99.9% (no “unknown” wake reasons in field logs).
Production · A. Manufacturing tests
- Sleep current screen: automated entry to sleep + current window test; bin to “pass / marginal / fail”.
- Wake source screen: trigger each wake line + bus wake; verify wake_reason mapping.
- Gateway health screen: queue counters, drop counters, and DTC mapping self-test.
B. Field-debug payload (must be reported)
- Last N wake records + reset reasons + bus error counters.
- Per-cluster health snapshot (node missing counts, schedule overruns, recovery actions).
- CAN-FD load statistics (normal/peak/diagnostic windows) with timestamps.
H2-12. IC Selection Guide (System Logic, Before FAQ)
Selection here is system-driven: node density, wake policy, zonal vs centralized architecture, and serviceability targets. Component deep dives belong to their dedicated pages; this section outputs a repeatable decision path and a shortlist by architecture bucket.
Component classes to decide
- LIN interface: LIN transceiver or LIN “SBC-like” device with regulator/wake support.
- CAN-FD backbone interface: CAN-FD transceiver (plus PN/selective wake if required).
- System Basis Chip: regulators + watchdog + wake I/O + (optionally) CAN/LIN transceivers integrated.
- Gateway controller: MCU with enough CAN/LIN channels, low-power modes, safety/security hooks.
- Protection: low-cap TVS/ESD arrays suitable for bus lines (matched where needed).
Step-by-step selection logic
- Count & cluster: total LIN nodes, number of clusters, worst-case concurrent events (locks/windows/lights).
- Wake policy first: required wake sources, confirm steps, false-wake tolerance, wake attribution logging needs.
- Choose integration level: discrete transceivers vs CAN/LIN SBC (power + comm + watchdog in one).
- Backbone load shaping: CAN-FD interface feature set + gateway buffering/rate limiting strategy.
- Serviceability hooks: diagnostics counters, wake record fields, reset reasons, DTC mapping support.
- Protection sanity: bus-line TVS/ESD parasitics compatible with signal integrity (avoid “protection breaks comms”).
Key metrics (define the measurement boundary)
- Sleep Iq (system): not “chip-only” — include regulators, wake inputs, sensors, and bus standby.
- Wake capability: bus/local/timed; wake source identification; debounce/confirm mechanism support.
- Robustness: short-to-battery/ground survivability, thermal protection & recovery behavior.
- Diagnostics hooks: error counters, mode reporting, reset reason reporting, limp-home outputs.
- EMC-friendly behavior: wave-shaping / slew control features (policy-level, not PHY tuning here).
Shortlist buckets (example material numbers; non-exhaustive)
Use these as reference “known-good families” to match integration level and wake/diagnostics expectations. Exact grade/package variants follow OEM requirements.
- CAN/LIN System Basis Chips (SBC): NXP UJA1132AHW / UJA1135AHW (HS-CAN + (dual) LIN + regulators/watchdog, body modules) · Infineon TLE9262BQX (HS-CAN FD + LIN + power management) · Infineon TLE9271QX (CAN-LIN SBC family, CAN FD up to 5 Mbps class) · Microchip ATA658x (CAN FD + LIN SBC family) · ST SPSB081 (SBC family with LIN and optional CAN-FD).
- Gateway MCU examples: NXP S32K144 / S32K14x (body/zone class MCU family) · Renesas RH850/F1K (body applications; CAN FD variants) · Infineon AURIX TC375 (TC37xTP family) (domain/gateway class) · ST SPC58 (Chorus family) (gateway/body networking class).
- LIN transceiver / LIN “SBC-like” parts (examples): TI TLIN1028-Q1 (LIN transceiver with integrated regulator class) · TI TLIN1029-Q1 (LIN transceiver family option) · NXP TJA1020 / TJA1021 / TJA1029 (LIN transceiver family) · Microchip ATA6625 / ATA6625C (LIN transceiver with regulator class).
- When to step up to CAN/LIN SBC: more wake inputs, stricter watchdog/failsafe needs, or tighter power sequencing requirements.
- Controller + transceiver integrated: TI TCAN4550-Q1 (CAN FD controller with integrated transceiver; SPI host).
- External CAN-FD controller: Microchip MCP2517FD (external CAN FD controller with SPI).
- Pair with CAN-FD transceiver (examples): TI TCAN1044A-Q1 · NXP TJA1042 / TJA1043 · Infineon TLE9255W.
- CAN/CAN-FD ESD arrays: Nexperia PESD2CANFD (matched ESD array class) · Littelfuse SM24CAN (CAN bus TVS family class).
- Rule of thumb: prioritize low-capacitance, symmetry (where differential), and placement near connector with controlled return path.
Scope guard (component deep dives stay on dedicated pages)
- PHY timing (sample point, loop delay tuning) belongs to the CAN-FD transceiver page.
- LIN electrical behavior (slew/ESD wave shaping details) belongs to the LIN transceiver page.
- EMC/TVS/CMC selection math belongs to the EMC / Protection pages.
H2-13. FAQs (Low-Power / Wake / Gateway / Large-Scale LIN)
These FAQs close long-tail troubleshooting without expanding scope. Each answer follows the same 4-line engineering format: Likely cause / Quick check / Fix / Pass criteria (threshold placeholders).