
Stub & Harness Length Design Rules for CAN FD/XL


Stub length is not “just wiring detail” — it directly determines whether reflections return inside the sampling window and turn a stable bench setup into vehicle-only errors.

This page provides a repeatable method to identify stub/T-branch risks, budget spur length with placeholders, validate on real harness points, and fix issues by topology actions (shorten/relocate/segment) instead of fragile parameter tuning.

H2-1. Definition & Scope: What “Stub” Means in Vehicle Buses

Intent

Standardize the vocabulary (stub / spur / drop line / T-branch / trunk) so every later rule uses the same geometry and measurement boundary.

Scope guard

No termination values and no EMC parts selection (CMC/TVS/RC). No CAN protocol/software stack or controller configuration details.

Key points (engineering definition)

  • Stub (spur / drop line) is the branch conductor from the first discontinuity/junction on the trunk to the node input boundary (including connector + PCB entry trace up to the transceiver pins if it is part of the physical branch).
  • Trunk (main line) is the primary harness path that carries the shared bus between major junctions and end points.
  • T-branch is a junction where the trunk splits into two legs (or a trunk + multiple legs), creating a strong impedance discontinuity and multiple reflection paths.
  • Daisy-chain typically minimizes stub length but changes where discontinuities occur (connectors, inline splices) and can increase effective trunk length.
  • Star hub (coupler) centralizes multiple branches; it can be service-friendly but concentrates discontinuities and reflection interactions at one location.
  • “Length” is not the only variable. Node input structures, connector transitions, branch count, and edge/threshold sensitivity can increase the effective stub impact even when physical length looks “short.”

Deliverable: Vocabulary cards (portable, no wide tables)

Stub
Also called: Spur, Drop line
Boundary: junction on trunk → node input boundary (connector + entry trace can be included if it is part of the branch).
Common mistake: measuring only the harness lead while ignoring connector/PCB entry that behaves as part of the branch.
Trunk
Also called: Main line
Boundary: shared bus path between major junctions/end points.
Common mistake: treating a long trunk as “fine” because single-node bench tests pass—real harness adds junctions and reflections.
T-branch
Also called: T-junction, Tee splice
Boundary: the physical junction where the trunk splits into legs (one-to-many discontinuity point).
Common mistake: focusing on one leg length while ignoring that multiple legs create multiple reflection paths that can stack.
Star hub
Also called: Coupler, Central splice
Boundary: centralized node/box where multiple branches connect to a common point.
Common mistake: assuming “short branches” automatically guarantee stability—hub discontinuity can dominate at higher bit rates.
Diagram: Bus topology vocabulary map (trunk, stub, T-branch, star hub)
Note: “Effective stub” can include connectors + PCB entry traces, not only harness length.

H2-2. Why Stubs Break Links: Reflection First, Then Timing

Intent

Explain the shortest causal chain from a branch (stub) to CRC/ACK errors: discontinuity → reflection → waveform distortion → sampling/decision errors.

Scope guard

No transmission-line math derivation or S-parameters. Only engineering conclusions: where reflections come from, what they look like, and how they corrupt sampling.

Key points (causal chain)

1) Reflection starts at impedance discontinuities

A stub, junction, connector transition, or node input boundary behaves as an impedance step. The incident edge does not “simply pass through”; part of it returns as a reflected component and later re-enters the trunk and receiver waveform.

The critical property is time placement: the reflected bump arrives after a delay and can overlap the receiver’s decision region.

2) Reflections distort the edge near the threshold
  • Overshoot / undershoot: amplitude excursions that increase susceptibility to threshold jitter.
  • Ringing: repeated oscillation that can trigger multiple crossings around the decision threshold.
  • Edge distortion: a “tilted” or “stepped” transition that makes effective edge timing ambiguous.
  • Multiple threshold crossings: the receiver can observe more than one apparent transition for one intended edge.
3) Waveform errors become sampling and decision errors

Receivers do not “see physics”; they see voltage vs time and decide bits in a defined sampling/decision region. If a reflected component perturbs the signal near the threshold or inside the sampling region, the bit decision can flip or become marginal.

  • CRC spikes often indicate marginal high-speed bit decisions rather than random noise.
  • ACK / error counters jump when one or more nodes decode a different bit stream than the transmitter intended.
  • “Bench OK, vehicle fails” is a topology signature: real harness adds junctions, longer drops, and different reflection timing.

Deliverable: Symptom → Physical cause → Sampling risk (no fixes yet)

Ringing near the threshold

Physical cause: reflections from a stub end, T-junction, or connector boundary stacking in time.

Sampling risk: multiple threshold crossings and unstable effective edge timing.

“Stepped” or delayed edge transition

Physical cause: a reflected bump arrives after the incident edge, creating a second slope segment.

Sampling risk: reduced timing margin; the decision region becomes sensitive to small delay shifts.

Errors only at higher bit rates (FD/XL phase)

Physical cause: the same reflection delay occupies a larger fraction of the shorter bit time.

Sampling risk: reflection overlaps the tighter sampling/decision window, causing intermittent bit flips.
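The causal chain can be made concrete with a toy waveform: an ideal edge plus one delayed, inverted reflection produces extra threshold crossings at the receiver. A minimal Python sketch; every number (rise time, delay, bump width, reflection coefficient) is an invented placeholder, not a measured value:

```python
# Toy model: incident edge + one delayed reflected bump -> extra crossings.
# All parameters below are illustrative placeholders, not standard values.

def edge(t, t_rise):
    """Ideal linear rising edge from 0 to 1 between t = 0 and t = t_rise."""
    return min(max(t / t_rise, 0.0), 1.0)

def with_reflection(t, t_rise, delay, width, coeff):
    """Incident edge plus a delayed reflected bump (rise, then fall)."""
    bump = edge(t - delay, t_rise) - edge(t - delay - width, t_rise)
    return edge(t, t_rise) + coeff * bump

def threshold_crossings(samples, threshold=0.5):
    """Count how many times the sampled waveform crosses the threshold."""
    crossings = 0
    for a, b in zip(samples, samples[1:]):
        if (a < threshold) != (b < threshold):
            crossings += 1
    return crossings

ts = [i * 1e-9 for i in range(400)]                       # 0..400 ns, 1 ns steps
clean = [with_reflection(t, 20e-9, 0.0, 0.0, 0.0) for t in ts]
ringy = [with_reflection(t, 20e-9, 15e-9, 25e-9, -0.6) for t in ts]

print(threshold_crossings(clean))   # one intended edge -> 1 crossing
print(threshold_crossings(ringy))   # reflected bump -> 3 apparent crossings
```

The receiver cannot distinguish the extra crossings from real edges; this is exactly the "multiple threshold crossings" symptom above.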

Diagram: Incident + reflected components (edge distortion around the decision threshold)
When distortion overlaps the threshold band, the receiver can see extra or shifted edge crossings.
Topology & Time-Placement Rules for Stub Risk (FD/XL stricter)
Focus: identify where stubs are created, then judge usability by whether reflection return overlaps the sampling window (no standards numbers here).
H2-3. Topology Taxonomy: Where Stubs Come From

Goal: visually locate stub sources and dominant pile-up junctions before any timing/window check.
Intent
One-glance recognition: which structure creates stubs and where reflections stack.
Deliverable: Risk ranking rules
Output: Dominant pile-up points
No OEM harness clauses
Scope guard: topology-only patterns; no OEM-specific routing constraints, no connector supplier clauses, no part-number rules.
Topology atlas (what creates stubs)
Common
Trunk + drop stubs
  • Stub source: every drop from trunk junction → node boundary.
  • Pile-up point: node-dense junction clusters (many small returns stacking).
  • Risk escalator: higher-speed phases tighten tolerance to the same delay.
High-risk
T-branch junction
  • Stub source: multiple legs created from one discontinuity.
  • Pile-up point: the junction itself (multi-path interaction hub).
  • Risk escalator: small geometry changes can shift overlap timing into the decision region.
Centralized
Star hub / coupler
  • Stub source: each hub-to-node branch behaves as a stub path.
  • Pile-up point: the hub boundary (many returns converge).
  • Risk escalator: interaction complexity rises quickly with node count.
Trade-off
Segmentation / gateways
  • Stub source: drops remain, but long drops can be shortened.
  • Pile-up point: added boundaries (interfaces/junctions) create more discontinuities.
  • Risk escalator: risk shifts from long stubs → many boundaries; must re-check timing overlap.
Deliverable: Topology risk ranking rules
LOW
Sparse junctions, short drops
Pile-up is localized. The dominant failure mode appears mainly when speed tier increases and the sampling window tightens.
MEDIUM
Node-dense trunk + drops
Many small discontinuities stack near dense clusters. Sensitivity rises sharply in FD/XL high-speed phases.
HIGH
T-branch / star hub / heavy boundaries
One junction/hub can dominate system margin because multiple reflection paths interact at the same point.
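The ranking rules above can be sketched as a small classifier. The feature names and the HIGH > MEDIUM > LOW ordering mirror the ranking cards; the inputs are placeholder labels, not standardized metrics:

```python
# Hedged sketch of the topology risk ranking (H2-3 cards).
# Feature flags are illustrative placeholders, not standard-defined classes.

def risk_grade(has_t_branch_or_hub, junction_density, drop_length_class):
    """Rank a topology region before any timing computation."""
    if has_t_branch_or_hub:
        return "HIGH"    # one junction/hub can dominate system margin
    if junction_density == "dense":
        return "MEDIUM"  # many small discontinuities stack near clusters
    if drop_length_class == "short":
        return "LOW"     # sparse junctions, short drops
    return "MEDIUM"      # long drops without a dominant junction

print(risk_grade(True, "sparse", "short"))    # HIGH
print(risk_grade(False, "dense", "short"))    # MEDIUM
print(risk_grade(False, "sparse", "short"))   # LOW
```

The point of the ordering is that a single T-branch or hub outranks density and drop length: it concentrates multiple reflection paths at one point.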
Diagram (H2-3): Topology taxonomy + dominant pile-up markers
H2-4. The One Metric That Matters: Reflection Return Time vs Sampling Window

A topology is usable only if the reflected return does not land inside the receiver’s decision-sensitive time region.
Intent
Convert “will it work?” into a timing-overlap decision.
All thresholds = placeholders
No standard numbers
No deep timing config
Scope guard: use bit time and a sampling window concept only. Keep X/Y placeholders for later fill-in.
Core model (physics → decision)
1) Round-trip return time is the quantity
Reflection leaves a junction, reaches the far boundary, then returns. Timing overlap depends on round-trip delay, not one-way.
2) Sampling is a sensitive time region
Decisions are most fragile around a bounded region near the nominal sample time. A return inside this region reduces margin disproportionately.
3) Higher speed tightens overlap tolerance
Same physical stub keeps roughly the same return time, while Tbit shrinks at higher rates. Overlap likelihood rises in FD/XL phases.
Deliverable: reusable formula (placeholders)
Formula
Round-trip return time
t_roundtrip ≈ 2 × L_stub / v
L_stub = junction → node boundary (per H2-1) · v = harness propagation velocity (parameter)
Pass
Time-placement criterion
t_roundtrip < X% × T_bit
X = allowable fraction placeholder · tighten X for higher-speed phases (FD/XL)
Practical rule: start from dominant pile-up points (H2-3), then apply this overlap test at the most timing-sensitive receiver.
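The criterion is simple enough to encode directly. A minimal Python sketch; the velocity, bit time, stub lengths, and X fraction are illustrative placeholders standing in for the worksheet values, not standard numbers:

```python
# Hedged sketch of the H2-4 time-placement criterion.
# All numeric values are illustrative placeholders.

def t_roundtrip(l_stub_m, v_m_per_s):
    """Round-trip return time: the reflection travels to the stub end and back."""
    return 2.0 * l_stub_m / v_m_per_s

def placement_ok(l_stub_m, v_m_per_s, t_bit_s, x_fraction):
    """PASS when the return lands before X% of the bit time has elapsed."""
    return t_roundtrip(l_stub_m, v_m_per_s) < x_fraction * t_bit_s

V = 2.0e8        # placeholder harness propagation velocity [m/s]
T_BIT = 200e-9   # placeholder bit time [s]
X = 0.10         # placeholder allowable fraction of T_bit

print(placement_ok(0.3, V, T_BIT, X))   # short drop: 3 ns return vs 20 ns budget
print(placement_ok(3.0, V, T_BIT, X))   # long drop: 30 ns return vs 20 ns budget
```

Note that the decision flips purely on timing overlap: nothing about amplitude or components enters at this stage.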
Diagram (H2-4): Timeline overlap check (PASS vs FAIL)
H2-5. Classic vs CAN FD vs CAN XL: Why the Rules Tighten

Same harness can be stable in a slower phase but fail in higher-speed phases because the sampling-sensitive time window shrinks while reflection return time stays roughly constant.
Intent
Explain the root cause of “Classic works, FD/XL breaks” using only PHY timing and edge-sensitivity logic.
Scope guard: no frame structure
Scope guard: no numeric limits
Focus: window overlap + edge sensitivity
This section treats “tightening rules” as a time-placement problem: reflection return is a largely physical constant for a given stub, while the bit time and decision-sensitive window shrink with higher speed tiers.
Why higher tiers become fragile (PHY-only)
1) Return time stays “physical”
For a given harness geometry, the dominant reflection return time is mainly set by stub length and propagation velocity. It does not automatically shrink just because the bus switches to a faster tier.
2) The decision window shrinks with tier speed
As bit time shortens, the receiver’s decision-sensitive region occupies a larger fraction of the bit. Returns that were harmless at a slower tier can land inside the sensitive region at a faster tier.
3) Edge distortions scale into sampling errors
Faster tiers raise sensitivity to overshoot, ringing, and repeated threshold crossings because the timing margin is smaller. The same distortion becomes a larger fraction of the available window.
4) “Low-speed OK” does not prove “high-speed OK”
A stable low-speed phase only validates the low-speed decision window. High-speed phases require a separate overlap check: whether the reflection return falls outside the sensitive window for the target tier.
Deliverable: Speed tier → design constraints (placeholders)
Tier A
Baseline timing tolerance
What changes: longer Tbit, wider decision-sensitive window (concept-level).
What breaks first: rare overlap near dense junction clusters.
Must verify: return stays outside window by X% margin (X placeholder).
Tier B
Shrinking window, rising sensitivity
What changes: shorter Tbit, narrower window around the sample region.
What breaks first: marginal stubs at T-branches / hub boundaries.
Must verify: overlap check on the worst-case receiver + worst-case junction.
Tier C
Tightest window, strict placement
What changes: fastest transitions, smallest timing margin.
What breaks first: returns landing inside the window; small geometry changes can flip pass/fail.
Must verify: strict return placement + worst-case guardband placeholders (temperature/aging/batch).
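The tier effect can be illustrated by checking one fixed return time against shrinking bit times. The tier labels and every number below are invented placeholders; the only point is that the same physical return flips from PASS to FAIL as T_bit shrinks:

```python
# Hedged sketch: one geometry-fixed return time, three placeholder tiers.
# Tier bit times and X% are illustrative, not standard numbers.

def overlaps_window(t_return_s, t_bit_s, x_fraction):
    """True when the return lands at or beyond the X% budget of the bit."""
    return t_return_s >= x_fraction * t_bit_s

T_RETURN = 30e-9                                    # set by geometry (placeholder)
TIERS = {"A": 2000e-9, "B": 500e-9, "C": 100e-9}    # placeholder bit times [s]
X = 0.10

for tier, t_bit in TIERS.items():
    verdict = "FAIL" if overlaps_window(T_RETURN, t_bit, X) else "PASS"
    print(tier, verdict)    # A PASS, B PASS, C FAIL with these placeholders
```

This is the entire root cause of "Classic works, FD/XL breaks": T_RETURN never changed, only the budget did.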
Diagram (H2-5): Bit time shrinks; the same RETURN shifts from outside to inside the WINDOW
H2-6. Practical Design Rules: Stub Length Budget (with Placeholders)

A repeatable method to budget stub length using inputs → compute → decide, with all numeric thresholds kept as placeholders for later standard/experience fill-in.
Intent
Provide an executable budgeting workflow rather than ad-hoc “rules of thumb”.
No hard numbers (placeholders only)
Worst-case must be explicit
Output includes “must change topology”
Scope guard: this section defines the structure and accounting. Numeric limits, standard-specific tables, and component-based fixes are intentionally left out.
Budget workflow (repeatable)
Step 1
Define inputs with worst-case discipline
  • Target tier: A / B / C (placeholder)
  • Tbit: placeholder
  • Window fraction X%: placeholder
  • Propagation velocity v: placeholder
  • Worst-case scenario fields: harness run, junction density, connector count, temperature/aging/batch guardband (all placeholders)
Step 2
Compute return time and max stub
t_roundtrip ≈ 2 × L_stub / v
Use L_stub as junction → node boundary. v is harness-dependent.
Window budget = X% × T_bit
X% is a placeholder for the allowed overlap-free timing fraction.
L_stub_max ≈ (v × X% × T_bit) / 2
Placeholder output. Apply guardband placeholders before final acceptance.
Step 3
Decide: PASS / MARGINAL / FAIL
PASS
t_roundtrip < X% × T_bit (placeholders)
The return is outside the sensitive window with guardband placeholders applied.
MARGINAL
Near the threshold (placeholders)
Requires worst-case harness validation and tighter guardband fields; geometry changes can flip results.
FAIL
Overlap expected (placeholders)
Must change topology (shorten drops, remove T-branches, reduce dense junctions, or segment runs).
Deliverable: Stub Budget Worksheet (fields)
Worksheet
Copy-ready field set (placeholders)
Target tier
A / B / C (placeholder)
T_bit
T_bit = [placeholder]
X% window
X% = [placeholder]
Propagation velocity
v = [placeholder]
Computed L_stub_max
L_stub_max = [placeholder]
Worst-case guardband
Temp / aging / batch = [placeholders]
Decision
PASS MARGINAL FAIL
The worksheet is designed for consistent accounting: the same geometry can pass Tier A but fail Tier C once the overlap condition is evaluated under worst-case guardband placeholders.
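The Step 1–3 flow can be sketched as a small calculator. The guardband treatment (MARGINAL when the stub is within the last 20% of the limit) is an invented illustration of the worksheet's guardband fields, and all numbers are placeholders:

```python
# Hedged sketch of the H2-6 budget workflow: inputs -> compute -> decide.
# All thresholds and the 0.8 guardband factor are illustrative placeholders.

def l_stub_max(v, x_fraction, t_bit):
    """Maximum stub length before the return consumes the window budget."""
    return (v * x_fraction * t_bit) / 2.0

def decide(l_stub, v, x_fraction, t_bit, guard=0.8):
    """PASS / MARGINAL / FAIL against the placeholder budget with guardband."""
    limit = l_stub_max(v, x_fraction, t_bit)
    if l_stub < guard * limit:
        return "PASS"       # return outside window with guardband margin
    if l_stub < limit:
        return "MARGINAL"   # near threshold: worst-case validation required
    return "FAIL"           # overlap expected: must change topology

V, X, T_BIT = 2.0e8, 0.10, 200e-9       # placeholder inputs
print(l_stub_max(V, X, T_BIT))          # 2.0 m placeholder budget
print(decide(0.5, V, X, T_BIT))         # PASS
print(decide(1.9, V, X, T_BIT))         # MARGINAL
print(decide(2.5, V, X, T_BIT))         # FAIL
```

The tri-state output matters: MARGINAL is an explicit "validate on real harness" state, not a softer PASS.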
Diagram (H2-6): Stub budget workflow — Inputs → Compute → Decide
H2-7. Controlling T-Branches: How to Avoid Reflection Pile-Up

T-branches behave like multi-port discontinuities: reflection strength rises and multiple returns can stack and drift in phase—high-speed tiers are the first to lose sampling margin.
Intent
Explain why T-branches are harder than a single stub, and how to control risk using structure-first mitigation.
Scope guard: no termination implementation
Scope guard: component details deferred
Refer to: Termination page (details)
This section focuses on geometry and validation logic. Termination networks and EMC components are intentionally not expanded here to prevent cross-page overlap.
Why T-branches amplify instability (PHY-level)
1) A stronger discontinuity (multi-port junction)
A T-junction behaves like a multi-port node, not a simple two-port boundary. The effective impedance seen by the trunk varies with branch geometry and node attachment, so the reflection source is typically “harder” than a single drop.
2) Reflection pile-up (returns can re-enter other branches)
Energy reflected from one branch returns to the junction and can partially launch into another branch. This creates multiple return paths with different round-trip times, so the waveform can show stacked “bumps” rather than a single clean echo.
3) Phase drift into the sampling-sensitive window
As speed tier increases, the safe time window shrinks. A pile-up return that is “outside” the sensitive region at lower tiers can drift into the window at higher tiers, turning a harness that “mostly works” into a harness that fails deterministically under worst-case conditions.
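The multi-return idea can be sketched directly: each leg of the junction contributes its own round-trip time, and any one of them can intrude into the window budget. Branch lengths, velocity, and thresholds below are invented placeholders:

```python
# Hedged sketch: a T-junction produces one return per branch leg; the junction
# fails the placement test if any single return eats the window budget.
# All numbers are illustrative placeholders.

def return_times(branch_lengths_m, v):
    """One round-trip return per branch leg of the junction."""
    return [2.0 * l / v for l in branch_lengths_m]

def junction_intrudes(branch_lengths_m, v, t_bit, x_fraction):
    """True if any return lands at or beyond the X% window budget."""
    budget = x_fraction * t_bit
    return any(t >= budget for t in return_times(branch_lengths_m, v))

V, T_BIT, X = 2.0e8, 200e-9, 0.10
print(junction_intrudes([0.3, 0.4], V, T_BIT, X))   # two short taps: False
print(junction_intrudes([0.3, 2.5], V, T_BIT, X))   # one long leg: True
```

A fuller model would also track returns re-launched into other branches (pile-up), which only strengthens the conclusion: the longest leg sets the floor of the risk, and stacking raises it.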
Mitigation strategy hierarchy (structure-first)
Layer 1
Topology
  • Prefer patterns that remove T-junctions where possible (segmentation or daisy-chain patterns are covered in H2-8).
  • If a T is unavoidable, limit branch fan-out and avoid dense junction clustering (placeholders).
  • Treat the junction as a design object: document its location, branch count, and intended tier (placeholders).
Layer 2
Placement
  • Move the junction close to a node so one branch becomes a very short tap (stub collapses to a controlled variable).
  • Avoid placing a T on the critical high-tier trunk segment; isolate high-tier paths from long drops (concept-level).
  • Keep branch geometry consistent across builds to prevent phase drift (temperature/aging/batch placeholders managed in budget).
Layer 3
Validation
  • Use the same acceptance logic as the stub budget: return placement vs window (X% placeholders).
  • Validate on real harness under worst-case scenario fields (temperature/aging/connector state placeholders).
  • Report results as PASS / MARGINAL / FAIL, not “seems OK”.
Deliverable: T-branch mitigation checklist (Topology / Placement / Validation)
Topology checks
  • List all T-junctions and label intended tier (A/B/C placeholders).
  • Reduce branch count per junction where possible (placeholders).
  • Avoid stacking multiple junctions within a short trunk region (placeholder distance rule).
Placement checks
  • Move junction toward a node to create a short tap branch (target short-tap placeholder).
  • Keep long drops off the high-tier trunk segment (tier boundary placeholder).
  • Document connector count near the junction (placeholder) to avoid hidden discontinuities.
Validation checks
  • Acceptance: no overlap between return and decision-sensitive window (X% placeholders).
  • Exercise worst-case harness scenario fields (temperature/aging/batch placeholders).
  • Record: PASS / MARGINAL / FAIL per junction, not only per vehicle-level result.
Termination network implementation details should be handled in the dedicated Termination page to prevent cross-topic duplication.
Diagram (H2-7): Bad vs better T-branch geometry (pile-up control)
H2-8. Harness Patterns That Scale: Daisy-Chain, Segmentation, Gatewaying

Scaling is not about a single “short stub” but about choosing a pattern that makes stub limits, junction density, and validation boundaries predictable.
Intent
Provide physical-structure patterns to keep stubs controllable as node count grows.
Scope guard: no gateway protocol / routing
Focus: physical-domain isolation only
Output: selection matrix (card format)
Each pattern “moves risk”: daisy-chain pushes risk to trunk length, segmentation pushes risk to junction count, and gatewaying concentrates responsibility at a domain boundary.
Three scalable patterns (risk relocation view)
Pattern
Daisy-chain
Controls
Stub length collapses (very short drops).
Moves risk to
Longer trunk path and end-to-end worst-case.
Validation focus
Prove the farthest path stays within window criteria (placeholders).
Pattern
Segmentation
Controls
Shorter segments → easier per-segment stub budgeting.
Moves risk to
More junctions → more discontinuities to manage.
Validation focus
PASS per segment + worst-case segment combination (placeholders).
Pattern
Gatewaying (physical domain split)
Controls
Isolates fast tier from long low-tier drops by separating physical domains.
Moves risk to
Domain boundary correctness (fast domain must stay “clean”).
Validation focus
Prove fast domain margin independently (window criteria placeholders).
Deliverable: Pattern selection matrix (card format)
Selection funnel (physical)
Filter by speed tier → node count → harness length → serviceability, then validate with the same window logic (placeholders).
DAISY Short stubs, longer trunk
Node count
[placeholder]
Speed tier
A/B/C (placeholder)
Harness length
[placeholder]
Serviceability
[placeholder]
SEGMENT Short segments, more junctions
Node count
[placeholder]
Speed tier
A/B/C (placeholder)
Harness length
[placeholder]
Serviceability
[placeholder]
GATEWAY Physical-domain separation
Node count
[placeholder]
Speed tier
A/B/C (placeholder)
Harness length
[placeholder]
Serviceability
[placeholder]
The placeholders are intentionally kept for later standard/experience fill-in while preserving a consistent decision structure.
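The selection funnel can be sketched as a simple filter. The tier labels and the node-count and length cutoffs below are invented placeholders standing in for the card fields; the returned pattern is a starting point to validate with the window logic, not a final answer:

```python
# Hedged sketch of the H2-8 selection funnel: tier -> node count -> length.
# Cutoff values are illustrative placeholders, not program rules.

def suggest_pattern(tier, node_count, harness_len_m):
    """Suggest a starting pattern; always re-validate with the window check."""
    if tier == "C":               # fastest tier: keep the fast domain clean
        return "GATEWAY (physical domain split)"
    if node_count > 8:            # placeholder cutoff for dense buses
        return "SEGMENT (short segments, budget per segment)"
    if harness_len_m > 10:        # placeholder cutoff for long runs
        return "SEGMENT (short segments, budget per segment)"
    return "DAISY (short stubs, longer trunk)"

print(suggest_pattern("A", 5, 6))     # DAISY ...
print(suggest_pattern("B", 12, 6))    # SEGMENT ...
print(suggest_pattern("C", 5, 6))     # GATEWAY ...
```

Each branch of the filter encodes the "risk relocation" view: the suggestion names which risk you are accepting, not which risk disappears.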
Diagram (H2-8): Scenario map — which pattern fits which scale
H2-9. Validation & Measurement: What to Probe on Real Harness

Validation requires a closed loop: probe plan → stimulus → time-aligned capture → error-counter correlation, executed at worst-case harness points.
Intent
Show where to probe, how to excite the harness, and how to prove correlation between waveform anomalies and time-windowed error counters.
Scope guard: no instrument brands/models
Scope guard: no deep EMC standards
Output: bring-up test plan (placeholders)
“Scope looks fine” is often a time-window mismatch. Always align capture windows with counter windows and stress the highest tier phase.
Probe plan: three mandatory locations
P1
Far end (worst accumulated path)
Maximizes trunk accumulation and return visibility. Use it to validate end-to-end settling and window margin at the highest tier phase.
P2
T junction (pile-up source)
The junction is the multi-port discontinuity. Use it to detect stacked returns and phase drift that can intrude into decision-sensitive windows.
P3
Worst node (geometry-ranked)
Identify the worst node by geometry: longest stub, densest connectors, and most uncertain attachment. Validate repeatability under worst-case conditions.
Stimulus: excite topology sensitivity (not “luck”)
HS-only stress
Concentrate activity on the highest speed phase to shrink the safe window and reveal intruding returns.
Placeholders
Tier: [A/B/C] · Burst: [X] · Duration: [Y]
Edge-sensitivity sweep
If adjustable edge conditions exist, sweep them to separate topology-limited behavior from configuration-limited behavior.
Placeholders
Sweep: [Setting] · Steps: [N] · Hold time: [Y]
Path perturbation
Controlled branch and connector state changes expose topology sensitivity. Record exactly what changed and when.
Placeholders
Branch: [A/B] · Connector state: [X] · Repeat: [N]
Observables: four must-check views (concept-level)
Overshoot / ringing
A direct indicator of discontinuity and stacked returns, most visible near P2 and amplified at P1 under HS-only stress.
Zero-crossing jitter
Ringing can create repeated threshold crossings. If these crossings approach the decision window, margin collapses quickly at higher tiers.
Edge settling
Use a consistent settling view: the signal must be stable before the decision-sensitive window. Placeholders define the window boundary.
Counter correlation
Align waveform capture windows with error-counter windows. If anomalies and counter spikes share the same window, topology is a primary suspect.
Correlation loop & pass criteria (placeholders, fixed structure)
Correlation workflow (fixed)
Trigger → Capture window [pre/post = X/Y] → Counter window [aligned = Z] → Decision (same-window = topology suspect)
Pass criteria (placeholders)
  • Waveform: no return overlaps the decision-sensitive window (X% of Tbit placeholder).
  • Counters: error rate ≤ X per Y minutes under HS-only stress (placeholders).
  • Repeatability: PASS for N repeated runs at P1/P2/P3 (placeholder).
Keep the structure constant; fill placeholders later with standard- or program-specific values without changing the measurement logic.
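The same-window decision in the correlation workflow can be sketched as a timestamp comparison; the alignment window width and event times below are invented placeholders:

```python
# Hedged sketch of counter correlation: a counter spike is a topology suspect
# only when a waveform anomaly falls in the same aligned time window.
# Window width and timestamps are illustrative placeholders.

def same_window(anomaly_ts, counter_ts, window_s):
    """For each counter spike, check for a waveform anomaly within the window."""
    return [any(abs(c - a) <= window_s for a in anomaly_ts) for c in counter_ts]

anomalies = [0.012, 0.087, 0.240]    # waveform anomaly timestamps [s]
spikes = [0.013, 0.150]              # error-counter spike timestamps [s]

print(same_window(anomalies, spikes, 0.005))   # [True, False]
```

The second spike, with no co-windowed anomaly, is exactly the case where "scope looks fine" and the real cause is a capture/counter window mismatch or a non-topology source.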
Deliverable: Bring-up test plan (step-by-step)
Step 0
Intent & tier
Define target tier (A/B/C placeholder) and the stress mode (HS-only / full-phase placeholder). Record the harness configuration ID.
Step 1
Probe map
Assign probes to P1 (far end), P2 (T junction), and P3 (worst node). Use a consistent naming scheme for screenshots and logs.
Step 2
Stimulus
Run HS-only stress, edge-sensitivity sweep (if adjustable), and controlled path perturbation. Keep placeholders for settings and duration.
Step 3
Trigger & alignment
Trigger on counter threshold or stress boundary. Align capture windows with counter windows; record pre/post placeholders.
Step 4
Observe
Check overshoot/ringing, zero-crossing jitter, edge settling, and counter correlation using the same time-window logic.
Step 5
Pass criteria
Apply waveform window criteria (X% placeholder), counter rate criteria (X/Y placeholders), and repeatability (N placeholder).
Step 6
Evidence pack
Archive captures, counter logs, harness configuration ID, and change notes in a single evidence bundle for reproducibility and serviceability.
Diagram (H2-9): Measurement setup map (probe points P1/P2/P3)
P1 = far end · P2 = T junction · P3 = worst node.
H2-10. Troubleshooting: Symptom → Likely Topology Cause → Fix Path

Long-tail issues converge faster when topology is evaluated first. If HS-only fails, branch changes matter, or sample-point tweaks are unstable, treat stubs/junctions as primary suspects.
Intent
Collapse troubleshooting into topology-first decisions and strategy-level fix paths (no component selection).
Scope guard: no EMC component selection
Fix path: structure & placement strategies only
Uses probes: P1/P2/P3 (from H2-9)
If a parameter tweak “sometimes helps” but cannot hold margin, the return is likely still inside the sensitive window. Shift from tuning to topology control.
Three hard indicators to prioritize topology
HS-only failure
If only the high-speed phase fails, treat stub/T/junction geometry as a top suspect even when low-speed phases look stable.
Branch/harness sensitivity
If changing a branch, connector state, or harness variant alters the outcome, the failure mode is topology-sensitive by definition.
Sample-point tweak is unstable
If tuning changes “sometimes helps” but cannot hold margin across repeats, the return is likely still inside the decision-sensitive window.
Deliverable: Symptom → likely topology cause → fix path
HS phase errors spike (CRC/ACK/ERR counters)
Likely topology cause
Long stub or dense junctions intruding into HS window
Quick check
Probe P2 (ring/pile-up) + P1 (settling vs window), align with counter window
Fix path
Shorten stub / move junction near node / segment the harness (strategy level)
Bench passes, vehicle fails on real harness
Likely topology cause
Worst-case path exists only in vehicle harness (far end + junction density)
Quick check
Rank worst node by geometry (P3) and reproduce using HS-only stress
Fix path
Reduce worst-path length by segmentation or physical domain split (strategy level)
Changing branch/connector state flips the outcome
Likely topology cause
Discontinuity shift at a junction (phase drift) or hidden stub created by wiring changes
Quick check
Probe P2 during controlled perturbation and time-align with counter spikes
Fix path
Standardize branch geometry, move junction closer to node, and eliminate long drops (strategy level)
Tuning sample point helps briefly but cannot hold
Likely topology cause
Returns remain inside the decision-sensitive window; tuning only shifts overlap
Quick check
Probe P1 for settling vs window; confirm same-window counter correlation
Fix path
Reduce return amplitude/time via geometry control (shorten stub / segment / domain split)
Fix paths are intentionally kept at strategy level to avoid duplicating termination and EMC component content.
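The three hard indicators collapse into a small triage function; the indicator names and fix strings paraphrase this section and are not a standardized procedure:

```python
# Hedged sketch of topology-first triage: any one hard indicator is enough
# to prioritize topology over parameter tuning. Strings paraphrase H2-10.

def topology_suspect(hs_only_fail, branch_sensitive, tuning_unstable):
    """True when at least one hard indicator points at topology."""
    return hs_only_fail or branch_sensitive or tuning_unstable

def fix_path(hs_only_fail, branch_sensitive, tuning_unstable):
    """Strategy-level next step (no component selection)."""
    if not topology_suspect(hs_only_fail, branch_sensitive, tuning_unstable):
        return "investigate non-topology causes (configuration, components)"
    if branch_sensitive:
        return "standardize branch geometry; move junction closer to node"
    return "shorten stub / segment harness / physical domain split"

print(fix_path(True, False, False))
print(fix_path(False, True, False))
print(fix_path(False, False, False))
```

The OR logic is deliberate: one confirmed indicator already justifies the probe-guided path (P2, then P1) before any tuning escalation.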
Diagram (H2-10): Topology-first decision tree (probe-guided)
H2-11. Engineering Checklist (Design → Bring-up → Production)

Turn topology control into repeatable gates with evidence packs, fixed placeholders, and clear fail loops.
Intent
Freeze repeatable actions across Design → Bring-up → Production without wide tables.
Scope guard: no wide tables
Scope guard: no hard numeric limits (placeholders)
Output: evidence packs + pass criteria
Gate overview: consistent structure, consistent outcomes
Each gate uses the same structure: Action (do) → Evidence (prove) → Pass criteria (decide) → Fail next step (loop back). This prevents “parameter tuning by luck” and makes results reproducible across teams.
Design Gate
Lock topology and budgets before any validation effort.
Topology selection locked
Action
Choose and freeze the physical pattern (trunk+stubs / daisy-chain / segmentation / domain split).
Evidence
Topology sketch + node list + junction count (card list, not a wide table).
Pass criteria
Risk grade ≤ X (placeholder) for the target tier(s).
Fail next step
Re-evaluate topology sources and scaling patterns (H2-3 / H2-8).
Stub budget worksheet completed
Action
Fill the worksheet with fixed placeholders: Tier / T_bit / X% window / v / L_stub_max.
Evidence
Budget fields captured in a single “worksheet card” per tier (no spreadsheet required).
Pass criteria
All worst nodes satisfy L_stub ≤ L_stub_max (placeholder) for the highest tier.
Fail next step
Re-run the window logic and budgeting flow (H2-4 / H2-6) and adjust structure.
T-branch control rule defined
Action
Define T-junction placement rules (reduce fan-out, keep drops short, prefer near-node junctioning).
Evidence
List of T-junctions with location intent and “allowed/blocked” status (card list).
Pass criteria
All T-junctions comply with the rule set (X/Y placeholders).
Fail next step
Apply mitigation strategy and re-layout the junction placement (H2-7).
Worst node (P3) defined by geometry
Action
Rank nodes by geometry (stub length, connector density, far-end accumulation) and select P3.
Evidence
“Worst-node list” as short cards (Node ID → why worst), no wide matrix.
Pass criteria
P3 is measurable and reproducible across harness variants (placeholders).
Fail next step
Re-check topology sources and measurement plan (H2-3 / H2-9).
Bring-up Gate
Prove correlation on real harness with aligned time windows.
Probe map fixed (P1/P2/P3)
Action: assign probes to P1 far end, P2 T junction, P3 worst node. Evidence: probe map capture + naming rules. Pass: any operator can repeat the same setup.
Stimulus plan fixed (HS-only + perturbation)
Action: execute HS-only stress, edge sensitivity sweep (if available), and controlled path perturbation. Evidence: placeholder record of settings and run length. Pass: stable reproduction or stable PASS.
Time alignment enforced
Action: align capture window with counter window. Evidence: window definitions (pre/post placeholders). Pass: waveform anomalies and counter spikes share the same time window.
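The alignment rule reduces to an interval check: a counter spike only counts as correlated evidence when it lands inside the aligned capture window. A sketch with placeholder timestamps; `in_window`, the pre/post margins, and the capture times are all illustrative:

```python
# Sketch of the "same time window" check: a counter spike is correlated
# evidence only if it falls inside the aligned capture window
# (pre/post margins are placeholders).

def in_window(event_t: float, capture_t0: float, capture_t1: float,
              pre: float = 0.0, post: float = 0.0) -> bool:
    """True if the event lies in [t0 - pre, t1 + post]."""
    return (capture_t0 - pre) <= event_t <= (capture_t1 + post)

capture = (10.000, 10.050)                       # seconds, placeholder window
counter_spikes = [9.500, 10.020, 10.049, 11.300]  # placeholder spike times
correlated = [t for t in counter_spikes if in_window(t, *capture)]
print("spikes inside capture window:", correlated)
```

Spikes outside the window (9.500, 11.300 above) are excluded, which is what prevents unrelated counter noise from being blamed on the captured waveform anomaly.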
Decision tree executed
Action: run the topology-first decision path. Evidence: checked decision nodes (card list). Pass: root cause converges to topology vs non-topology before any tuning escalation.
Production Gate
Preserve geometry across builds and collect field evidence.
Harness consistency checks
Action: verify harness version and branch routing is unchanged. Evidence: version fields and change logs. Pass: configuration fields complete (X% placeholder).
Branch length tolerance control
Action: define tolerance placeholders (±X) for key drops and junction positions. Evidence: sampling cards per batch. Pass: all sampled geometry stays within tolerance.
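The tolerance check is a per-batch comparison against nominal drop lengths. A sketch in which the nominal lengths, the ±X band, and the sampled measurements are placeholders:

```python
# Sketch of the batch tolerance check: each sampled drop must stay within
# ±X of its nominal length. Nominals, tolerance, and samples are placeholders.

NOMINAL_M = {"drop_A": 0.30, "drop_B": 0.55}    # budgeted drop lengths (placeholder)
TOL_M = 0.05                                     # ±X placeholder

def within_tolerance(samples: dict) -> list:
    """Return the drops whose measured length violates the ±X band."""
    return [name for name, measured in samples.items()
            if abs(measured - NOMINAL_M[name]) > TOL_M]

batch = {"drop_A": 0.33, "drop_B": 0.62}         # measured on one sampled harness
violations = within_tolerance(batch)
print("out-of-tolerance drops:", violations)
```

An empty violation list is the pass condition; any named drop is a geometry escape that must be fed back to the harness supplier before the batch ships.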
Sampling strategy (worst-first)
Action: prioritize sampling at worst-node and T-junction geometry. Evidence: batch sampling plan fields. Pass: worst-path coverage achieved each cycle.
Field service record fields
Action: require geometry-critical fields (harness variant, branch changed, connector state, node location). Evidence: service template. Pass: completeness ≥ X% (placeholder).
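The completeness gate reduces to counting non-empty required fields. A sketch whose field names mirror the list above, with the X% threshold left as a placeholder:

```python
# Sketch of the completeness gate for geometry-critical service fields.
# Field names match the service template above; the X% threshold is a placeholder.

REQUIRED = ("harness_variant", "branch_changed", "connector_state", "node_location")

def completeness(record: dict) -> float:
    """Fraction of required fields that are present and non-empty."""
    filled = sum(1 for key in REQUIRED if record.get(key) not in (None, ""))
    return filled / len(REQUIRED)

record = {"harness_variant": "HV-03", "branch_changed": "yes",
          "connector_state": "", "node_location": "rear-left"}
pct = completeness(record)
print(f"completeness = {pct:.0%}")   # gate passes only if pct >= X%
```

Records below the threshold are returned to the service channel rather than entered into the evidence pool, so field data stays usable for worst-node correlation.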
Diagram (H2-11): 3-gate pipeline (Design → Bring-up → Production)
[Diagram] Design (topology · budget · T-control · worst node · evidence) → Bring-up (probe map · stimulus · align window · correlate · decision) → Production (consistency · tolerance · sampling · service fields · regression). Output: Design pack / Bring-up pack / Production pack (placeholders).
H2-12 · Applications (Strictly within this page’s boundary)

Identify scenarios where stubs and junctions dominate failures and prioritize topology-first strategies.
Intent
Only describe where stub risk amplifies (no gateway protocol expansion).
Scope guard: no DoIP / OTA / secure gateway
Output: application risk map (cards)
Deliverable: application risk map (scenario → risk → priority)
Body / Comfort (long harness + many nodes)
Risk triggers
Many branches · dense connectors · hidden stubs
Primary suspects
T pile-up · worst-node drift · branch variability
Priority strategy
Normalize topology + define P3 early + validate on worst harness points
Powertrain / Chassis (FD/XL tiers tighten)
Risk triggers
Short HS window · higher sensitivity to returns
Primary suspects
Long stubs · far-end overlap · unstable tuning outcomes
Priority strategy
Lock budget placeholders first, then prove HS-only correlation at P1/P2
Star / centralized coupling (serviceable but harder)
Risk triggers
Multi-port junction · return stacking · path sensitivity
Primary suspects
P2 pile-up · geometry inconsistency at the hub
Priority strategy
Make P2 a mandatory validation point and enforce geometry consistency in production
Diagram (H2-12): vehicle domain map (where stub risk amplifies)
[Diagram] Vehicle domain risk map, long-harness trend: Body/Comfort (many nodes, Classic/FD) · Powertrain/Chassis (tight window, FD/XL) · high-speed domains (FD/XL). Markers indicate higher stub/junction sensitivity (topology-first priority).

H2-13 · IC Selection (Stub/Harness Robustness Only)

Translate topology risk (stub/T-branch/geometry variance) into silicon features that reduce sensitivity—then bind every feature to a validation hook.

Scope Guard: No pricing · No vendor claims beyond datasheets · No termination component details

How to use this chapter (fast, repeatable)

  • Step 1 — Define constraints: speed tier, worst-node, junction density, geometry variance.
  • Step 2 — Pick features: required first (timing/symmetry), then optional (slew shaping, SIC, PN/diag).
  • Step 3 — Bind to validation: probe points P1/P2/P3 + log counters + pass criteria (placeholders X/Y).

Selection Inputs (Topology → Silicon Requirements)

Input Speed tier & sampling window

A faster data phase shrinks the sampling window that reflections must stay clear of; feature priority shifts toward tight symmetry, low delay variation, and controller timing flexibility.

Input Geometry variance (harness, connectors, stubs)

If stub lengths and connector states vary across builds, selection must favor wider practical margin plus diagnostic observability to correlate counters with waveform events.

Input Junction complexity (T-branches / star-like coupling)

Reflection pile-up is more likely; prioritize signal improvement / ringing reduction and tighter timing symmetry before relying on parameter tweaks.

Input Serviceability & field debugging

If faults are intermittent, selection should include fault reporting, counter visibility, and timestamp-friendly event capture.

Feature → Why → When → Validation Hook (with example PNs)

Each feature is listed only because it can reduce sensitivity to stub length / junction reflections or improve the ability to prove that topology is the root cause.

1) Adjustable Slew / Edge Shaping (reduce ringing sensitivity)

Optional (use with timing discipline)

Why it matters

Slowing edges can reduce visible ringing and repeated threshold crossings that turn reflection energy into bit decision errors. It must still satisfy the timing window in the target speed tier.

When to pay for it

  • Ring/overshoot is strong at P2 (T-junction) or at the worst node P3.
  • Topology cannot be changed quickly (stub length close to budget limit).
  • Pass/fail flips with small harness changes (classic topology sensitivity).

Validation hook (placeholders)

  • Probe: P2 + P3 capture dominant edge + post-edge ringing.
  • Log: error counters within the same time window as the capture.
  • Pass: ringing settles before sample window by X% of T_bit (X placeholder) and counters stay < Y per N minutes (Y/N placeholders).
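Once the settle time is read off the P2/P3 capture, the pass check is mechanical. A sketch with placeholder timing values; the 5 Mbit/s bit time, sample point, and guard fraction are illustrative, not normative:

```python
# Sketch of the pass check: post-edge ringing must settle at least X% of T_bit
# before the sample point. All values are placeholders; the measured settle
# time would come from the P2/P3 capture.

def ring_clear(settle_t_s: float, sample_point_s: float,
               t_bit_s: float, guard_fraction: float) -> bool:
    """True if ringing settles >= guard_fraction * t_bit before the sample point."""
    return (sample_point_s - settle_t_s) >= guard_fraction * t_bit_s

t_bit = 2e-7                      # 200 ns bit (5 Mbit/s data phase, placeholder)
sample_point = 0.75 * t_bit       # 150 ns into the bit (placeholder)
measured_settle = 9.0e-8          # 90 ns measured settle time (placeholder)
print("window clear:", ring_clear(measured_settle, sample_point, t_bit, 0.20))
```

The same helper applied before and after a slew change shows whether edge shaping actually bought guard margin or merely moved the ringing.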

Example part numbers (edge/slew control capability)

  • TI SN65HVD234-Q1 (3.3 V CAN transceiver family with adjustable driver slew-rate control)
  • Microchip MCP2551 (RS-pin slope-control mode for CAN edges)
  • Analog Devices LTM2889 (isolated CAN FD µModule; RS pin used for variable slew-rate control)

2) Signal Improvement (SIC) / Ringing Reduction (for complex harness)

Strong lever for T-branches

Why it matters

In stub-heavy or junction-dense networks, ringing can dominate the decision margin. SIC-capable transceivers actively reduce ringing effects and improve timing symmetry, making larger topologies more reliable at higher data rates.

When to pay for it

  • HS phase fails only on real harness (bench passes).
  • T-junctions/stubs cannot be fully eliminated (serviceability constraints).
  • Small changes in branch length or connector state cause error-rate swings.

Validation hook (placeholders)

  • Probe: P2 (junction) and P3 (worst node) to confirm ringing reduction.
  • Stress: HS-only traffic + worst-case load pattern for Z minutes (Z placeholder).
  • Pass: stable counters < Y per interval and no topology-sensitive flapping (Y placeholder).

Example part numbers (SIC-capable)

  • NXP TJA1462 (CAN FD SIC; reduces ringing in larger topologies; tighter timing symmetry)
  • NXP TJA1465 (CAN FD SIC + partial networking / selective wake)
  • TI TCAN1462-Q1 (CAN FD transceiver with signal improvement capability)

3) Tight Timing Symmetry / CAN FD Rate Headroom (keep reflections out of the window)

Required for higher tiers

Why it matters

As bit time shrinks, a small shift in loop delay symmetry can move the effective sampling margin. Better symmetry reduces “accidental” overlap between reflection return and sampling decision.

When to pay for it

  • HS tier targets ≥ Tier-B or Tier-C (placeholders).
  • Topology includes multiple junctions and long harness segments.
  • Temperature/aging variance must be tolerated without re-tuning.

Validation hook (placeholders)

  • Sweep: run across harness variants (worst-case geometry) and temperature corners.
  • Observe: margin stability (no periodic error bursts) for Z minutes (Z placeholder).
  • Pass: HS error bursts are absent; counters remain under Y threshold (Y placeholder).

Example part numbers (symmetry / FD headroom)

  • NXP TJA1462 (tighter bit timing symmetry; enables higher FD rates in more complex topologies)
  • Infineon TLE9251VSJ, TLE9351VSJ (CAN FD transceiver family variants highlighted for loop delay symmetry up to 5 Mbit/s)

4) Diagnostics & Partial Networking (debug topology-sensitive faults)

For intermittent / field issues

Why it matters

Topology issues often appear as rare bursts. Diagnostics and PN-capable transceivers can improve observability and power-state behavior, helping attribute events to bus activity versus local conditions.

When to pay for it

  • Intermittent bus disturbances require root-cause proof, not guesses.
  • Low-power modes are used and false wake must be minimized and attributable.
  • Field logs must be actionable with minimal lab rework.

Validation hook (placeholders)

  • Log schema: event type + counter + timestamp + node role (placeholders).
  • Pass: events correlate with captures at P2/P3; false wake rate < X per day (X placeholder).

Example part numbers (PN / diagnostics)

  • NXP TJA1145 (high-speed CAN transceiver for partial networking; includes fail-safe and diagnostic features)
  • NXP TJA1145A (PN transceiver variant supporting CAN FD fast phase up to 5 Mbit/s)
  • NXP TJA1465 (CAN FD SIC + selective wake / PN)

5) Controller-Side Timing Flexibility (bit timing / sample point discipline)

Required if tuning is unavoidable

Why it matters

When reflection return time is close to the sampling window edge, the controller’s bit timing configuration becomes the last lever—but it should be used as a controlled margin exercise, not a band-aid.

When to pay for it

  • Multiple harness variants must be supported without hardware respin.
  • HS fast phase is used and margin is sensitive to small shifts.
  • Need SPI-attached controller channel expansion (gateway / multi-bus nodes).

Validation hook (placeholders)

  • Method: define timing candidates (A/B/C) and test across geometry extremes.
  • Pass: sample-point margin remains > X% across variants; no burst errors during Z-minute stress (X/Z placeholders).
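The candidate method can be sketched as a worst-case filter over per-variant margins. The margins below are invented placeholder measurements, not data from any real harness:

```python
# Sketch of the A/B/C candidate sweep: each bit-timing candidate is judged by
# its worst-case sample-point margin across geometry extremes.
# Margins are placeholder measurements (% of T_bit), not computed values.

candidates = {
    "A": [12.0, 9.5, 11.0],
    "B": [15.0, 14.2, 13.8],
    "C": [18.0, 6.0, 17.5],   # good nominal, collapses on one variant
}

MARGIN_MIN = 10.0              # X% placeholder threshold

def passing(margins) -> bool:
    """A candidate passes only if its worst variant keeps the margin."""
    return min(margins) > MARGIN_MIN

survivors = [name for name, margins in candidates.items() if passing(margins)]
print("candidates passing across all variants:", survivors)
```

Note how candidate C wins on the nominal harness but fails the sweep; judging by the worst variant is what makes the tuned setting robust rather than lucky.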

Example part numbers (controller/bridge ICs)

  • Microchip MCP2518FD (external CAN FD controller with SPI interface)
  • Microchip MCP2517FD (external CAN FD controller with SPI interface)
  • TI TCAN4550-Q1 (CAN FD controller with integrated CAN FD transceiver)

Diagram — Selection Funnel (Topology → Features → Validation)

A single funnel keeps this page within scope: it does not prescribe termination components; it links topology risk to silicon features and a measurement plan.

[Diagram] Inputs (speed tier: window shrinks · geometry: variance exists · junctions: T/star risk · service: need for proof) → Required (tight symmetry: stable margin · FD headroom: tier support · timing control: disciplined tuning) → Nice-to-have (SIC/ringing: T-branch help · slew shaping: edge discipline · diag/PN: field proof) → Validation (probe P1–P3 at worst points · HS stress for Z minutes · log counters in the same window · pass X/Y placeholders).

Note: “Nice-to-have” features must never replace topology fixes. If validation shows reflection energy still returns inside the sampling window, shorten stubs, reduce T-branches, or segment the harness first.


H2-14 · FAQs (Stub & Harness Length Only)

Scope is strictly limited to stub / spur / T-branch / hub effects and harness length tolerance. Each answer is a fixed, data-ready 4-line structure with placeholders (X/Y/Z/N).

Placeholders: X% = sampling window fraction · Y = error threshold · N = time interval · Z = harness variants

Reusable check: compute t_roundtrip ≈ 2·L_stub/v for the worst spur and verify that the reflection return does not overlap the decision window around the sample point.
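That reusable check can be wrapped as a small helper. A sketch with placeholder geometry and rate values; the 0.66·c velocity and X% = 10 % window fraction are assumptions for illustration only:

```python
# The reusable FAQ check: t_roundtrip = 2 * L_stub / v, compared against
# X% of the data-phase bit time. All numbers are placeholders.

def t_roundtrip_s(l_stub_m: float, v_m_per_s: float) -> float:
    return 2.0 * l_stub_m / v_m_per_s

def overlaps_window(l_stub_m: float, v_m_per_s: float,
                    t_bit_s: float, window_fraction: float) -> bool:
    """True if the reflection return eats into the allowed window fraction."""
    return t_roundtrip_s(l_stub_m, v_m_per_s) >= window_fraction * t_bit_s

# Placeholders: 5 m spur, v ~ 0.66 c, 2 Mbit/s data phase, X% = 10 %.
v = 0.66 * 3.0e8
risk = overlaps_window(5.0, v, 1.0 / 2_000_000, 0.10)
print("reflection overlaps decision window:", risk)
```

Several of the FAQ pass criteria below are just this predicate evaluated per spur, per harness variant, and per temperature corner.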

Classic CAN is OK, but the FD data phase shows CRC spikes on the real harness — is the stub return time inside the sample window?
Symptom pattern: HS-only errors that do not reproduce on a short bench harness.
Likely cause: A longer spur/T-branch on the vehicle harness moves the reflection return into the FD decision window.
Quick check: Estimate t_roundtrip ≈ 2·L_stub/v for the worst spur and compare it to X%·T_bit of the FD data phase.
Fix: Shorten the longest spur, reduce/relocate the T-junction, or segment the harness so the HS phase sees fewer reflection sources.
Pass criteria: For all Z harness variants: t_roundtrip < X%·T_bit and CRC errors < Y per N minutes under HS-only stress.
Works on the bench, fails in the vehicle — did the harness add an extra T-branch or a longer spur?
Symptom pattern: failure appears only after installing the production harness.
Likely cause: The in-vehicle harness introduces extra junctions or longer drops, increasing reflection pile-up at HS rates.
Quick check: Compare bench vs vehicle topology: count T-junctions, identify the longest spur, and verify whether the “worst node” moved.
Fix: Remove unnecessary T-branches, relocate junctions closer to nodes (shorter drops), or segment the harness to reduce HS reflection sources.
Pass criteria: Across Z vehicle harness builds: HS stress produces < Y errors per N minutes without topology-sensitive “flapping”.
Only one node causes errors — is its drop line longer or routed near a noisy bundle?
Symptom pattern: errors cluster at a single ECU while others remain clean.
Likely cause: That node has a longer effective spur or stronger coupling that increases ringing and threshold re-crossing locally.
Quick check: Compare P3 captures at the failing node vs a healthy node: look for longer post-edge ringing duration and correlate with the same-time error bursts.
Fix: Shorten that node’s drop, move its junction closer, and reroute the spur away from noisy bundles to reduce effective reflection/coupling.
Pass criteria: Failing node shows ringing settled before the decision window by X%·T_bit and errors < Y per N minutes.
Errors appear only at certain ambient temperatures — did a velocity/edge-rate shift move reflection timing?
Symptom pattern: room-temp pass, hot/cold fail (or vice versa) at HS rates.
Likely cause: Temperature shifts propagation velocity and effective edge behavior, moving the reflection return relative to the sample point.
Quick check: Repeat the same HS stress across temperature corners and check whether “error bursts” align with a reflection feature moving closer to the decision window.
Fix: Increase topology margin: shorten the longest spur, reduce junction density, and ensure worst-case geometry stays away from the sampling window at all corners.
Pass criteria: For hot/cold corners: t_roundtrip < X%·T_bit and errors < Y per N minutes under identical HS stress.
A sample-point tweak helps briefly but is not robust — is the reflection still overlapping the decision threshold crossing?
Symptom pattern: “works after tuning” but fails with small harness or temperature variation.
Likely cause: Tuning shifted the window edge, but the reflection energy still crosses the decision threshold within that window under variance.
Quick check: Validate the tuned setting across Z harness variants and corners; if failures migrate, reflection timing is still near the window boundary.
Fix: Treat tuning as secondary: first increase topology margin (shorter spur / fewer T-branches / segmentation), then re-tune only to center a safe window.
Pass criteria: After topology fixes: stable operation with < Y errors per N minutes across Z variants without re-tuning per harness.
Adding a gateway/segment fixed it — was the original trunk too long or the junction count too high?
Symptom pattern: segmentation improves HS margin without changing nodes.
Likely cause: A long trunk plus many junctions increased reflection opportunities; segmentation reduced the effective reflection domain seen at HS rates.
Quick check: Compare pre/post segmentation: trunk length per segment and junction count per segment; check if the “worst node” moved or disappeared.
Fix: Keep segments short enough for the target tier and design each segment with controlled spur lengths and minimal T-branch pile-up.
Pass criteria: Each segment meets t_roundtrip < X%·T_bit for its worst spur and stays under Y errors per N minutes in HS stress.
Two nodes at the same distance behave differently — are the connector and node input creating an effective stub?
Symptom pattern: distance looks equal, but ringing and errors differ by node.
Likely cause: Connector geometry and input loading change the local impedance step, creating different reflection strength (effective stub behavior) even at similar distances.
Quick check: Capture at each node’s P3 and compare ringing amplitude/duration; if one node shows stronger threshold re-crossing, treat it as a higher-risk effective spur.
Fix: Shorten/straighten that node’s drop and reduce local discontinuities (layout/connector routing choices); if needed, segment so the node is not at the worst HS position.
Pass criteria: Node-to-node variation stays within X% margin and all nodes remain under Y errors per N minutes during HS stress.
Star topology looks clean at low speed but fails at high speed — is reflection pile-up happening at the hub?
Symptom pattern: stable in classic phase, unstable only when HS phase is enabled.
Likely cause: The hub becomes a reflection aggregation point; multiple branch returns overlap near the HS decision window.
Quick check: Probe close to the hub-side junction (P2 equivalent) and at the worst branch node (P3) to see whether ringing persists into the sampling window at HS.
Fix: Reduce hub branch length spread, shorten the longest branch, and/or segment the star into smaller HS domains to cut pile-up.
Pass criteria: For all branches: ringing settles before the decision window by X%·T_bit and HS errors < Y per N minutes.
Shortening one spur helps but shifts failures elsewhere — did multiple reflections move into another node’s window?
Symptom pattern: fix at one node causes a new worst node to emerge.
Likely cause: The network has multiple reflection sources; changing one spur shifts the timing of overlap so another node becomes window-critical.
Quick check: Re-identify the “worst node” after the change and confirm whether its reflection feature now aligns closer to the sample window than before.
Fix: Apply a network-level approach: reduce overall junction density, shorten all long drops above the budget, or segment so no single node becomes window-critical.
Pass criteria: After changes, no node becomes worse than X% margin, and the entire network stays under Y errors per N minutes across Z variants.
Sporadic error bursts with no obvious EMC event — could topology resonance be triggered by certain bit patterns/edge density?
Symptom pattern: rare bursts under specific traffic profiles, not random noise.
Likely cause: Edge-dense traffic can repeatedly excite reflection pile-up at junctions, turning marginal window overlap into bursty errors.
Quick check: Reproduce with two stimuli (edge-dense vs edge-sparse) and confirm whether the burst aligns with stronger ringing near P2/P3 in the same time window.
Fix: Reduce reflection opportunities by shortening critical drops and reducing T-branch pile-up; validate using the worst-case stimulus that triggers the bursts.
Pass criteria: Under the burst-triggering stimulus: 0 burst events over N minutes and total errors < Y.
SIC improves margin — was the prior issue mainly asymmetry/edge integrity under heavy load and complex stubs?
Symptom pattern: same harness becomes stable when using a signal-improvement transceiver.
Likely cause: The network is junction-dense or heavily loaded, and ringing/asymmetry pushes the effective margin too close to the HS decision window.
Quick check: Compare waveforms and counters with/without SIC on the same harness; confirm reduced ringing persistence at P2/P3 during HS stress.
Fix: Use SIC where topology cannot be simplified enough; still enforce the stub budget and T-branch control so margin does not depend on a single feature.
Pass criteria: With worst-case load: errors < Y per N minutes and the waveform shows window-clear settling by X%·T_bit.
Passes on a lab harness but fails on a service harness — is production harness length tolerance out of budget?
Symptom pattern: lab harness is stable; field/service harness shows intermittent HS errors.
Likely cause: Service harness variants exceed the stub-length or junction-location tolerance assumed in the original budget.
Quick check: Measure/record the longest spur and junction positions on service harness samples and compare to the budgeted maximum L_stub_max (placeholder).
Fix: Update the topology budget to include production tolerance and enforce it with build checks; if tolerance cannot be tightened, segment or redesign to restore HS margin.
Pass criteria: For Z production/service harness samples: all critical spurs satisfy t_roundtrip < X%·T_bit and the system stays under Y errors per N minutes.