Stub & Harness Length Design Rules for CAN FD/XL
H2-3
Topology Taxonomy: Where Stubs Come From
Goal: visually locate stub sources and dominant pile-up junctions before any timing/window check.
Intent
One-glance recognition: which structure creates stubs and where reflections stack.
Deliverable: Risk ranking rules
Output: Dominant pile-up points
No OEM harness clauses
Scope guard: topology-only patterns; no OEM-specific routing constraints, no connector supplier clauses, no part-number rules.
Topology atlas (what creates stubs)
Common
Trunk + drop stubs
- Stub source: every drop from trunk junction → node boundary.
- Pile-up point: node-dense junction clusters (many small returns stacking).
- Risk escalator: higher-speed phases tighten tolerance to the same delay.
High-risk
T-branch junction
- Stub source: multiple legs created from one discontinuity.
- Pile-up point: the junction itself (multi-path interaction hub).
- Risk escalator: small geometry changes can shift overlap timing into the decision region.
Centralized
Star hub / coupler
- Stub source: each hub-to-node branch behaves as a stub path.
- Pile-up point: the hub boundary (many returns converge).
- Risk escalator: interaction complexity rises quickly with node count.
Trade-off
Segmentation / gateways
- Stub source: drops remain, but long drops can be shortened.
- Pile-up point: added boundaries (interfaces/junctions) create more discontinuities.
- Risk escalator: risk shifts from long stubs → many boundaries; must re-check timing overlap.
Deliverable: Topology risk ranking rules
LOW
Sparse junctions, short drops
Pile-up is localized. The dominant failure mode appears mainly when speed tier increases and the sampling window tightens.
MEDIUM
Node-dense trunk + drops
Many small discontinuities stack near dense clusters. Sensitivity rises sharply in FD/XL high-speed phases.
HIGH
T-branch / star hub / heavy boundaries
One junction/hub can dominate system margin because multiple reflection paths interact at the same point.
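The ranking rules above can be sketched as a small decision function. This is a minimal sketch; the feature names and the "sparse"/"dense" split are illustrative assumptions, not standard thresholds.

```python
def rank_topology_risk(has_t_branch_or_hub: bool, junction_density: str) -> str:
    """Map the LOW / MEDIUM / HIGH cards to a topology risk grade.

    junction_density is "sparse" or "dense" (hypothetical labels).
    """
    if has_t_branch_or_hub:
        # One junction/hub can dominate system margin (HIGH card).
        return "HIGH"
    if junction_density == "dense":
        # Many small discontinuities stack near dense clusters (MEDIUM card).
        return "MEDIUM"
    # Sparse junctions, short drops: pile-up stays localized (LOW card).
    return "LOW"
```

The same grading feeds the Design Gate in H2-11, where the pass criterion is a risk grade at or below a placeholder limit.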
H2-4
The One Metric That Matters: Reflection Return Time vs Sampling Window
A topology is usable only if the reflected return does not land inside the receiver’s decision-sensitive time region.
Intent
Convert “will it work?” into a timing-overlap decision.
All thresholds = placeholders
No standard numbers
No deep timing config
Scope guard: use bit time and a sampling window concept only. Keep X/Y placeholders for later fill-in.
Core model (physics → decision)
1) Round-trip return time is the governing quantity
Reflection leaves a junction, reaches the far boundary, then returns. Timing overlap depends on round-trip delay, not one-way.
2) Sampling is a sensitive time region
Decisions are most fragile around a bounded region near the nominal sample time. A return inside this region reduces margin disproportionately.
3) Higher speed tightens overlap tolerance
Same physical stub keeps roughly the same return time, while T_bit shrinks at higher rates. Overlap likelihood rises in FD/XL phases.
Deliverable: reusable formula (placeholders)
Formula
Round-trip return time
t_roundtrip ≈ 2 × L_stub / v
L_stub = junction → node boundary (per H2-1) · v = harness propagation velocity (parameter)
Pass
Time-placement criterion
t_roundtrip < X% × T_bit
X = allowable fraction placeholder · tighten X for higher-speed phases (FD/XL)
Practical rule: start from dominant pile-up points (H2-3), then apply this overlap test at the most timing-sensitive receiver.
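The round-trip formula and the time-placement criterion combine into one check. A minimal sketch; v, X, and the example numbers are placeholders, not standard values.

```python
def round_trip_time(l_stub_m: float, v_m_per_s: float) -> float:
    """t_roundtrip ≈ 2 × L_stub / v (junction → node boundary and back)."""
    return 2.0 * l_stub_m / v_m_per_s

def passes_window(t_roundtrip_s: float, t_bit_s: float, x_fraction: float) -> bool:
    """Time-placement criterion: t_roundtrip < X% × T_bit."""
    return t_roundtrip_s < x_fraction * t_bit_s

# Illustrative numbers only: a 0.3 m stub with v ≈ 2e8 m/s returns in 3 ns.
t_ret = round_trip_time(0.3, 2.0e8)
ok = passes_window(t_ret, t_bit_s=500e-9, x_fraction=0.10)
```

Tightening X for higher-speed phases makes the same stub fail the check sooner, which is exactly the FD/XL escalation described above.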
H2-5
Classic vs CAN FD vs CAN XL: Why the Rules Tighten
Same harness can be stable in a slower phase but fail in higher-speed phases because the sampling-sensitive time window shrinks while reflection return time stays roughly constant.
Intent
Explain the root cause of “Classic works, FD/XL breaks” using only PHY timing and edge-sensitivity logic.
Scope guard: no frame structure
Scope guard: no numeric limits
Focus: window overlap + edge sensitivity
This section treats “tightening rules” as a time-placement problem: reflection return is a largely physical constant for a given stub, while the bit time and decision-sensitive window shrink with higher speed tiers.
Why higher tiers become fragile (PHY-only)
1) Return time stays “physical”
For a given harness geometry, the dominant reflection return time is mainly set by stub length and propagation velocity. It does not automatically shrink just because the bus switches to a faster tier.
2) The decision window shrinks with tier speed
As bit time shortens, the receiver’s decision-sensitive region occupies a larger fraction of the bit. Returns that were harmless at a slower tier can land inside the sensitive region at a faster tier.
3) Edge distortions scale into sampling errors
Faster tiers raise sensitivity to overshoot, ringing, and repeated threshold crossings because the timing margin is smaller. The same distortion becomes a larger fraction of the available window.
4) “Low-speed OK” does not prove “high-speed OK”
A stable low-speed phase only validates the low-speed decision window. High-speed phases require a separate overlap check: whether the reflection return falls outside the sensitive window for the target tier.
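The root cause reduces to one line of arithmetic: the return time is fixed by geometry, so its fraction of the bit grows linearly with bit rate. The rates below are illustrative, not tier definitions.

```python
def return_fraction(t_roundtrip_s: float, bitrate_bps: float) -> float:
    """Fraction of one bit time occupied by a fixed reflection return."""
    # T_bit = 1 / bitrate, so fraction = t_roundtrip / T_bit = t_roundtrip × bitrate.
    return t_roundtrip_s * bitrate_bps

# The same 3 ns return at three hypothetical rates:
# 500 kbit/s → 0.15% of the bit · 5 Mbit/s → 1.5% · 10 Mbit/s → 3%
```

Nothing about the harness changed between these three lines; only the denominator shrank.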
Deliverable: Speed tier → design constraints (placeholders)
Tier A
Baseline timing tolerance
What changes: longer Tbit, wider decision-sensitive window (concept-level).
What breaks first: rare overlap near dense junction clusters.
Must verify: return stays outside window by X% margin (X placeholder).
Tier B
Shrinking window, rising sensitivity
What changes: shorter Tbit, narrower window around the sample region.
What breaks first: marginal stubs at T-branches / hub boundaries.
Must verify: overlap check on the worst-case receiver + worst-case junction.
Tier C
Tightest window, strict placement
What changes: fastest transitions, smallest timing margin.
What breaks first: returns landing inside the window; small geometry changes can flip pass/fail.
Must verify: strict return placement + worst-case guardband placeholders (temperature/aging/batch).
H2-6
Practical Design Rules: Stub Length Budget (with Placeholders)
A repeatable method to budget stub length using inputs → compute → decide, with all numeric thresholds kept as placeholders for later standard/experience fill-in.
Intent
Provide an executable budgeting workflow rather than ad-hoc “rules of thumb”.
No hard numbers (placeholders only)
Worst-case must be explicit
Output includes “must change topology”
Scope guard: this section defines the structure and accounting. Numeric limits, standard-specific tables, and component-based fixes are intentionally left out.
Budget workflow (repeatable)
Step 1
Define inputs with worst-case discipline
- Target tier: A / B / C (placeholder)
- Tbit: placeholder
- Window fraction X%: placeholder
- Propagation velocity v: placeholder
- Worst-case scenario fields: harness run, junction density, connector count, temperature/aging/batch guardband (all placeholders)
Step 2
Compute return time and max stub
t_roundtrip ≈ 2 × L_stub / v
Use L_stub as the junction → node boundary length. v is harness-dependent.
Window budget = X% × T_bit
X% is a placeholder for the allowed overlap-free timing fraction.
L_stub_max ≈ (v × X% × T_bit) / 2
Placeholder output. Apply guardband placeholders before final acceptance.
Step 3
Decide: PASS / MARGINAL / FAIL
PASS
t_roundtrip < X% × T_bit (placeholders)
The return is outside the sensitive window with guardband placeholders applied.
MARGINAL
Near the threshold (placeholders)
Requires worst-case harness validation and tighter guardband fields; geometry changes can flip results.
FAIL
Overlap expected (placeholders)
Must change topology (shorten drops, remove T-branches, reduce dense junctions, or segment runs).
Deliverable: Stub Budget Worksheet (fields)
Worksheet
Copy-ready field set (placeholders)
Target tier
A / B / C (placeholder)
T_bit
T_bit = [placeholder]
X% window
X% = [placeholder]
Propagation velocity
v = [placeholder]
Computed L_stub_max
L_stub_max = [placeholder]
Worst-case guardband
Temp / aging / batch = [placeholders]
Decision
PASS
MARGINAL
FAIL
The worksheet is designed for consistent accounting: the same geometry can pass Tier A but fail Tier C once the overlap condition is evaluated under worst-case guardband placeholders.
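The worksheet and its three-way decision can be sketched as code. This assumes a simple multiplicative guardband and a ±10% marginal band around the limit; both are hypothetical modeling choices for illustration, not standard rules, and all values below are placeholders.

```python
from dataclasses import dataclass

@dataclass
class StubBudget:
    t_bit_s: float     # T_bit = [placeholder]
    x_fraction: float  # X% window = [placeholder]
    v_m_per_s: float   # propagation velocity v = [placeholder]
    guardband: float   # combined temp/aging/batch derating, 0..1 (assumption)

    def l_stub_max_m(self) -> float:
        """L_stub_max ≈ (v × X% × T_bit) / 2, derated by the guardband."""
        return (self.v_m_per_s * self.x_fraction * self.t_bit_s / 2.0) * self.guardband

    def decide(self, l_stub_m: float, marginal_band: float = 0.10) -> str:
        """PASS / MARGINAL / FAIL against the derated limit."""
        limit = self.l_stub_max_m()
        if l_stub_m < limit * (1.0 - marginal_band):
            return "PASS"
        if l_stub_m <= limit:
            return "MARGINAL"  # near threshold: worst-case harness validation required
        return "FAIL"          # must change topology

# Illustrative instantiation (placeholder values, not limits):
budget = StubBudget(t_bit_s=500e-9, x_fraction=0.10, v_m_per_s=2.0e8, guardband=0.8)
```

Because the decision depends only on the worksheet fields, the same geometry can pass one tier and fail another simply by swapping the T_bit and X% placeholders.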
H2-7
Controlling T-Branches: How to Avoid Reflection Pile-Up
T-branches behave like multi-port discontinuities: reflection strength rises and multiple returns can stack and drift in phase—high-speed tiers are the first to lose sampling margin.
Intent
Explain why T-branches are harder than a single stub, and how to control risk using structure-first mitigation.
Scope guard: no termination implementation
Scope guard: component details deferred
Refer to: Termination page (details)
This section focuses on geometry and validation logic. Termination networks and EMC components are intentionally not expanded here to prevent cross-page overlap.
Why T-branches amplify instability (PHY-level)
1) A stronger discontinuity (multi-port junction)
A T-junction behaves like a multi-port node, not a simple two-port boundary. The effective impedance seen by the trunk varies with branch geometry and node attachment, so the reflection source is typically “harder” than a single drop.
2) Reflection pile-up (returns can re-enter other branches)
Energy reflected from one branch returns to the junction and can partially launch into another branch. This creates multiple return paths with different round-trip times, so the waveform can show stacked “bumps” rather than a single clean echo.
3) Phase drift into the sampling-sensitive window
As speed tier increases, the safe time window shrinks. A pile-up return that is “outside” the sensitive region at lower tiers can drift into the window at higher tiers, turning a harness that “mostly works” into a harness that fails deterministically under worst-case conditions.
Mitigation strategy hierarchy (structure-first)
Layer 1
Topology
- Prefer patterns that remove T-junctions where possible (segmentation or daisy-chain patterns are covered in H2-8).
- If a T is unavoidable, limit branch fan-out and avoid dense junction clustering (placeholders).
- Treat the junction as a design object: document its location, branch count, and intended tier (placeholders).
Layer 2
Placement
- Move the junction close to a node so one branch becomes a very short tap (stub collapses to a controlled variable).
- Avoid placing a T on the critical high-tier trunk segment; isolate high-tier paths from long drops (concept-level).
- Keep branch geometry consistent across builds to prevent phase drift (temperature/aging/batch placeholders managed in budget).
Layer 3
Validation
- Use the same acceptance logic as the stub budget: return placement vs window (X% placeholders).
- Validate on real harness under worst-case scenario fields (temperature/aging/connector state placeholders).
- Report results as PASS / MARGINAL / FAIL, not “seems OK”.
Deliverable: T-branch mitigation checklist (Topology / Placement / Validation)
Topology checks
- List all T-junctions and label intended tier (A/B/C placeholders).
- Reduce branch count per junction where possible (placeholders).
- Avoid stacking multiple junctions within a short trunk region (placeholder distance rule).
Placement checks
- Move junction toward a node to create a short tap branch (target short-tap placeholder).
- Keep long drops off the high-tier trunk segment (tier boundary placeholder).
- Document connector count near the junction (placeholder) to avoid hidden discontinuities.
Validation checks
- Acceptance: no overlap between return and decision-sensitive window (X% placeholders).
- Exercise worst-case harness scenario fields (temperature/aging/batch placeholders).
- Record: PASS / MARGINAL / FAIL per junction, not only per vehicle-level result.
Termination network implementation details should be handled in the dedicated Termination page to prevent cross-topic duplication.
H2-8
Harness Patterns That Scale: Daisy-Chain, Segmentation, Gatewaying
Scaling is not about a single “short stub” but about choosing a pattern that makes stub limits, junction density, and validation boundaries predictable.
Intent
Provide physical-structure patterns to keep stubs controllable as node count grows.
Scope guard: no gateway protocol / routing
Focus: physical-domain isolation only
Output: selection matrix (card format)
Each pattern “moves risk”: daisy-chain pushes risk to trunk length, segmentation pushes risk to junction count, and gatewaying concentrates responsibility at a domain boundary.
Three scalable patterns (risk relocation view)
Pattern
Daisy-chain
Controls
Stub length collapses (very short drops).
Moves risk to
Longer trunk path and end-to-end worst-case.
Validation focus
Prove the farthest path stays within window criteria (placeholders).
Pattern
Segmentation
Controls
Shorter segments → easier per-segment stub budgeting.
Moves risk to
More junctions → more discontinuities to manage.
Validation focus
PASS per segment + worst-case segment combination (placeholders).
Pattern
Gatewaying (physical domain split)
Controls
Isolates fast tier from long low-tier drops by separating physical domains.
Moves risk to
Domain boundary correctness (fast domain must stay “clean”).
Validation focus
Prove fast domain margin independently (window criteria placeholders).
Deliverable: Pattern selection matrix (card format)
Selection funnel (physical)
Filter by speed tier → node count → harness length → serviceability, then validate with the same window logic (placeholders).
DAISY
Short stubs, longer trunk
Node count
[placeholder]
Speed tier
A/B/C (placeholder)
Harness length
[placeholder]
Serviceability
[placeholder]
SEGMENT
Short segments, more junctions
Node count
[placeholder]
Speed tier
A/B/C (placeholder)
Harness length
[placeholder]
Serviceability
[placeholder]
GATEWAY
Physical-domain separation
Node count
[placeholder]
Speed tier
A/B/C (placeholder)
Harness length
[placeholder]
Serviceability
[placeholder]
The placeholders are intentionally kept for later standard/experience fill-in while preserving a consistent decision structure.
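The selection funnel can be sketched as a first-cut filter. The tier-C rule, the node-count cutoff, and the long-harness flag are illustrative placeholders; serviceability is left out of the sketch and would be a later filter stage.

```python
def select_pattern(tier: str, node_count: int, long_harness: bool) -> str:
    """Walk the funnel: speed tier first, then node count / harness length."""
    if tier == "C":
        # Fastest tier: isolate the fast domain at a physical boundary.
        return "GATEWAY"
    if node_count > 8 or long_harness:
        # Many nodes or long runs: budget per segment instead.
        return "SEGMENT"
    # Otherwise collapse stubs with a daisy-chain and validate the trunk.
    return "DAISY"
```

Whatever the funnel returns, the pattern is then validated with the same window logic as H2-4; the funnel only relocates risk, it does not remove the overlap check.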
H2-9
Validation & Measurement: What to Probe on Real Harness
Validation requires a closed loop: probe plan → stimulus → time-aligned capture → error-counter correlation, executed at worst-case harness points.
Intent
Show where to probe, how to excite the harness, and how to prove correlation between waveform anomalies and time-windowed error counters.
Scope guard: no instrument brands/models
Scope guard: no deep EMC standards
Output: bring-up test plan (placeholders)
“Scope looks fine” is often a time-window mismatch. Always align capture windows with counter windows and stress the highest tier phase.
Probe plan: three mandatory locations
P1
Far end (worst accumulated path)
Maximizes trunk accumulation and return visibility. Use it to validate end-to-end settling and window margin at the highest tier phase.
P2
T junction (pile-up source)
The junction is the multi-port discontinuity. Use it to detect stacked returns and phase drift that can intrude into decision-sensitive windows.
P3
Worst node (geometry-ranked)
Identify the worst node by geometry: longest stub, densest connectors, and most uncertain attachment. Validate repeatability under worst-case conditions.
Stimulus: excite topology sensitivity (not “luck”)
HS-only stress
Concentrate activity on the highest speed phase to shrink the safe window and reveal intruding returns.
Placeholders
Tier: [A/B/C] · Burst: [X] · Duration: [Y]
Edge-sensitivity sweep
If adjustable edge conditions exist, sweep them to separate topology-limited behavior from configuration-limited behavior.
Placeholders
Sweep: [Setting] · Steps: [N] · Hold time: [Y]
Path perturbation
Controlled branch and connector state changes expose topology sensitivity. Record exactly what changed and when.
Placeholders
Branch: [A/B] · Connector state: [X] · Repeat: [N]
Observables: four must-check views (concept-level)
Overshoot / ringing
A direct indicator of discontinuity and stacked returns, most visible near P2 and amplified at P1 under HS-only stress.
Zero-crossing jitter
Ringing can create repeated threshold crossings. If these crossings approach the decision window, margin collapses quickly at higher tiers.
Edge settling
Use a consistent settling view: the signal must be stable before the decision-sensitive window. Placeholders define the window boundary.
Counter correlation
Align waveform capture windows with error-counter windows. If anomalies and counter spikes share the same window, topology is a primary suspect.
Correlation loop & pass criteria (placeholders, fixed structure)
Correlation workflow (fixed)
Trigger → Capture window [pre/post = X/Y] → Counter window [aligned = Z] → Decision (same-window = topology suspect)
Pass criteria (placeholders)
- Waveform: no return overlaps the decision-sensitive window (X% of T_bit placeholder).
- Counters: error rate ≤ X per Y minutes under HS-only stress (placeholders).
- Repeatability: PASS for N repeated runs at P1/P2/P3 (placeholder).
Keep the structure constant; fill placeholders later with standard- or program-specific values without changing the measurement logic.
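The three pass criteria combine into one boolean gate. A sketch that keeps every threshold as a placeholder argument, matching the fixed structure above.

```python
def bring_up_pass(no_window_overlap: bool,
                  errors: int, window_min: float,      # counted under HS-only stress
                  max_errors: int, per_minutes: float,  # "X per Y minutes" placeholders
                  runs_passed: int, runs_required: int  # repeatability N placeholder
                  ) -> bool:
    """PASS only if waveform, counter-rate, and repeatability criteria all hold."""
    rate_ok = (errors / window_min) <= (max_errors / per_minutes)
    return no_window_overlap and rate_ok and runs_passed >= runs_required
```

A single failed criterion is a failed gate; there is deliberately no "mostly OK" outcome.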
Deliverable: Bring-up test plan (step-by-step)
Step 0
Intent & tier
Define target tier (A/B/C placeholder) and the stress mode (HS-only / full-phase placeholder). Record the harness configuration ID.
Step 1
Probe map
Assign probes to P1 (far end), P2 (T junction), and P3 (worst node). Use a consistent naming scheme for screenshots and logs.
Step 2
Stimulus
Run HS-only stress, edge-sensitivity sweep (if adjustable), and controlled path perturbation. Keep placeholders for settings and duration.
Step 3
Trigger & alignment
Trigger on counter threshold or stress boundary. Align capture windows with counter windows; record pre/post placeholders.
Step 4
Observe
Check overshoot/ringing, zero-crossing jitter, edge settling, and counter correlation using the same time-window logic.
Step 5
Pass criteria
Apply waveform window criteria (X% placeholder), counter rate criteria (X/Y placeholders), and repeatability (N placeholder).
Step 6
Evidence pack
Archive captures, counter logs, harness configuration ID, and change notes in a single evidence bundle for reproducibility and serviceability.
H2-10
Troubleshooting: Symptom → Likely Topology Cause → Fix Path
Long-tail issues converge faster when topology is evaluated first. If HS-only fails, branch changes matter, or sample-point tweaks are unstable, treat stubs/junctions as primary suspects.
Intent
Collapse troubleshooting into topology-first decisions and strategy-level fix paths (no component selection).
Scope guard: no EMC component selection
Fix path: structure & placement strategies only
Uses probes: P1/P2/P3 (from H2-9)
If a parameter tweak “sometimes helps” but cannot hold margin, the return is likely still inside the sensitive window. Shift from tuning to topology control.
Three hard indicators to prioritize topology
HS-only failure
If only the high-speed phase fails, treat stub/T/junction geometry as a top suspect even when low-speed phases look stable.
Branch/harness sensitivity
If changing a branch, connector state, or harness variant alters the outcome, the failure mode is topology-sensitive by definition.
Sample-point tweak is unstable
If tuning changes “sometimes helps” but cannot hold margin across repeats, the return is likely still inside the decision-sensitive window.
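The three hard indicators reduce to a triage gate: any single one is enough to shift from tuning to topology control. A minimal sketch:

```python
def triage(hs_only_failure: bool,
           branch_sensitive: bool,
           tuning_unstable: bool) -> str:
    """Return which investigation path to prioritize."""
    if hs_only_failure or branch_sensitive or tuning_unstable:
        return "topology-first"   # treat stubs/junctions as primary suspects
    return "configuration-first"  # no topology indicator: examine settings first
```

Note the asymmetry: the indicators are OR-ed, because each one independently implies topology sensitivity.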
Deliverable: Symptom → likely topology cause → fix path
HS phase errors spike (CRC/ACK/ERR counters)
Likely topology cause
Long stub or dense junctions intruding into HS window
Quick check
Probe P2 (ring/pile-up) + P1 (settling vs window), align with counter window
Fix path
Shorten stub / move junction near node / segment the harness (strategy level)
Bench passes, vehicle fails on real harness
Likely topology cause
Worst-case path exists only in vehicle harness (far end + junction density)
Quick check
Rank worst node by geometry (P3) and reproduce using HS-only stress
Fix path
Reduce worst-path length by segmentation or physical domain split (strategy level)
Changing branch/connector state flips the outcome
Likely topology cause
Discontinuity shift at a junction (phase drift) or hidden stub created by wiring changes
Quick check
Probe P2 during controlled perturbation and time-align with counter spikes
Fix path
Standardize branch geometry, move junction closer to node, and eliminate long drops (strategy level)
Tuning sample point helps briefly but cannot hold
Likely topology cause
Returns remain inside the decision-sensitive window; tuning only shifts overlap
Quick check
Probe P1 for settling vs window; confirm same-window counter correlation
Fix path
Reduce return amplitude/time via geometry control (shorten stub / segment / domain split)
Fix paths are intentionally kept at strategy level to avoid duplicating termination and EMC component content.
H2-11
Engineering Checklist (Design → Bring-up → Production)
Turn topology control into repeatable gates with evidence packs, fixed placeholders, and clear fail loops.
Intent
Freeze repeatable actions across Design → Bring-up → Production without wide tables.
Scope guard: no wide tables
Scope guard: no hard numeric limits (placeholders)
Output: evidence packs + pass criteria
Gate overview: consistent structure, consistent outcomes
Each gate uses the same structure: Action (do) → Evidence (prove) → Pass criteria (decide) → Fail next step (loop back).
This prevents “parameter tuning by luck” and makes results reproducible across teams.
Design Gate
Lock topology and budgets before any validation effort.
Topology selection locked
Action
Choose and freeze the physical pattern (trunk+stubs / daisy-chain / segmentation / domain split).
Evidence
Topology sketch + node list + junction count (card list, not a wide table).
Pass criteria
Risk grade ≤ X (placeholder) for the target tier(s).
Fail next step
Re-evaluate topology sources and scaling patterns (H2-3 / H2-8).
Stub budget worksheet completed
Action
Fill the worksheet with fixed placeholders: Tier / T_bit / X% window / v / L_stub_max.
Evidence
Budget fields captured in a single “worksheet card” per tier (no spreadsheet required).
Pass criteria
All worst nodes satisfy L_stub ≤ L_stub_max (placeholder) for the highest tier.
Fail next step
Re-run the window logic and budgeting flow (H2-4 / H2-6) and adjust structure.
T-branch control rule defined
Action
Define T-junction placement rules (reduce fan-out, keep drops short, prefer near-node junctioning).
Evidence
List of T-junctions with location intent and “allowed/blocked” status (card list).
Pass criteria
All T-junctions comply with the rule set (X/Y placeholders).
Fail next step
Apply mitigation strategy and re-layout the junction placement (H2-7).
Worst node (P3) defined by geometry
Action
Rank nodes by geometry (stub length, connector density, far-end accumulation) and select P3.
Evidence
“Worst-node list” as short cards (Node ID → why worst), no wide matrix.
Pass criteria
P3 is measurable and reproducible across harness variants (placeholders).
Fail next step
Re-check topology sources and measurement plan (H2-3 / H2-9).
Bring-up Gate
Prove correlation on real harness with aligned time windows.
Probe map fixed (P1/P2/P3)
Action: assign probes to P1 far end, P2 T junction, P3 worst node. Evidence: probe map capture + naming rules. Pass: any operator can repeat the same setup.
Stimulus plan fixed (HS-only + perturbation)
Action: execute HS-only stress, edge sensitivity sweep (if available), and controlled path perturbation. Evidence: placeholder record of settings and run length. Pass: stable reproduction or stable PASS.
Time alignment enforced
Action: align capture window with counter window. Evidence: window definitions (pre/post placeholders). Pass: waveform anomalies and counter spikes share the same time window.
Decision tree executed
Action: run the topology-first decision path. Evidence: checked decision nodes (card list). Pass: root cause converges to topology vs non-topology before any tuning escalation.
Production Gate
Preserve geometry across builds and collect field evidence.
Harness consistency checks
Action: verify that harness version and branch routing are unchanged. Evidence: version fields and change logs. Pass: configuration fields complete (X% placeholder).
Branch length tolerance control
Action: define tolerance placeholders (±X) for key drops and junction positions. Evidence: sampling cards per batch. Pass: all sampled geometry stays within tolerance.
Sampling strategy (worst-first)
Action: prioritize sampling at worst node and T junction geometry. Evidence: batch sampling plan fields. Pass: worst-path coverage achieved each cycle.
Field service record fields
Action: require geometry-critical fields (harness variant, branch changed, connector state, node location). Evidence: service template. Pass: completeness ≥ X% (placeholder).
H2-12
Applications (Strictly within this page’s boundary)
Identify scenarios where stubs and junctions dominate failures and prioritize topology-first strategies.
Intent
Only describe where stub risk amplifies (no gateway protocol expansion).
Scope guard: no DoIP / OTA / secure gateway
Output: application risk map (cards)
Deliverable: application risk map (scenario → risk → priority)
Body / Comfort (long harness + many nodes)
Risk triggers
Many branches · dense connectors · hidden stubs
Primary suspects
T pile-up · worst-node drift · branch variability
Priority strategy
Normalize topology + define P3 early + validate on worst harness points
Powertrain / Chassis (FD/XL tiers tighten)
Risk triggers
Short HS window · higher sensitivity to returns
Primary suspects
Long stubs · far-end overlap · unstable tuning outcomes
Priority strategy
Lock budget placeholders first, then prove HS-only correlation at P1/P2
Star / centralized coupling (serviceable but harder)
Risk triggers
Multi-port junction · return stacking · path sensitivity
Primary suspects
P2 pile-up · geometry inconsistency at the hub
Priority strategy
Make P2 a mandatory validation point and enforce geometry consistency in production