
Waveform & Sample-Point: Real-Harness Timing Tuning


Turn “nice-looking waveforms” into verified sampling margin on a real harness.
This page provides a repeatable loop: measure on the real vehicle harness, budget the sample-point window, tune the timing knobs in the right order, and freeze pass/fail gates with auditable evidence.

H2-1 · Scope & Reader Promise

What this page delivers
A repeatable loop that turns “waveform looks OK” into measurable timing margin: Measure → Budget → Tune → Validate, proven on the real harness with pass/fail evidence.
Scope Guard
Covers
  • Real-harness waveform measurement with repeatable setup and logging fields.
  • Sample-point window budgeting (propagation + loop delay + phase segments) expressed as a margin.
  • Tuning loop to converge quickly and lock a stable configuration.
  • SIC / XL symmetry validation from a waveform-and-margin perspective (pass/fail).
Does NOT cover (refer out)
  • Full termination / split termination design (go to: Termination & Cable Capacitance).
  • CMC selection and EMI shaping networks (go to: CMC & Split Termination).
  • TVS / ESD / surge array selection and parasitic modeling (go to: Port ESD/Surge Arrays).
  • Controller register-by-register deep dive and protocol stack behavior (go to: CAN Controller / Bridge).
Only the interface points are mentioned here (how those factors change margin), without expanding into their full design space.
Reader paths (pick the fastest entry)
Bring-up / Debug
Start with H2-4 Measurement Setup → H2-6 Budget → H2-7 Tuning Loop.
Production / Robustness
Jump to H2-10 Engineering Checklist and enforce evidence fields + pass criteria across temperature and harness variants.
SIC / XL Mode Issues
Go straight to H2-8 Symmetry Validation and verify waveform symmetry and loop-delay consistency across modes.
Metrics used across this page (data dictionary)
Primary decision metrics
  • SamplePoint_pct (%): sampling location within the bit time (target region is mode-dependent).
  • Margin_ns (ns): time distance from the sample point to the nearest unstable region (ringing/threshold-crossing/edge uncertainty).
  • Errors_per_1k (count/1k frames): normalized error signal used to confirm stability under a defined condition set.
Budget contributors
  • tPROP_ns (ns): measured propagation delay along the worst-case path on the harness.
  • LoopDelaySym_ns (ns): loop-delay symmetry (Tx→Rx vs Rx→Tx) that shifts the optimal sample-point window.
  • TempRange_C (°C): temperature range used to validate drift of timing and margin.
Evidence fields (required for repeatability)
  • Harness: variant ID, length, node count, worst-path description (placeholders allowed).
  • Operating condition: mode/bitrate, bus load %, VBAT, temperature.
  • Measurement setup: probe type, bandwidth, reference/ground method, trigger rule.
  • Outcome: SamplePoint%, Margin_ns, Errors_per_1k, and a saved waveform snapshot set.
Pass criteria template (placeholders)
  • Margin: Margin_ns ≥ X under defined worst-case condition set.
  • Errors: Errors_per_1k ≤ X over Y minutes at bus load Z%.
  • SIC/XL symmetry: Symmetry metric ≤ X across supported modes (Classic/FD/XL as applicable).
Boundary Map (Owned vs Refer-out)
(Diagram: owned loop on this page — Measure on real harness → Budget window margin → Tune phase/prop → Validate errors + symmetry. Refer-out blocks: termination (split/RC/CMC), TVS/ESD parasitics, EMC layout/return paths, controller protocol/registers. Rule: mention interface points only, then refer out for the full design space.)

H2-2 · Definitions: Waveform Quality vs Sampling Margin

Core principle
Good-looking waveforms do not guarantee safe sampling. The only reliable outcome is a measured sampling margin that remains positive across the required condition set (harness + load + temperature).
Two definitions used on this page
GoodWaveform (observable set)
A waveform is “good” only when its observable properties stay away from sampling boundaries:
  • Edge: rise/fall and edge jitter that shift the effective transition time.
  • Ringing: overshoot/undershoot and settling time that may invade the stable region.
  • Levels: dominant/recessive headroom against receiver thresholds under noise.
  • Symmetry: mode-dependent balance (SIC/XL) that stabilizes the timing window.
SafeSampleWindow (decisive margin)
A sampling window is “safe” when the chosen SamplePoint_pct is separated from unstable regions by a positive Margin_ns, derived from:
  • Propagation delay along the worst path on the harness (tPROP_ns).
  • Loop-delay symmetry that shifts the optimal window (LoopDelaySym_ns).
  • Phase segmentation decisions (phase/prop segments + SJW) that position the sample point.
Mapping rules (waveform → window)
Rule 1 · Settling must complete before sampling
If ringing/threshold-crossing activity overlaps the sampling region, the effective Margin_ns collapses even when the waveform looks “acceptable” by eye.
Rule 2 · Symmetry stabilizes the best sample point
Poor symmetry (especially across modes) causes the “best” sample point to drift with harness/loading, turning a single timing configuration into an unstable compromise.
Rule 3 · Decision is made by margin + errors, not aesthetics
A waveform metric is only “good” if it improves Margin_ns and keeps Errors_per_1k within limits under the defined condition set.
Minimal record schema (placeholders, mobile-safe)
{
  "Mode": "Classic | FD | XL",
  "Harness": {"Variant":"...", "NodeCount":X, "WorstPath":"..."},
  "Condition": {"BusLoad_pct":X, "VBAT_V":X, "Temp_C":X},
  "Timing": {"SamplePoint_pct":X, "tPROP_ns":X, "LoopDelaySym_ns":X},
  "Outcome": {"Margin_ns":X, "Errors_per_1k":X, "WaveformSet":"..."}
}
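The decision rule in this schema (margin plus errors decide, not aesthetics) can be sketched as a gate check over a filled record. A minimal Python sketch; the limits `MARGIN_MIN_NS` and `ERRORS_MAX_PER_1K` are illustrative stand-ins for the X placeholders, not values from this page:

```python
# Hedged sketch: gate-check a filled measurement record against the
# pass-criteria template. Limits are illustrative placeholders for "X".

MARGIN_MIN_NS = 50        # placeholder for "Margin_ns >= X"
ERRORS_MAX_PER_1K = 1     # placeholder for "Errors_per_1k <= X"

def gate_check(record: dict) -> dict:
    """Return per-gate pass/fail plus an overall verdict."""
    outcome = record["Outcome"]
    gates = {
        "margin": outcome["Margin_ns"] >= MARGIN_MIN_NS,
        "errors": outcome["Errors_per_1k"] <= ERRORS_MAX_PER_1K,
    }
    gates["overall"] = all(gates.values())
    return gates

record = {
    "Mode": "FD",
    "Harness": {"Variant": "H1", "NodeCount": 8, "WorstPath": "ECU->far node"},
    "Condition": {"BusLoad_pct": 60, "VBAT_V": 13.5, "Temp_C": 85},
    "Timing": {"SamplePoint_pct": 80, "tPROP_ns": 220, "LoopDelaySym_ns": 10},
    "Outcome": {"Margin_ns": 72, "Errors_per_1k": 0, "WaveformSet": "set-001"},
}
print(gate_check(record))  # both gates pass for this example record
```

A record that fails either gate fails overall, which keeps the "positive margin across the condition set" rule mechanical rather than subjective.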
Waveform → Window mapping (what matters, what decides)
(Diagram: waveform → window mapping. GoodWaveform observables — edge timing shift, ringing settling, level headroom, SIC/XL symmetry — feed the budget (tPROP/loop), position SamplePoint_pct, and are decided by margin + errors: safe window when Margin_ns ≥ X, validated by Errors/1k ≤ X across Classic/FD/XL. Focus: convert observables into a stable sampling margin on the real harness.)

H2-3 · Real Harness Effects: Why Bench ≠ Vehicle

Key idea
Bench results validate a single topology + measurement point + condition. In a vehicle, harness branching and node distribution change both arrival time and waveform shape, which shifts the sampling window and directly changes Margin_ns.
Cause → effect chain (scope-limited to waveform & sample point)
  • Topology (trunk, stubs, star, heavy load) → tPROP_ns and reflection timing shift.
  • Distribution (which nodes sit where) → worst-path arrival changes → optimal SamplePoint_pct shifts.
  • Ground offset (not solved here) → threshold headroom changes → the stable region narrows and margin becomes fragile.
  • Result: a timing configuration that looked stable on bench becomes a compromise; Errors_per_1k rises under specific harness/conditions.
Minimum topology fields (placeholders, required)
Keep the worst-path definition consistent across iterations; otherwise Margin comparisons become invalid.
  • HarnessLength_m: trunk length (placeholder).
  • StubLength_max_m: longest stub (placeholder).
  • NodeCount: active nodes during measurement (placeholder).
  • WorstPathDesc: e.g., ECU → farthest node (placeholder).
  • WorstCasePath_ns: measured propagation delay of WorstPath (placeholder).
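Before the first real-harness measurement exists, a first-order estimate of WorstCasePath_ns can seed the budget. A hedged sketch assuming a typical twisted-pair propagation delay of roughly 5 ns/m (an assumption, not a measured cable constant); the measured value on the real harness always supersedes this estimate:

```python
# Hedged sketch: first-order WorstCasePath_ns estimate from the topology
# fields above. NS_PER_M is an assumed cable propagation delay (~5 ns/m
# is typical for twisted pair); replace with the measured value.

NS_PER_M = 5.0  # assumed propagation delay per meter (placeholder)

def estimate_worst_path_ns(harness_length_m: float,
                           stub_length_max_m: float) -> float:
    """Trunk plus the single worst stub, one-way, in nanoseconds."""
    return (harness_length_m + stub_length_max_m) * NS_PER_M

# Example: 20 m trunk + 1.5 m worst stub -> 107.5 ns one-way estimate
print(estimate_worst_path_ns(20.0, 1.5))
```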
Operational takeaway
Any sample-point tuning is only “done” after verification on the real harness with the same worst-path definition, and with margin + error evidence under the required condition set.
Harness Topology Impact (topology → worst path → margin trend)
(Diagram: three topology patterns. Trunk + stubs — stub echoes can invade the window; fields: HarnessLength, StubMax, NodeCount, WorstPath. Star/hub — WorstPath varies by branch, so the window shifts across branches; fields: NodeCount, WorstPath, WorstCasePath. Heavy load — WorstPath is the far end and the stable region shrinks; fields: HarnessLength, NodeCount, WorstCasePath. Interpretation: topology shifts arrival/shape → window shifts → margin changes; validate on the real harness.)

H2-4 · Measurement Setup on Real Harness (Repeatable)

Goal
Produce repeatable waveforms and timing evidence that can be compared across iterations. Without a fixed setup and record schema, Margin_ns changes may be measurement artifacts.
Measurement points (fixed set for comparisons)
  • ECU end: captures local edges and immediate reflections; supports tuning direction decisions.
  • Remote end: represents the worst-path arrival and the decisive stable region.
  • Branch (T-junction): reveals stub echo timing relative to the sampling region.
Rule: keep the same points and the same reference method for every iteration; otherwise before/after evidence is not comparable.
Setup rules (avoid margin artifacts)
Bandwidth
Insufficient bandwidth can “smooth” edges and hide ringing, creating a false sense of larger margin. Fix BW and keep it consistent.
Reference / ground
A changing reference method shifts apparent threshold crossings and can move the inferred stable region. Keep a single RefMethod per measurement plan.
Trigger alignment
A different trigger rule changes time alignment and may look like a sample-point shift. Lock a single TriggerRule for comparisons.
Minimum record schema (placeholders)
{
  "Setup": {"ProbeType":"...", "BW":"...", "RefMethod":"...", "TriggerRule":"..."},
  "Points": ["ECU_end", "Remote_end", "Branch_point"],
  "Topology": {"HarnessVariant":"...", "NodeCount":X, "WorstPathDesc":"...", "WorstCasePath_ns":X},
  "Condition": {"Mode":"Classic|FD|XL", "BusLoad_pct":X, "VBAT_V":X, "Temp_C":X},
  "Outcome": {"SamplePoint_pct":X, "Margin_ns":X, "Errors_per_1k":X, "WaveformSet":"..."},
  "Iteration": {"IterationTag":"...", "ConfigHash":"..."}
}
Fast sanity checks (before trusting any margin change)
  • Repeat capture: two captures at the same point/condition should agree within the expected noise band (placeholder X).
  • Point cross-check: ECU-end vs remote-end should show a consistent worst-path shift direction.
  • Trigger lock: verify TriggerRule and time alignment are unchanged between before/after.
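The repeat-capture check can be made mechanical before any margin delta is trusted. A minimal sketch; `NOISE_BAND_NS` is an illustrative stand-in for the placeholder X:

```python
# Hedged sketch: the "repeat capture" sanity check. Two captures at the
# same point/condition must agree within an assumed noise band before a
# margin change is attributed to anything real. NOISE_BAND_NS is a
# placeholder for "X".

NOISE_BAND_NS = 3.0  # illustrative noise band (placeholder)

def captures_agree(margin_a_ns: float, margin_b_ns: float,
                   band_ns: float = NOISE_BAND_NS) -> bool:
    """True when two repeated margin measurements sit inside the band."""
    return abs(margin_a_ns - margin_b_ns) <= band_ns

print(captures_agree(71.8, 73.1))  # inside the band: delta is trustworthy
print(captures_agree(71.8, 80.0))  # outside: suspect the setup first
```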
Measurement Rig (signal path + data capture path)
(Diagram: measurement rig. Signal path: DUT/ECU → ECU-end point → harness → branch point → remote end → scope/LA with fixed ProbeType, BW, RefMethod, TriggerRule. Capture path: log topology/worst path, condition (temp/VBAT), setup (BW/ref), and outcome (margin) as evidence. Rule: fixed points + fixed setup + fixed schema → comparable margins across iterations.)

H2-5 · Waveform Metrics That Actually Predict Errors

Why these metrics
Metrics are only “predictive” when they explain what happens near the sampling boundary: edge timing, threshold-crossing stability, settling to a stable level, and headroom under noise/temperature. The goal is not pretty waveforms; the goal is Margin_ns that holds with low Errors_per_1k.
Metrics that map to errors
Edge timing (tr / tf)
  • What it predicts: edge placement drift and reduced timing headroom at higher bit rates.
  • Measure rule: fixed BW + fixed RefMethod + fixed trigger alignment (use H2-4 schema).
  • Log fields: tr_ns, tf_ns (placeholders).
Overshoot / undershoot
  • What it predicts: threshold re-crossing risk and early window contamination.
  • Measure rule: evaluate against a consistent reference level and capture window.
  • Log fields: Overshoot_pct, Undershoot_pct (placeholders).
Ringing settling time
  • What it predicts: whether the stable region exists before the intended sample point.
  • Dominant-term rule: if RingingSettling_ns overlaps the sampling window, tuning SJW rarely helps.
  • Log fields: RingingSettling_ns (placeholder).
Threshold crossing jitter
  • What it predicts: sample boundary instability even when the waveform looks “clean”.
  • Measure rule: compute crossing-time spread at a fixed threshold level.
  • Log fields: CrossingJitter_ns (placeholder).
Level headroom & symmetry
  • LevelMargin: dominant/recessive margin vs threshold under temperature and supply variation.
  • SymmetryIndex: placeholder for SIC/XL symmetry checks (defined later); used to prevent window drift across modes.
  • Log fields: LevelMargin_mV, SymmetryIndex (placeholders).
Data outputs (threshold placeholders X)
  • tr_ns < X, tf_ns < X
  • Overshoot_pct < X, Undershoot_pct < X
  • RingingSettling_ns < X, CrossingJitter_ns < X
  • LevelMargin_mV > X, SymmetryIndex ≤ X
Rule: thresholds must be bound to Mode / Bitrate / HarnessVariant / TempRange; otherwise pass/fail is not portable.
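The crossing-jitter measure rule above (crossing-time spread at a fixed threshold level) can be sketched directly. A minimal Python illustration using linear interpolation between samples; function names and the example edges are illustrative, not a scope vendor API:

```python
# Hedged sketch: CrossingJitter_ns as the spread of threshold-crossing
# times over repeated edges, at one fixed threshold level. Linear
# interpolation between adjacent samples; names are illustrative.

def crossing_time_ns(times_ns, volts, threshold_v):
    """First rising crossing of threshold_v, linearly interpolated."""
    for i in range(1, len(volts)):
        v0, v1 = volts[i - 1], volts[i]
        if v0 < threshold_v <= v1:
            frac = (threshold_v - v0) / (v1 - v0)
            return times_ns[i - 1] + frac * (times_ns[i] - times_ns[i - 1])
    raise ValueError("no rising crossing found")

def crossing_jitter_ns(edges, threshold_v):
    """Spread (max - min) of crossing times over captured edges."""
    t = [crossing_time_ns(times, volts, threshold_v) for times, volts in edges]
    return max(t) - min(t)

# Two idealized edges at a 0.5 V threshold:
edges = [
    ([0, 10], [0.0, 1.0]),   # crosses 0.5 V at 5.0 ns
    ([0, 10], [-0.2, 1.0]),  # crosses 0.5 V at ~5.83 ns
]
print(round(crossing_jitter_ns(edges, 0.5), 2))  # 0.83
```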
Waveform Scorecard (metrics → pass/fail → margin focus)
(Diagram: scorecard of metrics predictive for Errors_per_1k, each with a pass/watch threshold — tr/tf < X ns, overshoot/undershoot peak risk < X %, ringing settling < X ns, crossing jitter < X ns, level margin > X mV, symmetry index ≤ X — rolling up to Errors_per_1k < X and the margin focus, Margin_ns ≥ X.)

H2-6 · Sample-Point Window Budgeting (tSEG / SJW / tPROP)

What budgeting outputs
Budgeting converts measured path timing into a safe sampling region: SamplePoint_pct, tSEG1, tSEG2, SJW, and the resulting Margin_ns. The smallest distance from the sample point to the nearest risk boundary defines the margin.
Measured tPROP and worst-path rule
Use a fixed worst-path definition (WorstPathDesc) and map its measured delay to tPROP_meas. When harness topology changes, tPROP_meas must be re-measured and the window re-budgeted.
Why higher bit rates shrink the window
  • Bit time shortens, but propagation and settling do not scale down proportionally.
  • Ringing and crossing jitter consume a larger fraction of the bit, reducing usable stable region.
  • As a result, the safe sample-point range narrows and Margin_ns becomes harder to keep above X.
SJW: what it can fix vs cannot fix
SJW can help when small phase errors cause sampling drift and resynchronization can pull timing back. SJW cannot help when the stable region is contaminated by long settling (RingingSettling_ns) or when level headroom collapses (LevelMargin_mV). Fix stability first, then tune SJW.
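The budgeting arithmetic can be sketched from standard CAN bit timing: the bit is SyncSeg (1 tq) + tSEG1 + tSEG2, and the sample point sits at the tSEG1/tSEG2 boundary. A minimal sketch; the risk-boundary input (`stable_start_ns`, e.g. tPROP plus ringing settling) is an illustrative placeholder:

```python
# Hedged sketch of the budget math. Assumes standard CAN bit timing:
# BitTime = (1 + tSEG1 + tSEG2) * tq, sync segment = 1 tq, and the
# sample point at the tSEG1/tSEG2 boundary.

def sample_point_pct(tseg1_tq: int, tseg2_tq: int) -> float:
    """Sample-point location as a percentage of the bit time."""
    total_tq = 1 + tseg1_tq + tseg2_tq  # +1 for the sync segment
    return 100.0 * (1 + tseg1_tq) / total_tq

def margin_ns(tseg1_tq: int, tseg2_tq: int, tq_ns: float,
              stable_start_ns: float) -> float:
    """Distance from the sample point to the nearest risk boundary:
    the end of settling before it (stable_start_ns, e.g. tPROP_meas +
    RingingSettling_ns) or the end of the bit after it."""
    total_tq = 1 + tseg1_tq + tseg2_tq
    sp_ns = (1 + tseg1_tq) * tq_ns
    bit_end_ns = total_tq * tq_ns
    return min(sp_ns - stable_start_ns, bit_end_ns - sp_ns)

# Example: tq = 125 ns (500 kbit/s), tSEG1 = 12 tq, tSEG2 = 3 tq
print(sample_point_pct(12, 3))          # 81.25 (% of a 2000 ns bit)
print(margin_ns(12, 3, 125.0, 1200.0))  # 375.0 ns (tSEG2 side limits)
```

In this example the tSEG2 side is the nearest boundary, so pushing the sample point later would reduce, not increase, Margin_ns.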
Budget table template (placeholders)
Use this as a state table for tuning loops: inputs come from real-harness measurements, outputs drive configuration and validation.
| Mode/Bitrate | tPROP_meas | LoopDelay | tSEG1 | tSEG2 | SJW | SamplePoint_pct | Margin_ns | Notes |
|---|---|---|---|---|---|---|---|---|
| Classic/FD/XL (X) | X ns | X ns | X tq | X tq | X tq | X % | ≥ X ns | Ringing/Level/Symmetry |
Timing Budget Ladder (tPROP → tSEG1 → SamplePoint → tSEG2 + margin)
(Diagram: timing ladder — Propagation (tPROP_meas) → PhaseSeg1 (tSEG1) → SamplePoint (with SJW) → PhaseSeg2 (tSEG2). Inputs: worst-path timing, loop delay, symmetry. Interpretation: select SamplePoint and the segments so the nearest risk boundary stays away, i.e. Margin_ns ≥ X.)

H2-7 · Tuning Knobs & Iteration Loop (Fast Convergence)

Goal
Converge with the fewest iterations by enforcing a strict order of operations, fixed measurement conditions, and a single-knob-per-iteration rule. Success is defined by stable Errors_per_1k and non-regressing Margin_ns.
Knob hierarchy (responsibility)
  • SamplePoint_pct: sets the target location inside the stable region (strategy).
  • tSEG1 / tSEG2: shapes the usable window to match real-harness timing (tactics).
  • SJW: provides limited tolerance to small phase drift (boundary), not a fix for poor settling or low level headroom.
Freeze the conditions (or the comparison is invalid)
Keep topology, load, temperature, supply, and measurement setup constant. If any condition changes, reset the baseline before attributing changes to a knob.
Topology & scenario
HarnessVariant, WorstPathDesc, NodeCount, Mode/Bitrate, BusLoad_pct
Environment
Temp_C, VBAT_V (or rail), warm/cold state, node power states
Measurement setup
MeasurementPoints, ProbeType, BW, RefMethod, TriggerRule, RecordFields[]
Fast convergence order (do not shuffle)
  1. Set SamplePoint_pct target based on the measured stable region and worst-path timing.
  2. Adjust tSEG1 / tSEG2 to center the sample point inside the stable region and maximize Margin_ns.
  3. Tune SJW last only to counter small drift; if settling or level headroom is the limiter, fix waveform stability first.
Iteration discipline
  • Single-knob rule: change only one of {SamplePoint_pct, tSEG1/tSEG2, SJW} per iteration.
  • Before/after logging: capture Margin_ns and Errors_per_1k with identical conditions.
  • Freeze criteria: Errors_per_1k stable within X/1k over Y minutes, and Margin_ns ≥ X without regression beyond X.
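The "no silent condition drift" rule can be enforced by hashing the frozen condition set and gating the freeze decision numerically. A minimal sketch; the hash format (`H-XXXXXXXX`) and the freeze limits are illustrative placeholders:

```python
# Hedged sketch: pin conditions with a stable hash so drift between
# iterations is detectable, and apply the freeze criteria. Limits are
# illustrative stand-ins for the X values above.
import hashlib
import json

def condition_hash(condition: dict) -> str:
    """Short stable hash of a condition dict (sorted keys -> canonical JSON)."""
    canonical = json.dumps(condition, sort_keys=True)
    return "H-" + hashlib.sha256(canonical.encode()).hexdigest()[:8].upper()

def freeze_ok(errors_per_1k: float, margin_ns: float, prev_margin_ns: float,
              errors_max=1.0, margin_min=50.0, regression_max=5.0) -> bool:
    """Freeze gate: stable errors, margin above floor, bounded regression."""
    return (errors_per_1k <= errors_max
            and margin_ns >= margin_min
            and (prev_margin_ns - margin_ns) <= regression_max)

cond = {"BusLoad_pct": 60, "Temp_C": 85, "VBAT_V": 13.5, "Mode": "FD"}
print(condition_hash(cond))        # same dict -> same hash, any key order
print(freeze_ok(0.4, 72.0, 74.0))  # True: stable, above floor, small regression
```

Logging the hash in every iteration row makes before/after comparisons auditable: two rows with different ConditionHash values must never be compared knob-to-knob.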
Data outputs (iteration log schema)
Minimal iteration record enables causality and rollback. Use hashes to avoid “silent condition drift”.
| Iteration# | KnobChanged | Before_Margin | After_Margin | Errors/1k | BusLoad% | ConditionHash | ConfigHash | Notes |
|---|---|---|---|---|---|---|---|---|
| 1 | SamplePoint_pct | X ns | X ns | ≤ X | X | H-XXXX | C-XXXX | Dominant term: Ringing/Jitter/Level |
Closed-loop Flow (Measure → Budget → Tune → Validate → Freeze)
(Diagram: closed loop — Measure (points, setup) → Budget (tPROP, margin) → Tune (one knob, record) → Validate (errors, margin) → Freeze gate, iterating until the freeze criteria are met. Discipline: single-knob rule, fixed conditions (hash), before/after logging. Freeze gate: Errors_per_1k ≤ X over Y minutes and Margin_ns ≥ X without regression beyond X.)

H2-8 · SIC / SIC-XL / CAN XL Symmetry Validation

Scope
This section validates symmetry from a waveform/margin viewpoint only. It defines measurable symmetry outputs, links symmetry to sample-point stability, and provides a mode-switch pass gate across Classic / FD / XL domains.
Measurable symmetry definition (placeholders)
  • SymmetryMetricX: a repeatable metric computed from fixed measurement points and fixed thresholds.
  • Asymmetry_ns: time-domain asymmetry bound used as the primary pass/fail gate.
  • Binding rule: keep MeasurementPoints, TriggerRule, and RefMethod identical across modes.
Why symmetry couples to sample-point stability
  • Asymmetry shifts the “center” of the stable region and moves the best sample-point location.
  • During mode switching (Classic ↔ FD ↔ XL), the drift direction can change, causing Margin_ns to collapse on specific harness variants.
  • Therefore symmetry must be validated together with Margin_ns and Errors_per_1k, not in isolation.
Mode-switch pass gate (placeholders)
  • Asymmetry_ns < X and SymmetryMetricX within spec window (X).
  • Margin_ns ≥ X at required BusLoad_pct and TempRange.
  • Errors_per_1k ≤ X in the validation time window.
  • Output: ModeSwitchPass = PASS only if all modes satisfy all gates.
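The all-modes-all-gates rule can be sketched directly. A minimal Python illustration; the gate limits are illustrative stand-ins for the X placeholders:

```python
# Hedged sketch: the mode-switch pass gate. ModeSwitchPass = PASS only
# if every mode passes every gate. Limits are illustrative placeholders.

ASYM_MAX_NS = 8.0
MARGIN_MIN_NS = 50.0
ERRORS_MAX_PER_1K = 1.0

def mode_switch_pass(per_mode_results: dict) -> bool:
    """per_mode_results: mode -> {Asymmetry_ns, Margin_ns, Errors_per_1k}."""
    return all(
        r["Asymmetry_ns"] < ASYM_MAX_NS
        and r["Margin_ns"] >= MARGIN_MIN_NS
        and r["Errors_per_1k"] <= ERRORS_MAX_PER_1K
        for r in per_mode_results.values()
    )

results = {
    "Classic": {"Asymmetry_ns": 3.0, "Margin_ns": 90.0, "Errors_per_1k": 0.0},
    "FD":      {"Asymmetry_ns": 5.5, "Margin_ns": 61.0, "Errors_per_1k": 0.2},
    "XL":      {"Asymmetry_ns": 9.0, "Margin_ns": 70.0, "Errors_per_1k": 0.1},
}
print(mode_switch_pass(results))  # False: XL fails the asymmetry gate
```

One failing mode fails the whole gate, which is exactly why per-mode re-budgeting (triage step 2 below) matters.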
Failure triage (within this page scope)
  1. Check whether asymmetry appears only in one mode (Classic vs FD vs XL) under identical conditions.
  2. Verify whether a single global SamplePoint_pct remains optimal across modes; if not, re-budget per mode and re-validate.
  3. Identify whether the worst-path or branch measurement point dominates the asymmetry result, then lock the validation to that point.
Refer-out placeholder: termination/common-mode path effects should be handled in dedicated pages; this section focuses on symmetry outputs and gates only.
Symmetry Checker (Classic / FD / XL → metric → pass gate)
(Diagram: symmetry checker — for each of Classic/FD/XL, measure at the same ECU/Remote/Branch points, compute SymmetryMetricX and Asymmetry_ns, then apply the gates: Asymmetry_ns < X, Margin_ns ≥ X, Errors/1k ≤ X. Overall: ModeSwitchPass = PASS only if all columns pass all gates.)

H2-9 · Failure Signatures → Root Cause Mapping (Waveform-first)

Intent
Map observed error signatures to the first waveform evidence and the first budgeting term to check, then select the smallest in-scope corrective action (SamplePoint / tSEG / SJW / re-budget-per-mode) and a measurable pass gate.
Binding rules (make symptom mapping valid)
  • Same conditions: HarnessVariant, NodeCount, BusLoad_pct, Temp_C, VBAT_V, Mode/Bitrate.
  • Same measurement: MeasurementPoint (ECU/Remote/Branch), ProbeType, BW, RefMethod, TriggerRule.
  • Same accounting: Error counter window and definition (Errors_per_1k) fixed across runs.
Three symptom families (coverage without scope creep)
A) Condition-sensitive
Fails only at high load / high temp / long harness. First evidence is margin collapse versus conditions.
B) Looks-normal-but-fails
“Occasional” errors while the waveform looks fine. First step is measurement artifact and accounting sanity checks.
C) Mode/rate boundary
Fails at a specific rate segment or during mode switching. First step is symmetry and loop-delay consistency.
Family A: condition-sensitive → margin-first triage
  • First check: compare Margin_ns across BusLoad_pct / Temp_C / HarnessVariant (table bins, not “looks”).
  • Second check: confirm the worst measurement point (ECU vs Remote vs Branch) under the failing condition.
  • In-scope actions: adjust SamplePoint_pct → then tSEG1/tSEG2 → SJW last (drift only).
  • Pass gate: Errors_per_1k ≤ X and Margin_ns ≥ X at worst-case condition set (placeholders).
Family B: looks-normal-but-fails → artifact-first triage
  • First check: TriggerRule stability, BW adequacy, probe reference method, and repeatability at the same point.
  • Second check: accounting sanity (same counter window, same denominator, no mixed modes).
  • In-scope actions: do not tune knobs until the symptom is reproducible under a fixed ConditionHash.
  • Pass gate: repeated runs under identical ConditionHash show consistent error rate within X.
Family C: mode/rate boundary → symmetry + loop-delay consistency
  • First check: Asymmetry_ns and ModeSwitchPass gating across modes under identical conditions.
  • Second check: loop-delay and sample-window consistency (ConfigHash comparison before/after switching).
  • In-scope actions: re-budget per mode → validate → then fine-tune tSEG if needed.
  • Pass gate: ModeSwitchPass=PASS and Errors_per_1k ≤ X in all validated modes.
Data output (Symptom → Checks → Action → Gate)
Standard row schema: ErrorPattern, FirstCheck, SecondCheck, FixKnob, PassCriteria. Keep wording short and measurable.
| ErrorPattern | FirstCheck | SecondCheck | FixKnob | PassCriteria |
|---|---|---|---|---|
| Fails at high load/temp only | Margin_ns vs BusLoad/Temp bins | Worst point: ECU/Remote/Branch | SamplePoint → tSEG | Errors/1k ≤ X, Margin_ns ≥ X |
| Occasional errors, waveform “OK” | Trigger/BW/probe reference sanity | Accounting window fixed | No tuning until reproducible | Repeatability within X |
| Mode switch fails at one segment | Asymmetry_ns + ModeSwitchPass | Loop-delay consistency (ConfigHash) | Re-budget per mode | PASS in all modes |
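The row schema lends itself to a machine-readable triage lookup so tooling and log pipelines use identical wording. A minimal sketch with the three symptom families above; key names are illustrative:

```python
# Hedged sketch: the Symptom -> Checks -> Action -> Gate rows as a lookup,
# using the fixed row schema (FirstCheck, SecondCheck, FixKnob,
# PassCriteria). Wording abbreviated from the mapping table.

TRIAGE = {
    "condition_sensitive": {
        "FirstCheck": "Margin_ns vs BusLoad/Temp bins",
        "SecondCheck": "Worst point: ECU/Remote/Branch",
        "FixKnob": "SamplePoint -> tSEG (SJW last)",
        "PassCriteria": "Errors/1k <= X and Margin_ns >= X",
    },
    "looks_normal_but_fails": {
        "FirstCheck": "Trigger/BW/probe reference sanity",
        "SecondCheck": "Accounting window fixed",
        "FixKnob": "No tuning until reproducible",
        "PassCriteria": "Repeatability within X",
    },
    "mode_rate_boundary": {
        "FirstCheck": "Asymmetry_ns + ModeSwitchPass",
        "SecondCheck": "Loop-delay consistency (ConfigHash)",
        "FixKnob": "Re-budget per mode",
        "PassCriteria": "PASS in all modes",
    },
}

def triage(error_pattern: str) -> dict:
    """Return the fixed triage row for a symptom family."""
    return TRIAGE[error_pattern]

print(triage("mode_rate_boundary")["FixKnob"])  # Re-budget per mode
```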
Symptom-to-Check Tree (Symptom → Check → Action)
(Diagram: symptom-to-check tree, waveform-first with measurable gates. Condition-sensitive (load/temp/harness) → check margin vs conditions and the worst measurement point → act via SamplePoint → tSEG, SJW last; gate Errors/1k ≤ X. Looks-normal (occasional errors) → check trigger/BW/probe and the fixed accounting window → fix repeatability before any tuning; gate repeatability ≤ X. Mode/rate boundary → check asymmetry pass gate and loop-delay consistency → re-budget per mode and validate ModeSwitchPass; gate PASS in all modes.)

H2-10 · Engineering Checklist (Design → Bring-up → Production)

Intent
Provide a gate-based checklist that is auditable and repeatable: each item has an owner, a tool, a pass criterion, and evidence. The gates map directly to the outputs from measurement, budgeting, tuning, symmetry validation, and symptom mapping.
Design Gate
Lock targets, templates, and measurement plan before real-harness runs.
| Item | Owner | Tool | Pass criteria | Evidence |
|---|---|---|---|---|
| Targets defined (SamplePoint_pct, Margin_ns, Errors/1k) | System/EE | Spec template | Targets frozen (X) | Target sheet + sign-off |
| Budget template ready (tPROP, LoopDelay, tSEG, SJW, Margin) | Validation | Worksheet | Template locked | Template version ID |
| Measurement plan (points/setup/fields) frozen | Validation | Scope/LA plan | Plan approved | Plan doc + revision |
Bring-up Gate
Execute real-harness measurement, converge with an iteration log, and freeze configuration with evidence.
| Item | Owner | Tool | Pass criteria | Evidence |
|---|---|---|---|---|
| Real harness measurement complete (ECU/Remote/Branch) | Validation | Scope/LA + logger | All fields recorded | Waveform set + log bundle |
| Iteration loop executed (single-knob rule) | FW/Validation | Iteration sheet | Converged within X iters | Iteration log + hashes |
| Symmetry + mode-switch validation | Validation | Symmetry checker | ModeSwitchPass=PASS | Per-mode evidence |
| Freeze decision (no regression) | System/QA | Review checklist | Errors/1k ≤ X, Margin ≥ X | Frozen config pack |
Production Gate
Ensure corner coverage, batch consistency, and re-test policy with a golden evidence pack.
| Item | Owner | Tool | Pass criteria | Evidence |
|---|---|---|---|---|
| Corner validation (Temp/VBAT/Load matrix) | Manufacturing/QA | Golden setup | Errors/1k ≤ X, Margin ≥ X | Corner report pack |
| Lot-to-lot consistency check | QA | Sampling plan | Drift ≤ X | Lot summary table |
| Re-test policy for repair/return | Service/QA | Minimal gate set | Pass minimal gates | SOP + checklist |
3-Gate Checklist (Design → Bring-up → Production)
(Diagram: three engineering gates, each item carrying owner + tool + pass criteria + evidence. Design: targets, budget template, measurement plan, scenario matrix, baseline config. Bring-up: real-harness runs, iteration log, symmetry gate, symptom mapping, freeze decision. Production: corners, lot consistency, aging/drift, re-test policy. All gates feed a golden evidence pack: ConditionHash, ConfigHash, logs, screenshots.)

H2-11 · Application Patterns (Where Sample-Point Usually Breaks)

Intent
Use application-pattern cards to capture long-tail searches while staying in scope: identify the most likely margin-risk topology, then select a measurement-point set that covers the worst path and produces a repeatable sample-window margin gate.
Scope guard (no cross-page creep)
  • Covers: worst-path identification, measurement-point sets, margin gates (Margin_ns / Errors_per_1k / ModeSwitchPass).
  • Refers out: termination/CMC/TVS/EMC implementation details and harness design specifics (link placeholders).
Data output (fixed schema)
Pattern, WorstPath, RecommendedMeasurePoints[], ModeBitrateBin, ConditionHash, ExpectedMarginX
ExpectedMarginX example (placeholders): Margin_ns ≥ X AND Errors_per_1k ≤ X under the fixed ConditionHash.
Pattern cards (actionable, measurement-point first)
Pattern 1 · Long trunk + many stubs
Typical break: arrival-time spread increases, shrinking the safe sampling window across nodes.
  • WorstPath: furthest remote node + the largest stub branch (ns-level worst-case path).
  • RecommendedMeasurePoints[]: ECU end, furthest remote, largest-stub branch point.
  • ExpectedMarginX: Margin_ns ≥ X; Errors_per_1k ≤ X (same ModeBitrateBin).
Pattern 2 · Heavy-load / many nodes
Typical break: edge slows and threshold crossing becomes more sensitive, making the optimum sample-point narrow.
  • WorstPath: highest BusLoad_pct condition on the longest remote path (ConditionHash pinned).
  • RecommendedMeasurePoints[]: ECU end, remote end (same load bin for apples-to-apples).
  • ExpectedMarginX: Margin_ns ≥ X at the highest load bin; no regression across temperature bins.
Pattern 3 · Cross-domain / ground-offset sensitive
Typical break: reference shift makes crossing stability and window margin sensitive across operating corners.
  • WorstPath: the path that maximizes offset sensitivity under the defined corner set (ConditionHash).
  • RecommendedMeasurePoints[]: domain-local ECU point, cross-domain remote point.
  • ExpectedMarginX: Margin_ns ≥ X and ModeSwitchPass does not degrade across corners.
Pattern 4 · Mode / bitrate ladder boundary
Typical break: only one segment fails; window collapses or symmetry shifts in that mode/bin.
  • WorstPath: the narrowest-window bin in the failing mode/bitrate segment (ModeBitrateBin).
  • RecommendedMeasurePoints[]: fixed points, swept bins (TriggerRule/BW unchanged).
  • ExpectedMarginX: Errors_per_1k ≤ X across all validated bins; Asymmetry_ns ≤ X (placeholder).
Pattern → measurement-point set (quick lookup)
| Pattern | WorstPath focus | RecommendedMeasurePoints[] | Expected gate |
|---|---|---|---|
| Long trunk + stubs | furthest + largest stub | ECU, furthest remote, branch point | Margin_ns ≥ X; Errors/1k ≤ X |
| Heavy-load | highest BusLoad bin | ECU, remote | Gate holds across load/temp bins |
| Cross-domain | corner sensitivity | domain-local, cross-domain remote | ModeSwitchPass stable (X) |
| Mode ladder | narrowest window bin | fixed points, swept bins | All bins pass (X) |
Pattern Cards (topology + recommended measurement points)
(Diagram: pattern cards with recommended measurement points. Long trunk + stubs: ECU, furthest remote, largest-stub branch; gate Margin ≥ X. Heavy load: ECU and remote nodes; gate holds per load bin. Cross-domain: domain-local and cross-domain points; gate stable across corners. Mode ladder: fixed points with swept bins, including the narrow bin; gate all bins pass.)
Refer (sibling pages, link placeholders)
Termination / Split termination / CMC · TVS/Surge arrays · EMC emission/immunity co-design · Controller deep dive

H2-12 · IC Selection Logic (Waveform/Sample-Point Only)

Intent
Select controllers and transceivers by measurable timing controllability and validation burden. Focus on tuning granularity, loop-delay symmetry/consistency, and mode-switch repeatability. Provide example part numbers to start component shortlisting.
Selection funnel (what matters for sample-point closure)
1) Network target
  • Mode/bitrate bins: Classic / FD / SIC / XL (as applicable).
  • Worst-case scenario: select from H2-11 patterns and freeze a ConditionHash.
  • Gate: Errors_per_1k ≤ X, Margin_ns ≥ X, ModeSwitchPass=PASS (placeholders).
2) Controller-side timing knobs (measurable controllability)
  • TimingStep_ns: minimum step for tSEG1/tSEG2/sample-point adjustments.
  • SJWRange: available compensation span (used after window center is stable).
  • ModeSwitchBehavior: repeatable switching; clear re-budget requirement per mode.
  • ConfigExport: ability to export/freeze configuration (ConfigHash) for audit and re-test.
3) Transceiver-side timing characteristics (symmetry + consistency)
  • LoopDelaySym_ns: symmetry indicator (lower asymmetry reduces sample-point drift risk).
  • TempDrift: drift across temperature/voltage/lot (use worst-case, not typical).
  • ConfigModes: modes supported and required validation bins.
  • Slew/Drive control: treat as a validation burden knob (avoid relying on “typical” waveforms).
4) Validation burden (what must be proven on real harness)
  • Measurement points: use RecommendedMeasurePoints[] from H2-11 patterns.
  • Evidence pack: ConditionHash + ConfigHash + waveforms + logs + bin summary.
  • Gate: pass in all required bins without regression (placeholders X).
Data output (selection worksheet fields)
| Field | Meaning (this page only) | Target / gate (placeholders) |
|---|---|---|
| TimingStep_ns | tSEG/sample-point adjustment granularity | ≤ X ns step |
| LoopDelaySym_ns | symmetry / consistency proxy | Asymmetry ≤ X ns |
| ConfigModes | modes/bins requiring validation | All bins gated |
| TempDrift | worst-case drift impacting window | Drift ≤ X |
| ConfigExport / Hash | freeze & audit repeatability | Yes + stable hash |
| ModeSwitchRebudget | whether switching forces re-budget | Defined (Y/N) |
Selection traps (waveform/sample-point scope)
  • Using typical delay/symmetry numbers instead of worst-case + drift across PVT and lot variation.
  • Assuming a “good-looking waveform” implies a stable sampling margin without margin-by-bin evidence.
  • Switching modes/rates without a defined re-budget rule and without a repeatable ModeSwitchPass gate.
Example IC part numbers (starting points for shortlisting)
Listed as commonly used examples to anchor a search and a worksheet. Validate mode bins, timing controllability, and worst-case behavior on real harness.
Standalone CAN / CAN FD controllers (host interface controllers)
Microchip MCP2517FD, MCP2518FD · NXP SJA1000 (Classic CAN controller, legacy/reference)
Controller + transceiver combo (single-chip bring-up)
Texas Instruments TCAN4550 (CAN FD controller + transceiver, SPI host) · Microchip MCP25625 (Classic CAN controller + transceiver, SPI host)
High-speed CAN / CAN FD transceivers (non-isolated)
NXP TJA1042, TJA1043, TJA1443 · Texas Instruments TCAN1042, TCAN1043A, TCAN1051 · Microchip MCP2562FD · Infineon TLE9255W
Selective wake / partial networking transceivers (wake policy focus)
NXP TJA1145 · Texas Instruments TCAN1145
Isolated CAN transceivers (ground-difference tolerant)
Texas Instruments ISO1042, ISO1050 · Analog Devices ADM3053
Worksheet tip: for each candidate, record TimingStep_ns, LoopDelaySym_ns, TempDrift, ConfigExport, and required ConfigModes bins before committing validation time.
Selection Funnel (Network target → Timing knobs → Symmetry → Validation burden)
Selection funnel (focus: controllability + repeatability + evidence): Network target (modes · bins · worst path) → Timing knobs (TimingStep · SJW · switch) → Symmetry (LoopDelaySym · drift) → Validation burden (points · bins · corners) → Evidence pack (ConditionHash · ConfigHash · logs/scopes)


H2-13 · FAQs (Waveform & Sample-Point)

Intent
Close long-tail troubleshooting without expanding the main text. Each answer is a fixed 4-line action guide with auditable gates (placeholders X) and a consistent field schema for logging and comparison.
Field schema: ModeBitrateBin · ConditionHash · RecommendedMeasurePoints[] · Margin_ns · Errors_per_1k · Asymmetry_ns · ConfigHash
Answer format (fixed 4 lines)
Likely cause → Quick check → Fix → Pass criteria (threshold placeholders X).
Works on bench, fails on real harness—first margin accounting check?
Likely cause: Condition mismatch hides worst-path delay; the bench setup does not exercise the narrowest sample window.
Quick check: Freeze ConditionHash (topology/load/temp) and compare Margin_ns at RecommendedMeasurePoints[] (ECU + furthest remote + worst branch).
Fix: Re-budget using real-harness tPROP_meas and worst-case loop delay, then retune sample-point (tSEG1/tSEG2) before touching SJW.
Pass criteria: Margin_ns ≥ X AND Errors_per_1k ≤ X for Y minutes in the worst Mode/bitrate bin.
Fields: ConditionHash · RecommendedMeasurePoints[] · Margin_ns · Errors_per_1k
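The re-budget in the fix can be sketched as a minimal calculation. The formula is an illustrative simplification (round-trip propagation plus loop delay plus jitter versus the sample instant); a real budget also accounts for oscillator tolerance and resynchronization, and all numeric inputs here are hypothetical:

```python
def sample_margin_ns(bitrate_bps: float, sample_point_pct: float,
                     t_prop_meas_ns: float, loop_delay_ns: float,
                     jitter_ns: float = 0.0) -> float:
    """Lead-side margin: time between the worst-case settled edge
    (round-trip harness propagation + transceiver loop delay + jitter)
    and the sample instant within one bit."""
    bit_time_ns = 1e9 / bitrate_bps
    sample_instant_ns = bit_time_ns * sample_point_pct / 100.0
    worst_arrival_ns = 2 * t_prop_meas_ns + loop_delay_ns + jitter_ns
    return sample_instant_ns - worst_arrival_ns

# 500 kbit/s, 80 % sample point, 120 ns one-way harness delay, 230 ns loop delay
print(sample_margin_ns(500e3, 80.0, 120.0, 230.0))  # 1130.0
```

Logging this value per measurement point (ECU, furthest remote, worst branch) under the same ConditionHash is what turns Margin_ns into comparable evidence.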
Scope waveform looks “OK” but error counters climb—what measurement artifact is most common?
Likely cause: Probe/grounding and trigger definition distort the apparent crossing time; the displayed “clean” waveform is not aligned with the receiver sampling rule.
Quick check: Re-measure with consistent reference/ground and fixed TriggerRule at the same point; compare crossing-time spread (repeat captures) against Margin_ns.
Fix: Standardize measurement settings (probe type, bandwidth, ground method, trigger threshold) and record them inside ConditionHash; only then tune sample-point.
Pass criteria: Repeat captures show stable margin: Margin_ns ≥ X with Errors_per_1k ≤ X under the same TriggerRule and measurement point.
Fields: TriggerRule · ConditionHash · Margin_ns · Errors_per_1k
Errors only appear at high bus load—sample-point window or loop-delay symmetry first?
Likely cause: Load pushes edge/crossing spread and collapses the effective sampling window; asymmetry may amplify sensitivity.
Quick check: Hold ModeBitrateBin constant and sweep BusLoad_pct; log Margin_ns vs load at ECU and remote.
Fix: If margin shrinks monotonically with load, move sample-point center (tSEG1/tSEG2) to maximize window; check symmetry only after window center stops drifting with load.
Pass criteria: At the highest validated load bin: Margin_ns ≥ X AND Errors_per_1k ≤ X for Y minutes.
Fields: BusLoad_pct · ModeBitrateBin · Margin_ns
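The quick check above can be scripted. A minimal sketch (the sweep data is hypothetical) that flags a monotonic margin collapse with load, which points at the sample-point window rather than pure jitter:

```python
def margin_shrinks_with_load(sweep):
    """sweep: list of (BusLoad_pct, Margin_ns) pairs at one measure point.
    Returns True if Margin_ns shrinks monotonically as load rises,
    i.e. the window center should be retuned (tSEG1/tSEG2) first."""
    ordered = sorted(sweep)                 # sort by BusLoad_pct
    margins = [m for _, m in ordered]
    return all(a >= b for a, b in zip(margins, margins[1:]))

sweep = [(20, 420.0), (50, 360.0), (80, 310.0)]   # hypothetical captures
print(margin_shrinks_with_load(sweep))  # True: move the sample-point center
```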
Changing SJW helps sometimes, hurts other times—what does that imply about jitter vs delay?
Likely cause: The issue is not pure edge jitter; the window center is drifting (delay/prop segment), so SJW occasionally “pulls” the sample toward a worse edge.
Quick check: With fixed ConditionHash, compare Margin_ns and error rate for two SJW settings while keeping sample-point unchanged.
Fix: Re-center the sample window first (tSEG1/tSEG2), then set SJW as a bounded stabilizer; avoid using SJW to “compensate” unknown delay drift.
Pass criteria: Across the validated corners: Margin_ns ≥ X with SJW changes producing ≤ X ns margin variation; Errors_per_1k ≤ X.
Fields: SJW · tSEG1 · tSEG2 · Margin_ns
After mode switch (Classic↔FD↔XL), errors spike—what symmetry check is fastest?
Likely cause: Mode switch changes delay balance; asymmetry shifts the best sample-point and reduces window stability in the new bin.
Quick check: Use the same measure points and capture in both bins; compute Asymmetry_ns (dominant vs recessive edge timing proxy) and compare against the pre-switch baseline.
Fix: Treat each ModeBitrateBin as a separate budget; re-center sample-point per bin and enforce a ModeSwitchPass checklist before freezing.
Pass criteria: Asymmetry_ns ≤ X AND Margin_ns ≥ X in both bins, with Errors_per_1k ≤ X post-switch.
Fields: ModeBitrateBin · Asymmetry_ns · Margin_ns
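A minimal sketch of the Asymmetry_ns proxy from repeated captures (the crossing times are hypothetical; the exact edge-timing definition should itself be frozen inside ConditionHash):

```python
def asymmetry_ns(dominant_edge_ns, recessive_edge_ns):
    """Edge-timing delta proxy: |mean dominant crossing - mean recessive
    crossing| over repeated captures at the same measure point."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(dominant_edge_ns) - mean(recessive_edge_ns))

dom = [101.0, 103.0, 102.0]   # ns, hypothetical capture set (pre/post switch)
rec = [110.0, 112.0, 111.0]
print(asymmetry_ns(dom, rec))  # 9.0
```

Computing this per ModeBitrateBin and comparing against the pre-switch baseline is the fast symmetry check the answer asks for.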
Same timing settings, different harness batch fails—what to re-measure first?
Likely cause: Worst-path propagation changes across harness batch, shifting the real tPROP_meas and collapsing margin.
Quick check: Re-measure tPROP_meas at the same RecommendedMeasurePoints[] under identical ConditionHash; compare against the golden batch.
Fix: Update the budget using the new worst-path delay; adjust propagation/phase segments to re-center sample-point before changing any compensation knobs.
Pass criteria: New batch meets the same gates as golden: Margin_ns ≥ X and Errors_per_1k ≤ X across required bins.
Fields: tPROP_meas · RecommendedMeasurePoints[] · ConditionHash
Longer stubs cause sporadic faults—what’s the first propagation-segment adjustment rule?
Likely cause: Stub-induced delay spread pushes the effective worst-path propagation beyond the current propagation/phase budget.
Quick check: Measure at the branch point and the furthest remote; estimate the additional arrival spread and map it to the budget row (tPROP_meas, Margin_ns).
Fix: Increase the propagation allocation (or shift sample-point later) so the sample window center stays away from the latest-arrival edge; re-validate on the worst branch point.
Pass criteria: Worst-branch point passes: Margin_ns ≥ X and Errors_per_1k ≤ X with fixed ConditionHash.
Fields: tPROP_meas · Margin_ns · RecommendedMeasurePoints[]
High temp only: margin collapses—what field must be logged to prove drift vs topology?
Likely cause: PVT drift shifts delay and symmetry; without consistent condition logging, topology changes and drift are indistinguishable.
Quick check: Log Temp_C together with tPROP_meas, LoopDelay, and Asymmetry_ns under the same harness and load.
Fix: Re-budget for worst-temperature corner; re-center sample-point for the hot bin and validate that the cold bin remains within gates.
Pass criteria: Across temperature bins: Margin_ns ≥ X, Errors_per_1k ≤ X, and drift deltas ≤ X (placeholders).
Fields: Temp_C · tPROP_meas · Asymmetry_ns · Margin_ns
Reducing bitrate fixes errors—how to estimate how much margin you gained?
Likely cause: A narrower timing budget at higher rate is failing; reducing rate increases available segment time and relaxes window constraints.
Quick check: Measure Margin_ns in both bins using the same points and rules; compute ΔMargin = Margin_lowrate − Margin_highrate (same ConditionHash).
Fix: Use the measured ΔMargin to decide whether to retune sample-point at the higher bin (shift tSEG allocations) or to set the higher bin as out-of-scope for the topology.
Pass criteria: Higher-rate bin passes without relying on rate reduction: Margin_ns ≥ X and Errors_per_1k ≤ X in the target bin.
Fields: ModeBitrateBin · Margin_ns · ΔMargin_ns
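The ΔMargin estimate and the resulting verdict are simple arithmetic; a sketch with hypothetical values (the 250 ns gate stands in for the placeholder X):

```python
GATE_MARGIN_NS = 250.0  # hypothetical stand-in for the "X" gate of the target bin

def delta_margin_ns(margin_lowrate_ns: float, margin_highrate_ns: float) -> float:
    """ΔMargin = Margin_lowrate - Margin_highrate (same ConditionHash)."""
    return margin_lowrate_ns - margin_highrate_ns

def high_rate_verdict(margin_highrate_ns: float) -> str:
    """The higher-rate bin must pass on its own, not via rate reduction."""
    return "PASS" if margin_highrate_ns >= GATE_MARGIN_NS else "RETUNE_OR_OUT_OF_SCOPE"

print(delta_margin_ns(800.0, 350.0))  # 450.0 ns recovered by the lower rate
print(high_rate_verdict(350.0))       # PASS under the placeholder gate
```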
SIC claims “better SI” but still fails—what symmetry metric should you demand?
Likely cause: “Better SI” does not guarantee stable sampling; mode-dependent asymmetry or drift still collapses the effective window.
Quick check: Require a measurable symmetry proxy: Asymmetry_ns (edge/crossing timing delta proxy) across the required bins and corners.
Fix: Gate candidate settings by symmetry first, then re-center sample-point per bin; do not accept a solution that only passes in a single bin.
Pass criteria: Asymmetry_ns ≤ X AND Margin_ns ≥ X across all required bins; Errors_per_1k ≤ X.
Fields: Asymmetry_ns · ModeBitrateBin · Margin_ns
Passes in one ECU but fails in another—controller timing granularity mismatch or measurement point mismatch?
Likely cause: Either timing-step granularity prevents reaching the same optimal sample-point, or the two ECUs are not measured at the same physical point/definition.
Quick check: Confirm identical RecommendedMeasurePoints[] and identical TriggerRule; then compare TimingStep_ns capability and achievable sample-point range.
Fix: Align measurement point definitions first; if still failing, adjust the budget to a reachable sample-point given the controller’s step size, or choose a controller with finer granularity for the target bin.
Pass criteria: Both ECUs meet the same gates under the same ConditionHash: Margin_ns ≥ X and Errors_per_1k ≤ X, with matching ConfigHash export.
Fields: TimingStep_ns · RecommendedMeasurePoints[] · TriggerRule · ConfigHash
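Controller sample points are only reachable on the time-quantum grid, so two ECUs with different TimingStep_ns may snap the same target to different values. A sketch under that assumption (simple rounding; real controllers also constrain tSEG1/tSEG2 ranges):

```python
def nearest_achievable_sp(target_sp_pct: float, bit_time_ns: float,
                          timing_step_ns: float) -> float:
    """Snap a target sample point to the controller's TimingStep_ns grid
    (tSEG boundaries fall on time-quantum multiples)."""
    steps_per_bit = round(bit_time_ns / timing_step_ns)
    step = round(target_sp_pct / 100.0 * steps_per_bit)
    return 100.0 * step / steps_per_bit

# 2000 ns bit (500 kbit/s), 125 ns quanta (16 tq/bit), target 81.3 %
print(nearest_achievable_sp(81.3, 2000.0, 125.0))  # 81.25
```

Comparing the two ECUs' achievable values against the budgeted window shows whether granularity, rather than the measurement point, explains the mismatch.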
Fix seems stable, but later regresses—what production gate evidence was missing?
Likely cause: The solution was not frozen with evidence; later changes in condition, harness batch, or configuration break an untracked assumption.
Quick check: Verify whether EvidencePack exists: ConditionHash + ConfigHash + mode/bin summary + worst-path waveforms + error logs.
Fix: Add a production gate: require repeatable tests across corners/bins and store the evidence pack with versioning; freeze measurement rules and tuning knobs.
Pass criteria: Regression-proof gate: EvidencePack=COMPLETE AND all required bins pass with Margin_ns ≥ X, Errors_per_1k ≤ X, and logged traceability IDs.
Fields: EvidencePack · ConditionHash · ConfigHash
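A minimal sketch of the EvidencePack completeness gate and a stable ConfigHash (field names follow this page's schema; the hashing scheme itself is an assumption for illustration, not a mandated format):

```python
import hashlib
import json

REQUIRED = ["ConditionHash", "ConfigHash", "bin_summary",
            "worst_path_waveforms", "error_logs"]

def evidence_pack_status(pack: dict) -> str:
    """COMPLETE only when every required artifact is present and non-empty."""
    return "COMPLETE" if all(pack.get(k) for k in REQUIRED) else "INCOMPLETE"

def config_hash(config: dict) -> str:
    """Stable, order-independent hash of a frozen timing configuration."""
    blob = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

cfg = {"tSEG1": 13, "tSEG2": 2, "SJW": 1, "bitrate": 500_000}
pack = {"ConditionHash": "c0ffee",        # hypothetical frozen condition ID
        "ConfigHash": config_hash(cfg),
        "bin_summary": {"FD_2M": "PASS"},
        "worst_path_waveforms": ["w1.bin"],
        "error_logs": ["run1.log"]}
print(evidence_pack_status(pack))  # COMPLETE
```

Because the hash is order-independent, re-exporting the same configuration later reproduces the same ConfigHash, which is the traceability property the production gate relies on.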