Clock & Jitter Design Guide for High-Speed Serial I/O
H2-1. Positioning & Outcomes
Turn clock/jitter from “mysterious” into an engineering workflow that is budgetable, diagnosable, and acceptance-ready across high-speed serial I/O.
In scope:
- Clock quality and jitter entry points that reduce eye/margin and raise error rate.
- Practical jitter taxonomy and measurement definitions for consistent budgeting and validation.
- Decision logic: when to use retimers (re-timing/cleanup) vs redrivers (channel boost).
- Verification gates: what to measure, how to correlate, and how to define pass criteria (threshold placeholders).
Out of scope:
- Protocol-specific CTS requirements and numeric masks/templates (handled in protocol pages).
- Protocol state machines or training details (USB/PCIe/HDMI/MIPI-specific behavior).
- Deep PLL loop derivations or academic oscillator modeling (only engineering-relevant knobs are used).
- Full power-tree design; only jitter-sensitive power entry points and isolation hooks are included.
| Symptom (what shows up) | Likely domain | First check (fast) | Where to go (this page) |
|---|---|---|---|
| Eye looks “open”, but errors happen under heavy traffic | Measurement / power-induced jitter | Correlate error counters with load/temperature; check ref clock stability under load | H2-2 (taxonomy) → H2-10 (measurement) |
| Stable with short cable; fails with dock/long cable | Channel loss vs clock margin | Decide dominant impairment: loss/ISI (needs EQ/boost) vs clock/jitter (needs cleanup) | H2-2 → H2-7/8 (retimer vs redriver) |
| Link trains sometimes, fails sometimes (same hardware) | Clock stability / policy mismatch | Check reference clock quality + distribution; verify training knobs vs static presets | H2-6 (clean refs) → H2-9 (EQ/training alignment) |
| Errors appear only at hot/cold corners | Oscillator drift / jitter sensitivity | Check ref frequency accuracy/stability; observe jitter change with temperature sweep | H2-2 → H2-4 (budget) → H2-10 (validation) |
| Redriver/retimer change makes link “worse” | Wrong lever for dominant impairment | Check if the issue is clock-dominant (needs cleanup) or ISI/loss-dominant (needs boost/EQ) | H2-2 → H2-7 (retimer) / H2-8 (redriver) |
H2-2. Jitter Taxonomy That Engineers Actually Use
A practical jitter workflow needs a small, consistent vocabulary: RJ (random), DJ (deterministic), PJ (periodic), ISI (channel-induced data dependence), and TJ@BER (total jitter at a stated error probability).
- RJ spreads crossings in an unbounded statistical tail; it dominates very-low-BER targets and is sensitive to noise floors.
- DJ is bounded and repeatable; it collapses eye width in a structured way (often linked to data patterns, duty distortion, or asymmetry).
- PJ is sinusoidal/tonal modulation; it creates “beating” failure modes where some stress conditions look fine while others fail.
- ISI is channel memory (loss/reflections/crosstalk); it turns data patterns into timing shifts and can be amplified by aggressive EQ.
- TJ@BER is meaningful only when the BER point and the measurement definition are explicitly stated.
RJ (random jitter):
- Typical sources: oscillator/PLL noise floor, power noise coupling, broadband EMI.
- How to measure: integrated jitter from phase-noise (ref clock) or time-interval error statistics (data).
- Dominant signature: distribution widens with longer observation; tails matter at low BER.
- Typical symptom: “Everything looks okay” but error rate refuses to drop under stress.
- First lever: reduce noise floor (clean ref, isolate supplies, improve grounding/shielding).
DJ (deterministic jitter):
- Typical sources: duty-cycle distortion, asymmetry, data-dependent crossing shifts.
- How to measure: histogram shows bounded shoulders; decomposition tools reveal pattern linkage.
- Dominant signature: repeatable width collapse at specific patterns/conditions.
- Typical symptom: errors cluster with certain traffic/patterns rather than uniform randomness.
- First lever: fix asymmetry, reduce pattern sensitivity, verify equalization/presets alignment.
PJ (periodic jitter):
- Typical sources: switching regulators, spread-spectrum interactions, clock spurs, periodic aggressors.
- How to measure: phase-noise spurs / jitter spectrum; time-domain shows periodic wandering.
- Dominant signature: failures depend on specific stress states (load, cable, EMI environment).
- Typical symptom: “passes sometimes” across seemingly identical test runs.
- First lever: remove/relocate spurs (power filtering, clock routing isolation, spur-aware clocking plan).
ISI (channel-induced, data-dependent jitter):
- Typical sources: insertion loss, reflections/return discontinuities, crosstalk, connectors/cables.
- How to measure: eye closure correlates with channel loss/return; pattern sensitivity is strong.
- Dominant signature: “short works, long fails” and EQ changes have large effects.
- Typical symptom: improvement with proper EQ/retimer placement, degradation with over-EQ.
- First lever: fix channel + correct EQ/training; use boost/retiming based on dominant impairment.
TJ@BER (total jitter at a stated error probability):
- Meaning: a scalar summary used for budgeting only when the BER point and method are explicit.
- How to measure: define BER target, observation window, decomposition method, and instrument setup.
- Dominant signature: two “TJ” numbers can disagree if setup/definitions differ.
- Typical symptom: teams argue about “good/bad” because metric definitions are not aligned.
- First lever: lock metric definitions before tuning; then allocate/measure/close the budget.
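The TJ@BER bullets above can be made concrete with the dual-Dirac convention, TJ(BER) = DJ(δδ) + 2·Q(BER)·RJrms. Below is a minimal sketch assuming a Gaussian RJ tail and an explicitly stated BER target; the function names and example numbers are illustrative, not part of any spec.

```python
# Dual-Dirac TJ@BER sketch: TJ(BER) = DJ(dd) + 2*Q(BER)*RJ_rms.
# Q(BER) is the Gaussian quantile for the stated error probability.
from statistics import NormalDist

def q_ber(ber: float) -> float:
    """Gaussian Q-scale factor for a one-sided BER target."""
    return -NormalDist().inv_cdf(ber)

def tj_at_ber(rj_rms_ps: float, dj_dd_ps: float, ber: float = 1e-12) -> float:
    """Total jitter (ps) at the stated BER under the dual-Dirac model."""
    return dj_dd_ps + 2.0 * q_ber(ber) * rj_rms_ps

# Example: RJ = 1.2 ps rms, DJ(dd) = 8 ps, BER target 1e-12.
# 2*Q(1e-12) is roughly 14.07, so TJ comes out near 24.9 ps.
tj = tj_at_ber(1.2, 8.0, 1e-12)
```

This is also why two "TJ" numbers can disagree: change the BER point or the RJ/DJ decomposition and the same waveform yields a different scalar.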
Metric and setup lock (agree on these before comparing numbers):
- Metric definition: RMS vs p-p, TIE method, decomposition options, BER point for TJ@BER.
- Observation window: record length, histogram confidence, spur visibility for periodic effects.
- Instrument chain: timebase quality, triggering method, probe loading, bandwidth/filter settings.
- Stress controls: fixed temperature/load/cable state; correlate errors with environmental changes.
- Pass criteria placeholder: define thresholds as X/Y/N and keep them consistent across builds.
H2-3. Where Jitter Enters a High-Speed Link
Jitter entry points are ranked by controllability and observability to accelerate root-cause isolation. The workflow prioritizes quick checks (5–15 minutes) before major rework.
Ranking criteria:
- Controllability: can a small, safe change reliably shift margin/error rate?
- Observability: can a simple measurement show a consistent before/after delta?
- Propagation path: does the impairment hit the ref clock, PLL/VCO, crossing threshold, or channel memory?
- Side effects: does the “fix” introduce noise amplification, training instability, or thermal load?
| Entry (ranked) | Affects | Dominant signature | Fastest check (5–15 min) | Fastest fix | Verification gate (X/Y/N) |
|---|---|---|---|---|---|
| 1) Reference clock path (XO/TCXO/OCXO → buffer/fanout → routing) | PLL input, CDR tracking, system timing margin | “Good oscillator” yet unstable system; changes track distribution or load | Check ref quality at the consumer pin (not only at source); correlate errors with ref activity / temperature | Reduce additive jitter (buffer choice, isolation), improve routing/return, stabilize ref supply | Error rate ≤ X over Y minutes across temps; ref jitter improves by N% |
| 2) Power-induced jitter (PLL/VCO supply noise, buffer PSRR, ground bounce) | RJ/PJ floor, crossing wander under load, intermittent training | Fails mainly under heavy traffic / load; “looks okay” at light load | Correlate errors with power events; observe spur/noise coupling; compare idle vs stress jitter | Improve decoupling/partitioning; isolate PLL/buffer rails; reduce ground impedance | Under stress, jitter delta ≤ X; error bursts disappear for Y cycles |
| 3) Data-dependent jitter (ISI, reflections, EQ side effects) | Eye closure tied to patterns/length; sensitivity to EQ presets | Short works, long fails; knob changes swing outcomes strongly | A/B: short vs long path; step EQ one notch; watch margin/error response | Fix channel discontinuities; align EQ/training with policy; select boost/retiming by dominance | Margin improves by ≥ X dB/UI; errors ≤ Y over window |
| 4) Crosstalk-induced jitter (aggressor → threshold crossing wander) | Threshold timing wander, burst errors with external activity | Errors correlate with neighbor port, cable motion, or switching events | Toggle aggressor on/off; log correlation; change routing/cable posture and observe deltas | Increase spacing/shielding; improve return continuity; reduce common-mode conversion | Correlation coefficient ≤ X; burst rate ≤ Y per hour |
H2-4. Practical Jitter Budgeting
Convert specification-level jitter requirements into a system acceptance margin by enforcing a closed loop: define → allocate → measure → converge. No numeric comparison is valid without consistent definitions and setup.
Define:
- Metric definition: RMS vs p-p, TIE method, TJ@BER definition.
- Stress state: temperature, load, cable/fixture condition, traffic pattern class.
- Acceptance placeholder: BER ≤ X over Y minutes, margin ≥ N.
Allocate:
- Partition into Ref clock, TX PLL, Channel-induced, RX residual, and measurement error.
- Allocate by controllability: reserve more headroom for items that drift with manufacturing and environment.
- Guard band is mandatory: it protects against corner drift and measurement uncertainty (placeholders: X% or Y absolute).
Measure:
- Control timebase/trigger/probe loading and record settings with the data.
- Use the same observation window and stress state when comparing builds.
- Log correlation: margin/error counters vs temperature/load/cable state.
Converge, in recommended attack order:
1) Metric/setup mismatch (fix definitions before tuning hardware).
2) Ref clock & distribution (highest controllability).
3) Power-induced coupling (load/temperature sensitivity).
4) Channel/ISI/EQ (higher side effects; verify training/presets alignment).
5) Crosstalk/EMI (solve via correlation + physical mitigation).
| Budget item | Allocated | Measured | Margin | Gate (X/Y/N) | Action if fail |
|---|---|---|---|---|---|
| Ref clock @ consumer pin | X | Y | (X−Y) | ≤ N | Reduce additive jitter; isolate supply; reroute return |
| TX PLL contribution | X | Y | (X−Y) | ≤ N | Improve rail noise; verify reference coupling; validate load sensitivity |
| Channel-induced (ISI / reflections) | X | Y | (X−Y) | ≤ N | Fix discontinuities; align EQ/training; choose boost vs cleanup by dominance |
| RX residual (CDR tracking limit) | X | Y | (X−Y) | ≤ N | Insert retimer if clock-dominant; verify cleanup vs latency/power |
| Measurement error (instrument + setup) | X | Y | (X−Y) | ≤ N | Lock setup; calibrate; document timebase/trigger/probe and window |
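The budget table above can be closed numerically. A sketch under one common convention (an assumption, not a universal rule): bounded DJ-like items add linearly, unbounded RJ-like items add in RSS, and measurement error is carried as a linear guard. All item values and the 32 ps limit are placeholders in the spirit of X/Y/N.

```python
# Budget-closure sketch: DJ-like terms sum linearly, RJ-like terms sum in RSS.
import math

def close_budget(items, tj_limit_ps):
    """items: list of (name, kind, value_ps), kind in {"dj", "rj_rms", "meas"}.
    Returns (total_ps, margin_ps) at an example 1e-12 BER target."""
    dj = sum(v for _, k, v in items if k == "dj")
    rj = math.sqrt(sum(v * v for _, k, v in items if k == "rj_rms"))
    meas = sum(v for _, k, v in items if k == "meas")   # linear guard band
    total = dj + 14.07 * rj + meas                      # 14.07 ~ 2*Q(1e-12)
    return total, tj_limit_ps - total

items = [
    ("ref clock @ consumer pin", "rj_rms", 0.6),
    ("TX PLL contribution",      "rj_rms", 0.8),
    ("channel ISI/reflections",  "dj",     9.0),
    ("RX residual",              "dj",     4.0),
    ("measurement error",        "meas",   1.5),
]
total, margin = close_budget(items, tj_limit_ps=32.0)
```

A negative margin points you back at the "Action if fail" column for the largest contributor.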
H2-5. Clocking Architectures Across Protocols
This section covers clocking architecture types and their engineering tradeoffs. It intentionally avoids protocol-specific numbers and compliance details, focusing on: who owns the sampling clock, how timing is transported, and where jitter is filtered or amplified.
- Clock ownership: transmitter-owned, receiver-owned, or shared reference.
- Clock transport: embedded in data, forwarded alongside data, or supplied as an external reference.
- Sampling authority: recovered clock (CDR) sets the sampling instant vs. forwarded/reference clock defines the sampling window.
- Filtering locus: jitter shaping happens at TX PLL, along distribution, inside a retimer, or within RX CDR tracking.
| Architecture | Ref dependency | Jitter transfer path | Typical pitfalls | Best-fit scenarios | First sanity check |
|---|---|---|---|---|---|
| Embedded clock (clock in data) | Moderate: external ref helps stability, but sampling is primarily recovered at RX | TX PLL → channel memory/ISI → RX CDR tracking → sampling instant | “Good oscillator” yet unstable link; training/EQ changes swing outcomes; stress-only failures | High-speed serial links where RX must recover timing from the stream | Compare margin/error under short vs long channel; check correlation with stress state and EQ presets |
| Forwarded clock (source-synchronous) | High: sampling depends on clock/data alignment delivered to RX | Clock + data path matching → sampling window; coupling affects both edges and phase relationship | Skew drift between clock and data; return discontinuity; aggressor coupling into clock net | Short-reach links where deterministic sampling is preferred over recovery uncertainty | Verify clock↔data skew stability across temperature and load; confirm return path continuity |
| External reference + recovered clock (ref sets framework, CDR samples) | Variable: ref can be common-related or independent; CDR closes local sampling loop | Ref distribution sets long-term stability; RX CDR filters/tracks short-term components | Wrong assumption about ref relationship; under-budgeted distribution additive jitter; measurement setup mismatch | Systems that need timing coherence across endpoints while keeping RX sampling robust | Identify whether endpoints are common-related or independent; validate ref at consumer pins; check CDR tracking behavior under stress |
H2-6. Clean External References
“Clean reference” must be defined at the consumer pin, not only at the oscillator output. This section converts a clean-ref claim into reviewable engineering actions: source selection, distribution isolation, layout hooks, power hooks, and minimum viable measurement.
What “clean” means:
- Spectral cleanliness: low phase-noise and controlled spurs in the bands that matter to sampling recovery.
- Delivered cleanliness: additive jitter from buffer/mux/fanout and routing is included.
- Stress robustness: jitter does not degrade sharply with load/temperature changes.
- Frequency plan: ref frequency and multiplication path are compatible with consumers.
Source selection:
- Phase-noise focus: prioritize the offsets that dominate system margin (placeholder: band A/B/C).
- Drift: temperature drift, aging, and warm-up stability are reviewed for system repeatability.
- Spurs: spur management is treated as a first-class risk (burst errors can be spur-driven).
Distribution isolation:
- Additive jitter: distribution is budgeted as a non-zero contributor.
- Isolation: noisy consumers are separated by partitioning or buffering to prevent back-injection.
- Topology: star vs daisy vs partition is chosen by noise-domain boundaries, not convenience.
- Reset/enable behavior: distribution switching is checked for transient disturbance.
Layout hooks:
- Return continuity: clock nets avoid broken return paths and plane splits.
- Ground bounce control: keep the clock reference stable; avoid long shared-impedance runs.
- Crosstalk control: add spacing/guarding; avoid parallel runs with high-slew aggressors.
- Signaling choice: differential vs single-ended is decided by noise immunity and reference-plane dependence.
Power hooks:
- Spend where it matters: PLL/buffer rails usually dominate jitter sensitivity.
- Isolation: partition rails and filtering to block noise injection and up-conversion.
- Decoupling strategy: target impedance control and placement are reviewed (no hand-waving).
Minimum viable measurement:
- Two-point rule: measure at the source and at the consumer pin.
- Stress delta: compare idle vs stress (traffic/load/thermal) using the same setup.
- Setup lock: record bandwidth, coupling, probe/loading, trigger/timebase, and observation window.
- Gate placeholder: consumer-pin jitter ≤ X, delta under stress ≤ Y, errors ≤ N.
A “clean reference” claim is invalid unless consumer-pin measurements exist and are compared under a controlled, repeatable stress state.
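The "minimum viable measurement" above often starts from a phase-noise sweep of the reference. Here is a sketch that integrates an SSB phase-noise curve L(f) into RMS jitter (sigma_t = sqrt(2·∫10^(L/10) df) / (2π·f0)); the band edges and the flat −140 dBc/Hz example are illustrative inputs, not gates.

```python
# Integrated RMS jitter from SSB phase noise L(f) in dBc/Hz over an offset band.
import math

def rms_jitter_s(f0_hz, offsets_hz, l_dbc_hz):
    """Trapezoidal integration of the linearized phase-noise curve."""
    lin = [10 ** (l / 10.0) for l in l_dbc_hz]
    area = 0.0
    for i in range(len(offsets_hz) - 1):
        df = offsets_hz[i + 1] - offsets_hz[i]
        area += 0.5 * (lin[i] + lin[i + 1]) * df
    phi_rms = math.sqrt(2.0 * area)           # rad (SSB convention)
    return phi_rms / (2.0 * math.pi * f0_hz)  # seconds

# Illustrative flat -140 dBc/Hz floor, 10 kHz..10 MHz offsets, 100 MHz carrier:
sigma = rms_jitter_s(100e6, [1e4, 1e5, 1e6, 1e7], [-140, -140, -140, -140])
# roughly 0.71 ps rms over this band
```

Running the same integration at the source and at the consumer pin quantifies the distribution's additive contribution for the two-point rule.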
H2-7. Retimers as Jitter Cleaners
A retimer can reduce sensitivity to certain timing impairments by rebuilding a timing domain and re-sampling the data stream. It is not simply “stronger equalization”. This section defines when re-timing delivers real cleanup and when it cannot.
What a retimer does:
- Builds a new clocking domain internally (CDR/PLL behavior).
- Re-samples and regenerates edges in that domain.
- Cleanup effectiveness depends on tracking behavior, reference coupling, power noise, and input quality.
What a redriver does (for contrast):
- Shapes the frequency response or gain of the waveform.
- Does not rebuild the sampling clock domain.
- May amplify noise and crosstalk; “better looking” waveforms do not guarantee fewer errors.
What a retimer can and cannot clean:
- Slow wander: behaves like long-term phase/frequency drift; retimer behavior is tied to tracking and reference coupling.
- Fast random components: appear as short-term timing noise; a retimer can reduce sensitivity only if internal noise and power integrity do not dominate.
- Rule: “cleanup” must be verified using a consistent measurement and stress state, not assumed from device class.
Why a retimer can make things worse:
- Training / configuration mismatch: auto behavior and static presets fight, creating unstable operating points.
- Over-peaking / aggressive equalization: noise and crosstalk are boosted along with the desired signal.
- Reference coupling: the rebuilt timing domain inherits issues from an unclean or poorly delivered reference.
- Power noise / thermal drift: supply ripple and ground bounce degrade internal clocking behavior under load and temperature.
| Problem signature | Dominant domain | Retimer helps? | Why | First check | Pass criteria (X/Y/N) |
|---|---|---|---|---|---|
| Short channel OK, long channel fails | Channel loss / ISI dominates | Often YES | Re-timing reduces sensitivity to accumulated timing uncertainty across the long channel | Measure margin/error vs channel length; compare before/after insertion under identical stress | Margin ≥ Y, error ≤ N, jitter delta ≤ X |
| Eye improves but errors do not | Reference / power / measurement mismatch | Uncertain | Physical waveform gains may not translate to sampling robustness if timing domain or counters are mis-accounted | Verify consumer-pin reference quality; confirm identical stress state and counter definition | Before/after use same setup; errors ≤ N over T |
| Only fails under load or high temperature | Power integrity / thermal coupling | Often NO (alone) | Retimer cannot compensate for supply noise and thermal drift that dominates internal timing behavior | Correlate errors with rail ripple/temperature; repeat with controlled power and cooling | Error ≤ N at worst-case stress; delta ≤ X |
| Gets worse after inserting retimer | Over-peaking / training mismatch / ref coupling / power noise | Investigate | Internal timing rebuild can amplify weaknesses if operating point is unstable or rails/reference are dirty | Reduce EQ aggressiveness; validate ref at consumer pins; audit rails and grounding under stress | After tuning: margin ≥ Y, errors ≤ N |
Before/after evidence at three layers:
- Physical: jitter trend, eye opening (use consistent bandwidth/window and identical probing).
- Link: margin trend (same stress, same presets; no counter redefinition).
- System: error counters and service stability (same traffic pattern and duration).
- Pass gate placeholders: jitter ≤ X, margin ≥ Y, errors ≤ N over T.
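The decision rows above can be condensed into a first-pass triage function. This is a sketch with hypothetical thresholds (the UI gaps play the role of the X placeholders); it encodes the page's rule of boost for loss-dominance, cleanup for jitter-dominance, and root-cause work when failures track power or temperature.

```python
# First-pass lever selection from trend observations (H2-7/H2-8 logic).
def pick_lever(margin_short_ui, margin_long_ui, stress_jitter_delta_ui,
               power_correlated, loss_gap_ui=0.15, jitter_gap_ui=0.05):
    """Return "fix-power-first", "redriver", "retimer", or "recheck-measurement"."""
    loss_dominant = (margin_short_ui - margin_long_ui) > loss_gap_ui
    jitter_dominant = stress_jitter_delta_ui > jitter_gap_ui
    if power_correlated:
        return "fix-power-first"       # retimer alone rarely helps here
    if loss_dominant and not jitter_dominant:
        return "redriver"              # loss-limited channel: boost candidate
    if jitter_dominant:
        return "retimer"               # timing-noise-limited: rebuild the domain
    return "recheck-measurement"       # neither dominates: audit setup first
```

As the tables insist, whatever the function suggests must still be verified with before/after margin and error counters under identical stress.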
H2-8. Redrivers: Channel Boost Without Cleanup
A redriver primarily provides channel boost (gain/CTLE-like shaping) and does not rebuild the timing domain. It can improve loss-limited channels but may worsen jitter-limited systems by amplifying noise and crosstalk.
Key risks of boosting:
- Noise amplification: high-frequency boost can raise the noise floor at the sampling threshold.
- Crosstalk sensitivity: steeper edges and higher gain can magnify aggressor coupling.
- Metric mismatch risk: an eye that “looks larger” may not translate into fewer errors under stress.
| Step | Input | Observation | Decision | Acceptance gate (X/Y/N) |
|---|---|---|---|---|
| 1. Classify symptom | “Short OK, long fails”; “extra connector breaks stability”; “stress-only errors” | Symptoms hint whether the channel is loss-limited or timing-noise-limited | Proceed to measurement | Baseline errors ≤ N over T |
| 2. Measure trends | Loss trend; jitter trend; margin trend | Loss-dominant channels often benefit from boost; jitter-dominant systems often do not | If loss-dominant → candidate; if jitter-dominant → high risk | Margin ≥ Y at stress; jitter delta ≤ X |
| 3. Place intentionally | Near source / mid / near sink | Placement changes what is boosted (pre-channel vs mid-channel vs end correction) | Choose by dominant impairment location | Errors ≤ N after placement change |
| 4. Accept only if system metrics improve | Before/after A/B under the same stress | Eye alone is insufficient; use error and margin consistency | Accept only with consistent gains | Margin ≥ Y, errors ≤ N, stable over T |
Placement tradeoffs:
- Near connector: sees the worst waveform and can compensate loss, but is exposed to harsh noise and transient environments.
- Near source: acts like pre-boost; may not fix impairments accumulated later in the channel.
- Near sink: improves end-of-channel amplitude, but can amplify local noise coupling near the receiver.
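Steps 3 and 4 of the workflow (place intentionally, accept only on system metrics) can be expressed as a small selection rule. Gate values and trial data below are placeholders in the X/Y/N sense.

```python
# Placement A/B sketch: accept a redriver position only when both the margin
# and the error gate pass; among passing placements, take the best margin.
def best_placement(trials, margin_gate_ui=0.30, error_gate=10):
    """trials: {placement_name: (margin_ui, errors_over_T)}; name or None."""
    passing = {name: m for name, (m, e) in trials.items()
               if m >= margin_gate_ui and e <= error_gate}
    if not passing:
        return None   # no placement meets the gates: revisit dominance analysis
    return max(passing, key=passing.get)

trials = {
    "near-source": (0.28, 40),   # pre-boost: margin gate fails
    "mid-channel": (0.34, 6),    # passes both gates
    "near-sink":   (0.31, 25),   # margin OK, but error gate fails
}
choice = best_placement(trials)
```

A `None` result is the table's "high risk" branch: the channel may be jitter-dominant, and boosting is the wrong lever.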
H2-9. EQ & Training vs Static Settings
Cross-protocol rule: automatic training and static presets must optimize the same objective. If policy and overrides disagree, the system can converge to a point that looks aggressive but is operationally fragile.
How the conflict arises:
- Training loop: measure margin/quality, adjust, and converge.
- Static overrides: lock or bias parameters toward a preferred operating point.
- Mismatch: training is forced to converge within the wrong constraints, causing non-convergence or a brittle stable point.
Failure signature 1: unstable reference or power.
- Signature: drift, retries, and “works once” behavior.
- First check: reference delivery at consumer points and the supply-noise trend.
- Fix direction: stabilize reference coupling and rails before tuning aggressiveness.
Failure signature 2: channel outside the trainable envelope.
- Signature: short channel works, long channel cannot converge.
- First check: margin trend vs channel length/topology.
- Fix direction: move back into a trainable envelope (loss budget) before knob tuning.
Failure signature 3: over-aggressive presets.
- Signature: converges, but to a poor point; “stronger” presets look worse.
- First check: compare mild/default/aggressive trends under identical stress.
- Fix direction: reduce aggressiveness to regain repeatable convergence.
Failure signature 4: misaligned objectives across layers.
- Signature: “passes” yet becomes fragile under temperature/load/EMI shifts.
- First check: stress sensitivity of margin and error counters.
- Fix direction: align objective functions and acceptance gates across layers.
Risks of over-aggressive settings:
- Noise and crosstalk amplification: stronger EQ can raise sensitivity at threshold crossings.
- Edge-of-stability operating point: temperature and supply variation can push the system out of the narrow stable region.
- Poor repeatability: the same environment can converge differently when overrides constrain the search space.
| Layer | Knob group | Lock / allow | Alignment rule | Evidence required |
|---|---|---|---|---|
| FW | Training policy (objective, retry, stop condition) | Define + document | Same objective function as acceptance gate; avoid “optimize eye only” | Convergence repeatability across power/temperature states |
| FW | Static preset / override limits | Prefer bounded | Do not force aggressiveness beyond trainable region | Margin and error trend improve under worst-case stress |
| HW | EQ mode / strength limits | Constrain | Keep noise and crosstalk amplification within acceptable bounds | Jitter delta ≤ X, errors ≤ N |
| HW | Reference delivery / power mode | Must satisfy | Stable reference and rails are prerequisites for any tuning strategy | Stability at worst-case load/temperature; margin ≥ Y |
Acceptance gates:
- Margin trend: margin ≥ Y with stable spread across time.
- Error trend: errors ≤ N over T at stress.
- Stress sensitivity: delta ≤ X when temperature/load/noise conditions change.
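The gate list above can be checked mechanically across repeated trainings. A sketch assuming N re-train runs logged as converged preset indices and margin proxies; both spread limits are placeholders (the X in "preset variance ≤ X").

```python
# Repeatability sketch for training vs static presets: re-train N times and
# gate on converged-preset spread and margin-proxy spread.
from statistics import pstdev

def training_repeatable(converged_presets, margins_ui,
                        preset_spread_max=1.0, margin_spread_max_ui=0.03):
    """Both spreads must stay inside their gates for the policy to pass."""
    preset_spread = max(converged_presets) - min(converged_presets)
    margin_spread = pstdev(margins_ui)
    return preset_spread <= preset_spread_max and margin_spread <= margin_spread_max_ui

# Five power-cycles converge to nearby presets with a stable margin: passes.
ok = training_repeatable([7, 7, 8, 7, 7], [0.33, 0.34, 0.33, 0.35, 0.34])
```

Wide preset scatter with "passing" margins is exactly the brittle-stable-point signature described above: the search space is being constrained, not optimized.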
H2-10. Measurement & Validation
Measurement quality is part of the system. Budget closure, retiming gains, and training alignment require repeatable and layer-consistent definitions from lab bring-up through production validation.
Physical layer:
- Phase-noise proxy and integrated jitter trend.
- Reference delivery quality at the consumer pins.
Link layer:
- Eye-opening trend and margin proxy under the same stress state.
- Before/after comparisons with identical setup and windows.
System layer:
- BER proxy and error-counter stability across time.
- Service-level stability under temperature/load/noise changes.
Setup pitfalls:
- Probe/fixture loading: measurement hardware can change the channel and bias results.
- Trigger and timebase quality: trigger jitter and reference instability leak into measured timing.
- Bandwidth and window selection: inconsistent bandwidth/window makes results non-comparable.
- Stress-state mismatch: comparing data under different load/temperature invalidates conclusions.
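One way to enforce the same-setup rule behind these pitfalls is to store a setup fingerprint with every run and refuse cross-setup comparisons. A sketch; the field names and the probe model string are illustrative, not a required schema.

```python
# Setup-lock sketch: hash the recorded setup and compare fingerprints before
# allowing any before/after comparison between two runs.
import hashlib
import json

def setup_fingerprint(setup: dict) -> str:
    """Stable short hash of the setup record (bandwidth, trigger, probe, window)."""
    blob = json.dumps(setup, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def comparable(run_a: dict, run_b: dict) -> bool:
    return setup_fingerprint(run_a["setup"]) == setup_fingerprint(run_b["setup"])

setup = {"bw_ghz": 13, "trigger": "edge", "probe": "P7313", "window_s": 60}
run_a = {"setup": setup, "tie_ps": 3.1}
run_b = {"setup": {**setup, "bw_ghz": 8}, "tie_ps": 2.4}  # bandwidth changed!
```

Here `run_b` looks "better" (lower TIE), but the fingerprint mismatch flags that the bandwidth change, not the hardware, may explain the delta.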
Correlation methods:
- One-knob A/B: change only one variable (channel length, load, cooling, or supply filtering) and record trends.
- Stress sweep: temperature/load/noise sweeps reveal dominant failure drivers.
- Three-metric sync: log margin proxy, error counters, and a reference-quality proxy on the same timeline.
- Repeatability check: confirm the same setup converges to the same result across repeated runs.
| Item | Setup requirement | Common pitfall | Sanity check | Pass criteria (X/Y/N) |
|---|---|---|---|---|
| Time window | Fixed duration T for all runs | Comparing different windows | Repeat run-to-run; verify spread is stable | Spread ≤ X |
| Probe/fixture | Known loading; consistent attachment point | Loading changes the channel | Compare with/without fixture; check trend consistency | Trend preserved; delta ≤ X |
| Trigger/timebase | Same reference source and trigger method across runs | Trigger jitter contaminates timing results | Re-run with a different trigger path; compare stability | Measurement stability ≥ Y |
| Bandwidth/window | Fixed bandwidth and analysis window definitions | Results become incomparable | A/B with one controlled change; keep definitions fixed | Differences explainable; delta ≤ X |
| Pass gate | Fixed acceptance gates for production | “Looks good” without system metrics | Validate with margin + counters under stress | jitter ≤ X, margin ≥ Y, errors ≤ N |
Correlation log template:
| Timestamp | Stress knob | Ref-quality proxy | Margin proxy | Error counters | Notes |
|---|---|---|---|---|---|
| t0 | Temperature | proxy A | proxy B | count C | state |
| t1 | Load | proxy A | proxy B | count C | state |
H2-11. Engineering Checklist (Design → Bring-up → Production)
A) Design review gates (schematic / layout / PI / return path)
1. Clock-tree map. How: draw the full clock tree (source → buffer/mux → fanout → consumers); mark every connector/via-field crossing and every enable pin.
Pass criteria: the tree is the single source of truth; every add-point has a measurable proxy (X).
2. Fanout topology. How: prefer a clean trunk with short local branches; avoid long daisy-chains of buffers unless skew is explicitly closed.
Pass criteria: output-to-output skew ≤ X; enable/disable is glitch-free under reset sequencing.
3. Clock routing isolation. How: keep clock traces short; isolate them from high-swing aggressors; keep the reference plane continuous; do not share tight bundles with high-speed data.
Pass criteria: clock coupling risk is reviewed; the worst-case aggressor scenario has margin ≥ Y.
4. Power partitioning. How: separate sensitive rails; keep return loops compact; ensure no shared high-di/dt path from load switches/DC-DC into PLL/buffer rails.
Pass criteria: a “quiet island” is defined; the worst-case load step does not violate noise proxy ≤ X.
5. Decoupling placement. How: place high-frequency caps at pins with direct via-to-plane connections; avoid long dog-bones; keep the return via adjacent.
Pass criteria: the smallest loop is confirmed by layout review; decap loop ESL budget ≤ X.
6. Reference-plane continuity. How: avoid splits under high-speed lanes and clock paths; ensure stitching vias across plane transitions; keep the connector reference consistent.
Pass criteria: no critical lane crosses a plane gap; plane-transition stitching meets rule ≥ Y.
7. Shield/chassis bonding. How: define 360° bond points and chassis coupling, and specify where a “pigtail” is forbidden; keep bond inductance controlled.
Pass criteria: bond locations and methods are explicit; EMI stress does not reduce margin below X.
8. Thermal plan. How: map thermal hotspots for the clock tree plus retimers/redrivers; ensure copper/airflow paths; include a derating policy for worst-case ambient.
Pass criteria: the temperature-sweep stress delta keeps margin change Δ ≤ X.
B) Bring-up sequence (Ref → Link → Training → Stress)
1. Reference verification. How: measure the ref at the source and at key consumers; log the ref proxy vs load/temperature.
Pass criteria: ref proxy ≥ Y at all consumers; drift Δ ≤ X over T.
2. Golden-path comparison. How: compare a short “golden” path vs the target path; record the margin proxy and error counters.
Pass criteria: target-path margin proxy ≥ Y, or an improvement lever is identified.
3. Training repeatability. How: power-cycle and re-train N times; verify convergence stability and preset consistency.
Pass criteria: success rate ≥ Y%; margin variance ≤ X; no oscillatory preset behavior.
4. Stress sweep. How: sweep temperature/load; log {ref proxy, margin proxy, error counters} on one timeline.
Pass criteria: errors ≤ N over T; correlation points to one dominant domain (X).
C) Production screens (presence / proxies / fast correlation)
1. Reference presence screen. How: check the ref proxy at key consumers via test points or a built-in monitor.
Pass criteria: all consumers detect the ref; proxy within the X band.
2. Retrain screen. How: run a minimal retrain loop N times; record success and the margin-proxy trend.
Pass criteria: success rate ≥ Y%; no monotonic degradation across repeats.
3. Thermal screen. How: short hot/cold soak or localized heating; re-check the margin proxy and counters.
Pass criteria: margin change Δ ≤ X; errors ≤ N in T.
H2-12. Applications & IC Selection (Logic + Bundles)
Clock sources & distribution:
- Spec focus: phase-noise window, additive jitter, skew, enable behavior, supply sensitivity.
- Validation gate: consumer-side ref proxy ≥ Y; drift Δ ≤ X over T; skew ≤ X.
- Example parts: SiTime SiT5356 (Super-TCXO), TI LMK1C1104 (LVCMOS clock buffer), Renesas 5PB1108 (1:8 clock buffer), ADI LTC6957-1 (ultralow additive noise clock buffer).
Retimers:
- When: long/variable channels where eye closure is not solved by boost alone; need deterministic recovery and better BER margin.
- Spec focus: retiming behavior, latency, power/thermal headroom, reference-coupling policy.
- Validation gate: before/after margin ↑, errors ↓, repeatability ↑ (placeholders X/Y/N/T).
- Example parts: TI DS160PT801 (PCIe 4.0 protocol-aware retimer), TI DS125DF410 (quad-channel retimer with CDR/DFE), TI DS250DF410 (25-Gbps 4-channel retimer), TI DS280DF810 (28-Gbps 8-channel retimer).
Redrivers:
- When: loss-dominated channels with adequate reference quality; need more eye opening but not full retiming.
- Spec focus: CTLE/EQ range, noise-amplification risk, placement sensitivity, channel symmetry.
- Validation gate: eye/margin proxy ↑ while error counters do not worsen under stress (X/Y/N/T).
- Example parts: TI TUSB1046-DCI (10-Gbps linear redriver switch for USB-C Alt-Mode), TI DS160PR810 (16-Gbps 8-channel linear redriver), TI DS80PCI810 (8-Gbps 8-channel redriver), TI SN75LVPE3101 (PCIe 3.0 x1 redriver), TI TDP1204 (HDMI 2.1 redriver), TI TDP142 (DisplayPort 1.4 redriver).
- Jitter-cleaning example (DP dual-mode path): Parade PS8461 / PS8469 (DP mux/demux with internal retimer for jitter cleaning).
Scenario: long or lossy channel with collapsing margin.
Best lever: retimer if margin collapses nonlinearly; redriver if loss-dominant and the ref is clean.
Category: Retimer / Redriver.
Validation gate: margin ↑ by X; errors ≤ N over T; repeatability ≥ Y%.
Scenario: intermittent link training on identical hardware.
Best lever: clean the ref first; then a retimer if the channel is beyond the envelope; avoid “strong” static overrides without gates.
Category: Clock tree / Retimer.
Validation gate: convergence rate ≥ Y%; preset variance ≤ X; no regression under temperature.
Scenario: EMI- or crosstalk-driven error bursts.
Best lever: fix the return-path/shield strategy; isolate clock rails; only then consider redriver/retimer changes.
Category: Layout/return + Clock tree.
Validation gate: margin drop under EMI ≤ X; error bursts ≤ N over T.
Bundle: clean clock tree. Template: SiT5356 (ref source) → Renesas 5PB1108 (fanout) → TI LMK1C1104 (local buffer per island).
Acceptance gate: consumer ref proxy ≥ Y; skew ≤ X; port-to-port margin spread ≤ X.
Bundle: retimed link. Template: clean ref chain (e.g., SiT5356 + low-noise distribution) plus a retimer stage: DS160PT801 (PCIe protocol-aware) or DS125DF410/DS250DF410 (generic retimers where applicable).
Acceptance gate: before/after margin ↑ by X; errors ≤ N over T; retrain success ≥ Y%.
Bundle: boosted channel. Template: ref consistency gates plus one well-placed redriver near the dominant discontinuity: DS160PR810, DS80PCI810, SN75LVPE3101, TUSB1046-DCI, TDP1204, TDP142 (pick by interface family).
Acceptance gate: margin proxy ↑ by X without noise/crosstalk regression; errors ≤ N under stress.