Across an isolation barrier, system stability is decided by timing budgets—matching delay/skew, controlling jitter, and verifying drift under real events—not by isolation voltage alone.
Build a measurable contract (X/Y/N), lock match groups, and validate across PVT + dv/dt so sync never “slips” in the field.
Isolation is not only about withstand voltage. System success depends on delay, skew, jitter, and drift being budgeted and verified across the barrier.
Where Sync & Timing Breaks First
SYSREF/CLK paths: alignment depends on the relative timing budget across isolation, not on a single “nice” clock spec.
Data vs control: data paths are margin-driven; control paths are state-driven (safe defaults under power events).
Multi-channel (2–8ch): channel matching must be guaranteed as a group, not assumed from “typical” values.
Mixed direction: forward and reverse directions rarely share identical timing; asymmetry must be explicit in the budget.
Boundary Contract
Covers
Definitions and measurement-ready meaning of tPD, tSK, jitter, and drift across isolation.
How to match delays across channels, including multi-part vs single-package grouping risks.
How to budget skew/jitter for SYSREF/CLK paths (budget and acceptance only).
How layout partitioning and common-mode coupling create “apparent timing” failures.
Verification hooks: pass/fail criteria templates and reproducible measurement assumptions.
Does NOT cover
JESD204 protocol mechanics, link training states, or subclass internals (only budget/acceptance for SYSREF/CLK paths).
General PLL/clock-tree theory (only isolation-related budget terms and verification).
Internal isolator architectures (capacitive/magnetic implementation details belong to device-class pages).
Links to (sibling pages)
Low-Jitter Clock Isolator: device-level jitter performance and selection specifics.
Multi-Channel / Mixed Direction: channel-count/topology deep dive and device-class nuances.
Layout & Grounding (Isolation): full partitioning rules beyond timing-only implications.
What This Page Delivers
A reusable timing budget template for isolation paths (fixed offset + mismatch + noise + drift).
Practical grouping rules for matched multi-channel timing.
Measurement-ready acceptance criteria placeholders (X/Y/N) to prevent lab-to-lab disputes.
Figure 1 — A system-level timing map. Budget items are attached to the path (delay, skew, jitter, drift) to prevent “spec-only” decisions.
H2-02. Timing Primitives Across an Isolation Barrier
Consistent engineering outcomes require consistent definitions. The primitives below define what to budget, what to measure, and what failure modes appear when each term is misunderstood.
Skew (tSK) — Channel-to-Channel Mismatch
Definition: Relative delay difference between channels that are intended to align (within one package or across multiple devices).
How to measure: Drive the same stimulus into all channels; measure output edge timestamps with the same threshold rule; report max-min over Y samples.
System consequence: Multi-channel sampling alignment breaks even if each channel’s absolute delay “looks OK”.
Common trap: Assuming two single/dual-channel parts match like one multi-channel part, or ignoring direction asymmetry in mixed-direction groups.
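The max-min measurement rule above can be sketched as a small script. The channel names and picosecond values below are illustrative placeholders, not data from any device:

```python
# Sketch: report group skew as max-min of per-channel mean edge time.
# Every channel must see the same stimulus and the same threshold rule;
# each list holds Y timestamped output edges (values are placeholders).

def group_skew_ps(edge_times_ps: dict[str, list[float]]) -> float:
    """Skew = max-min of mean edge time across channels in one match group."""
    means = {ch: sum(t) / len(t) for ch, t in edge_times_ps.items()}
    return max(means.values()) - min(means.values())

edges = {
    "ch0": [100.1, 100.0, 99.9],
    "ch1": [103.0, 103.2, 102.8],
    "ch2": [101.0, 101.1, 100.9],
}
print(round(group_skew_ps(edges), 1))  # ch1 mean minus ch0 mean: 3.0 ps
```

Reporting the max-min of per-channel means over a declared sample count keeps lab-to-lab results comparable; a single-shot capture does not.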
Jitter — Random vs Deterministic (budget impact only)
Definition: Edge timing variation around a mean. Random jitter accumulates statistically; deterministic jitter is bounded and often coupling-driven.
How to measure: Declare window length and statistic (RMS or p-p); isolate measurement noise; avoid comparing RMS and p-p without a stated conversion factor.
System consequence: Jitter reduces timing margin and can masquerade as “skew” when thresholds bounce under common-mode injection.
Common trap: Using a single p-p capture as a budget value, or failing to lock the same sampling/trigger method across labs.
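The RMS-vs-p-p distinction can be sketched as follows; the edge timestamps are illustrative, and a real budget must also declare window length and bandwidth:

```python
# Sketch: compute RMS and p-p jitter from one channel's edge timestamps.
# RMS and p-p are different statistics: never compare them directly
# without a stated conversion factor. Values are placeholders.
import statistics

def jitter_stats_ps(edges_ps: list[float]) -> tuple[float, float]:
    """Return (rms, p_p) jitter around the mean edge time, in ps."""
    mean = statistics.fmean(edges_ps)
    dev = [t - mean for t in edges_ps]
    rms = (sum(d * d for d in dev) / len(dev)) ** 0.5
    p_p = max(edges_ps) - min(edges_ps)
    return rms, p_p

rms, p_p = jitter_stats_ps([100.0, 100.4, 99.6, 100.2, 99.8])
print(f"RMS={rms:.3f} ps, p-p={p_p:.3f} ps")
```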
Deterministic Latency — Fixed Offset That Does Not Average Out
Definition: A repeatable, fixed timing offset (or discrete steps) introduced by architecture, retiming placement, or power/sequence states.
How to measure: Measure absolute delay against a stable reference before/after topology or power-sequence changes; log state conditions (VDD, UVLO events).
System consequence: Alignment can shift “by a chunk” without visible jitter growth; lock can fail after a reset/thermal/power event.
Common trap: Folding deterministic shifts into a generic “tPD” number and losing traceability when systems behave differently after recovery events.
Measurement Assumptions (placeholders to lock later chapters)
Threshold rule: X% of swing (declare X).
Sample size: Y edges per channel (declare Y).
Window: Z seconds per condition (declare Z).
Environment: T = {T1/T2/T3} °C and VDD = {V1/V2} (declare points).
Pass/fail format: “max-min ≤ X” and “RMS ≤ Y” with stated statistic.
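The assumptions above can be locked as one record that travels with every dataset. This is a minimal sketch; the field names are assumptions, not a fixed schema:

```python
# Sketch: the measurement contract as an immutable record so every dataset
# carries its own assumptions. Field names/values are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class MeasurementContract:
    threshold_pct: float   # X% of swing
    samples_per_ch: int    # Y edges per channel
    window_s: float        # Z seconds per condition
    temps_c: tuple         # declared temperature points
    vdd_v: tuple           # declared supply points
    statistic: str         # "max-min" or "RMS": declare one, never mix

contract = MeasurementContract(
    threshold_pct=50.0, samples_per_ch=1000, window_s=1.0,
    temps_c=(-40, 25, 105), vdd_v=(3.0, 3.6), statistic="max-min",
)
print(contract.statistic)
```

Freezing the record makes silent mid-campaign changes to the contract a hard error instead of a lab-to-lab dispute.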
Figure 2 — Timing differences are not interchangeable. Fixed delay (tPD), channel mismatch (tSK), drift (Δt), and jitter must be budgeted with declared measurement assumptions.
H2-03. Reading Datasheets Without Getting Fooled (Specs → System Meaning)
Datasheet numbers are not system guarantees. The goal is to convert each spec into a budget term with explicit conditions and margins, so timing behavior stays consistent across PVT and production.
Spec Transferability Check (test condition → system equivalence)
VDD / UVLO behavior: If VDD changes during startup, recovery, or load steps, treat timing change as drift (not “fixed”).
Load / Cload / edge shaping: If system loading slows edges or introduces threshold bounce, add margin to measurement error and deterministic jitter risk.
Temperature range: If the system operates beyond “typ room”, timing spread must be captured as drift across declared T points.
Data rate / toggle pattern: If real traffic has different edge density than datasheet stimulus, bounded jitter can be underestimated—require an explicit pattern note.
typ vs max vs distribution (production consistency lens)
typ: center estimate only; never used as pass/fail for alignment-critical paths.
max/min: use for worst-case constraints, but verify the scope (across PVT? only one condition?).
distribution: if not provided, treat as unknown spread—add margin or require sample characterization to protect yield.
Engineering rule: if a number is missing its conditions, it is missing its meaning.
When Skew Is More Dangerous Than Jitter
Multi-channel alignment: skew is a deterministic mismatch that does not average out; it directly breaks relative sampling alignment.
Part-to-part grouping: skew risk rises when channels are split across devices (package-level matching is lost).
Clock quality focus: jitter dominates only after skew and drift are bounded to the alignment requirement.
Spec-to-Budget Mapping (fields are placeholders)
Each datasheet spec must land in a budget column: tPD, tSK, jitter, drift, or measurement error.
Spec name: Propagation delay (tPLH / tPHL)
Datasheet test condition: VDD = X V, Cload = Y pF, Temp = Z °C, pattern = N/A
System equivalent term: tPD fixed + drift
Extra margin reason: Edge polarity and PVT drift must be bounded for alignment windows.

Spec name: Jitter (statistic as declared)
Datasheet test condition: Bandwidth = X, statistic = RMS, window = Y, pattern = Z
System equivalent term: jitter + measurement error
Extra margin reason: Statistic mismatch (RMS vs p-p) and windowing differences inflate disputes.

Spec name: Timing drift over temperature
Datasheet test condition: Temp sweep points = {T1/T2/T3}, VDD fixed at X
System equivalent term: drift
Extra margin reason: Field conditions include simultaneous VDD and temperature variation.

Spec name: UVLO / startup timing behavior
Datasheet test condition: Reset sequence and ramp rate = X; thresholds = Y
System equivalent term: deterministic latency + drift
Extra margin reason: Recovery state can shift alignment by a fixed chunk after events.
Figure 3 — A conversion view: datasheet specs become budget inputs only after conditions, statistics, and margins are made explicit.
H2-04. How to Build a Skew Budget (Step-by-Step Method)
A skew budget is a reproducible process: define the alignment group, model the path by layers, extract fixed/drift/random terms, add measurement error, then compare the total against a declared pass criterion.
Step-by-Step Workflow (reusable)
Define the alignment group: which channels must match (A/B/…); note direction mix and whether channels span multiple devices.
Model the path by layers: driver, isolator, routing, and receiver each contribute their own terms.
Extract terms per layer: keep fixed offset, drift, and random contributions separate; never fold them into one number.
Add measurement error: instrument and method uncertainty enters the budget as its own row.
Compare against the declared criterion:
Total_skew ≤ X ps under declared {T/V} and measurement assumptions
Figure 4 — A budget is a sum of named contributors. Lock drift and fixed mismatch first, add statistical jitter last, and include measurement error in the acceptance claim.
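The summation rule can be sketched numerically. All contributor values and the limit below are placeholders; random terms are combined as root-sum-square under the assumption that they are independent, and measurement error is treated as a random term here purely for illustration:

```python
# Sketch: total a skew budget. Fixed/drift terms add linearly (worst case);
# independent random terms combine as root-sum-square (RSS).
# All picosecond values are placeholders, not device data.
import math

fixed_ps = {"tPD mismatch": 40.0, "routing mismatch": 15.0}
drift_ps = {"dT drift": 20.0, "dV drift": 10.0}
random_ps = {"jitter (RMS-derived)": 12.0, "measurement error": 5.0}

deterministic = sum(fixed_ps.values()) + sum(drift_ps.values())
random_rss = math.sqrt(sum(v * v for v in random_ps.values()))
total = deterministic + random_rss

limit_ps = 120.0  # declared pass criterion "Total_skew <= X ps"
print(f"total={total:.1f} ps, pass={total <= limit_ps}")  # total=98.0 ps, pass=True
```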
H2-05. JESD204 SYSREF/CLK Paths (Budget Only, No Protocol)
This section focuses on timing targets and budgets for SYSREF/CLK across an isolation barrier.
Protocol mechanics are intentionally excluded.
Alignment Target (timing-only)
Goal: keep SYSREF and CLK relationships predictable across the barrier so downstream alignment windows remain valid.
Budget focus: bound tPD match, tSK, jitter, and drift under declared measurement assumptions.
Critical nuance: deterministic offsets after power/recovery events must be recorded as a fixed budget contributor.
Topology Choice: Low-Jitter Isolation vs Digital Isolation + Retime
Prefer low-jitter clock isolation when the clock is the sampling reference and jitter is a dominant budget term.
Prefer digital isolation + retimer/CDC when deterministic alignment and grouping control dominate, and a controlled retime point exists on the secondary side.
Always declare the statistic (RMS/p-p), window, and trigger/threshold rules; otherwise budgets are not comparable across labs.
Common Failure Signatures (timing accounting first)
Symptom: Intermittent loss of lock / intermittent alignment failures
First accounting check: Jitter statistic and capture window consistency; measurement method normalization
Budget term: jitter + measurement error

Symptom: Works at room temp, fails after temperature drift
First accounting check: Drift term exists as a separate contributor; T/V points cover the operating range
Budget term: drift

Symptom: Reboot/recovery causes a consistent phase/timing shift
First accounting check: State-dependent deterministic latency recorded and bounded
Budget term: tPD fixed (state-dependent)

Symptom: Multi-lane alignment breaks while single-lane still looks “OK”
First accounting check: Group definition and channel-to-channel skew measurement (max-min) consistency
Budget term: tSK + fixed mismatch
SYSREF/CLK Path Checklist (isolation-only)
Path contract: define which segments share the same barrier and belong to the same match group.
tPD match: record edge polarity (rise/fall) and direction; prevent mixing tPLH and tPHL in one claim.
tSK control: define group membership; avoid part-to-part grouping without explicit spread margin.
Jitter accounting: declare RMS/p-p, window length, and bandwidth; include threshold bounce as “equivalent jitter”.
Drift accounting: separate temperature/voltage/aging drift; verify at declared points.
Recovery behavior: bound deterministic offset after UVLO/reset; log state conditions for reproducibility.
Figure 5 — Two budget-only topologies. Choose by dominant risk: jitter-driven (Topology A) vs alignment/grouping and retime-point control (Topology B).
H2-06. Multi-Channel & Mixed-Direction Timing (2–8ch)
Multi-channel timing is a group integrity problem. Channel count, direction mix, and supply strategy change skew, drift, and measurement repeatability.
What Changes When Scaling to 2–8 Channels
Single package vs multi-device stitching: part-to-part spread and thermal gradients add skew and drift beyond datasheet “typ”.
Mixed direction: forward and reverse paths can have different tPD distributions; alignment must be defined per match group.
Shared vs independent supplies: shared rails improve correlation; independent rails improve isolation but can worsen edge drift and asymmetry.
Match-Group Rules (timing contract)
Define groups explicitly: which channels must align (Group A/B/…); do not assume all channels share one skew limit.
Split by direction: mixed-direction channels require separate tPD/tSK accounting.
Pin the reference path: SYSREF/CLK-critical channels should not be mixed with unrelated control lanes without a stated skew budget.
Architecture Decision Matrix (no part numbers)
Use these cards to select an architecture by risk, routing, testability, and cost pressure. Scores are qualitative (Low/Med/High).
2ch · Uni-direction · Same barrier
Matching risk: Low
Routing difficulty: Low
Test difficulty: Low
Cost pressure: Low
4ch · Uni-direction · Same barrier
Matching risk: Med
Routing difficulty: Med
Test difficulty: Med
Cost pressure: Med
8ch · Uni-direction · Same barrier
Matching risk: Med
Routing difficulty: High
Test difficulty: Med
Cost pressure: Med
4–8ch · Mixed-direction · Same barrier
Matching risk: High
Routing difficulty: High
Test difficulty: High
Cost pressure: Med
4–8ch · Split across devices
Matching risk: High
Routing difficulty: Med
Test difficulty: High
Cost pressure: Low/Med
Independent supplies (per side)
Matching risk: Med/High
Routing difficulty: Med
Test difficulty: Med/High
Cost pressure: Med
Verification Hooks (group-based)
Stimulus: drive the same edge stimulus across all channels in a group (same threshold rule).
Metric: report skew as max-min over Y samples within a declared window.
Conditions: verify at declared T/V points; include recovery events if field behavior depends on reset/UVLO.
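These hooks can be sketched as a loop over the declared conditions, keeping the worst observed max-min per group. The `measure_skew_ps` stub is a hypothetical stand-in for the real fixture call, and its numbers are fabricated for the demo:

```python
# Sketch: verify one match group across declared T/V points, keeping the
# worst-case skew. `measure_skew_ps` is a placeholder for the fixture call;
# here it fakes a mild T/V dependence so the loop has something to find.

def measure_skew_ps(temp_c: float, vdd_v: float) -> float:
    return 20.0 + 0.1 * abs(temp_c - 25) + 5.0 * abs(vdd_v - 3.3)

conditions = [(t, v) for t in (-40, 25, 105) for v in (3.0, 3.6)]
worst = max(measure_skew_ps(t, v) for t, v in conditions)
print(f"worst-case skew = {worst:.1f} ps over {len(conditions)} conditions")
```

The pass claim should always cite the worst condition, not the room-temperature point.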
Figure 6 — A group-based alignment map. Mixed direction requires separate match groups and explicit skew accounting.
H2-07. Clock vs Data vs Control: Where to Retime / CDC / Re-Clock
Retiming placement decides whether isolation uncertainty stays “analog” or becomes a digital, testable contract.
Use the decision flow to optimize a primary goal while keeping budgets comparable across labs.
Signal Class → Dominant Timing Sensitivity
Clock (sampling reference): dominated by jitter and coupling. Re-clock can add fixed offset that must be bounded.
Data (multi-lane / group-aligned): dominated by skew and deterministic mismatch; a defined retime point can “reset” uncertainty in the sink domain.
Control (enable/reset/IRQ/state): dominated by tPD and glitch-free monotonicity; pulses require explicit CDC semantics to avoid loss or duplication.
Decision Tree (3 Steps)
Step 1 sets the optimization goal. Step 2 selects the signal semantics. Step 3 chooses placement.
Each choice must map to explicit budget terms.
Step 1 — Primary Goal
Choose one: min jitter, min skew, min latency, max robustness.
Step 2 — Signal Semantics
Classify: clock / data / control.
Identify level vs pulse semantics for control paths.
Step 3 — Placement
Choose: before, after, or both sides.
Declare fixed offset and group rules.
Placement Options → Budget Meaning
Retime before the barrier
Stabilizes input edges and reduces upstream variability; isolation still contributes its own jitter/drift.
Best when upstream conditioning is the dominant uncertainty source.
Retime after the barrier
Terminates cross-barrier uncertainty in the sink domain; converts variability into a bounded fixed offset.
Best for group alignment and reproducible verification.
Retime on both sides
Maximizes robustness across UVLO/reset and high dv/dt events; easiest to turn failures into digital diagnostics.
Trades for higher latency and stricter state definition.
“Make it digital” strategies
Re-clock to bound uncertainty; CDC for event semantics (sync/handshake/FIFO); vote for critical controls to tolerate sporadic coupling.
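A minimal sketch of the voting idea for a critical control bit, assuming three redundant samples of the same level (sampling and latching details are omitted):

```python
# Sketch: 2-of-3 majority vote on a critical control level, tolerating one
# sporadic coupling-induced flip. Real designs latch the three samples in
# the sink clock domain first; that detail is omitted here.

def vote3(a: bool, b: bool, c: bool) -> bool:
    """Majority of three redundant samples of the same control level."""
    return (a and b) or (a and c) or (b and c)

# One corrupted sample does not change the decided level:
print(vote3(True, True, False))   # True
print(vote3(False, True, False))  # False
```

Voting suits level semantics; pulse semantics still need explicit CDC (handshake or FIFO) so events are neither lost nor duplicated.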
Figure 7 — Retiming placement options. Select by dominant risk term and declare the resulting fixed offset and grouping rules.
H2-08. Layout & Partitioning Effects on Timing (The Hidden Couplings)
Timing margins can collapse even when datasheet numbers look safe. The common cause is hidden coupling that turns return-path mistakes and common-mode injection into equivalent jitter.
Hidden Couplings That Inflate “Equivalent Jitter”
Return-path across the gap: forces current to detour, distorts edges, and makes trigger thresholds unstable.
Barrier capacitance: injects common-mode transients into the secondary reference, appearing as threshold bounce.
High dv/dt environments: switching nodes couple into the barrier and receiver input networks, creating “fake jitter” under events.
Layout Guardrails (Isolation Timing Focus, ≤10)
Hard partition primary/secondary copper and reference planes: prevent cross-gap reference dependencies.
Do not allow high-speed return paths to cross the barrier gap: cross-gap return equals edge distortion mapped into timing noise.
Keep barrier-underlay copper minimal: reduce capacitive coupling and common-mode injection.
Push high dv/dt nodes away from isolator and timing-critical lines: avoid event-only failures.
Stabilize receiver thresholds (clean local supply + decoupling): threshold bounce inflates equivalent jitter.
Keep match-group routes symmetric: equal reference and environment reduces fixed mismatch and drift gradients.
Close each side’s current loop locally: avoid large loops that radiate and pick up transients.
If a Y-cap is used, treat it as a budgeted element: define leakage constraints and measure the effect on timing noise.
Standardize measurement method: trigger level, probe grounding, and capture window must be declared.
Verify under events: include switching transients, UVLO/reset, and thermal conditions in the timing acceptance plan.
Figure 8 — Return-path errors and barrier coupling can look like “jitter” by moving receiver thresholds. Treat it as an explicit budget term.
H2-09. Drift, State Offsets, and Long-Term Stability
“Short-term OK” can still fail after heat, supply events, or long runtime. Drift must be budgeted as ΔtPD, ΔtSK, and state offsets, then verified with a declared PVT and event sequence.
Drift Terms (Budget Language)
Continuous drift: track ΔtPD(T) and ΔtSK(T) across temperature points (placeholders only).
State-dependent offset: power-cycle / UVLO / reset can introduce a repeatable fixed offset between states.
Long-term drift: lifetime and stress can widen distributions; focus on tail growth and match-group consistency, not mechanism detail.
Acceptance placeholders
ΔtSK(T) ≤ X ps
Offset_state ≤ Y ps
tSK_max ≤ Z ps over N samples
jitter_RMS ≤ J ps over window W
Key rule
State offsets must be recorded as a separate term. Do not mix state-dependent shifts into jitter statistics.
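The key rule can be sketched directly: subtract each state's own mean before pooling edges into a jitter statistic, and report the state-to-state shift separately. The picosecond values are illustrative:

```python
# Sketch: keep Offset_state out of jitter statistics. Edges captured in two
# power states are centered per state before pooling; the state offset is
# reported as its own budget term. Values are placeholders.
import statistics

state_a = [100.0, 100.2, 99.8]   # edges before a power-cycle
state_b = [130.1, 129.9, 130.0]  # edges after recovery (shifted by a chunk)

offset_state = statistics.fmean(state_b) - statistics.fmean(state_a)

# Center each state on its own mean, THEN pool for the jitter statistic.
pooled = [t - statistics.fmean(state_a) for t in state_a] + \
         [t - statistics.fmean(state_b) for t in state_b]
rms = (sum(d * d for d in pooled) / len(pooled)) ** 0.5

print(f"Offset_state={offset_state:.1f} ps, jitter_RMS={rms:.3f} ps")
```

Pooling the raw edges without per-state centering would report an RMS near 15 ps in this example, which is the state offset masquerading as jitter.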
Figure 9 — Drift trends only. Use placeholders for axes and declare Δ terms and state-dependent offsets as separate budget items.
H2-10. Verification & Production: How to Measure and Lock Pass/Fail
A stable pass/fail outcome requires a locked measurement contract: reference path definition, trigger strategy, window, statistics, and sample count. Production must reproduce the same contract with automation and traceable logs.
Measurement Contract (Lock Before Debating Numbers)
Paths: reference path (non-isolated baseline) and isolated path (DUT through barrier).
Trigger & correlation: same-source triggering or correlated timing; declare threshold and edge polarity (rise/fall).
Window & samples: Window = W, Samples = N (placeholders).
Statistics: declare one of RMS / p-p / max-min; do not mix state offsets into jitter stats.
Tool Selection (When to Use What)
Scope / DSO
Use for edge integrity, event-only faults, and coarse tPD/skew checks. Control probe grounding and trigger levels.
Time-interval / correlated timing
Use for repeatable tSK_max, Δ terms, and Offset_state. Correlation reduces instrument noise dominance.
Jitter / phase-noise oriented tools
Use for clock-path jitter budgeting. Always declare bandwidth and the statistic window.
Production fixtures
Automate stimulus + edge detection + pass/fail judgment. Store minimal logs to reproduce lab outcomes.
Measurement Checklist (≤10)
Declare rise/fall definitions and the trigger threshold used.
Use a defined reference path and keep it unchanged across runs.
Use same-source triggering or correlated timing when comparing paths.
Declare window length and whether event segments are included.
Declare statistic type (RMS / p-p / max-min) and keep it consistent.
Set and record sample count N; avoid mixing runs with different N.
Record temperature and VDD points for every dataset (placeholders allowed).
Separate Offset_state from jitter; store it as its own field.
Standardize probe grounding and fixture delay handling (declare yes/no).
Repeat at multiple PVT points and after event sequences to confirm stability.
Pass Criteria Template (Placeholders)
tSK_max ≤ X ps over N samples @ T = Z°C, V = Vnom
ΔtSK(T) ≤ Y ps from T_low to T_high
Offset_state ≤ W ps across sequence S
jitter_RMS ≤ J ps over window W with BW = B
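The template can be sketched as data plus a check, so pass/fail is computed rather than debated. All limits and measured values below are placeholders, standing in for the template's X/Y/W/J:

```python
# Sketch: evaluate the pass-criteria template mechanically.
# Every limit and measured value is a placeholder, in picoseconds.

criteria = {
    "tSK_max": 50.0,
    "dTSK_T": 20.0,
    "Offset_state": 30.0,
    "jitter_RMS": 5.0,
}
measured = {
    "tSK_max": 42.0,
    "dTSK_T": 18.0,
    "Offset_state": 25.0,
    "jitter_RMS": 3.1,
}

failures = [k for k, lim in criteria.items() if measured[k] > lim]
print("PASS" if not failures else f"FAIL: {failures}")
```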
Production Lockdown (Minimal Trace Fields)
Stimulus & pass/fail judgment
Use a fixed stimulus recipe and an automated threshold check that matches the lab contract.
Black-box fields
Store: T, VDD, lot, rev, UVLO/OT/reset flags, window, N, summary.
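A minimal sketch of those trace fields as one log record; the field names and values are assumptions for illustration, not a fixed schema:

```python
# Sketch: one production trace record carrying the minimal fields needed to
# reproduce a lab result. All names/values are illustrative placeholders.
import json

record = {
    "temp_c": 25.0, "vdd_v": 3.3,
    "lot": "LOT-PLACEHOLDER", "rev": "A0",
    "flags": {"uvlo": False, "ot": False, "reset": False},
    "window_s": 1.0, "n_samples": 1000,
    "summary": {"tSK_max_ps": 42.0, "jitter_rms_ps": 3.1, "pass": True},
}
line = json.dumps(record, sort_keys=True)  # one line per unit, append-only log
print(line[:40] + "...")
```

One record per unit, written under the same contract as the lab, is enough to settle most field-return disputes without re-measurement.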
Figure 10 — Use a declared reference path and correlated timing so pass/fail stays consistent across labs and production fixtures.
Power option: integrated isolated power ISOW7721 when isolated-side rail stability is a dominant skew risk.
Recipe B
2–8 channel alignment (match group first)
Goal: constrain channel-to-channel skew by construction (same package, same supply conditions, same routing class).
Representative parts: multi-ch isolator ISO7741 / ADuM1401 / Si864x → retime/CDC on one chosen side only (architectural rule).
Power option: ISOW7741 (integrated DC-DC class) to reduce supply-induced edge drift on the isolated side.
Recipe C
Drive/control timing loop (dv/dt stress aware)
Goal: prevent dv/dt injection from turning into threshold bounce (apparent jitter) at the receiver.
Representative parts: gate driver UCC21520 + isolated modulator AMC1306 or AD7401A + clean isolated bias via SN6505A (or an integrated power isolator option).
Recipe D
Low-power isolated node (wake + stable timing)
Goal: avoid startup/UVLO transients creating one-time skew events that break sync after wake.
Representative parts: robust low-power isolator family ISO734x (class) + integrated power ISOW7721 when BOM and drift risks must be minimized.
H2-13. FAQs (Sync & Timing Across Isolation)
Each answer uses a fixed 4-line, audit-friendly structure:
Likely cause / Quick check / Fix / Pass criteria (placeholders X/Y/N).
FAQ 01 — Datasheet skew is OK, but system alignment still fails—first suspect what definition mismatch?
Likely cause
Skew is being compared under different definitions (edge, polarity, threshold, window, or “max” vs “RMS”).
Quick check
Write the measurement contract: tSK metric, trigger threshold, sample window Y, sample count N, and reference-path definition.
Fix
Standardize one skew metric and one windowing rule across bench, chamber, and production; log it with every dataset.
Pass criteria
tSK_max ≤ X ps over N samples within window Y, using the same trigger/threshold and reference path.
FAQ 02 — SYSREF passes on bench, fails in chamber—what drift term is usually missing?
Likely cause
Temperature/supply drift and state-dependent offset (Offset_state) are missing from the skew budget.
Quick check
Run a PVT sweep and an event sequence (power-cycle/UVLO/restart) and record ΔtSK(T) and Offset_state.
Fix
Add separate budget rows for ΔtSK and Offset_state; verify after thermal soak and after each event step.
Pass criteria
ΔtSK(T) ≤ X ps across Y temperature corners, and Offset_state ≤ X ps across N event cycles.
FAQ 03 — Using one 4-ch isolator is stable; two 2-ch parts aren’t—what matching assumption broke?
Likely cause
Match group consistency broke (package-to-package delay distribution, supply asymmetry, or routing class mismatch).
Quick check
Measure cross-device channel skew: compare (chA from IC1) vs (chB from IC2) under the same stimulus and supply corners.
Fix
Keep timing-critical channels inside one matched multi-channel device or add a retiming boundary that resets alignment on one chosen side.
Pass criteria
tSK_group_max ≤ X ps across all channels and across devices over N samples under Y PVT corners.
FAQ 04 — Clock isolator jitter looks great, but ADC SNR drops—first check what coupling path?
Likely cause
Common-mode injection or return-path coupling shifts receiver thresholds or pollutes ADC reference/clock domain (apparent jitter becomes noise).
Quick check
Correlate SNR drop with dv/dt events and with isolated-side supply ripple; compare “clock-only” vs “clock+switching” conditions.
Fix
Tighten partition/return paths, reduce barrier coupling where possible, and stabilize isolated-side supply/ground referencing near the ADC/clock receiver.
Pass criteria
SNR degradation ≤ X dB under Y switching stress, while jitter_RMS ≤ X and tSK_max ≤ X over N samples (declared window).
FAQ 05 — Skew meets spec, but intermittent slip occurs—windowing/trigger or real drift?
Likely cause
Measurement window/trigger is masking rare tail events, or drift is event-driven (startup/UVLO/thermal soak) rather than continuous.
Quick check
Repeat with (1) longer window Y, (2) larger N, and (3) explicit event replay steps; compare tail vs mean.
Fix
Lock trigger strategy and windowing; add drift/event terms to the budget and verify pass/fail under the same event schedule.
Pass criteria
tSK_max ≤ X ps over N samples in window Y, and zero slip events across N event cycles under declared corners.
FAQ 06 — Forward and reverse channels won’t stay aligned—what asymmetry was missed?
Likely cause
Direction-dependent propagation (rise/fall asymmetry, tPLH/tPHL differences, or different internal paths for forward vs reverse channels).
Quick check
Measure both polarities and both edges for each direction; compare tPD_rise vs tPD_fall and forward vs reverse.
Fix
Align directions within the same match group where required, or add a retime/CDC boundary to remove direction-dependent offsets from the alignment domain.
Pass criteria
|tPD_forward − tPD_reverse| ≤ X ps and |tPD_rise − tPD_fall| ≤ X ps over N samples under Y corners.
FAQ 07 — After adding a Y-cap for EMI, timing margin shrinks—why?
Likely cause
The Y-cap changes common-mode current return, increasing threshold bounce or edge distortion on the receiving side (apparent jitter/skew inflation).
Quick check
A/B test with Y-cap population options and measure tSK tails and receiver threshold stability under the same dv/dt stress.
Fix
Re-balance return paths and placement; reduce injected CM current into sensitive nodes; re-validate timing budget with the final EMI network.
Pass criteria
tSK_max ≤ X ps and jitter_RMS ≤ X over N samples under Y EMI configuration, with no new event-only outliers.