Phase Noise & Jitter: From Offset PN to RMS Jitter and SNR Budget

This page turns phase-noise plots and “RMS jitter” numbers into a single, comparable engineering contract: a defined integration window, a stated spur policy, and a repeatable conversion to time jitter for system budgets. It also shows how to sanity-check measurements and translate jitter into converter SNR limits, so decisions are driven by budgets instead of misleading single-point specs.

Definition & Scope: What Phase Noise and Jitter Mean in Real Designs

Phase noise (frequency domain) and jitter (time domain) describe the same underlying phenomenon: random and deterministic phase/time fluctuations that degrade timing quality. A design becomes comparable and actionable only after a defined integration window and a spur policy are stated.

What to optimize (in one sentence)

The engineering target is RMS time jitter (σt) over a specified window [fL, fH], validated at the correct probe point and mapped to a system budget (e.g., converter SNR or interface margin).

Page ownership boundary (to prevent scope creep)
This page owns
  • How to read an offset phase-noise curve and identify noise regions vs spurs.
  • A repeatable conversion contract: PN → integrated jitter → σt (with declared window).
  • How σt impacts timing budgets (e.g., converter SNR-limited performance).
  • Measurement reporting fields and sanity checks that keep results comparable.
This page does not expand into
  • PLL / clock-cleaner loop design and stability tuning (handled in the synthesis/cleaning subpages).
  • Clock-tree topology, routing, termination, and distribution architecture (handled in distribution/layout subpages).
  • Interface compliance specifics (e.g., JESD204/PCIe/SyncE masks and rules) beyond referencing budgets.
  • Part-number recommendations; only selection logic belongs here.
Three high-impact confusions (and how to prevent wrong comparisons)
Spot PN vs integrated jitter

A single offset point cannot rank clock quality. The only comparable value is integrated jitter (σt) over a declared window. If the window is missing, the result is not comparable.

Random vs deterministic components

Spurs, periodic components, and SSC are deterministic. Mixing deterministic energy into a single RMS number can hide the true failure mechanism. Classify first, then decide include/exclude/mask.

Window-free “RMS jitter”

“RMS jitter” without [fL, fH] is a label, not a spec. Large disagreements across tools are often caused by different default windows, spur handling, and bandwidth settings.

Reporting contract (required fields for any jitter claim)
  • Metric: integrated PN or RMS jitter (σt) / TIE.
  • Integration range: [fL, fH].
  • Carrier / rate: output frequency or edge rate context.
  • Spur policy: include, exclude, or mask.
  • Method & settings: analyzer type, BW/RBW, averaging/time.
  • Probe point: where the clock is measured (source vs endpoint).

Pass criteria example: σt < X ps (your budget) over [fL, fH] with stated spur policy, measured at the endpoint.
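The reporting contract above can be enforced in tooling rather than by convention. The sketch below is a minimal illustration, not a standard API: the field and function names (`JitterReport`, `comparable`) are hypothetical, and the comparability rule simply encodes "same window, same spur policy, same probe point, same method."

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JitterReport:
    """Minimal jitter reporting contract (hypothetical field names)."""
    metric: str        # "sigma_t" / "integrated PN" / "TIE"
    f_lo_hz: float     # integration lower bound fL
    f_hi_hz: float     # integration upper bound fH
    f0_hz: float       # carrier / output frequency
    spur_policy: str   # "include" / "exclude" / "mask"
    method: str        # analyzer type + key settings
    probe_point: str   # "source" / "endpoint" / ...
    sigma_t_s: float   # result, seconds RMS

def comparable(a: JitterReport, b: JitterReport) -> bool:
    """Two reports may be ranked only when every contract field matches
    (f0 may differ: it belongs to the device under test, not the contract)."""
    return ((a.f_lo_hz, a.f_hi_hz, a.spur_policy, a.method, a.probe_point)
            == (b.f_lo_hz, b.f_hi_hz, b.spur_policy, b.method, b.probe_point))

# Example: same contract, different DUTs -> comparable
r1 = JitterReport("sigma_t", 12e3, 20e6, 100e6, "exclude",
                  "PN analyzer", "endpoint", 180e-15)
r2 = JitterReport("sigma_t", 12e3, 20e6, 98.304e6, "exclude",
                  "PN analyzer", "endpoint", 210e-15)
print(comparable(r1, r2))
```

Gating merge requests or lab reports on `comparable(...)` makes a missing-window claim a hard failure instead of a review argument.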

Diagram: Frequency domain ↔ Time domain bridge (PN → σt → budget)
Concept: an offset phase-noise curve L(f) in dBc/Hz is integrated over a declared window [fL, fH] under a stated spur policy, converted to RMS time jitter σt, and mapped to a budget (SNR / margin) verified at the endpoint.

Practical rule: any claim about “jitter” must state [fL, fH] and spur policy; otherwise comparisons across datasheets, labs, and vendors are invalid.

Jitter Taxonomy: Random vs Deterministic, RMS vs Peak-to-Peak, Jitter vs Wander

Clear taxonomy prevents measurement disputes and wrong root-cause conclusions. The goal is not to memorize definitions, but to classify timing errors first and then choose an appropriate reporting method and pass criterion.

RMS (σ) vs peak-to-peak (p-p)
  • RMS is stable and comparable when the integration window and method are declared.
  • p-p depends on observation time, sample count, and rare tail events; it is not comparable without stating the capture conditions.
  • Use p-p only when the test explicitly defines observation time (e.g., margin testing with a fixed capture window).

Decision rule: if observation time is not part of the spec, prefer RMS with declared [fL, fH].

Random vs deterministic timing components
Random (noise-floor driven)
  • Dominant in broadband PN regions and high-offset noise floors.
  • Best summarized as σt over a declared window.
  • Pass criterion example: σt_random < X ps (your budget).
Deterministic (spurs / periodic / SSC)
  • Shows as discrete lines or structured skirts in the spectrum.
  • Must be explicitly handled: include, exclude, or mask-based integration.
  • Pass criterion example: spur amplitude and location stay within a defined mask.
Jitter vs wander (boundary statement)

Jitter is timing variation evaluated over a specified bandwidth/window. Wander is long-term, low-frequency drift that is better treated as tracking/holdover behavior rather than edge-to-edge timing noise. Mixing wander into a jitter number changes both the budget and the measurement method.

Practical check: if the dominant variation appears only when observation time increases significantly, classify it as wander-like behavior and report it separately.

Communication template (copy as a team standard)
Metric: RMS jitter (σt) / integrated PN / TIE
Integration range: [fL, fH]
Carrier / rate: output frequency and context
Spur policy: include / exclude / mask
Method: PN analyzer / spectrum app / TIE
Settings: BW/RBW, averaging/time, termination/coupling, probe point

Pass criteria example: σt < X ps (your budget) over [fL, fH], spur handling declared, measured at the same node across comparisons.

Diagram: Jitter classification tree (what to report and what not to mix)
Classification tree: jitter splits into random and deterministic components (spurs, SSC); metrics split into RMS (σ) and peak-to-peak (p-p, observation-time dependent); wander sits apart as a long-term, low-offset category. Classify first, then choose window, spur policy, and pass criteria.

Engineering rule: deterministic components (spurs/SSC) should not be silently averaged away; report them explicitly or apply a declared mask strategy.

Reading an Offset Phase Noise Curve: What Matters and What Misleads

An offset phase-noise curve is useful only when it drives a comparable conclusion. The curve must be read as regions and features (close-in slope, knee, far-out floor, and discrete spurs) under stated conditions. Ranking clocks by a single offset point is a common mistake; the comparable outcome is integrated jitter over a declared window.

Five observation points (read the curve as regions, not as a single number)
Close-in 1/f region (low offset)

A steep close-in slope indicates low-offset dominance. If the integration lower bound fL is pushed downward, integrated jitter can increase sharply. Quick check: raise fL and confirm whether σt drops substantially.

Far-out noise floor (high offset)

A flat far-out floor often drives the high-offset contribution. Instrument floor and bandwidth settings can mask true device performance. Quick check: change averaging/correlation settings; if the floor shifts, the measurement is instrument-limited.

Knee / slope change (mechanism transition)

A clear knee suggests a transition between dominant mechanisms. Integrated jitter becomes window-sensitive when [fL, fH] spans the knee. Quick check: segment the window around the knee and compare contributions.

Discrete spurs (deterministic components)

Spurs are deterministic and should be handled explicitly (include/exclude/mask). Likely sources include reference leakage, fractional artifacts, supply coupling, or digital interference. Quick check: notch a dominant spur and observe the σt delta, then report spur location/amplitude separately.

Datasheet conditions (make comparisons legal)

Output frequency, output standard, loading/termination, supply conditions, and measurement method can change the curve materially. A curve without conditions is not a comparable spec. Quick check: record the full condition set before ranking devices or vendors.

Comparison rule (must be stated before ranking)

Do not rank clocks by a single offset PN point. Use a declared contract: [fL, fH], spur policy, carrier/output frequency, method/settings, and probe point. The curve is a map that guides window selection and spur handling; the comparable output is integrated jitter (σt).

Diagram: Annotated offset PN curve (regions, knee, and spurs)
Annotated curve: close-in 1/f region, knee, far-out floor, and a discrete spur, plus a datasheet-conditions box (f0/output, load/termination, supply, method). Compare by σt, not by a single offset point.

Practical reading: identify which region dominates the intended integration window; treat spurs as explicit deterministic entries rather than silently folding them into a single RMS number.

PN → RMS Jitter: Integration Window, Units, and a Repeatable Conversion Recipe

A phase-noise plot becomes an engineering spec only after it is converted into RMS time jitter (σt) under a declared contract: carrier/output frequency, integration bounds [fL, fH], and spur handling. Different default windows can change σt by orders of magnitude, even for the same device.

Conversion recipe (SOP for comparable σt)
  1. Declare the contract: define the comparison purpose (device ranking, budget, or acceptance), then freeze window and spur policy for the entire comparison set.
  2. Set carrier/output frequency (f0): state the measured clock frequency and the probe point (source vs endpoint).
  3. Choose integration bounds [fL, fH]: use a default window for the use-case, then adjust only with stated rationale.
  4. Decide spur handling: include, exclude, notch, or apply a mask. Do not mix policies within one comparison.
  5. Integrate L(f) and convert to σt: integrate within [fL, fH] to obtain phase variance, then map to time jitter using f0. Report σt with the contract fields attached.

Pass criteria template: σt < X ps (your budget) over [fL, fH], spur policy stated, f0 stated, measured at the same node.
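The five-step recipe can be made concrete with a small script. This is a sketch under stated assumptions, not a vendor algorithm: the PN profile numbers are placeholders, L(f) is interpolated linearly in dB versus log-frequency, and the double-sideband relation S_φ(f) = 2·L(f) is assumed when converting to phase variance.

```python
import numpy as np

def pn_to_rms_jitter(offsets_hz, L_dbc_hz, f0_hz, f_lo, f_hi):
    """Integrate SSB phase noise L(f) over [f_lo, f_hi] and convert
    to RMS time jitter. Trapezoidal rule on a dense log grid."""
    grid = np.logspace(np.log10(f_lo), np.log10(f_hi), 2000)
    L_i = np.interp(np.log10(grid), np.log10(offsets_hz), L_dbc_hz)
    s_phi = 2.0 * 10.0 ** (L_i / 10.0)             # assume S_phi = 2*L(f), rad^2/Hz
    var_phi = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(grid))  # rad^2
    return np.sqrt(var_phi) / (2.0 * np.pi * f0_hz)  # seconds RMS

# Step 1-5 in miniature: frozen window, stated f0, spurs excluded (none modeled)
offsets = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 4e7])   # Hz (placeholder)
L_dbc   = np.array([-90, -120, -140, -150, -155, -158, -158])  # dBc/Hz (placeholder)
sigma_t = pn_to_rms_jitter(offsets, L_dbc, f0_hz=100e6, f_lo=12e3, f_hi=20e6)
print(f"sigma_t = {sigma_t*1e15:.0f} fs RMS over [12 kHz, 20 MHz]")
```

Re-running with a different `f_lo` or `f_hi` demonstrates directly why σt is meaningless without the window attached.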

Why σt changes dramatically with integration range
Lower bound (fL) controls low-offset contribution

Lower offsets often contain close-in 1/f mechanisms. Extending the window downward increases the integrated area and can inflate σt. Adjust fL only when the use-case truly cares about long-timescale variation, and always document the change.

Upper bound (fH) exposes far-out floor and edge-related effects

Higher offsets include far-out noise floors and bandwidth-limited edges. Raising fH can increase σt if the floor is high or the measurement is instrument-limited. Keep fH consistent across comparisons and confirm the measurement floor is below the device floor.

Required report fields (a jitter number without these is not comparable)
  • f0 / output rate and probe point (source vs endpoint).
  • Integration bounds [fL, fH].
  • Spur policy: include/exclude/notch/mask.
  • Result: σt (RMS time jitter) with units.
  • Method & settings: instrument type, BW/RBW, averaging/time.

Consistency rule: comparisons are valid only when all fields are identical across candidates, except for the device under test.

Minimal executable statement (example format)

RMS jitter (σt) integrated over [fL, fH], spurs excluded, at f0, measured at the endpoint with declared instrument BW/RBW and averaging.

Use this format in reviews to prevent hidden window/policy mismatches.

Diagram: Integration window overlay + contract fields (fL, fH, spurs, f0)
Overlay: the phase-noise curve is shaded between fL and fH (with a marked spur), next to a contract card listing fL, fH, spur policy, f0, and method. No contract → no comparison.

Practical rule: changing [fL, fH] or spur policy changes the meaning of “RMS jitter”. Freeze the contract first, then compute and compare.

Converter SNR Budgeting: How σt Limits SNR (and When It Doesn’t)

Sampling-clock jitter limits converter SNR through timing uncertainty at the sampling instant. The limiting effect strengthens with higher input frequency and higher target dynamic range. A correct budget starts from the SNR target and the highest relevant input frequency, then back-solves an allowable RMS time jitter (σt) under a declared window and spur policy.

Practical takeaway (use in reviews)

For a given fIN and SNR target, there is a maximum allowed σt; exceeding it sets an SNR ceiling regardless of how clean the amplitude path is.

Engineering relationship (no derivation; define terms and limits)

SNR(jitter-limited) ≈ −20·log10( 2π · fIN · σt ) [dB]

  • fIN: input signal frequency (or the highest critical tone/edge-equivalent frequency).
  • σt: RMS time jitter at the sampling clock node, with declared [fL, fH] and spur policy.
  • This is an upper bound set by timing uncertainty; total SNR can be lower due to other noise and distortion terms.

Comparability rule: σt must be computed from the same window and spur policy before using this relationship for ranking or budgeting.
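The relationship above can be used in both directions: forward (σt → SNR ceiling) and backward (SNR target → σt budget). A minimal sketch, using the illustrative numbers 70 dB and fIN = 100 MHz (not from any standard):

```python
import math

def snr_jitter_limited_db(f_in_hz, sigma_t_s):
    """Upper bound on converter SNR set by sampling-clock jitter."""
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * sigma_t_s)

def sigma_t_budget_s(f_in_hz, snr_target_db):
    """Back-solve the maximum allowed RMS jitter for an SNR target."""
    return 10.0 ** (-snr_target_db / 20.0) / (2.0 * math.pi * f_in_hz)

# Example: 70 dB SNR target at fIN = 100 MHz (illustrative)
budget = sigma_t_budget_s(100e6, 70.0)
print(f"sigma_t budget = {budget*1e15:.0f} fs RMS")   # ~503 fs
```

Dividing the back-solved budget by a guardband factor before allocation keeps the "X/your margin" step from the workflow explicit in the tooling.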

Budget workflow: SNR target → σt budget → allocation frame
Step 1: Back-solve an allowable σt (budget)

Choose the SNR target and the highest relevant fIN (or worst-case tone). Compute σt_budget from the relationship above, then apply a guardband (X/your margin).

Pass criteria template: σt_meas < σt_budget / (1 + X/your margin)

Step 2: Allocate σt across blocks (framework only)
  • Source (reference oscillator / synthesizer output)
  • Cleaner / conditioning block (if present)
  • Fanout / distribution block (if present)
  • Board / coupling contribution (layout, supply, interference)
  • Measurement margin and repeatability allowance

Default combining rule (uncorrelated): σt_total ≈ sqrt(σ1² + σ2² + …). Correlated paths must be declared separately in reviews.
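The default combining rule is a one-liner, shown here with hypothetical block values; correlated contributors must not be fed into it without being declared and handled separately, as the text states.

```python
import math

def combine_rss(sigmas_s):
    """Root-sum-square combination of uncorrelated jitter contributors."""
    return math.sqrt(sum(s * s for s in sigmas_s))

# Hypothetical allocation across blocks (values in seconds RMS)
blocks = {"source": 150e-15, "cleaner": 80e-15,
          "fanout": 60e-15, "board": 50e-15}
total = combine_rss(blocks.values())
print(f"sigma_t_total = {total*1e15:.0f} fs RMS")   # ~187 fs
```

Note the RSS property that the largest contributor dominates: halving the 50 fs board term barely moves the total, while halving the 150 fs source term does.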

When jitter is not the primary limiter (avoid the wrong fix)
  1. Compute σt_budget for the SNR target and worst-case fIN.
  2. Measure or estimate σt_meas at the sampling node using the same window and spur policy.
  3. If σt_meas is comfortably below budget (e.g., < 1/X of budget), investigate other dominant terms first.
Common dominant alternatives (do not expand here)
  • Input-referred noise (front-end + bandwidth) dominates the noise floor
  • Front-end distortion limits SFDR/THD before timing does
  • Reference/supply noise couples into the conversion result
  • Quantization or digital processing noise dominates the achievable SNR

Decision rule: treat jitter fixes as high leverage only when σt_meas is close to the computed σt_budget under a consistent contract.

Diagram: SNR vs fIN for different σt budgets (illustrative budget lines)
Illustrative plot: three budget lines (σt = A, B, C ps) against an SNR target show that higher fIN demands tighter σt; the attached budget fields are fIN, SNR target, σt budget, and [fL, fH].

This diagram is illustrative: the purpose is to show the directionality (higher fIN requires tighter σt) and the required budget fields.

Choosing the Jitter Window: Recommended Bounds for Common Clock Use-Cases

The same clock can look “good” or “bad” depending on the integration range and spur policy. A usable window guide must provide a default starting point per use-case, plus a disciplined rule for adjusting bounds. Comparisons are valid only when the same use-case uses the same [fL, fH] and the same spur policy.

Default windows (starting points; use-case must be declared)
High-speed converter sampling

Default: [fL, fH] = [X kHz, Y MHz] (placeholder). Spur policy must be stated (include/exclude/mask).

SerDes reference clock

Default: [fL, fH] = [X kHz, Y MHz] (placeholder). Declare whether discrete spurs/SSC are included.

General digital clock tree

Default: [fL, fH] = [X Hz/kHz, Y MHz] (placeholder). Fix the window first, then compare devices and configurations.

Note: exact numeric bounds vary by system and standard; this page enforces the contract and adjustment rules, not a single global number.

How to adjust bounds (rules that prevent “window fights”)
Lower bound fL
  • Set by observation time and what the system treats as “slow variation”.
  • If low-offset behavior is absorbed or tracked by the system, fL can be raised to align with that behavior.
  • Changing fL changes the meaning of σt; document the rationale.
Upper bound fH
  • Set by edge bandwidth / receiver sensitivity / effective measurement bandwidth.
  • Raising fH increases sensitivity to far-out floors and measurement limitations.
  • Verify the instrument floor remains below the device floor before extending fH.
Comparison rule (hard)

Same use-case comparisons require the same [fL, fH], spur policy, probe point, and method. If any differ, the result is not comparable.

Diagram: Use-case → window selection flow (output contract: [fL, fH] + spur policy)
Flow: each use-case (converter, SerDes, general) passes through two decisions (include low offset? cover high offset?) to an output contract card listing [fL, fH] and spur policy. Same use-case → same [fL, fH] and spur policy, otherwise no comparison.

The flow is intentionally simple: the goal is to standardize the contract (window + spur policy) before any ranking or budgeting discussion.

Spurs, SSC, and Masks: How Deterministic Components Change “Jitter” Outcomes

When discrete spurs or spread-spectrum clocking (SSC) are present, a single RMS jitter number can be misleading. The same clock can look “worse” (higher σt) while improving EMI, because deterministic energy is redistributed rather than removed. A usable engineering report must declare how deterministic components are treated: include, exclude, or mask-based.

Rule of thumb (prevents “RMS fights”)

With spurs/SSC present, report σt_total (include) and σt_random (exclude/notch) plus a deterministic summary (spur list or mask verdict).

Three handling strategies (choose based on the decision being made)
Include spurs (most conservative)
  • Use for: system tolerance and worst-case risk assessment.
  • What it does: folds deterministic energy into σt_total.
  • Watch: not a clean way to compare random noise floors across devices.
  • Report: [fL,fH], policy=include, σt_total, top spur list.
Exclude / notch spurs (random floor focus)
  • Use for: comparing device noise floors and improvements to random jitter.
  • What it does: produces σt_random by removing discrete components per policy.
  • Watch: may hide deterministic failures (periodic errors, hitless issues, video “jumps”).
  • Report: [fL,fH], policy=exclude/notch, σt_random, removed spur list.
Mask-based (interface/system acceptance)
  • Use for: pass/fail against a defined tolerance curve or custom mask.
  • What it does: evaluates deterministic and skirt energy relative to a threshold.
  • Watch: results are not comparable across different masks; mask must be declared.
  • Report: mask definition, verdict, worst offender (offset & level), settings.
SSC reporting (describe it as controlled deterministic modulation)

SSC redistributes energy into a shaped skirt. This can reduce EMI peaks while increasing integrated σt under “include” policies. The correct approach is to declare SSC parameters and keep spur policy and window consistent.

  • Spread depth: X (ppm or % of f0).
  • Modulation rate: X (kHz).
  • Mode: down-spread / center-spread.
  • Policy: include / exclude / mask-based (must be stated).
  • RBW/VBW & span: must be logged for skirt comparability.

Acceptance practice: combine a deterministic verdict (mask pass/fail or spur list) with σt_random to avoid conflating EMI improvements with random-noise degradation.

Minimal deterministic summary (portable across teams)
Always include
  • f0 / output rate
  • [fL, fH]
  • spur policy (include/exclude/mask)
  • σt_total and/or σt_random
Deterministic details
  • Top spurs: offset + level (Top N)
  • SSC: depth + rate + mode (if enabled)
  • Mask: definition + verdict + worst offender (if used)

Comparability rule: deterministic policy and mask definition must be identical before comparing “jitter” numbers.

Diagram: Spectrum with spur lines and SSC skirt (include / exclude / mask)
Concept spectrum: a carrier line with several discrete spurs and a shaded SSC skirt, annotated with the three policies (include, exclude/notch, mask) and their outputs (σt_total, σt_random, spur list, mask verdict). The policy must be declared.

Deterministic energy must be handled explicitly: fold it in (include), remove it for random-floor comparisons (exclude/notch), or evaluate against a defined mask for acceptance.

How to Measure Phase Noise & Jitter: Methods, Settings, and What to Report

Measurement results differ across instruments primarily due to settings, probe points, coupling, and instrument noise floors. A reliable workflow selects a method that matches the question (frequency-domain vs time-domain), validates the instrument floor, and logs a complete report contract. Without the contract fields, results are not comparable across teams or labs.

Method map (pick based on output and limitations)
Phase-noise analyzer
  • Best for: L(f) curves and controlled integration.
  • Pitfalls: instrument floor, incorrect offset span, poor coupling/termination.
  • Log: offset range, RBW/VBW, averaging, probe point, policy.
Spectrum analyzer + PN app
  • Best for: quick checks and spur identification.
  • Pitfalls: RBW/VBW and detector settings distort skirt/spur amplitude.
  • Log: span, RBW/VBW, detector, averaging, reference level.
TIE (scope / time-interval)
  • Best for: time-domain jitter, periodic components, stability vs time.
  • Pitfalls: trigger sensitivity, bandwidth limits, observation-time dependence.
  • Log: bandwidth, threshold, record length, statistics window.
Cross-correlation (floor reduction)
  • Best for: revealing device floor below instrument noise.
  • Pitfalls: insufficient correlation count, inconsistent cabling/paths.
  • Log: correlation count, averaging time, path matching notes.
Setup SOP (makes results explainable)
  1. Declare the probe point (source / after cleaner / after fanout / endpoint).
  2. Declare coupling (AC/DC, single-ended/differential, transformer/attenuator/buffer).
  3. Declare termination/impedance (50Ω, differential termination, loading).
  4. Log instrument offset span, RBW/VBW, and averaging/time.
Floor & convergence checks (quick validity gates)
  • Change averaging/correlation; large movement indicates non-convergence or instrument-limited floor.
  • Confirm the far-out floor does not shift with instrument settings; shifting indicates measurement dominance.
  • State warm-up/temperature state when repeatability matters.
Report contract fields (missing fields → not comparable)
  • f0 / output rate and power level / swing.
  • Integration range [fL, fH] and spur policy (include/exclude/mask).
  • BW / RBW / VBW and averaging / time / correlation count.
  • Termination / impedance and coupling method.
  • Probe point and path notes (buffer/attenuator/cable/fixture).

Alignment rule: different probe points or coupling methods must be labeled explicitly; otherwise, “same clock” results cannot be reconciled.

Diagram: Measurement setup block diagram + report tags
Block diagram: DUT → buffer/attenuator → coupling/termination → instrument (PN analyzer or TIE, with correlation/averaging), alongside a report-tag card listing f0, [fL, fH], policy, RBW, averaging, and termination.

The setup is intentionally generic: the critical requirement is to declare probe point, coupling, termination, and instrument settings so results can be reproduced and compared.

Measurement Traps & Sanity Checks: Why Readings Disagree and How to Validate Quickly

When phase-noise and jitter numbers do not match across instruments—or do not match system behavior—the root cause is often a contract mismatch (window/policy/probe point) or a measurement artifact (compression, RBW effects, coupling, or supply/ground spurs). This section provides fast, repeatable sanity checks that turn “untrusted numbers” into explainable results.

Sanity sequence (run in order before debating numbers)
  1. Confirm carrier: level, termination, no compression.
  2. Freeze contract: [fL,fH], spur policy, probe point.
  3. Test deterministic: RBW change, spur notch test.
  4. Check correlation: A/B same-source vs different-output.
Pass criteria (minimum)
  • L(f) shape is stable under small level changes (no sudden floor/shape shifts).
  • Reported σt uses a declared [fL,fH] and spur policy (include/exclude/mask).
  • Notching top spurs changes σt by an explainable amount (Δσt not “mystery”).
  • A/B result identifies correlated vs uncorrelated behavior (Δσt improves vs budget).
Trap 1 — Carrier level & compression (common hidden cause)

If the instrument front-end or coupling path is compressed or clipped, the apparent noise floor and spur amplitudes can shift in non-physical ways. Different instruments may disagree simply because they are operating in different linearity regimes.

Quick check
  • Insert/remove a small attenuator (±X dB) and verify L(f) shape does not “jump.”
  • Confirm termination (50Ω / differential) matches the measurement path and DUT output standard.
  • Verify carrier power/swing is logged and consistent across setups.
Pass criteria
  • No sudden floor/shape shifts under ±X dB level change.
  • Carrier level recorded (dBm or equivalent swing) and reproducible.
Trap 2 — RBW / gate / averaging changes “spur visibility”

Spurs and SSC skirts are highly sensitive to RBW/VBW, detector, and averaging/time. A spur that “disappears” may only be smeared, averaged, or hidden by an incompatible span/RBW combination. Conclusions are valid only under a fixed measurement contract.

Quick check
  • Change RBW by ×10 and verify spur behavior is explainable (not random).
  • Hold detector + averaging constant when comparing conditions.
  • Log span/offset range so “same offset” comparisons are actually aligned.
Pass criteria
  • Spur location remains stable; amplitude trend is repeatable under fixed settings.
  • RBW/VBW, detector, and averaging/time are documented.
Trap 3 — Probe/coupling can inject AM→PM artifacts

The coupling chain (probe, transformer, attenuator, buffer) can convert amplitude noise into phase noise or reshape spurs due to impedance mismatch and reflections. This often explains why a bench number looks good while the system behaves poorly—or why touching a cable changes the result.

Quick check
  • Repeat the measurement with a different coupling path (AC vs DC, single-ended vs differential).
  • Add isolation (attenuator/buffer) and see whether near-offset noise/spurs shift strongly.
  • Confirm probe point and termination are identical before comparing instruments.
Pass criteria
  • Changing coupling does not invert conclusions; shifts remain within X (your tolerance).
  • Probe point + coupling method are recorded in the report.
Trap 4 — Supply noise & ground return create “mystery spurs”

Spurs often correlate with switching frequencies, load states, or digital activity. In that case the clock core may be fine; the measurement is observing coupled interference. The goal is to confirm correlation and document it, then evaluate acceptance using the declared policy/mask.

Quick check
  • Change load/digital state and observe whether spur amplitude moves in-step.
  • Repeat at an earlier/later probe point to locate where the spur is injected.
  • Notch the dominant spur and quantify Δσt; compare against expectations.
Pass criteria
  • Correlation is confirmed and recorded (frequency and condition).
  • Acceptance is expressed as mask verdict or declared σt policy over [fL,fH].
Fast self-consistency recipes (turn suspicion into evidence)
A/B correlation test
  • Compare two taps driven from the same distribution output vs different outputs/paths.
  • Use Δ(A−B) to separate correlated drift from uncorrelated noise.
  • Pass: residual σt drops below X (your alignment budget) over window W.
Upper-bound sensitivity
  • Hold fL and policy constant; vary fH (×2 / ÷2) and recompute σt.
  • Strong sensitivity indicates high-offset floor or instrument/coupling dominance.
  • Pass: σt changes match expectations for the use-case; otherwise re-check floor.
Spur notch test
  • Remove/notch the dominant spur(s) and recompute σt_random.
  • Large Δσt indicates deterministic dominance; switch to mask-based acceptance.
  • Pass: Δσt is explainable and spur list is recorded (offset & level).
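The spur notch test can be cross-checked numerically. The sketch below is illustrative only: it assumes each listed spur is a symmetric sideband pair at the given dBc level (so its phase variance is 2·10^(level/10) rad²), and the 150 fs random floor and −80 dBc spur are placeholder values.

```python
import math

def spur_phase_var(spur_dbc):
    """Phase variance of one symmetric spur pair at the given dBc level
    (assumption: small-angle PM, both sidebands at spur_dbc). Units: rad^2."""
    return 2.0 * 10.0 ** (spur_dbc / 10.0)

def sigma_t_include(sigma_t_random_s, spurs_dbc, f0_hz):
    """sigma_t under the 'include' policy: RSS of the random floor and
    the deterministic spur contributions."""
    var_phi = sum(spur_phase_var(s) for s in spurs_dbc)
    sigma_t_det = math.sqrt(var_phi) / (2.0 * math.pi * f0_hz)
    return math.sqrt(sigma_t_random_s ** 2 + sigma_t_det ** 2)

# Example: 150 fs random floor + one -80 dBc spur pair, f0 = 100 MHz
total = sigma_t_include(150e-15, [-80.0], 100e6)
delta = total - 150e-15
print(f"sigma_t_total = {total*1e15:.0f} fs, delta = {delta*1e15:.0f} fs")
```

If the measured Δσt after notching disagrees badly with this estimate, the "mystery Δσt" fail condition applies: either the spur model (level, symmetry) or the measurement is wrong.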
Diagram: Sanity-check checklist (fast validation flow)
Checklist flow: four steps run in order (carrier level/termination, window/policy freeze, RBW change + spur notch, A/B correlation), each gated by an OK/retest decision. Freeze the contract, validate artifacts, then trust results.

A stable result requires a fixed contract (window + policy + probe point) and at least one deterministic sensitivity test plus an A/B correlation check.

Engineering Checklist: Bring-up, Production Test, and Field Logging for PN/Jitter

This checklist turns measurement and reporting into a lifecycle process: establish a reproducible baseline in bring-up, compress it into fast production proxies with periodic audits, and keep field logging aligned to the same contract so issues remain explainable and actionable.

Contract anchor (must remain constant across bring-up / production / field)
  • Use-case: converter / SerDes / general
  • Probe point: source / after cleaner / fanout / endpoint
  • Window: [fL,fH] and spur policy (include/exclude/mask)
  • Outputs: σt_total and/or σt_random + deterministic summary
  • Pass criteria: σt < X ps (your budget) over [fL,fH]

Interpretation rule: production and field proxies must map back to the same contract; otherwise trends and thresholds cannot be trusted.

Bring-up (lab baseline + reproducibility gates)
  • Freeze window [fL,fH], spur policy, and report fields; run a baseline capture.
  • Run the sanity sequence (carrier, contract, RBW/notch, A/B correlation) at least once.
  • Record deterministic summary: top spurs (offset + level), SSC parameters if enabled, mask verdict if used.
  • Perform an upper-bound sensitivity check (vary fH) to detect floor/coupling dominance early.
Pass criteria
  • σt < X ps (your budget) over [fL,fH] under the declared policy.
  • Repeatability: re-run variation stays within X% (your tolerance) under identical contract.
  • Deterministic list/mask verdict is stable across repeated captures.
Production (fast proxies + periodic audits)

Production testing often needs faster metrics than full L(f) sweeps. Use a proxy that remains mapped to the same contract, then audit it periodically with full measurements to keep thresholds honest.

  • Define a proxy metric (e.g., L(f) at one or two fixed offsets, or σt_proxy over a fixed narrow window).
  • Lock the fixture: termination, coupling, cabling path, and probe point must be controlled and logged.
  • Audit plan: every N units (or per lot), run a full measurement to recalibrate proxy thresholds.
Pass criteria
  • Proxy < X (your guardband threshold) under fixed settings.
  • Audit consistency: proxy-to-full difference < X (your tolerance) across audits.
  • Fixture changes trigger re-baselining (no silent drift).
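The audit-consistency gate above can be expressed as a one-line guard. A sketch with hypothetical names and a 10% default tolerance (substitute your own X):

```python
def audit_ok(proxy_ps, full_ps, tol_frac=0.10):
    """Proxy-to-full consistency gate: the fast production proxy must stay
    within tol_frac of the periodic full measurement taken under the same
    contract; a failure triggers re-baselining of the proxy threshold."""
    return abs(proxy_ps - full_ps) <= tol_frac * full_ps
```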
Field (logging + alarms aligned to the same contract)

Field data must remain comparable to bring-up baselines. Log only what helps explain PN/jitter behavior and deterministic events, and attach contract tags (window/policy/probe point/mode) to every alarm snapshot.

Log tags
  • freq offset
  • lock status
  • temperature
  • supply-noise proxy
  • SSC mode (on/off) and reference source selection
Alarm snapshot
  • σt_proxy > X (your budget) sustained for duration W (your observation window)
  • mask violations count > X (your threshold)
  • contract tags: [fL,fH], policy, probe point, coupling mode
Pass criteria
  • Every alarm includes a contract snapshot (no “unknown settings” incidents).
  • Field proxies can be reproduced in the lab under the same contract.
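The rule that every alarm carries a contract snapshot is easiest to enforce with a fixed record type in the logging path. A sketch; all field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContractSnapshot:
    """Contract tags logged with every field alarm so the event can be
    reproduced in the lab under the same measurement contract."""
    f_lo_hz: float          # window lower bound fL
    f_hi_hz: float          # window upper bound fH
    spur_policy: str        # "include" / "exclude" / "mask"
    probe_point: str        # e.g. "endpoint"
    coupling: str           # e.g. "AC"
    sigma_t_proxy_ps: float

def alarm(snap: ContractSnapshot, budget_ps: float) -> bool:
    """Fire only with a complete snapshot: no 'unknown settings' incidents."""
    return snap.sigma_t_proxy_ps > budget_ps
```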
Diagram: Lifecycle checklist ladder (Bring-up → Production → Field)
A three-stage ladder anchored to one contract (probe point • [fL,fH] • policy • outputs): Bring-up (freeze, baseline, sanity, spurs; σt < X ps), Production (proxy, guard, audit, fixture; proxy < X), Field (log, alarm snapshot, trace; aligned tags).

The ladder enforces one measurement contract across the lifecycle: establish a baseline, map production proxies to it with audits, and attach contract tags to field alarms for explainable tracebacks.

Applications: How PN/Jitter Requirements Differ Across Scenarios

PN/jitter targets are not universal. The same “RMS jitter” number can be irrelevant in one scenario (deterministic/mask dominated) and critical in another (random σt dominated). This section maps common scenarios to what matters, typical failure symptoms, and the minimum measurement contract that must be declared to make comparisons valid.

Diagram: Use-case map (pick scenario → declare contract → compute budget)
Six tiles map each scenario to its focus: Converters (ADC/DAC clocks; σt budget), SerDes / PCIe (deterministic; mask), SyncE / PTP (wander; holdover), Video / Audio (genlock; spur sensitivity), RF LO (EVM; close-in PN), RTC / Monitor (drift; aging / alarms). Declare: output freq • probe point • window [fL,fH] • spur policy.

Map first, then budget. The same σt number is only meaningful under a declared window and spur policy.

1) High-speed converters (ADC/DAC sampling clocks)

Primary concern: random σt (aperture-driven), far-out floor, window consistency

Typical symptom: SNR/SFDR drop that grows with input frequency

Declare contract: [fL,fH], include/exclude spurs, probe point (at ADC pin vs upstream)

Next pages: ADC Sampling Clocks • DAC / RF Synth Clocks

Example building blocks (MPNs, starting points only)

Verify package/suffix/availability; selection must be driven by σt budget and declared window/policy.

  • Si5341 / Si5345 (jitter attenuators)
  • LMK04828 / LMK05318 (clock generator / cleaner)
  • AD9528 / HMC7044 (clock generator / jitter cleaner)

2) SerDes / PCIe / JESD204 reference & SYSREF alignment

Primary concern: deterministic components (spurs/SSC), mask-based acceptance, skew/alignment

Typical symptom: link training failures, intermittent lock/BER, SYSREF alignment issues

Declare contract: spur policy (include vs mask), probe point, termination/levels (HCSL/LVDS)

Next pages: PCIe Ref Clocks (SRNS/SRIS) • JESD204 Ref Clock & SYSREF

Example building blocks (MPNs, starting points only)
  • Si5341 / Si5328 (jitter cleaning / timing)
  • LMK04828 / LMK03328 (multi-output timing / clocking)
  • AD9528 / AD9523-1 (SYSREF-capable clock generators)

3) SyncE / carrier timing / network synchronization

Primary concern: wander vs jitter, close-in PN, holdover stability, alarm logic

Typical symptom: EEC/SEC quality alarms, holdover drift, loss-of-lock events

Declare contract: low-offset bounds, long observation time, include spurs/mask if required

Next pages: SyncE • IEEE 1588/PTP Hardware Timestamping

Example building blocks (MPNs, starting points only)
  • AOCJY (Abracon OCXO series) / NH36M (NDK OCXO module)
  • TG-5006 (Epson TCXO family for stability/holdover variants)
  • Si5345 (timing-grade jitter attenuation class)

4) Video / audio clocks (genlock, MCLK families, switching artifacts)

Primary concern: deterministic jitter, switching spurs, SSC interactions, hitless behavior

Typical symptom: frame “jumps,” intermittent lock after switching, audible artifacts

Declare contract: spur policy + mask; switching mode and observation conditions

Next pages: USB3/SDI/Video Clocks • Audio Master Clocks

Example building blocks (MPNs, starting points only)
  • Si570 (programmable XO class for frequency families)
  • LMK04828 / AD9523-1 (multi-output clock generator class)
  • ADCLK948 / LMK00304 (fanout buffer class)

5) RF LO / synthesizers (frac-N, DDS: close-in PN + spurs)

Primary concern: close-in PN (EVM), discrete spurs, mask-based purity rules

Typical symptom: EVM failure, in-band spurs, degraded adjacent-channel performance

Declare contract: close-in bounds, spur include/mask, carrier power, RBW settings

Next pages: RF LO Synthesizers • DDS

Example building blocks (MPNs, starting points only)
  • AOCJY / NH36M (OCXO classes for low close-in noise references)
  • Si5345 (timing-grade cleaner class)
  • HMC7044 (jitter cleaning / distribution class)

6) Low-speed timebase / RTC / monitoring & health

Primary concern: drift/aging and low-frequency behavior (wider time horizons)

Typical symptom: time offset growth, alarm thresholds exceeded, switchover anomalies

Declare contract: observation window, temperature context, logging fields

Next pages: RTC • Monitoring & Health

Example building blocks (MPNs, starting points only)
  • TG-5006 (TCXO class) / DSC1001 (MEMS oscillator class)
  • SG-8002 (XO family) / ASFL1 (XO series)

Interpretation rule: comparing “RMS jitter” across scenarios is invalid unless the window [fL,fH], spur policy, output frequency, and probe point are declared and aligned.

IC Selection Logic: Using PN/Jitter Specs to Choose Sources, Cleaners, and Fanouts

Selection should not start from a datasheet curve. Start from a system σt budget, declare a measurement contract (window + spur policy + probe point), then decide whether the dominant risk is close-in noise, far-out floor, or deterministic components. Only then does “device class selection” become stable.

Diagram: Budget → contract → noise-shape → device class
Inputs (σt_budget, window [fL,fH], spur sensitivity) pass a gate (contract fixed? probe • policy • freq), branch on the dominant noise shape (close-in / far-out floor / deterministic), and map to output device classes: source (XO/TCXO/OCXO/MEMS), cleaner (jitter attenuator), fanout (buffer / distribution). Always report: freq • probe point • [fL,fH] • spur policy • σt.

Device choice becomes stable only after the contract is fixed and the dominant noise shape is identified.

Selection chain (repeatable logic)
  1. Start from σt_budget (system-level, not a single datasheet offset point).
  2. Declare contract: output frequency, probe point, window [fL,fH], spur policy (include/exclude/mask).
  3. Identify dominance: close-in / far-out floor / deterministic (spurs/SSC).
  4. Pick device class: source class + cleaner class + fanout class; then allocate budget buckets.
Dominance questions (decide where improvements actually come from)
Close-in dominated?

Strong sensitivity to low offsets / long-term behavior. Improvements typically require a stronger reference source class (stability / close-in PN) and clean isolation.

Far-out floor dominated?

σt changes strongly with higher fH. Improvements often come from cleaner/attenuator class choices and measurement/coupling discipline.

Deterministic dominated?

σt drops significantly after spur notch. Acceptance should shift to mask-based rules and spur hygiene; “better RMS” alone may not fix compatibility.

Budget allocation buckets (framework only)

Allocate σt_budget across the chain to reflect dominance and risk. The buckets below provide a repeatable review structure without binding to a single architecture.

  • Source: reference oscillator class contribution (close-in stability / baseline PN)
  • Cleaner: jitter attenuation / transfer shaping contribution
  • Fanout: additive jitter + distribution artifacts contribution
  • Board: supply/ground coupling (deterministic spurs) contribution
  • Margin: measurement uncertainty + environment guardband
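One way to make the bucket review quantitative: treat random, uncorrelated contributions as combining in root-sum-square, with deterministic spurs budgeted separately under the spur policy. The numbers below are placeholders, not recommendations:

```python
import math

# Placeholder allocations in ps RMS (your budget); random, uncorrelated terms.
buckets_ps = {
    "source": 0.120,   # reference oscillator baseline PN
    "cleaner": 0.080,  # residual after jitter attenuation
    "fanout": 0.060,   # additive jitter of distribution
    "board": 0.050,    # coupling residue treated here as random
    "margin": 0.100,   # measurement uncertainty + environment guardband
}

# Uncorrelated random contributions combine in RSS; deterministic spurs
# add linearly at worst case and belong in a separate spur-policy check.
sigma_total_ps = math.sqrt(sum(v ** 2 for v in buckets_ps.values()))
```

With these placeholders the RSS total is about 0.19 ps; compare it against σt_budget under the declared window and policy.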
Standard selection statement template (use across pages)

Copy and fill in. This keeps comparisons consistent and prevents “single-offset” misunderstandings.

  • Use-case: ____
  • Contract: output freq ____ ; window [fL,fH]=____ ; spur policy=____ ; probe point=____
  • Budget: σt_total < X ps (system budget) ; margin=____
  • Dominant risk: close-in / floor / deterministic
  • Preferred class: primary=source/cleaner/fanout ; secondary=____
  • Acceptance: mask verdict and/or σt under declared window/policy
Reference examples (concrete MPNs; starting points only)

These part numbers are included to speed datasheet lookup and lab validation. Final selection must be driven by the contract and worst-case conditions. Always verify package, ordering suffix, frequency/grade options, and availability.

Source oscillators (XO / TCXO / OCXO / MEMS / programmable)
  • XO: SG-8002 (Epson XO family), ASFL1 (Abracon XO series)
  • TCXO: TG-5006 (Epson TCXO family), ASTX-H11 (Abracon TCXO series)
  • OCXO: AOCJY (Abracon OCXO series), NH36M (NDK OCXO module)
  • MEMS: DSC1001 (Microchip MEMS oscillator family)
  • Programmable XO: Si570 (Silicon Labs programmable XO family)
Jitter attenuators / clock cleaners / generators
  • Silicon Labs: Si5341, Si5345, Si5328
  • Texas Instruments: LMK04828, LMK05318, LMK03328
  • Analog Devices: AD9528, AD9523-1, HMC7044
Distribution / fanout buffers (additive jitter matters)
  • Analog Devices: ADCLK948 (fanout buffer class), LTC6952 (clock distribution / synchronization class)
  • Texas Instruments: LMK00304 (clock buffer / distribution class)

Tip: treat fanout “additive jitter” as its own budget bucket, not a rounding error—especially after cleaning.


FAQs: Phase Noise & Jitter (Troubleshooting, Reporting, Sanity Checks)

These FAQs close common long-tail disagreements (instrument mismatch, window definitions, spur policy, and measurement traps) without expanding the main text. Each answer uses a fixed 4-line structure and measurable pass criteria with X / your budget placeholders.

Why do two instruments report different RMS jitter for the same clock?

Likely cause: Reporting contract mismatch (window [fL,fH], spur policy, RBW/VBW, acquisition/averaging, probe point, or carrier level/loading).

Quick check: Force identical window [fL,fH] and spur policy; match RBW/VBW and acquisition time; verify carrier power within [Pmin,Pmax] (no compression).

Fix: Lock a single report template (freq, probe point, window, spur policy, RBW/VBW, averaging/time, termination) and re-run both instruments under that contract.

Pass criteria: |Δσt| < X ps (or < X%) under the same declared contract and probe point.

How should [fL, fH] be chosen for “RMS jitter” reporting?

Likely cause: Window is unspecified or inherited from an instrument preset, making numbers incomparable across parts and teams.

Quick check: Recompute σt using two windows (e.g., [fL1,fH1] vs [fL2,fH2]) and observe whether conclusions flip.

Fix: Set window by use-case contract (converter / SerDes / general clock) and enforce “same use-case → same window” for all comparisons; document the bounds in every report.

Pass criteria: All compared results cite a single window [fL,fH]; σt meets budget: σt < X ps over [fL,fH] (your budget).

My PN curve looks low, but converter SNR is still worse—what is the first budget sanity check?

Likely cause: Wrong integration window (or wrong probe point) compared to the sampling aperture sensitivity; or SNR is dominated by non-jitter noise (front-end/REF/quantization).

Quick check: Compute the implied σt budget from the target SNR at fIN; compare to measured σt at the ADC clock pin under the declared [fL,fH] and spur policy.

Fix: Align the contract to the converter use-case (probe at endpoint, consistent [fL,fH]); then re-check if SNR changes with fIN as predicted by jitter-limited behavior.

Pass criteria: If jitter-limited, SNR tracks budget within X dB across a sweep; if not jitter-limited, improving σt by >X% yields <X dB SNR change.
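The implied σt budget in the quick check comes from the standard aperture-jitter SNR limit, SNR = -20·log10(2π·fIN·σt). Inverting it gives the largest σt that still meets a target; the function name is illustrative:

```python
import math

def jitter_budget_s(snr_target_db, f_in_hz):
    """Invert the jitter-limited bound SNR = -20*log10(2*pi*f_in*sigma_t)
    to get the maximum RMS jitter (seconds) meeting snr_target_db at f_in_hz.
    Compare against measured sigma_t at the ADC clock pin under the
    declared [fL, fH] and spur policy."""
    return 10.0 ** (-snr_target_db / 20.0) / (2.0 * math.pi * f_in_hz)

# Example: a 74 dB SNR target at a 100 MHz analog input.
sigma_max = jitter_budget_s(74.0, 100e6)   # about 0.32 ps
```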

A single spur dominates RMS jitter—should it be included or excluded?

Likely cause: Deterministic component is driving the integrated result; “RMS jitter” is being used without a declared spur policy.

Quick check: Perform a notch test (exclude that spur only) and record Δσt; also check if the interface/system is mask- or tolerance-limited by that spur.

Fix: Use include-spurs for worst-case system compatibility; use exclude-spurs only for comparing random noise floor; use mask-based reporting where applicable.

Pass criteria: Policy is explicitly stated (include/exclude/mask) and acceptance is met: σt<X ps (per policy) and/or mask violation count = 0 (your budget).
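The notch test's Δσt can be predicted from the spur level alone: a discrete spur at Ps dBc (single-sideband) carries RMS phase jitter sqrt(2·10^(Ps/10)) radians, which combines with the random part in RSS. A sketch with illustrative names:

```python
import math

def spur_sigma_t_s(spur_dbc, f_carrier_hz):
    """RMS time jitter from one discrete spur at spur_dbc (single-sideband
    level relative to the carrier): phi_rms = sqrt(2*10^(Ps/10)) radians."""
    phi_rms = math.sqrt(2.0 * 10.0 ** (spur_dbc / 10.0))
    return phi_rms / (2.0 * math.pi * f_carrier_hz)

def notch_delta_ps(sigma_random_ps, spur_dbc, f_carrier_hz):
    """Predicted delta sigma_t for the notch test: include-spur total
    (RSS of random and spur parts) minus the spur-excluded result."""
    spur_ps = spur_sigma_t_s(spur_dbc, f_carrier_hz) * 1e12
    return math.hypot(sigma_random_ps, spur_ps) - sigma_random_ps

# Example: a -70 dBc spur on a 100 MHz clock dominates a 0.2 ps random floor.
delta = notch_delta_ps(0.2, -70.0, 100e6)   # roughly 0.54 ps
```

A large predicted Δσt is exactly the case where policy must be stated explicitly, since include vs exclude changes the headline number by more than the random floor itself.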

Why does widening RBW or shortening acquisition time change spur visibility?

Likely cause: Instrument resolution and statistical processing change how narrowband tones stand out from the noise floor.

Quick check: Repeat with fixed RBW/VBW, fixed averaging, and longer acquisition (×10); confirm whether the spur level converges to within X dB.

Fix: Standardize RBW/VBW, acquisition time, and averaging for the use-case; report those fields alongside σt and spur policy.

Pass criteria: Spur amplitude and resulting σt vary by <X dB / <X% across repeated runs under the same settings.

How can AM noise turn into apparent PM/jitter in measurement?

Likely cause: AM→PM conversion in the measurement path (compressing amplifiers, mixers, detectors, or improper coupling) makes amplitude noise appear as phase noise.

Quick check: Insert/adjust attenuation by X dB and verify whether the reported PN/jitter shifts; compare two coupling methods (AC vs transformer) while holding the carrier level constant.

Fix: Keep the chain in a linear power range; use proper attenuation, termination, and (if available) cross-correlation to suppress instrument artifacts.

Pass criteria: Changing attenuation/coupling shifts PN/jitter by <X dB (or <X%) when the carrier is held within [Pmin,Pmax] and the setup is linear.

Why does σt look great at the cleaner output but fail at the endpoint (ADC/PHY pin)?

Likely cause: Additive jitter and deterministic coupling are introduced by fanout, level translation, routing, termination, or probing; probe point mismatch hides endpoint degradation.

Quick check: Measure at three points (source → after fanout → endpoint) with the same [fL,fH] and spur policy; compare Δσt per stage to locate the dominant insertion.

Fix: Budget additive jitter for distribution; tighten termination/levels; validate probing method; treat board-coupled spurs as deterministic and manage with mask/policy.

Pass criteria: Endpoint σt < X ps over [fL,fH] (your budget), and stage-to-stage additive contribution stays within allocated buckets (X% each).

When should mask-based reporting be used instead of integrated RMS jitter?

Likely cause: Deterministic components (spurs/SSC) can pass or fail an interface even when integrated σt looks acceptable (or vice versa).

Quick check: Compare “include-spurs σt” with a mask verdict; if the failure correlates with discrete tones or a skirt rather than noise floor, prefer mask-based acceptance.

Fix: Use mask-based reporting for compatibility decisions; keep RMS jitter for random-noise budgeting and cross-part comparisons under a fixed contract.

Pass criteria: Mask violation count = 0 (or within X margin) and reported RMS jitter includes the declared policy and window; no “policy-free” numbers remain.

How can far-out noise floor dominance be confirmed quickly?

Likely cause: The integrated result is being driven by high-offset floor (or measurement floor), not close-in behavior or discrete spurs.

Quick check: Sweep only fH (e.g., fH1 → fH2) while holding fL constant; if σt rises steeply with fH, floor dominance is likely.

Fix: Confirm instrument floor with cross-correlation (if available) and ensure the window matches the use-case; avoid comparing parts with different implicit fH limits.

Pass criteria: Under the declared contract, Δσt/ΔfH behaves as expected and stays within budget: σt < X ps over [fL,fH] (your budget).
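For the common case where the suspect region is a flat far-out floor, σt has a closed form, and the fH-sweep signature is simple: σt grows roughly as sqrt(fH) once fH is well above fL. A sketch under that flat-floor assumption:

```python
import math

def sigma_t_flat_floor_s(floor_dbc_hz, f_lo, f_hi, f0):
    """Closed-form sigma_t for a flat SSB floor L0 over [fL, fH]:
    sigma_t = sqrt(2 * 10^(L0/10) * (fH - fL)) / (2*pi*f0)."""
    area = 10.0 ** (floor_dbc_hz / 10.0) * (f_hi - f_lo)
    return math.sqrt(2.0 * area) / (2.0 * math.pi * f0)

# Sweep fH only, fL held at 10 kHz: growth of about sqrt(10) per decade
# of fH is the signature of floor (or instrument-floor) dominance.
sweep = {f_hi: sigma_t_flat_floor_s(-150.0, 1e4, f_hi, 100e6)
         for f_hi in (1e6, 1e7, 1e8)}
```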

Why do results change with coupling/attenuation even when the same DUT is measured?

Likely cause: Loading, compression, or impedance mismatch changes waveform shape and introduces measurement-induced artifacts (including AM→PM conversion).

Quick check: Hold carrier level constant at the instrument input while swapping coupling; verify return loss/termination; confirm no saturation by stepping power ±X dB.

Fix: Standardize termination and coupling; use proper attenuation; document probe point and input power; avoid using “scope-only” coupling for PN/jitter decisions.

Pass criteria: Changing coupling/attenuation shifts PN/jitter by <X% with constant instrument input power and verified termination (your contract).

Why can “excluding spurs” make two parts look identical while the system behaves differently?

Likely cause: Spur exclusion hides deterministic components that violate system tolerance or masks, even if the random noise floor is comparable.

Quick check: Compare include-spurs σt and a mask verdict; correlate failures with spur frequencies (reference leak, fractional spurs, supply-coupled tones).

Fix: Use exclude-spurs only for “noise-floor-only” comparisons; for compatibility, report include-spurs and/or mask-based metrics and manage deterministic components explicitly.

Pass criteria: Under the declared policy, deterministic components meet limits: spur level < X dBc (or mask margin > X dB) and system failures disappear.

What must be included in a PN/jitter report so results are reproducible?

Likely cause: Missing contract fields (window, spur policy, probe point, and instrument settings) makes a “good number” non-portable.

Quick check: Attempt to reproduce using only the report; if any of these are missing, the report is incomplete: output frequency, probe point, [fL,fH], spur policy, RBW/VBW, averaging/time, termination/coupling, carrier level.

Fix: Enforce a mandatory template and reject results without complete fields; add a “settings snapshot” and endpoint photo/diagram reference if probe ambiguity exists.

Pass criteria: Independent reproduction matches within |Δσt| < X% (or < X ps) under the same declared contract.
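A mandatory template is easiest to enforce mechanically. A sketch of a field-completeness gate; the field names are illustrative, not a standard schema:

```python
# Contract fields a PN/jitter report must carry to be reproducible.
REQUIRED_FIELDS = {
    "output_freq_hz", "probe_point", "f_lo_hz", "f_hi_hz", "spur_policy",
    "rbw_hz", "vbw_hz", "averaging", "acq_time_s",
    "termination", "coupling", "carrier_dbm",
}

def missing_fields(report):
    """Return the contract fields absent from a report dict; reject the
    report (no 'policy-free' numbers) unless this set is empty."""
    return REQUIRED_FIELDS - set(report)
```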