
Frequency / Time Interval Counter Architecture & Calibration


A frequency/time-interval counter turns input edges into a trusted, traceable number by combining a calibrated timebase with gated/reciprocal/timestamp measurement and controlled trigger timing. The key to reliable results is managing timing uncertainty (edge slew, time-walk, TDC limits, dead time/throughput) and proving performance with logs, calibration tags, and a repeatable validation checklist.

H2-1 · What a Frequency/Time-Interval Counter really measures

A frequency/time-interval counter converts signal edges into time-referenced numbers. Instead of “showing a waveform,” it anchors measurements to a timebase (internal or external 10 MHz / 1 PPS) and uses counting + interpolation + statistics to deliver repeatable, traceable results.

Key takeaway
Pick the measurement quantity first (frequency / period / interval / timestamp / totalize), then pick the mode and settings. If the edge timing is not well-defined, no mode can “save” accuracy.

Measurement menu (what the instrument can output)

Quantity | Definition (practical) | Primary limits | Best use
Frequency (f) | Cycles per second inferred from counts or time intervals. | Gate-time quantization (gated), timebase accuracy, edge noise. | Clock verification, divider chain checks, stable oscillators.
Period (T) | Time between adjacent like edges (rising→rising). | Trigger-point uncertainty, slew-rate limits, interpolation residual. | Low-frequency high-resolution measurement; jitter studies.
Time interval (Δt) | Start–Stop delay across one or two channels. | Channel skew, arming/holdoff behavior, trigger threshold mismatch. | Propagation delay, latency validation, interval timing.
Timestamp / TOA | Time-of-arrival for each event, relative to the timebase. | FIFO throughput, overflow, missing-edge handling, timebase stability. | Event streams, diagnostics, burst analysis, missing-edge proof.
Totalize / event count | Accumulated number of events (optionally within a gate window). | Gate boundary errors, dead time, input conditioning errors. | Production counts, pulse accumulation, gated event statistics.

Why a counter is not an oscilloscope (engineering reasons)

  • Timebase anchoring: results are explicitly referenced to a timebase (internal/external). This supports calibration and long-term comparability.
  • Interpolation + statistics: counters combine coarse counting with fine timing interpolation and report stable averages/variation, which helps interpret “how trustworthy” a number is.
  • Edge-time definition matters: the measurement is about the event timing (crossing a threshold), so trigger-point stability is a first-class design constraint.

Practical checklist (to avoid misleading numbers)

  • Define the quantity: frequency vs period vs interval vs timestamps vs totalize.
  • Confirm edge quality: ensure adequate slew rate at the trigger threshold; slow/noisy edges inflate timing uncertainty.
  • Confirm reference state: internal vs external 10 MHz / 1 PPS, and whether the instrument indicates “locked/valid.”
  • Choose the right mode: gated is fast; reciprocal improves low-frequency resolution; timestamping exposes missing events.
[Figure: Measurement map for a frequency/time-interval counter. Sine, square, and pulse inputs feed the counter measurement core (edge detect, count/Δt, statistics, all referenced to the 10 MHz / 1 PPS timebase), producing frequency, period, time interval, timestamp/TOA, and totalize outputs. Tip: choose the quantity first, then select the mode and settings that best control uncertainty.]

H2-2 · Measurement modes: gated vs reciprocal vs timestamping

Measurement “mode” defines what is counted, what is timed, and which error term dominates. Three practical modes cover most use cases: gated counting, reciprocal counting, and timestamping.

Gated = fast, simple
Reciprocal = stronger at low frequency
Timestamping = event-level diagnostics

1) Gated counting (cycle count within a fixed window)

  • Mechanism: count cycles N during a gate time Tgate, then estimate f ≈ N / Tgate.
  • Dominant limit: quantization of ±1 count, especially at low frequency (small N).
  • Best for: quick checks, mid/high frequencies, and applications where speed matters more than ultra-fine resolution.
  • Key knob: longer Tgate improves resolution but reduces update rate.
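A minimal numeric sketch of that gated trade-off (plain Python; the function name and input values are illustrative, not any instrument's API):

```python
def gated_estimate(f_true_hz: float, t_gate_s: float) -> tuple[float, float]:
    """Illustrate gated counting: count whole cycles in a gate, then f ≈ N / Tgate.

    Returns (estimated frequency, worst-case ±1-count resolution in hertz).
    """
    n_counted = int(f_true_hz * t_gate_s)   # quantization: only whole cycles are counted
    f_est = n_counted / t_gate_s            # reported frequency
    resolution = 1.0 / t_gate_s             # ±1 count maps to ±1/Tgate in hertz
    return f_est, resolution

# Example: a 50.3 Hz input only resolves to ±10 Hz with a 0.1 s gate,
# but to ±0.1 Hz with a 10 s gate (at the cost of update rate).
for t_gate in (0.1, 1.0, 10.0):
    f_est, res = gated_estimate(50.3, t_gate)
    print(f"Tgate={t_gate:5.1f} s -> f_est={f_est:7.3f} Hz (±{res:g} Hz quantization)")
```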

2) Reciprocal counting (time a period/interval, then invert)

  • Mechanism: measure time for one or multiple cycles Δt, then compute f ≈ N / Δt (or T = Δt / N).
  • Why it wins at low frequency: long periods make the fixed timing resolution (interpolation residual and edge jitter) a much smaller fraction of the measured interval, so relative resolution improves.
  • Dominant limit: edge timing uncertainty at the trigger point (noise-to-time conversion, time-walk).
  • Key knobs: number of cycles N, averaging strategy, and trigger threshold/conditioning quality.
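A hedged simulation sketch of the same idea: per-edge trigger jitter is assumed to be the dominant error, and timing more cycles shrinks the frequency scatter (all numbers are made up for illustration):

```python
import random
import statistics

def reciprocal_estimate(period_s: float, n_cycles: int, edge_jitter_s: float) -> float:
    """Reciprocal counting sketch: time N cycles between a start and a stop edge
    (each edge carrying trigger jitter), then compute f ≈ N / Δt."""
    dt = (n_cycles * period_s
          + random.gauss(0.0, edge_jitter_s)
          - random.gauss(0.0, edge_jitter_s))
    return n_cycles / dt

# Example: a 10 Hz input (T = 0.1 s) timed over 1 vs 100 cycles with 10 ns edge jitter.
random.seed(1)
for n in (1, 100):
    runs = [reciprocal_estimate(0.1, n, 10e-9) for _ in range(1000)]
    print(f"N={n:3d} cycles -> mean {statistics.mean(runs):.9f} Hz, "
          f"sigma {statistics.stdev(runs):.3e} Hz")
```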

3) Timestamping (time-stamp every event, then post-process)

  • Mechanism: assign a timestamp to each edge/event tk, then derive period/frequency/jitter/missing edges offline or in firmware.
  • Why it is powerful: exposes event-level anomalies (bursts, missing edges, overflows) that average-based modes can hide.
  • Dominant limit: throughput and data integrity (FIFO depth, overflow flags, missing-edge counters).
  • Key knobs: event rate limits, holdoff/arming rules, and overflow reporting (must be monitored).
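A small post-processing sketch of the timestamp mode (field names and the gap threshold are illustrative): it derives periods, mean frequency, period jitter, and a crude missing-edge check from a time-of-arrival list.

```python
import statistics

def analyze_timestamps(toa_s: list[float], nominal_period_s: float) -> dict:
    """Post-process a time-of-arrival stream: periods, mean frequency, period
    jitter, and a crude missing-edge check against the nominal period."""
    periods = [b - a for a, b in zip(toa_s, toa_s[1:])]
    return {
        "n_events": len(toa_s),
        "mean_frequency_hz": 1.0 / statistics.mean(periods),
        "period_sigma_s": statistics.stdev(periods),
        # a gap well beyond the nominal period suggests a dropped edge
        "suspected_missing_edges": sum(1 for p in periods if p > 1.5 * nominal_period_s),
    }

# Example stream: 1 kHz edges with one edge missing around t = 3 ms.
toa = [0.000, 0.001, 0.002, 0.004, 0.005, 0.006]
print(analyze_timestamps(toa, nominal_period_s=1e-3))
```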

Mode selection table (rules, not slogans)

Need / constraint | Recommended mode | What to set / verify
Fast updates and adequate resolution at mid/high f | Gated | Set Tgate; confirm stable edge detection; watch for gate-boundary errors.
High resolution at low frequency or long periods | Reciprocal | Choose N cycles; optimize trigger threshold; ensure adequate slew rate at threshold.
Event stream diagnostics (missing edges, bursts) | Timestamping | Verify FIFO headroom; monitor overflow + missing-edge counters; log validity flags.
Edge quality is poor (slow/noisy crossings) | Fix the edge first | Improve slew rate / conditioning; otherwise uncertainty is dominated by trigger-point jitter regardless of mode.

Practical pitfalls (the ones that silently ruin results)

  • Low-frequency results that look better than they are: with a short gate time, the ±1-count error dominates even when many digits are displayed; increasing Tgate should reduce the variance predictably.
  • Reciprocal mode disappointment: slow edges increase timing jitter; the root cause is usually low dV/dt at the trigger threshold.
  • Timestamp data lies: overflow/missing-edge flags not monitored; post-processing assumes all events are present.
  • Hidden dead time: re-arming/holdoff causes blind intervals; verify with known pulse trains and missed-edge counters.
[Figure: Three measurement modes side by side. Gated counting (gate window → cycle count N → compute f; best for fast checks at mid/high f), reciprocal counting (edge time via coarse count + TDC interpolation → compute f/T; best for low f and higher resolution), and timestamping (timestamp tₖ → FIFO/log → post-process f, jitter, missing edges; best for event diagnostics and bursts). Rule of thumb: if events can be missed, use timestamping and monitor overflow/missing-edge flags.]

H2-3 · System architecture: from input edge to a trusted number

A trustworthy counter result is built by a chain of decisions: where an edge is defined, which timebase anchors the measurement, and how calibration and validity flags travel with the number. The architecture below is a practical mental model for reading datasheets, configuring modes, and interpreting logs.

The three pillars of a “trusted number”
  • Reference timebase: a valid, known timebase (internal or external 10 MHz / 1 PPS) that anchors absolute scaling.
  • Edge-time definition: a stable trigger point (threshold/hysteresis/conditioning) so event times are repeatable.
  • Calibration + uncertainty: corrections (timebase offset, TDC linearity, channel skew) plus validity flags to bound error.

Data path (what happens to the input edge)

  • Input conditioning: turns an analog transition into a repeatable digital edge by defining a threshold and adding noise immunity. The practical outcome is a stable event time; unstable crossings inflate timing uncertainty across every mode.
  • Optional prescaler/divider: reduces effective edge rate so internal counting/interpolation can run safely. This block must expose error states (overdrive, saturation, missed edges) so results are not silently corrupted.
  • Coarse counter (timebase-referenced): counts whole timebase ticks or cycles during gates/intervals and provides the absolute time scale. If the timebase is invalid, absolute accuracy is not guaranteed.
  • TDC interpolation (fine time ε): estimates the sub-tick remainder between an edge and the nearest timebase tick. This improves resolution, but requires calibration (linearity/temperature) to avoid “ps-looking” numbers that are not accurate.
  • Processor + statistics: converts raw counts and intervals into frequency/period/Δt, then reports mean, variation (σ), sample count N, and validity flags so the number is interpretable.
  • Display + logs: presents the result and records the evidence: mode, gate settings, reference state, overflow/missing-edge counters, and calibration versions.

Validity flags (the evidence that prevents silent failure)

Flag / counter | Meaning | Action
RefValid / Locked | The timebase/reference input is present and in a valid locked state. | If false, treat absolute results as untrustworthy; log the condition.
Holdover | Reference was lost and the instrument is maintaining time using internal holdover. | Mark data windows; compare drift and avoid strict absolute claims.
Overflow (FIFO / counters) | Event rate exceeded capture or storage capacity; some data was dropped. | Do not post-process as if all events exist; reduce rate or increase headroom.
Missing-edge counter | Detected gaps inconsistent with expected edge stream (or failed arming). | Use to prove data integrity; investigate input conditioning and dead time.
CalVersion / CalAge | Which calibration tables are applied and whether they are current. | Record with measurements; update or re-calibrate if outside allowed limits.
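A minimal sketch of how this evidence might travel with each exported result; the field names are illustrative and do not correspond to any particular instrument's log format:

```python
from dataclasses import dataclass, asdict

@dataclass
class CounterResult:
    """One exported measurement plus the evidence needed to trust it."""
    mode: str                 # "gated" | "reciprocal" | "timestamp"
    value: float              # frequency (Hz), period (s), or interval (s)
    sigma: float              # observed spread over n_samples
    n_samples: int
    ref_valid: bool           # RefValid / Locked
    holdover: bool
    overflow_count: int
    missing_edge_count: int
    cal_version: str          # calibration tables applied to TDC / skew corrections

    def trusted(self) -> bool:
        # absolute claims require a valid reference and no dropped data
        return (self.ref_valid and not self.holdover
                and self.overflow_count == 0 and self.missing_edge_count == 0)

r = CounterResult("reciprocal", 9.999_873e6, 0.012, 1000,
                  ref_valid=True, holdover=False,
                  overflow_count=0, missing_edge_count=0, cal_version="2024-03-A")
print(r.trusted(), asdict(r))
```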

Architecture checklist (quick self-audit)

  • Reference: confirm RefValid/Locked status (internal or external) before trusting absolute numbers.
  • Edge definition: verify stable trigger threshold and adequate edge slew at the crossing point.
  • Prescaler: if enabled, confirm no saturation/overdrive flags and that edge integrity remains intact.
  • TDC: confirm calibration table/version is applied and valid over the current temperature range.
  • Statistics: always read mean + σ + N, and attach validity flags to exported results.
  • Timestamping: monitor FIFO overflow and missing-edge counters; otherwise post-processing can be misleading.
[Figure: System architecture from input edge to trusted number. Data path: input conditioning (threshold, hysteresis) → optional prescaler/divider → coarse counter (timebase ticks, gate/Δt) → TDC interpolation (fine ε, needs calibration) → processor/statistics (mean, σ, N, validity flags) → UI/log export. A timebase/reference branch (internal or external 10 MHz / 1 PPS) feeds both the counter and the TDC; a validity block tracks RefValid/Locked, overflow/drop, missing edge, and CalVersion. Export results together with reference status, calibration version, and overflow/missing-edge evidence.]

H2-4 · Timebase & reference: accuracy, stability, and what “low jitter” means here

The timebase is the counter’s ruler. External references such as 10 MHz and 1 PPS can improve comparability across instruments, but the key is understanding which specification affects absolute accuracy versus short-term stability. “Low jitter” in this context means the timebase contributes minimal short-term time noise to interval and timestamp measurements.

Accuracy (calibration offset)
  • Affects absolute frequency and absolute interval scaling.
  • Improved by calibration and a traceable external reference.
  • Report with measurements: cal date/version and ref source.
Stability (short-term noise)
  • Affects repeatability and the jitter floor of time-interval/TOA.
  • Linked to phase noise → time jitter over the measurement window.
  • Guides gate time / averaging choices (short window = stability-limited).

External reference inputs (10 MHz and 1 PPS): how to use them safely

  • 10 MHz reference: sets the frequency scale for the timebase. Use when multiple instruments must agree on absolute frequency/interval results.
  • 1 PPS: provides a second boundary marker for alignment and long-term timing consistency when supported by the instrument.
  • Lock/validity matters: always record RefValid/Locked (and Holdover if applicable). Reference loss must be logged and data flagged.
  • Switching policy: internal/external/auto selection should leave an evidence trail; treat the switching moment as a data boundary.

Reading the right metrics (without diving into oscillator internals)

Metric | What it tells | How to apply
Allan deviation (ADEV) | Stability vs averaging time τ (short-term noise vs longer-term drift). | Match τ to gate/averaging window; short τ reflects the floor for fast updates.
Time-interval jitter floor | Minimum repeatable time noise achievable with ideal input edges. | If measured jitter is far above the floor, investigate edge definition and conditioning first.
Reference status flags | Whether the timebase is valid, locked, or in holdover. | Export flags with data; treat ref loss/switching as a boundary in reports.
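A hedged sketch of the non-overlapping Allan deviation at a single τ (taken equal to the gate/averaging window), computed from back-to-back frequency readings; real analyses usually use overlapping estimators and several τ values:

```python
import math

def allan_deviation(freq_samples_hz: list[float], nominal_hz: float) -> float:
    """Non-overlapping Allan deviation from consecutive fractional-frequency
    samples y_i, each averaged over the same tau:

    sigma_y(tau) = sqrt( (1 / (2*(M-1))) * sum((y_{i+1} - y_i)^2) )
    """
    y = [(f - nominal_hz) / nominal_hz for f in freq_samples_hz]
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(sum(diffs) / (2 * (len(y) - 1)))

# Example: back-to-back 10 MHz readings with a 1 s gate -> ADEV at tau = 1 s.
readings = [10_000_000.002, 9_999_999.998, 10_000_000.001, 9_999_999.999, 10_000_000.003]
print(f"ADEV(tau = gate time) ~ {allan_deviation(readings, 10e6):.2e}")
```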

Timebase checklist (fast decisions)

  • Need absolute agreement across labs? Use external 10 MHz (and 1 PPS if supported) and record the ref source in logs.
  • Fast updates or short gates? Stability dominates; use ADEV at matching τ to estimate achievable variance.
  • Long averaging windows? Accuracy and drift become visible; attach calibration metadata to the report.
  • Any ref loss or switching? Flag and separate affected data windows; never mix them silently.
[Figure: Timebase and reference injection. External 10 MHz (frequency scale) and 1 PPS (timing marker) enter a reference selector (internal/external/auto), then a timebase divider feeds the counter and TDC. Side cards contrast accuracy (calibration offset, absolute scaling) with stability (phase noise → jitter); status flags include RefValid, Locked, Holdover, RefMissing. Rule: record the reference source plus RefValid/Locked/Holdover with every measurement export.]

H2-5 · TDC interpolation: how sub-clock resolution is achieved (and its real limits)

Sub-clock timing is not “magic.” A counter reaches fine resolution by splitting time into two layers: a coarse tick count that provides the traceable scale, and a fine interpolator (TDC) that estimates the remainder between ticks. The practical limit is set by linearity, jitter floor, and calibration validity, not by a headline “ps resolution” number.

Core model (coarse + fine)
Δt = N · Tclk + ε
  • N · Tclk comes from the coarse counter referenced to the timebase.
  • ε is the fine time remainder estimated by the TDC (interpolation between ticks).
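A small numeric sketch of the coarse + fine combination, with a per-bin calibration table standing in for the DNL/INL correction (tick period, tap count, and bin values are illustrative):

```python
def tdc_interval(n_ticks: int, t_clk_s: float,
                 tap_index: int, cal_table_s: list[float]) -> float:
    """Combine the coarse tick count with a calibrated fine remainder:
    dt = N * Tclk + epsilon, where epsilon is read from a per-bin calibration
    table that maps tap index -> corrected time within one tick."""
    epsilon = cal_table_s[tap_index]     # calibrated centre of the hit bin
    return n_ticks * t_clk_s + epsilon

# 10 MHz timebase -> 100 ns ticks; an 8-tap interpolator with *unequal* bins,
# which is exactly why a DNL/INL correction table is needed.
t_clk = 100e-9
cal_table = [6e-9, 19e-9, 31e-9, 44e-9, 56e-9, 69e-9, 81e-9, 94e-9]
dt = tdc_interval(n_ticks=12_345, t_clk_s=t_clk, tap_index=3, cal_table_s=cal_table)
print(f"dt = {dt * 1e6:.6f} us")   # ~1234.544 us
```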

Common TDC concepts (implementation-agnostic)

Delay-line taps
A chain of small delays creates “time bins.” Interpolation picks the tap index near the edge. Sensitive to bin-width variation and temperature drift → calibration required.
Vernier method
Uses two close delays and measures their difference to refine resolution. Can achieve fine bins but linearity and stability still depend on characterization.
Multi-phase sampling
Several clock phases sample the edge to estimate the fractional position. Phase imbalance and drift turn into non-linearity unless corrected.
FPGA carry-chain
Uses fast internal carry elements as delay bins. Great for integration and timestamping throughput, but needs ongoing calibration to control DNL/INL and temperature effects.

Error terms that define the real limit

  • Bin width variation → DNL/INL: time bins are not perfectly equal; this creates non-linearity in ε unless corrected.
  • Temperature drift: delay elements move with temperature; a valid calibration table must match the operating range.
  • Calibration table (mapping): ε needs a correction map (and a known version/age) to be used as an accurate quantity.
  • Interpolation residuals: even after correction, repeatability hits a floor (jitter floor) set by the whole measurement chain.

“Trusted resolution” criteria (how to read specs correctly)

What to check | Why it matters | Practical evidence
Linearity (DNL/INL) | Determines whether ε is proportional across the full tick range. | Spec plots, correction tables, or a stated linearity budget.
Jitter floor (repeatability) | Sets the minimum spread of repeated Δt measurements, even with ideal edges. | Reported σ at a known N, plus the instrument’s floor condition.
Calibration interval + validity | Defines how long the correction remains accurate over temperature and time. | CalVersion/CalAge, “cal applied” flags, and operating temp validity states.

Quick validation workflow (user-facing)

  1. Confirm the measurement exports σ and N (not only a mean value).
  2. Verify a TDC calibration table is present and current (version/age available).
  3. Check for temperature validity or drift warnings during long runs.
  4. Compare measured spread to the stated jitter floor under a clean edge condition.
[Figure: TDC interpolation, coarse plus fine. A coarse counter counts whole ticks (N·Tclk) while TDC delay taps interpolate the sub-tick remainder ε; a calibration table corrects bin-width variation (DNL/INL) before the corrected ε is combined into Δt. Rule: “ps resolution” is not enough; verify linearity, jitter floor, and calibration validity.]

H2-6 · Input edge timing uncertainty: noise-to-time conversion & time-walk

In real measurements, the limiting factor is often the input edge, not the internal TDC. Any voltage noise around the trigger threshold becomes timing noise. Slow edges and varying amplitude create larger uncertainty, even if the instrument advertises very fine interpolation.

Noise → time conversion (key relationship)
σt ≈ σv / (dV/dt)
Larger slope at the crossing (faster edge) reduces timing jitter; smaller slope (slower edge) amplifies timing jitter.
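A one-line calculation makes the sensitivity concrete (the noise and slew values are illustrative):

```python
def timing_jitter_s(noise_v_rms: float, slew_v_per_s: float) -> float:
    """Noise-to-time conversion at the trigger threshold: sigma_t ~ sigma_v / (dV/dt)."""
    return noise_v_rms / slew_v_per_s

# The same 1 mV RMS of noise, seen by two different edges at the threshold:
fast = timing_jitter_s(1e-3, 1.0 / 1e-9)   # ~1 V/ns edge -> ~1 ps of jitter
slow = timing_jitter_s(1e-3, 1.0 / 1e-6)   # ~1 V/us edge -> ~1 ns of jitter
print(f"fast edge: {fast * 1e12:.1f} ps   slow edge: {slow * 1e9:.1f} ns")
```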

Time-walk: amplitude changes create systematic timing shifts

  • Fixed threshold: if amplitude varies, the edge crosses the same threshold at different times → a deterministic offset (time-walk).
  • Constant-fraction concept: triggering at a fixed fraction of the pulse height reduces amplitude-dependent shifts (conceptual method).
  • Amplitude gating/windowing: discarding out-of-range pulses improves consistency and prevents biased statistics.

Hysteresis trade-off: stability vs trigger-point shift

  • Benefit: hysteresis prevents chatter when noise causes multiple crossings near the threshold.
  • Cost: two thresholds (Vth+ / Vth−) mean the effective trigger point can shift with direction and waveform conditions.
  • Practical rule: use enough hysteresis to suppress chatter, but not so much that threshold-dependent offset dominates.

Input criteria (what must be controlled)

Parameter | Why it matters | What to do
Minimum amplitude at threshold | Improves SNR at the crossing point and reduces time-walk sensitivity. | Set amplitude windowing; avoid operating near trigger noise limits.
Minimum slew rate (dV/dt) | Directly controls σt via noise-to-time conversion. | Improve edge shaping/termination; select a cleaner crossing region.
Threshold policy | A poor threshold can sit on a slow portion of the edge and magnify jitter. | Choose a threshold where slope is highest and stable.
Hysteresis / conditioning | Prevents chatter but changes the effective trigger point. | Use only as needed; confirm offsets do not dominate the measurement target.

Diagnostics: separating edge issues from timebase/TDC limits

  1. Move the threshold: large changes in mean/σ indicate edge slope and time-walk dominate.
  2. Increase edge slope: if σ drops sharply, the measurement is edge-limited (not TDC-limited).
  3. Apply amplitude windowing: if mean shifts, time-walk was present in the unfiltered dataset.
  4. Compare to jitter floor: only near-floor results justify blaming timebase/TDC as the limiter.
[Figure: Edge timing uncertainty. Top panel: fast and slow edges crossing the same threshold, with the slow edge showing a wider timing-uncertainty band (noise-to-time conversion). Bottom panel: pulses of different amplitude crossing the same fixed threshold at different times (time-walk), with a small hysteresis window (Vth+ / Vth−) indicated. Practical focus: maximize slope at the crossing and control the amplitude distribution to reduce jitter and time-walk.]

H2-7 · Prescalers, dividers, and high-frequency front-end choices (counter viewpoint)

At high input frequencies, the first priority is reliable edge capture. A prescaler/divider reduces the edge density so the counter or timestamp engine stays within its internal bandwidth. This improves robustness, but it also introduces new constraints: added timing uncertainty, threshold sensitivity, and overdrive/limiting side effects.

Why prescalers exist (counter viewpoint)
  • Bandwidth protection: keeps the edge rate inside the counter/timestamp processing limit.
  • Cleaner counting: reduces missed edges caused by internal re-arming and synchronization pressure.
  • Evidence-friendly: enables stable operation states (ratio, flags, counters) that can be logged.

What prescaling can break (and how it shows up)

Added jitter
Divider logic and front-end conditioning add timing uncertainty. The visible symptom is a higher spread in interval/TOA results compared to a clean direct path.
Threshold sensitivity
High-frequency edges are judged at a threshold crossing. If the slope is weak or noisy at that point, small voltage noise becomes large timing noise.
Overdrive & limiting
Overdriven inputs can trigger limiters or recovery behavior. This can distort the edge shape and increase the chance of missing edges or double counting.

Missing edges vs double counts (pulse-train reliability)

  • Missing edges are typically driven by insufficient edge quality (low slope at the threshold), overload recovery, or internal re-arming pressure.
  • Double counts often come from multiple threshold crossings (noise or ringing) when hysteresis/conditioning is insufficient.
  • Practical rule: treat edge quality and overdrive status as part of the measurement record, not as “setup details.”

Selection guide: ratio + threshold policy (no oscilloscope front-end details)

Input condition | Preferred action | Evidence to log
Very high frequency / dense edges | Enable prescaler/divider to keep edge rate within capture bandwidth. | Prescaler ratio, missing-edge count, overflow flags.
Marginal edge quality (slow/rounded crossing) | Adjust threshold to a steeper region; apply minimal hysteresis to prevent chatter. | Threshold setting, edge-quality status, double-count detector (if available).
Possible overdrive / limiting | Avoid operating in limiter recovery; confirm stable triggering under the chosen ratio. | Overdrive flag, missing-edge count, validity flags for the run window.

Minimum “trust record” fields for high-frequency counting

  • Ratio: prescaler/divider setting applied to the run.
  • Trigger policy: threshold value and hysteresis state (if used).
  • Integrity counters: missing-edge and/or double-trigger indicators, if provided.
  • Overdrive status: limiter/overload indicator if available.
[Figure: High-frequency front end. The input passes through a limiter and comparator into a prescaler/divider (÷2 / ÷4 / ÷8) and then the counter; missing-edge and double-count flags are monitored. Key constraints: maximum input frequency (capture limit), edge quality (slope at Vth), and overdrive (limiter state). Record the ratio, threshold, and integrity flags to make high-frequency counts auditable.]

H2-8 · Dead time, arming/holdoff, and throughput: when counters miss events

Missing events usually come from blind time and throughput limits, not from “random errors.” After a measurement window ends, the instrument may require time to re-arm, clear state, process timestamps, or export results. In high-rate timestamping, the limiting chain is often capture rate → FIFO depth → output bandwidth.

Dead time (definition)
Dead time is the interval between the end of one capture window and the moment the next capture is fully armed. Events inside this blind zone are not measured and must be treated as unobserved, not “zero.”

Throughput model: capture → FIFO → processing/output

  • Capture rate: how fast edges can be timestamped and written into the FIFO.
  • FIFO depth: the burst buffer that absorbs short spikes in event rate.
  • Processing/output bandwidth: how fast timestamps can be reduced, logged, or exported without backpressure.
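A back-of-the-envelope check of that chain (rates, burst length, and FIFO depth are illustrative assumptions, not device specifications):

```python
def fifo_overflow_check(event_rate_hz: float, burst_len: int,
                        capture_rate_hz: float, fifo_depth: int,
                        drain_rate_hz: float) -> dict:
    """Simple headroom check for the capture -> FIFO -> output chain.

    During a burst the FIFO fills at (event rate - drain rate); it overflows
    if the burst outruns either the capture limit or the buffer depth."""
    if event_rate_hz > capture_rate_hz:
        return {"ok": False, "reason": "event rate exceeds capture capacity"}
    net_fill_hz = max(0.0, event_rate_hz - drain_rate_hz)
    burst_duration_s = burst_len / event_rate_hz
    peak_fill = net_fill_hz * burst_duration_s
    return {"ok": peak_fill <= fifo_depth,
            "peak_fill_events": int(peak_fill),
            "headroom_events": int(fifo_depth - peak_fill)}

# Example: 5 Mevent/s bursts of 10k events, 10 Mevent/s capture, 8k-deep FIFO,
# 1 Mevent/s sustained export -> the FIFO, not the capture stage, sets the limit.
print(fifo_overflow_check(5e6, 10_000, 10e6, 8192, 1e6))
```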

“No-miss” criteria (practical, auditable)

Criterion | Meaning | What to monitor
Max event rate < capture capacity | Edges must be timestamped fast enough in real time. | Capture status, dropped counter (if provided).
FIFO headroom during bursts | Short spikes must not overflow the buffer. | FIFO fill level and overflow flags.
Output bandwidth ≥ sustained event rate | Export/processing must keep up over long runs. | Backpressure indicators, overflow count over time.

Mandatory record fields for timestamp runs

  • Windowing: window length, re-arm/holdoff settings.
  • Counts: Ncaptured, Ndropped (if available), OverflowCount.
  • Validity: overflow flag state per window and any “data valid” indicator.

Quick diagnosis map

  1. Overflow rises: FIFO/output throughput is the bottleneck — reduce event rate or increase buffering/export capacity.
  2. Missing-edge rises without overflow: front-end trigger policy or holdoff is the limiter — adjust threshold/conditioning.
  3. Both rise: event density is beyond the full chain capacity — prescale/divide and re-check integrity counters.
[Figure: Dead time and throughput. Timeline: measurement window → processing → dead time → next window, with the blind zone between window end and re-arm. A FIFO fill bar shows burst buffering and the overflow flag turning on when the buffer saturates. No-miss rule: monitor OverflowCount/DroppedCount per window and keep FIFO headroom during bursts; treat dead time as unobserved time and never merge overflowed windows into “clean” statistics.]

H2-9 · Multi-channel alignment: skew, common timebase, and channel-to-channel interval

Multi-channel time-interval work is only trustworthy when two different problems are managed at once: time scale and channel zero alignment. A common timebase keeps the scale consistent, but it does not guarantee that two channels share the same effective “time zero.” Channel-to-channel results must therefore be corrected by a skew calibration table and validated across temperature and time.

Common timebase ≠ perfect channel alignment
  • Common timebase: makes time units consistent and traceable (scale).
  • Channel skew: fixed and drifting offsets from path delay and per-channel interpolation behavior (zero).
  • Delta measurements: channel-to-channel interval accuracy is dominated by skew calibration and its drift.

Where channel skew comes from (counter viewpoint)

  • Path delay: each channel has its own conditioning and threshold-crossing path, creating a constant offset.
  • Interpolation mismatch: independent TDCs rarely share identical bin behavior; residual nonlinearity becomes skew.
  • Trigger policy differences: threshold/hysteresis inconsistencies shift the effective time pick-off between channels.

Practical skew calibration workflow (pulse split method)

  1. Split one pulse edge to CH1 and CH2 so both channels see the same physical event.
  2. Capture timestamps t1 and t2 using the same timebase and the intended trigger policy.
  3. Estimate skew as skew12 = (t2 − t1) using enough samples to get a stable mean and spread.
  4. Store a calibration entry with temperature tag, date, and version: SkewCalTable[Temp].
  5. Correct future results with Δt12(corrected) = (t2 − t1) − skew12_table(Temp).
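A minimal sketch of steps 3–5 (function and field names are illustrative; a real table entry would also carry the temperature tag, date, and version):

```python
import statistics

def calibrate_skew(t1_s: list[float], t2_s: list[float]) -> dict:
    """Pulse-split skew calibration: both channels see the same edge, so the mean
    of (t2 - t1) is the channel skew and its spread bounds repeatability."""
    deltas = [b - a for a, b in zip(t1_s, t2_s)]
    return {"skew12_s": statistics.mean(deltas),
            "skew_sigma_s": statistics.stdev(deltas),
            "n_samples": len(deltas)}

def corrected_interval(t1_s: float, t2_s: float, skew12_s: float) -> float:
    """Apply the stored skew entry to a later channel-to-channel measurement."""
    return (t2_s - t1_s) - skew12_s

# Example: CH2 reads ~2.3 ns late on the split edge; later Δt12 results subtract it.
cal = calibrate_skew([0.0, 1.0e-3, 2.0e-3],
                     [2.3e-9, 1.0e-3 + 2.2e-9, 2.0e-3 + 2.4e-9])
print(cal)
print(f"corrected Δt12 ≈ "
      f"{corrected_interval(5.0e-3, 5.0e-3 + 12.3e-9, cal['skew12_s']) * 1e9:.2f} ns")
```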

Acceptance checks (what proves alignment is “good”)

Check | Target behavior | Evidence fields
Before vs after correction | Δt12 error shrinks and becomes stable after applying the skew table. | SkewMean, SkewSigma, CalVersionID.
Repeatability | Distribution stays narrow under the intended measurement mode. | Δt12 mean, σ (or p-p), Nsamples.
Temperature hold | Skew remains within its validity envelope across the temperature range in use. | TempTag, CalAge, ValidityFlag (over-temp / expired).

Minimum “trust record” fields for channel-to-channel intervals

  • Timebase status: locked/valid state and the reference selection status.
  • SkewCalTable: CalVersionID, CalAge, TempTag (or declared validity window).
  • Trigger policy summary: threshold/hysteresis consistency across channels.
  • Integrity flags: overflow/dropped/missing-edge indicators for the capture window.
[Figure: Multi-channel skew calibration with a pulse split. One edge is split to CH1 and CH2; each channel’s conditioning and TDC produce timestamps t1 and t2, and Δt12 = t2 − t1 is corrected with a skew calibration table (mean + spread, temperature tag, CalVersion, CalAge). Log the timebase state, skew table version/age, and validity flags with every Δt12 report.]

H2-10 · Calibration & uncertainty budget: how to make results traceable

Traceable results require a clear calibration structure and a repeatable uncertainty budget. Calibration is best treated as two layers: (1) timebase calibration that sets the time scale, and (2) delay/interval calibration that removes interpolation nonlinearity, channel skew, and trigger-related offsets. An uncertainty budget then documents what still remains and how it is combined into a defensible final number.

Layer 1 · Timebase
Calibrates the reference time scale so frequency and interval results remain traceable over time. The essential output is a correction state and a validity window.
Layer 2 · Delay / Interval
Calibrates interpolation linearity, channel skew, and threshold-related offsets so intervals and TOA results remain consistent.

Minimal uncertainty budget template (practical and auditable)

Budget item | What it represents | Typical evidence source
Reference accuracy | Residual time scale error after timebase calibration and validity checks. | Calibration report + timebase status logs.
Interpolation error | Residual linearity and correction-table mismatch in the TDC path. | Interval calibration results + correction table version.
Trigger uncertainty | Threshold-crossing timing noise driven by edge slope and noise. | Repeat captures under the intended trigger policy.
Repeatability | Observed scatter that remains after corrections (includes short-term drift and noise). | Run statistics (σ / p-p) with validity flags confirmed.
Combining the budget (engineering rule)
When contributors are treated as independent, they are commonly combined using a root-sum-of-squares concept. The dominant term should be identified and improved first; in real setups, trigger uncertainty often dominates when edge slope is marginal.
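A short sketch of that root-sum-of-squares combination, using illustrative picosecond-level numbers (the budget values are assumptions, not a real instrument's figures):

```python
import math

def combined_uncertainty(budget_s: dict[str, float]) -> tuple[float, str]:
    """Root-sum-of-squares combination of independent budget terms (in seconds).
    Also returns the dominant contributor, which should be improved first."""
    total = math.sqrt(sum(v ** 2 for v in budget_s.values()))
    dominant = max(budget_s, key=budget_s.get)
    return total, dominant

budget = {
    "reference_accuracy": 5e-12,
    "interpolation_error": 8e-12,
    "trigger_uncertainty": 25e-12,   # marginal edge slope often puts this term on top
    "repeatability": 10e-12,
}
total, dominant = combined_uncertainty(budget)
print(f"combined ~ {total * 1e12:.1f} ps, dominated by {dominant}")
```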

Self-test and calibration interval management

  • Reference injection: periodically inject a known reference edge/period to validate the full measurement chain.
  • Known delay check: verify delay/interval correction using a known delay element or a certified periodic source.
  • Calibration reminders: store CalInterval and raise a validity flag when the interval is exceeded.
  • Drift tracking: log skew/zero trends over time to predict when recalibration is needed.

Mandatory report fields (traceability checklist)

  • CalVersionID, CalDate, CalAge, CalInterval.
  • Timebase status (valid/locked) and reference selection state.
  • Uncertainty summary (budget items + combined value) and the measurement mode used.
  • Validity flags (over-temp, expired calibration, overflow/dropped events).
[Figure: Uncertainty budget and calibration timeline. A stacked bar combines reference accuracy, interpolation error, trigger uncertainty, and repeatability into the final uncertainty; a timeline marks calibration date, validity window, reminder, and expiry (CalVersionID, CalAge, ValidityFlag). Separate timebase calibration from interval calibration, then publish a minimal uncertainty budget.]

H2-11 · Validation checklist & field evidence: proving performance and catching latent faults

Validation should produce a repeatable evidence chain: each metric is measured with a defined setup, an explicit pass gate, and a record of instrument state (reference lock, overflow, missed edges, temperature). The checklist below is written to be executable on a lab bench and auditable later.

Output artifacts (recommended)
  • CSV summary + plot screenshots (σ vs gate time, TI residuals)
  • Calibration/version tags (CalVersionID, CalAge, ValidityFlag)
  • Internal health logs (RefLock, RefMissing, Overflow, MissedEdge)
Example bench building blocks (replaceable)
10 MHz / 1PPS standard: SRS FS725 (internally uses PRS10) • Pulse/burst source: Keysight 81150A • Split/terminate/attenuate: ZFRSC-42-S+, ANNE-50+, BW-S10W2+/BW-S12W2+ • Ultra-fast pass-through (optional): PSPL5542

1) Frequency accuracy vs a known 10 MHz / 1PPS reference

  • Purpose: confirm traceable scale when locked to an external reference (and detect lock/missing events).
  • Setup: drive the counter’s external reference input from a stable standard (example: SRS FS725 10 MHz, or SRS PRS10 module). Use proper splitting/termination as needed (example: ZFRSC-42-S+ + ANNE-50+).
  • Procedure: measure the same input under (a) internal timebase, then (b) external reference locked. Record both the mean error and time trend.
  • Evidence: Δf/f summary, RefSelected, RefLock, RefMissingCount, CalVersionID.
  • Pass gate: only accept results when RefLock=true and RefMissingCount=0; accuracy must meet the instrument’s stated spec under external lock.

2) Resolution vs gate time: sweep gate time and verify σ convergence

  • Purpose: prove that longer observation windows reduce scatter until a physical noise floor is reached.
  • Setup: stable repetitive input (example generator: Keysight 81150A). Keep amplitude/threshold policy fixed.
  • Procedure: sweep gate time over multiple decades. At each gate time, collect N repeats and compute σ of the reported value.
  • Evidence: plot σ vs gate time, mode tag (gated/reciprocal/timestamp), RefLock status, Nsamples.
  • Pass gate: σ should improve with gate time and then plateau; an early plateau indicates a limit dominated by trigger uncertainty or interpolation residuals.
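A hedged helper for picking the “knee” from such a sweep (the plateau criterion and the example numbers are illustrative, not a standardized rule):

```python
def knee_of_sigma_curve(sweep: list[tuple[float, float]],
                        plateau_ratio: float = 1.5) -> float:
    """Pick the gate time at the knee of a sigma-vs-gate-time sweep: the shortest
    gate whose sigma is within plateau_ratio of the best (plateau) sigma."""
    floor_sigma = min(sigma for _, sigma in sweep)
    for gate_s, sigma in sorted(sweep):
        if sigma <= plateau_ratio * floor_sigma:
            return gate_s
    return sorted(sweep)[-1][0]

# Example sweep (gate time in s, observed sigma in Hz): sigma improves, then plateaus.
sweep = [(0.01, 12.0), (0.1, 1.3), (1.0, 0.14), (10.0, 0.11), (100.0, 0.10)]
print(f"knee gate time ~ {knee_of_sigma_curve(sweep)} s")
```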

3) Time-interval linearity: sweep known delays and check residuals

  • Purpose: verify interval accuracy across multiple delay points (not just “best case” at one point).
  • Setup: split one edge into two channels (example splitter: ZFRSC-42-S+). Use coax length difference as known delay; control reflections with proper termination (ANNE-50+) and optional fixed attenuation (BW-S10W2+ / BW-S12W2+).
  • Procedure: measure Δt at several delay points; fit measured vs expected and plot residuals.
  • Evidence: Δt residual plot, temperature tag, CalVersionID for the skew/interval correction tables.
  • Pass gate: residuals must stay within the declared TI uncertainty envelope across the swept points and temperatures of interest.
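A small sketch of the fit-and-residuals step (pure-Python least squares; the delay points and readings are illustrative):

```python
def ti_linearity_residuals(expected_s: list[float], measured_s: list[float]) -> dict:
    """Least-squares fit measured = a*expected + b, then report residuals.
    A gain error shows up in 'a', a fixed offset/skew in 'b', and nonlinearity
    in the residual spread."""
    n = len(expected_s)
    mx = sum(expected_s) / n
    my = sum(measured_s) / n
    sxx = sum((x - mx) ** 2 for x in expected_s)
    sxy = sum((x - mx) * (y - my) for x, y in zip(expected_s, measured_s))
    a = sxy / sxx
    b = my - a * mx
    residuals = [y - (a * x + b) for x, y in zip(expected_s, measured_s)]
    return {"gain": a, "offset_s": b, "worst_residual_s": max(abs(r) for r in residuals)}

# Example: delays set by known cable-length differences (nanosecond scale).
expected = [1e-9, 2e-9, 5e-9, 10e-9, 20e-9]
measured = [1.05e-9, 2.04e-9, 5.08e-9, 10.03e-9, 20.06e-9]
print(ti_linearity_residuals(expected, measured))
```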

4) Jitter floor: measure the instrument-limited timing noise

  • Purpose: quantify the lowest achievable timing scatter with a stable input and a controlled trigger policy.
  • Setup: use a stable edge source (example: Keysight 81150A) and ensure clean signal handling (proper termination ANNE-50+; optional inline attenuation BW-S10W2+). For ultra-fast pulse fidelity, an optional broadband bias tee (PSPL5542) can help pass fast edges while providing DC biasing/blocks.
  • Procedure: capture timestamps over long runs; report RMS and peak-to-peak TI jitter. Repeat under internal vs external reference lock to separate contributions.
  • Evidence: TI jitter summary (RMS/p-p), RefLock state, threshold policy, temperature.
  • Pass gate: floor must match the declared instrument limit for the chosen mode; any drift with temperature must remain within validity assumptions.

5) Miss / overflow stress: pulse bursts and maximum event-rate throughput

  • Purpose: prove that the instrument either captures events reliably within the specified rate, or flags overflow/drop conditions reliably above it.
  • Setup: use burst/pattern generation (example: Keysight 81150A) and run multiple event-rate tiers. Keep logging enabled.
  • Procedure: for each tier, capture for a fixed duration and record overflow/missed-edge counters plus FIFO high-water mark.
  • Evidence: OverflowCount, MissedEdgeCount, DroppedTimestampCount, FIFOLevelHighWatermark, ValidityFlag.
  • Pass gate: below the declared throughput limit, missed/overflow must remain zero; above the limit, the overflow/drop flags must be explicit and logged (no “silent failure”).

Field evidence & internal logs (counter-observable only)

Field faults often appear as intermittent shifts, missing events, or “impossible” results. A counter should therefore emit a minimal set of internal evidence fields so each report is traceable to a system state.

Reference & lock evidence
  • RefSelected (internal / external 10 MHz / external 1PPS)
  • RefLock (locked / holdover / unlocked)
  • RefMissingCount + timestamp of last missing event
Throughput & data-path evidence
  • OverflowCount, DroppedTimestampCount
  • MissedEdgeCount (edge integrity indicator)
  • FIFOLevelHighWatermark (approaching capacity)
Environment & reset evidence
  • Temperature (internal sensor), SupplyOK / UVLO flag
  • ResetCause (WDT / BOR / CPU fault)
  • CalVersionID, CalAge, ValidityFlag (expired / over-temp)
[Figure: Validation matrix (evidence-ready). Five bench tests mapped to output metrics and pass gates: accuracy vs 10 MHz / 1 PPS reference (Δf/f, time trend, RefLock/RefMissing; RefLock=true and within spec), gate-time sweep (σ vs gate time, mode tag; σ improves then plateaus), TI linearity sweep with known delays (residual plot, CalVersionID; residual ≤ TI budget), jitter floor with a stable input (TI jitter RMS/p-p, temperature tag; matches declared floor), and burst stress (Overflow/MissedEdge/FIFO high-water; no silent drops). Field evidence: log RefLock/RefMissing, Overflow/MissedEdge, temperature, and ResetCause with every report.]


H2-12 · FAQs – Frequency / Time Interval Counter Architecture & Calibration

Each answer focuses on counter-observable behavior and actionable settings. For high confidence, pair every result with evidence fields such as reference lock state, overflow/missed-edge counters, and calibration/version tags.

1) Why is reciprocal counting usually more accurate at low frequency?
Reciprocal counting measures the signal’s period with a high-resolution timebase, then computes frequency from that period. At low frequency, gated counting may observe only a few cycles, so quantization error becomes a large fraction of the result. Reciprocal mode improves low-frequency resolution until the limit becomes trigger timing uncertainty or the instrument’s jitter floor. Evidence: Mode, Gate/Avg time, σ, RefLock.
2) How long should gate time be to balance resolution and speed?
Gate time should be chosen from a measured convergence curve, not a fixed rule. Sweep several gate times, repeat each setting N times, and plot σ versus gate time. σ should drop as gate time increases and eventually plateau at a noise floor dominated by trigger uncertainty or interpolation residuals. Select a gate time near the “knee” before the plateau for the best speed/resolution trade. Evidence: GateTimeSweep, σ, N, ValidityFlag.
3) When should time-interval mode be used instead of frequency mode?
Time-interval (TI) mode is best when the quantity of interest is an event-to-event timing difference: pulse spacing, time-of-arrival (TOA), phase drift, or jitter distribution. Frequency mode is better for long-term average rate and stable trend reporting. TI/timestamping exposes transient behavior that frequency mode may average away. For high event rates, ensure throughput headroom and watch for overflows or dropped timestamps. Evidence: Mode, Δt distribution, Overflow/MissedEdge.
4) Does a “1 ps TDC resolution” spec mean true 1 ps accuracy?
No. Resolution is the smallest reporting step, while accuracy depends on calibration, linearity (residual DNL/INL), temperature drift, reference stability, and trigger uncertainty. Real performance is bounded by the jitter floor and the quality/age of correction tables. The most reliable proof is a time-interval linearity sweep using known delays plus a residual plot, combined with long-run jitter statistics under a stable setup. Evidence: CalVersionID/CalAge, residuals, JitterRMS, TempTag.
5) Why can slow edge signals make jitter suddenly much worse?
Timing jitter at a threshold is often dominated by noise-to-time conversion: σt ≈ σv / (dV/dt). When edge slope (dV/dt) is small, the same voltage noise produces much larger time scatter. This can look like a sudden degradation after adding filtering, reducing amplitude, or shifting the threshold to a low-slope region. Improve edge quality, stabilize amplitude, and keep threshold policies consistent across runs. Evidence: Threshold setting, input amplitude, slew class, TI σ.
6) What causes time-walk, and how can it be reduced without complex circuitry?
Time-walk occurs when a fixed threshold intersects edges of different amplitude at different times. It behaves like a systematic offset, not random noise. Without adding complex circuits, reduce amplitude variation (stable drive, proper termination, controlled overdrive), keep threshold/hysteresis policies consistent, and use validity checks that flag out-of-range amplitude/overdrive as invalid samples. If available, correlate time error with amplitude tags to confirm time-walk is the dominant term. Evidence: Overdrive/Amplitude tag, ValidityFlag, offset trend vs amplitude.
7) Can a prescaler “pollute” jitter measurements, and how to tell if it is the bottleneck?
A prescaler/divider can add sensitivity to edge quality, amplitude overdrive, and internal switching noise, which may raise the measured jitter floor. The most practical check is an A/B comparison: measure jitter with the same source at a frequency where direct counting is possible, then enable the prescaler path and repeat under identical threshold policy. If jitter increases sharply or becomes amplitude-dependent, the prescaler path is limiting. Always monitor missed-edge/overload indicators during high-frequency tests. Evidence: Prescaler setting, JitterRMS, OverdriveFlag, MissedEdgeCount.
8) What errors does dead time create, and how can logs prove no events were missed?
Dead time is a blind interval between measurement windows or during re-arming/processing when new events may not be captured. It can cause silent under-counting, biased statistics, or missing timestamps. A robust counter should make event integrity observable by exposing FIFO high-water mark, overflow/dropped timestamp counters, and missed-edge indicators. Prove “no misses” by showing these counters remain zero below the declared throughput limit, and that any over-limit condition is explicitly flagged. Evidence: FIFOLevelHighWater, OverflowCount, DroppedTimestampCount, MissedEdgeCount, ValidityFlag.
9) For multi-channel phase/interval measurements, how is channel-to-channel skew calibrated and maintained?
A common timebase aligns the time scale, but it does not remove channel-specific offsets. Skew calibration typically splits the same physical edge into two channels, measures Δt12 over many samples, and stores a skew correction table tagged by temperature and version. Maintain accuracy by enforcing a calibration validity window and re-checking skew across the temperature range used in practice. Every reported Δt12 should include the skew table version/age and a validity flag. Evidence: SkewMean/SkewSigma, TempTag, CalVersionID, CalAge, ValidityFlag.
10) How should Allan deviation be read on a counter page, and how can it guide gate-time selection?
Allan deviation describes stability versus averaging time (τ). Short τ is often dominated by noise (jitter and trigger uncertainty), while very long τ may be dominated by drift. For gate-time selection, treat gate/average time as the practical τ and choose a region where Allan deviation meets the stability target without sacrificing responsiveness. If a “best τ” region exists, it often aligns with the knee seen in σ vs gate-time tests. Evidence: τ (or gate/avg time), Allan points, RefLock.
11) After connecting an external 10 MHz / 1PPS reference, which states must be monitored for reliability?
Reliability requires more than “a cable is connected.” The reference selection must be correct, the lock state must be true, and reference-missing events must be observable. At minimum, monitor RefSelected, RefLock, and RefMissingCount, and treat results as invalid during unlock/holdover transitions unless explicitly supported. For traceability, bind these states to every measurement record along with calibration tags and temperature. Evidence: RefSelected, RefLock, RefMissingCount, Holdover/Unlocked flag, ValidityFlag, CalVersionID/CalAge.
12) What is a 30-minute minimum “go-live” acceptance test set?
A practical 30-minute set is: (1) confirm external reference selection and lock (RefSelected/RefLock), (2) run two gate times (short/long) and verify σ improves as expected, (3) sanity-check TI with one or two known delays and record residuals, (4) measure jitter floor under a stable input (RMS and p-p), (5) run a short burst/throughput stress and confirm no silent drops (Overflow/Dropped/MissedEdge), and (6) export a report bundle with CalVersionID, CalAge, ValidityFlag, temperature, and reset cause. Evidence: all listed counters/flags + exported logs.