Frequency / Time Interval Counter Architecture & Calibration
A frequency/time-interval counter turns input edges into a trusted, traceable number by combining a calibrated timebase with gated/reciprocal/timestamp measurement and controlled trigger timing. The key to reliable results is managing timing uncertainty (edge slew, time-walk, TDC limits, dead time/throughput) and proving performance with logs, calibration tags, and a repeatable validation checklist.
H2-1 · What a Frequency/Time-Interval Counter really measures
A frequency/time-interval counter converts signal edges into time-referenced numbers. Instead of “showing a waveform,” it anchors measurements to a timebase (internal or external 10 MHz / 1 PPS) and uses counting + interpolation + statistics to deliver repeatable, traceable results.
Measurement menu (what the instrument can output)
| Quantity | Definition (practical) | Primary limits | Best use |
|---|---|---|---|
| Frequency (f) | Cycles per second inferred from counts or time intervals. | Gate-time quantization (gated), timebase accuracy, edge noise. | Clock verification, divider chain checks, stable oscillators. |
| Period (T) | Time between adjacent like edges (rising→rising). | Trigger-point uncertainty, slew-rate limits, interpolation residual. | Low-frequency high-resolution measurement; jitter studies. |
| Time interval (Δt) | Start–Stop delay across one or two channels. | Channel skew, arming/holdoff behavior, trigger threshold mismatch. | Propagation delay, latency validation, interval timing. |
| Timestamp / TOA | Time-of-arrival for each event, relative to the timebase. | FIFO throughput, overflow, missing-edge handling, timebase stability. | Event streams, diagnostics, burst analysis, missing-edge proof. |
| Totalize / event count | Accumulated number of events (optionally within a gate window). | Gate boundary errors, dead time, input conditioning errors. | Production counts, pulse accumulation, gated event statistics. |
Why a counter is not an oscilloscope (engineering reasons)
- Timebase anchoring: results are explicitly referenced to a timebase (internal/external). This supports calibration and long-term comparability.
- Interpolation + statistics: counters combine coarse counting with fine timing interpolation and report stable averages/variation, which helps interpret “how trustworthy” a number is.
- Edge-time definition matters: the measurement is about the event timing (crossing a threshold), so trigger-point stability is a first-class design constraint.
Practical checklist (to avoid misleading numbers)
- Define the quantity: frequency vs period vs interval vs timestamps vs totalize.
- Confirm edge quality: ensure adequate slew rate at the trigger threshold; slow/noisy edges inflate timing uncertainty.
- Confirm reference state: internal vs external 10 MHz / 1 PPS, and whether the instrument indicates “locked/valid.”
- Choose the right mode: gated is fast; reciprocal improves low-frequency resolution; timestamping exposes missing events.
H2-2 · Measurement modes: gated vs reciprocal vs timestamping
Measurement “mode” defines what is counted, what is timed, and which error term dominates. Three practical modes cover most use cases: gated counting, reciprocal counting, and timestamping.
1) Gated counting (cycle count within a fixed window)
- Mechanism: count cycles N during a gate time Tgate, then estimate f ≈ N / Tgate.
- Dominant limit: quantization of ±1 count, especially at low frequency (small N).
- Best for: quick checks, mid/high frequencies, and applications where speed matters more than ultra-fine resolution.
- Key knob: longer Tgate improves resolution but reduces update rate.
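The gate-time trade-off can be sketched numerically. This is an illustrative model only; the signal frequency and gate values are assumptions, not instrument specs, and `gated_estimate` is a hypothetical helper name:

```python
# Gated counting: f ≈ N / Tgate, with ±1 count quantization on N.

def gated_estimate(f_true, t_gate):
    """Return the frequency estimate and its ±1-count resolution bound in Hz."""
    n = int(f_true * t_gate)          # whole cycles counted within the gate
    f_est = n / t_gate                # reported frequency
    resolution = 1.0 / t_gate         # one count is worth 1/Tgate in Hz
    return f_est, resolution

f_est, res = gated_estimate(f_true=100.0, t_gate=0.1)
# At 100 Hz with a 0.1 s gate, one count is worth 10 Hz (10% of reading);
# stretching the gate to 10 s shrinks the same step to 0.1 Hz.
```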
2) Reciprocal counting (time a period/interval, then invert)
- Mechanism: measure the elapsed time Δt over one or more cycles (N), then compute f ≈ N / Δt (or T = Δt / N).
- Why it wins at low frequency: long periods mean the fine-interpolation residual is a smaller fraction of the total interval.
- Dominant limit: edge timing uncertainty at the trigger point (noise-to-time conversion, time-walk).
- Key knobs: number of cycles N, averaging strategy, and trigger threshold/conditioning quality.
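A minimal sketch of the reciprocal estimate and its resolution bound. The 1 ns interpolator granularity is an assumed illustrative figure, and `reciprocal_estimate` is a hypothetical helper:

```python
# Reciprocal counting: time N cycles, then invert. Resolution is set by the
# timing granularity δt on the interval, not by ±1 count on the cycles.

def reciprocal_estimate(delta_t, n_cycles, timing_resolution):
    """Frequency from a timed interval, plus its resolution bound in Hz."""
    f_est = n_cycles / delta_t
    # A timing error of δt on an interval Δt maps to δf/f ≈ δt/Δt.
    f_resolution = f_est * (timing_resolution / delta_t)
    return f_est, f_resolution

# 100 Hz signal, one cycle timed with 1 ns interpolation:
f_est, f_res = reciprocal_estimate(delta_t=0.01, n_cycles=1,
                                   timing_resolution=1e-9)
# f_res ≈ 1e-5 Hz: far finer than gated ±1 count at the same update rate.
```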
3) Timestamping (time-stamp every event, then post-process)
- Mechanism: assign a timestamp to each edge/event tk, then derive period/frequency/jitter/missing edges offline or in firmware.
- Why it is powerful: exposes event-level anomalies (bursts, missing edges, overflows) that average-based modes can hide.
- Dominant limit: throughput and data integrity (FIFO depth, overflow flags, missing-edge counters).
- Key knobs: event rate limits, holdoff/arming rules, and overflow reporting (must be monitored).
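The post-processing idea can be sketched as follows. The timestamp list is invented test data with one deliberately dropped edge, and `analyze_timestamps` is a hypothetical helper, not a real instrument API:

```python
# Derive periods from a timestamp stream and flag gaps that look like
# missing edges — the anomaly that average-based modes can hide.

def analyze_timestamps(ts, nominal_period, gap_factor=1.5):
    """Return (mean period, count of suspicious gaps) from sorted timestamps."""
    periods = [b - a for a, b in zip(ts, ts[1:])]
    missing = sum(1 for p in periods if p > gap_factor * nominal_period)
    mean_period = sum(periods) / len(periods)
    return mean_period, missing

# 1 ms nominal period with one edge dropped between 3 ms and 5 ms:
ts = [0.000, 0.001, 0.002, 0.003, 0.005, 0.006]
mean_p, n_missing = analyze_timestamps(ts, nominal_period=0.001)
# n_missing == 1: the 2 ms gap exposes the dropped edge; the mean period
# alone (1.2 ms) would merely look slightly off.
```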
Mode selection table (rules, not slogans)
| Need / constraint | Recommended mode | What to set / verify |
|---|---|---|
| Fast updates and adequate resolution at mid/high f | Gated | Set Tgate; confirm stable edge detection; watch for gate-boundary errors. |
| High resolution at low frequency or long periods | Reciprocal | Choose N cycles; optimize trigger threshold; ensure adequate slew rate at threshold. |
| Event stream diagnostics (missing edges, bursts) | Timestamping | Verify FIFO headroom; monitor overflow + missing-edge counters; log validity flags. |
| Edge quality is poor (slow/noisy crossings) | Fix the edge first | Improve slew rate / conditioning; otherwise uncertainty is dominated by trigger-point jitter regardless of mode. |
Practical pitfalls (the ones that silently ruin results)
- Low-frequency results that look “too good to be true”: with a short gate time, ±1 count quantization dominates and the display can sit on a single value; increasing Tgate should reduce variance predictably.
- Reciprocal mode disappointment: slow edges increase timing jitter; the root cause is usually low dV/dt at the trigger threshold.
- Timestamp data lies: overflow/missing-edge flags not monitored; post-processing assumes all events are present.
- Hidden dead time: re-arming/holdoff causes blind intervals; verify with known pulse trains and missed-edge counters.
H2-3 · System architecture: from input edge to a trusted number
A trustworthy counter result is built by a chain of decisions: where an edge is defined, which timebase anchors the measurement, and how calibration and validity flags travel with the number. The architecture below is a practical mental model for reading datasheets, configuring modes, and interpreting logs.
- Reference timebase: a valid, known timebase (internal or external 10 MHz / 1 PPS) that anchors absolute scaling.
- Edge-time definition: a stable trigger point (threshold/hysteresis/conditioning) so event times are repeatable.
- Calibration + uncertainty: corrections (timebase offset, TDC linearity, channel skew) plus validity flags to bound error.
Data path (what happens to the input edge)
- Input conditioning: turns an analog transition into a repeatable digital edge by defining a threshold and adding noise immunity. The practical outcome is a stable event time; unstable crossings inflate timing uncertainty across every mode.
- Optional prescaler/divider: reduces effective edge rate so internal counting/interpolation can run safely. This block must expose error states (overdrive, saturation, missed edges) so results are not silently corrupted.
- Coarse counter (timebase-referenced): counts whole timebase ticks or cycles during gates/intervals and provides the absolute time scale. If the timebase is invalid, absolute accuracy is not guaranteed.
- TDC interpolation (fine time ε): estimates the sub-tick remainder between an edge and the nearest timebase tick. This improves resolution, but requires calibration (linearity/temperature) to avoid “ps-looking” numbers that are not accurate.
- Processor + statistics: converts raw counts and intervals into frequency/period/Δt, then reports mean, variation (σ), sample count N, and validity flags so the number is interpretable.
- Display + logs: presents the result and records the evidence: mode, gate settings, reference state, overflow/missing-edge counters, and calibration versions.
Validity flags (the evidence that prevents silent failure)
| Flag / counter | Meaning | Action |
|---|---|---|
| RefValid / Locked | The timebase/reference input is present and in a valid locked state. | If false, treat absolute results as untrustworthy; log the condition. |
| Holdover | Reference was lost and the instrument is maintaining time using internal holdover. | Mark data windows; compare drift and avoid strict absolute claims. |
| Overflow (FIFO / counters) | Event rate exceeded capture or storage capacity; some data was dropped. | Do not post-process as if all events exist; reduce rate or increase headroom. |
| Missing-edge counter | Detected gaps inconsistent with expected edge stream (or failed arming). | Use to prove data integrity; investigate input conditioning and dead time. |
| CalVersion / CalAge | Which calibration tables are applied and whether they are current. | Record with measurements; update or re-calibrate if outside allowed limits. |
Architecture checklist (quick self-audit)
- Reference: confirm RefValid/Locked status (internal or external) before trusting absolute numbers.
- Edge definition: verify stable trigger threshold and adequate edge slew at the crossing point.
- Prescaler: if enabled, confirm no saturation/overdrive flags and that edge integrity remains intact.
- TDC: confirm calibration table/version is applied and valid over the current temperature range.
- Statistics: always read mean + σ + N, and attach validity flags to exported results.
- Timestamping: monitor FIFO overflow and missing-edge counters; otherwise post-processing can be misleading.
H2-4 · Timebase & reference: accuracy, stability, and what “low jitter” means here
The timebase is the counter’s ruler. External references such as 10 MHz and 1 PPS can improve comparability across instruments, but the key is understanding which specification affects absolute accuracy versus short-term stability. “Low jitter” in this context means the timebase contributes minimal short-term time noise to interval and timestamp measurements.
Accuracy (absolute time scale)
- Affects absolute frequency and absolute interval scaling.
- Improved by calibration and a traceable external reference.
- Report with measurements: cal date/version and ref source.
Stability (short-term)
- Affects repeatability and the jitter floor of time-interval/TOA.
- Linked to phase noise → time jitter over the measurement window.
- Guides gate time / averaging choices (short window = stability-limited).
External reference inputs (10 MHz and 1 PPS): how to use them safely
- 10 MHz reference: sets the frequency scale for the timebase. Use when multiple instruments must agree on absolute frequency/interval results.
- 1 PPS: provides a second boundary marker for alignment and long-term timing consistency when supported by the instrument.
- Lock/validity matters: always record RefValid/Locked (and Holdover if applicable). Reference loss must be logged and data flagged.
- Switching policy: internal/external/auto selection should leave an evidence trail; treat the switching moment as a data boundary.
Reading the right metrics (without diving into oscillator internals)
| Metric | What it tells | How to apply |
|---|---|---|
| Allan deviation (ADEV) | Stability vs averaging time τ (short-term noise vs longer-term drift). | Match τ to gate/averaging window; short τ reflects the floor for fast updates. |
| Time-interval jitter floor | Minimum repeatable time noise achievable with ideal input edges. | If measured jitter is far above the floor, investigate edge definition and conditioning first. |
| Reference status flags | Whether the timebase is valid, locked, or in holdover. | Export flags with data; treat ref loss/switching as a boundary in reports. |
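For a concrete sense of how ADEV is computed from data, here is a minimal non-overlapped estimator at a single averaging time τ. The sample series is toy data, not a real oscillator, and `allan_deviation` is a hypothetical helper (production analysis normally uses overlapping estimators across many τ values):

```python
import math

# Non-overlapped Allan deviation from fractional-frequency samples y[k],
# each already averaged over tau: sigma_y(tau) = sqrt(0.5 * <(y[k+1]-y[k])^2>).

def allan_deviation(y):
    """ADEV at the sample averaging time from consecutive differences."""
    diffs = [(b - a) ** 2 for a, b in zip(y, y[1:])]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

y = [1e-11, -1e-11, 1e-11, -1e-11]   # toy fractional-frequency series
adev = allan_deviation(y)            # ~1.4e-11 for this alternating pattern
```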
Timebase checklist (fast decisions)
- Need absolute agreement across labs? Use external 10 MHz (and 1 PPS if supported) and record the ref source in logs.
- Fast updates or short gates? Stability dominates; use ADEV at matching τ to estimate achievable variance.
- Long averaging windows? Accuracy and drift become visible; attach calibration metadata to the report.
- Any ref loss or switching? Flag and separate affected data windows; never mix them silently.
H2-5 · TDC interpolation: how sub-clock resolution is achieved (and its real limits)
Sub-clock timing is not “magic.” A counter reaches fine resolution by splitting time into two layers: a coarse tick count that provides the traceable scale, and a fine interpolator (TDC) that estimates the remainder between ticks. The practical limit is set by linearity, jitter floor, and calibration validity, not by a headline “ps resolution” number.
The event time is reconstructed as t = N · Tclk + ε:
- N · Tclk comes from the coarse counter referenced to the timebase.
- ε is the fine time remainder estimated by the TDC (interpolation between ticks).
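Putting the two layers together, assuming a hypothetical 100 MHz timebase (`event_time` and its correction argument are illustrative names, not a real instrument API):

```python
# Reconstruct an event time from coarse ticks plus the TDC remainder:
# t = N * Tclk + eps (+ a calibration correction from the TDC's table).

T_CLK = 1e-8                          # assumed 100 MHz timebase: 10 ns tick

def event_time(n_ticks, eps, tdc_correction=0.0):
    """Coarse ticks plus calibrated fine remainder, in seconds."""
    return n_ticks * T_CLK + eps + tdc_correction

t = event_time(n_ticks=12_345, eps=3.2e-9)
# 12345 ticks * 10 ns + 3.2 ns = 123453.2 ns: sub-tick resolution, but only
# as accurate as the linearity correction applied to eps.
```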
Error terms that define the real limit
- Bin width variation → DNL/INL: time bins are not perfectly equal; this creates non-linearity in ε unless corrected.
- Temperature drift: delay elements move with temperature; a valid calibration table must match the operating range.
- Calibration table (mapping): ε needs a correction map (and a known version/age) to be used as an accurate quantity.
- Interpolation residuals: even after correction, repeatability hits a floor (jitter floor) set by the whole measurement chain.
“Trusted resolution” criteria (how to read specs correctly)
| What to check | Why it matters | Practical evidence |
|---|---|---|
| Linearity (DNL/INL) | Determines whether ε is proportional across the full tick range. | Spec plots, correction tables, or a stated linearity budget. |
| Jitter floor (repeatability) | Sets the minimum spread of repeated Δt measurements, even with ideal edges. | Reported σ at a known N, plus the instrument’s floor condition. |
| Calibration interval + validity | Defines how long the correction remains accurate over temperature and time. | CalVersion/CalAge, “cal applied” flags, and operating temp validity states. |
Quick validation workflow (user-facing)
- Confirm the measurement exports σ and N (not only a mean value).
- Verify a TDC calibration table is present and current (version/age available).
- Check for temperature validity or drift warnings during long runs.
- Compare measured spread to the stated jitter floor under a clean edge condition.
H2-6 · Input edge timing uncertainty: noise-to-time conversion & time-walk
In real measurements, the limiting factor is often the input edge, not the internal TDC. Any voltage noise around the trigger threshold becomes timing noise. Slow edges and varying amplitude create larger uncertainty, even if the instrument advertises very fine interpolation.
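The noise-to-time conversion can be quantified directly as σt ≈ σv / (dV/dt) at the crossing. The noise and slew-rate figures below are assumptions, and `trigger_jitter` is a hypothetical helper:

```python
# RMS timing jitter induced by voltage noise at the trigger threshold:
# sigma_t ≈ sigma_v / (dV/dt at the crossing point).

def trigger_jitter(sigma_v, slew_rate):
    """Timing jitter (s RMS) from noise (V RMS) and edge slope (V/s)."""
    return sigma_v / slew_rate

slow = trigger_jitter(sigma_v=1e-3, slew_rate=1e6)   # 1 mV RMS on a 1 V/us edge
fast = trigger_jitter(sigma_v=1e-3, slew_rate=1e9)   # same noise, 1 V/ns edge
# slow = 1 ns, fast = 1 ps: a 1000x steeper edge buys 1000x less jitter,
# regardless of how fine the internal TDC is.
```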
Time-walk: amplitude changes create systematic timing shifts
- Fixed threshold: if amplitude varies, the edge crosses the same threshold at different times → a deterministic offset (time-walk).
- Constant-fraction concept: triggering at a fixed fraction of the pulse height reduces amplitude-dependent shifts (conceptual method).
- Amplitude gating/windowing: discarding out-of-range pulses improves consistency and prevents biased statistics.
Hysteresis trade-off: stability vs trigger-point shift
- Benefit: hysteresis prevents chatter when noise causes multiple crossings near the threshold.
- Cost: two thresholds (Vth+ / Vth−) mean the effective trigger point can shift with direction and waveform conditions.
- Practical rule: use enough hysteresis to suppress chatter, but not so much that threshold-dependent offset dominates.
Input criteria (what must be controlled)
| Parameter | Why it matters | What to do |
|---|---|---|
| Minimum amplitude at threshold | Improves SNR at the crossing point and reduces time-walk sensitivity. | Set amplitude windowing; avoid operating near trigger noise limits. |
| Minimum slew rate (dV/dt) | Directly controls σt via noise-to-time conversion. | Improve edge shaping/termination; select a cleaner crossing region. |
| Threshold policy | A poor threshold can sit on a slow portion of the edge and magnify jitter. | Choose a threshold where slope is highest and stable. |
| Hysteresis / conditioning | Prevents chatter but changes the effective trigger point. | Use only as needed; confirm offsets do not dominate the measurement target. |
Diagnostics: separating edge issues from timebase/TDC limits
- Move the threshold: large changes in mean/σ indicate edge slope and time-walk dominate.
- Increase edge slope: if σ drops sharply, the measurement is edge-limited (not TDC-limited).
- Apply amplitude windowing: if mean shifts, time-walk was present in the unfiltered dataset.
- Compare to jitter floor: only near-floor results justify blaming timebase/TDC as the limiter.
H2-7 · Prescalers, dividers, and high-frequency front-end choices (counter viewpoint)
At high input frequencies, the first priority is reliable edge capture. A prescaler/divider reduces the edge density so the counter or timestamp engine stays within its internal bandwidth. This improves robustness, but it also introduces new constraints: added timing uncertainty, threshold sensitivity, and overdrive/limiting side effects.
- Bandwidth protection: keeps the edge rate inside the counter/timestamp processing limit.
- Cleaner counting: reduces missed edges caused by internal re-arming and synchronization pressure.
- Evidence-friendly: enables stable operation states (ratio, flags, counters) that can be logged.
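The bookkeeping behind a divider is simple but easy to get wrong in logs; a sketch assuming a hypothetical /8 prescaler (function names are illustrative):

```python
# Prescaling by K: the counter sees f_in / K edges, so the displayed value
# must be scaled back up, and gated resolution referred to the input
# worsens by the same factor K.

def prescaled_frequency(counted_f, ratio):
    """Recover the input frequency from the divided edge stream."""
    return counted_f * ratio

def gated_resolution_hz(t_gate, ratio):
    """One-count quantization referred to the input, in Hz."""
    return ratio / t_gate

f_in = prescaled_frequency(counted_f=25e6, ratio=8)   # 200 MHz input via /8
res = gated_resolution_hz(t_gate=1.0, ratio=8)        # 8 Hz instead of 1 Hz
```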
What prescaling can break (and how it shows up)
Missing edges vs double counts (pulse-train reliability)
- Missing edges are typically driven by insufficient edge quality (low slope at the threshold), overload recovery, or internal re-arming pressure.
- Double counts often come from multiple threshold crossings (noise or ringing) when hysteresis/conditioning is insufficient.
- Practical rule: treat edge quality and overdrive status as part of the measurement record, not as “setup details.”
Selection guide: ratio + threshold policy (no oscilloscope front-end details)
| Input condition | Preferred action | Evidence to log |
|---|---|---|
| Very high frequency / dense edges | Enable prescaler/divider to keep edge rate within capture bandwidth. | Prescaler ratio, missing-edge count, overflow flags. |
| Marginal edge quality (slow/rounded crossing) | Adjust threshold to a steeper region; apply minimal hysteresis to prevent chatter. | Threshold setting, edge-quality status, double-count detector (if available). |
| Possible overdrive / limiting | Avoid operating in limiter recovery; confirm stable triggering under the chosen ratio. | Overdrive flag, missing-edge count, validity flags for the run window. |
Minimum “trust record” fields for high-frequency counting
- Ratio: prescaler/divider setting applied to the run.
- Trigger policy: threshold value and hysteresis state (if used).
- Integrity counters: missing-edge and/or double-trigger indicators, if provided.
- Overdrive status: limiter/overload indicator if available.
H2-8 · Dead time, arming/holdoff, and throughput: when counters miss events
Missing events usually come from blind time and throughput limits, not from “random errors.” After a measurement window ends, the instrument may require time to re-arm, clear state, process timestamps, or export results. In high-rate timestamping, the limiting chain is often capture rate → FIFO depth → output bandwidth.
Throughput model: capture → FIFO → processing/output
- Capture rate: how fast edges can be timestamped and written into the FIFO.
- FIFO depth: the burst buffer that absorbs short spikes in event rate.
- Processing/output bandwidth: how fast timestamps can be reduced, logged, or exported without backpressure.
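The capture → FIFO → output chain can be checked with simple arithmetic. All rates and depths below are assumed figures, and `burst_survives` is a hypothetical helper:

```python
# "No-miss" check: sustained rate must fit the drain (output) bandwidth,
# and the excess during a burst must fit within the FIFO depth.

def burst_survives(event_rate, drain_rate, burst_duration, fifo_depth):
    """True if a burst at event_rate (events/s) lasting burst_duration (s)
    fits the FIFO while timestamps drain at drain_rate (events/s)."""
    if event_rate <= drain_rate:
        return True                              # steady state keeps up
    backlog = (event_rate - drain_rate) * burst_duration
    return backlog <= fifo_depth

ok = burst_survives(event_rate=5e6, drain_rate=1e6,
                    burst_duration=0.001, fifo_depth=8192)   # 4000 backlog: fits
bad = burst_survives(event_rate=5e6, drain_rate=1e6,
                     burst_duration=0.01, fifo_depth=8192)   # 40000: overflow
```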
“No-miss” criteria (practical, auditable)
| Criterion | Meaning | What to monitor |
|---|---|---|
| Max event rate < capture capacity | Edges must be timestamped fast enough in real time. | Capture status, dropped counter (if provided). |
| FIFO headroom during bursts | Short spikes must not overflow the buffer. | FIFO fill level and overflow flags. |
| Output bandwidth ≥ sustained event rate | Export/processing must keep up over long runs. | Backpressure indicators, overflow count over time. |
Mandatory record fields for timestamp runs
- Windowing: window length, re-arm/holdoff settings.
- Counts: Ncaptured, Ndropped (if available), OverflowCount.
- Validity: overflow flag state per window and any “data valid” indicator.
Quick diagnosis map
- Overflow rises: FIFO/output throughput is the bottleneck — reduce event rate or increase buffering/export capacity.
- Missing-edge rises without overflow: front-end trigger policy or holdoff is the limiter — adjust threshold/conditioning.
- Both rise: event density is beyond the full chain capacity — prescale/divide and re-check integrity counters.
H2-9 · Multi-channel alignment: skew, common timebase, and channel-to-channel interval
Multi-channel time-interval work is only trustworthy when two different problems are managed at once: time scale and channel zero alignment. A common timebase keeps the scale consistent, but it does not guarantee that two channels share the same effective “time zero.” Channel-to-channel results must therefore be corrected by a skew calibration table and validated across temperature and time.
- Common timebase: makes time units consistent and traceable (scale).
- Channel skew: fixed and drifting offsets from path delay and per-channel interpolation behavior (zero).
- Delta measurements: channel-to-channel interval accuracy is dominated by skew calibration and its drift.
Where channel skew comes from (counter viewpoint)
- Path delay: each channel has its own conditioning and threshold-crossing path, creating a constant offset.
- Interpolation mismatch: independent TDCs rarely share identical bin behavior; residual nonlinearity becomes skew.
- Trigger policy differences: threshold/hysteresis inconsistencies shift the effective time pick-off between channels.
Practical skew calibration workflow (pulse split method)
- Split one pulse edge to CH1 and CH2 so both channels see the same physical event.
- Capture timestamps t1 and t2 using the same timebase and the intended trigger policy.
- Estimate the skew as skew12 = mean(t2 − t1), using enough samples to get a stable mean and spread.
- Store a calibration entry with temperature tag, date, and version: SkewCalTable[Temp].
- Correct future results with Δt12(corrected) = (t2 − t1) − skew12_table(Temp).
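The workflow above can be sketched as follows. The timestamps are invented, and the per-temperature table is reduced to a single mean for brevity; `calibrate_skew` and `corrected_interval` are hypothetical helper names:

```python
import statistics

# Pulse-split skew calibration: both channels see the same physical edge,
# so the timestamp difference is pure channel skew.

def calibrate_skew(t1_samples, t2_samples):
    """Mean and spread of (t2 - t1) over repeated split-pulse captures."""
    diffs = [b - a for a, b in zip(t1_samples, t2_samples)]
    return statistics.mean(diffs), statistics.stdev(diffs)

def corrected_interval(t1, t2, skew12):
    """Channel-to-channel interval after removing the calibrated skew."""
    return (t2 - t1) - skew12

t1 = [0.0, 1.0, 2.0]
t2 = [0.5e-9, 1.0 + 0.7e-9, 2.0 + 0.6e-9]    # ~0.6 ns channel offset
skew_mean, skew_sigma = calibrate_skew(t1, t2)
dt = corrected_interval(t1=10.0, t2=10.0 + 5.0e-9, skew12=skew_mean)
# dt ≈ 4.4 ns: the raw 5 ns reading minus the calibrated 0.6 ns skew.
```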
Acceptance checks (what proves alignment is “good”)
| Check | Target behavior | Evidence fields |
|---|---|---|
| Before vs after correction | Δt12 error shrinks and becomes stable after applying the skew table. | SkewMean, SkewSigma, CalVersionID. |
| Repeatability | Distribution stays narrow under the intended measurement mode. | Δt12 mean, σ (or p-p), Nsamples. |
| Temperature hold | Skew remains within its validity envelope across the temperature range in use. | TempTag, CalAge, ValidityFlag (over-temp / expired). |
Minimum “trust record” fields for channel-to-channel intervals
- Timebase status: locked/valid state and the reference selection status.
- SkewCalTable: CalVersionID, CalAge, TempTag (or declared validity window).
- Trigger policy summary: threshold/hysteresis consistency across channels.
- Integrity flags: overflow/dropped/missing-edge indicators for the capture window.
H2-10 · Calibration & uncertainty budget: how to make results traceable
Traceable results require a clear calibration structure and a repeatable uncertainty budget. Calibration is best treated as two layers: (1) timebase calibration that sets the time scale, and (2) delay/interval calibration that removes interpolation nonlinearity, channel skew, and trigger-related offsets. An uncertainty budget then documents what still remains and how it is combined into a defensible final number.
Minimal uncertainty budget template (practical and auditable)
| Budget item | What it represents | Typical evidence source |
|---|---|---|
| Reference accuracy | Residual time scale error after timebase calibration and validity checks. | Calibration report + timebase status logs. |
| Interpolation error | Residual linearity and correction-table mismatch in the TDC path. | Interval calibration results + correction table version. |
| Trigger uncertainty | Threshold-crossing timing noise driven by edge slope and noise. | Repeat captures under the intended trigger policy. |
| Repeatability | Observed scatter that remains after corrections (includes short-term drift and noise). | Run statistics (σ / p-p) with validity flags confirmed. |
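One common way to combine such a budget is root-sum-square of independent standard uncertainties with a coverage factor. The magnitudes below are placeholders for a time-interval budget (in seconds), not real instrument specs, and `combined_uncertainty` is an illustrative helper:

```python
import math

# Combine independent budget items in quadrature, then expand by k
# (k=2 is a common coverage factor for ~95% confidence).

def combined_uncertainty(items, k=2):
    """Expanded uncertainty from a dict of standard uncertainties."""
    u_c = math.sqrt(sum(u ** 2 for u in items.values()))
    return k * u_c

budget = {
    "reference_accuracy": 5e-12,   # residual time-scale error (s)
    "interpolation": 10e-12,       # TDC linearity residual (s)
    "trigger": 20e-12,             # edge-noise timing term (s)
    "repeatability": 8e-12,        # observed run scatter (s)
}
u_expanded = combined_uncertainty(budget)   # dominated by the trigger term
```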
Self-test and calibration interval management
- Reference injection: periodically inject a known reference edge/period to validate the full measurement chain.
- Known delay check: verify delay/interval correction using a known delay element or a certified periodic source.
- Calibration reminders: store CalInterval and raise a validity flag when the interval is exceeded.
- Drift tracking: log skew/zero trends over time to predict when recalibration is needed.
Mandatory report fields (traceability checklist)
- CalVersionID, CalDate, CalAge, CalInterval.
- Timebase status (valid/locked) and reference selection state.
- Uncertainty summary (budget items + combined value) and the measurement mode used.
- Validity flags (over-temp, expired calibration, overflow/dropped events).
H2-11 · Validation checklist & field evidence: proving performance and catching latent faults
Validation should produce a repeatable evidence chain: each metric is measured with a defined setup, an explicit pass gate, and a record of instrument state (reference lock, overflow, missed edges, temperature). The checklist below is written to be executable on a lab bench and auditable later.
- CSV summary + plot screenshots (σ vs gate time, TI residuals)
- Calibration/version tags (CalVersionID, CalAge, ValidityFlag)
- Internal health logs (RefLock, RefMissing, Overflow, MissedEdge)
1) Frequency accuracy vs a known 10 MHz / 1 PPS reference
- Purpose: confirm traceable scale when locked to an external reference (and detect lock/missing events).
- Setup: drive the counter’s external reference input from a stable standard (example: SRS FS725 10 MHz, or SRS PRS10 module). Use proper splitting/termination as needed (example: ZFRSC-42-S+ + ANNE-50+).
- Procedure: measure the same input under (a) internal timebase, then (b) external reference locked. Record both the mean error and time trend.
- Evidence: Δf/f summary, RefSelected, RefLock, RefMissingCount, CalVersionID.
- Pass gate: only accept results when RefLock=true and RefMissingCount=0; accuracy must meet the instrument’s stated spec under external lock.
2) Resolution vs gate time: sweep gate time and verify σ convergence
- Purpose: prove that longer observation windows reduce scatter until a physical noise floor is reached.
- Setup: stable repetitive input (example generator: Keysight 81150A). Keep amplitude/threshold policy fixed.
- Procedure: sweep gate time over multiple decades. At each gate time, collect N repeats and compute σ of the reported value.
- Evidence: plot σ vs gate time, mode tag (gated/reciprocal/timestamp), RefLock status, Nsamples.
- Pass gate: σ should improve with gate time and then plateau; an early plateau indicates a limit dominated by trigger uncertainty or interpolation residuals.
3) Time-interval linearity: sweep known delays and check residuals
- Purpose: verify interval accuracy across multiple delay points (not just “best case” at one point).
- Setup: split one edge into two channels (example splitter: ZFRSC-42-S+). Use coax length difference as known delay; control reflections with proper termination (ANNE-50+) and optional fixed attenuation (BW-S10W2+ / BW-S12W2+).
- Procedure: measure Δt at several delay points; fit measured vs expected and plot residuals.
- Evidence: Δt residual plot, temperature tag, CalVersionID for the skew/interval correction tables.
- Pass gate: residuals must stay within the declared TI uncertainty envelope across the swept points and temperatures of interest.
4) Jitter floor: measure the instrument-limited timing noise
- Purpose: quantify the lowest achievable timing scatter with a stable input and a controlled trigger policy.
- Setup: use a stable edge source (example: Keysight 81150A) and ensure clean signal handling (proper termination ANNE-50+; optional inline attenuation BW-S10W2+). For ultra-fast pulse fidelity, an optional broadband bias tee (PSPL5542) can help pass fast edges while providing DC biasing/blocks.
- Procedure: capture timestamps over long runs; report RMS and peak-to-peak TI jitter. Repeat under internal vs external reference lock to separate contributions.
- Evidence: TI jitter summary (RMS/p-p), RefLock state, threshold policy, temperature.
- Pass gate: floor must match the declared instrument limit for the chosen mode; any drift with temperature must remain within validity assumptions.
5) Miss / overflow stress: pulse bursts and maximum event-rate throughput
- Purpose: prove that the instrument either captures events reliably within the specified rate, or flags overflow/drop conditions reliably above it.
- Setup: use burst/pattern generation (example: Keysight 81150A) and run multiple event-rate tiers. Keep logging enabled.
- Procedure: for each tier, capture for a fixed duration and record overflow/missed-edge counters plus FIFO high-water mark.
- Evidence: OverflowCount, MissedEdgeCount, DroppedTimestampCount, FIFOLevelHighWatermark, ValidityFlag.
- Pass gate: below the declared throughput limit, missed/overflow must remain zero; above the limit, the overflow/drop flags must be explicit and logged (no “silent failure”).
Field evidence & internal logs (counter-observable only)
Field faults often appear as intermittent shifts, missing events, or “impossible” results. A counter should therefore emit a minimal set of internal evidence fields so each report is traceable to a system state.
- RefSelected (internal / external 10 MHz / external 1 PPS)
- RefLock (locked / holdover / unlocked)
- RefMissingCount + timestamp of last missing event
- OverflowCount, DroppedTimestampCount
- MissedEdgeCount (edge integrity indicator)
- FIFOLevelHighWatermark (approaching capacity)
- Temperature (internal sensor), SupplyOK / UVLO flag
- ResetCause (WDT / BOR / CPU fault)
- CalVersionID, CalAge, ValidityFlag (expired / over-temp)
H2-12 · FAQs – Frequency / Time Interval Counter Architecture & Calibration
Each answer focuses on counter-observable behavior and actionable settings. For high confidence, pair every result with evidence fields such as reference lock state, overflow/missed-edge counters, and calibration/version tags.