
Trigger & Timestamp for ADC Systems (Sync I/O + PTP Timing)


Trigger defines a shared event start; timestamp places samples and events onto a unified epoch so multi-card or multi-node data can be aligned and merged. Reliable timing comes from controlling both paths (deterministic trigger + disciplined timebase) and proving alignment with end-to-end Δt verification.

What this page solves (Trigger & Timestamp)

Trigger defines when capture starts; timestamp defines what time each sample or event belongs to. Multi-card alignment requires both a deterministic trigger path and a shared timebase, with a clear definition of where the timestamp is taken.

  • “Multi-card / multi-host data does not line up in time.” Typical causes: different timebases, unequal trigger distribution delay, or mismatched tagging points.
  • “Trigger arrives, but waveforms do not align.” Typical causes: trigger skew, different arming/qualification rules, or clock-domain effects around event capture.
  • “What is owned by trigger vs timestamp?” Trigger = shared start condition; Timestamp = time label tied to an epoch/counter.

Scope boundary: JESD204 subclass/SYSREF details, clock phase-noise-to-SNR derivations, and high-speed link eye/eq topics belong to separate pages.

Figure: Event-to-Time pipeline — block diagram showing the trigger path (trigger in, gate, arm, trigger matrix) and the timebase path (reference/PTP clock, counter, epoch) meeting at a capture point, where a timestamp is latched and tagged data flows to the host/recorder.

Definitions: trigger, gate, arm, timestamp, epoch, skew, latency

Clear terminology prevents misdiagnosis. The terms below separate what varies (jitter), what differs between channels (skew), and what is simply delayed (latency).

  • Trigger: the event condition that starts capture (edge/level/qualified pattern).
  • Gate: an enable that allows trigger events to pass only when intended.
  • Arm: the ready state that accepts one valid trigger (often with re-arm rules).
  • Timestamp: the time label captured at a defined point (sample, frame, or event boundary).
  • Epoch: the shared reference (time-of-day) that makes timestamps comparable across devices.
  • Latency: the delay from trigger-in to the capture action (can be large, but must be bounded and predictable).
  • Skew: channel-to-channel or card-to-card timing difference under the same trigger/timebase.
  • Jitter: statistical variation around the mean timing (definition only on this page).
  • Resolution vs Accuracy: resolution is the timestamp LSB step; accuracy is the error vs the true shared time.
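The resolution-vs-accuracy distinction can be made concrete with a small sketch. The tick size and bias values below are illustrative, not tied to any particular device: the counter latch quantizes to the tick grid (resolution), while a fixed path/tagging offset shows up as bias (accuracy) that no amount of resolution removes.

```python
# Sketch: timestamp resolution (quantization) vs accuracy (bias).
# TICK_NS and bias_ns are assumed illustrative numbers.

TICK_NS = 8  # timestamp LSB step, e.g. a 125 MHz counter

def quantize(t_ns: float, tick_ns: int = TICK_NS) -> int:
    """Model the counter latch: truncate the true time to the tick grid."""
    return int(t_ns // tick_ns) * tick_ns

true_event_ns = 1_000_003.0
bias_ns = 40.0  # fixed path/tagging offset -> an accuracy error, not a resolution error

tagged = quantize(true_event_ns + bias_ns)
quant_err = (true_event_ns + bias_ns) - tagged   # bounded by one tick
total_err = tagged - true_event_ns               # bias dominates here

print(quant_err, total_err)
```

Even with a finer tick, `total_err` stays near 40 ns until the bias itself is calibrated out, which is why a fine-resolution timestamp can still align poorly.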

Boundary note: clock phase-noise and jitter-to-ENOB/SNR derivations belong to a dedicated clocking page; this page uses jitter only as a timing-variation definition.

Figure: Timing vocabulary map — a time axis with a trigger and two channel capture points, highlighting latency, skew, jitter spread, and timestamp quantization at the counter latch.

Why triggers fail: non-determinism sources (software vs hardware)

A trigger can be detected while the actual capture start time shifts from run to run. The root cause is usually a non-deterministic trigger path: scheduling, buffering, and transport stages introduce variable latency that cannot be bounded tightly. Hardware-triggered capture reduces variation by placing the trigger decision and arming logic close to the capture boundary.

  • Software trigger latency is variable: user space timing is shaped by OS scheduling and driver paths, then widened by buffering (queues, DMA rings) and transport.
  • Hardware trigger latency is bounded: a qualified trigger is routed through deterministic logic (gate/arm/qualify) into the capture start point.
  • Practical symptom: average delay may look stable, but the run-to-run latency spread breaks repeatability and multi-card alignment.
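The "stable mean, broken spread" symptom is easy to observe on any host. The sketch below asks the OS to wait a fixed interval and records how much extra time each run actually takes; the scatter is a stand-in for the scheduler/driver/buffering variability that a hardware trigger path avoids. The 1 ms interval and 50 runs are arbitrary choices.

```python
# Sketch: run-to-run latency spread of a software-timed action.
import statistics
import time

REQUESTED_S = 0.001
samples = []
for _ in range(50):
    t0 = time.perf_counter()
    time.sleep(REQUESTED_S)          # stand-in for a software trigger path
    samples.append(time.perf_counter() - t0 - REQUESTED_S)

mean_excess = statistics.mean(samples)   # often looks stable on average...
spread = statistics.stdev(samples)       # ...but the spread breaks alignment
print(f"mean excess {mean_excess*1e6:.1f} us, spread {spread*1e6:.1f} us")
```

The mean can often be calibrated out; the spread cannot, which is the practical argument for moving the trigger decision into hardware.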

Scope boundary: this section focuses on timing determinism (OS/driver/buffering/transport). High-speed signal integrity and link eye/eq topics belong to a dedicated link-integrity page.

Figure: Software vs hardware trigger latency stack — the software path (user space, OS scheduler, driver/ISR, buffer/DMA, PCIe/USB bus) accumulates variable latency; the hardware path (GPIO/sync in, qualifier, FPGA trigger logic, capture start) is bounded, with a side-by-side comparison of mean latency and latency spread.

Reference time architecture: timebase, discipline, and epoch

A timestamp is only comparable across cards and hosts when it is tied to a shared epoch and driven by a stable timebase. Multi-card systems often keep a local high-resolution counter and continuously discipline it to an external reference. The design goal is predictable time across drift, holdover, and topology changes.

  • Syntonization: frequency alignment (rate match) to reduce drift.
  • Synchronization: time alignment to the same epoch (shared time-of-day).
  • Counter + Epoch: counter provides fine resolution; epoch provides meaning across devices.
  • Disciplined clock: a control loop keeps local time close to the reference without exposing protocol details.
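The discipline loop can be sketched as a simple PI controller steering the local rate so the offset to the reference converges. Real PTP servos differ in form and gains; the `kp`/`ki` values and the 50 ppm free-running rate error below are assumptions for illustration only.

```python
# Sketch of a clock-discipline servo (assumed PI form; real servos vary).
# Local time advances with a rate error; the servo steers the rate so
# local time converges to the reference.

def discipline(rate_ppm_error: float, kp: float = 0.1, ki: float = 0.02,
               steps: int = 200, dt: float = 1.0):
    local, ref, rate_adj, integ = 0.0, 0.0, 0.0, 0.0
    offsets = []
    for _ in range(steps):
        ref += dt
        local += dt * (1 + rate_ppm_error * 1e-6 + rate_adj)
        offset = local - ref                     # measured offset (e.g. from a PTP exchange)
        integ += offset * dt
        rate_adj = -(kp * offset + ki * integ)   # steer the local rate
        offsets.append(offset)
    return offsets

offsets = discipline(rate_ppm_error=50.0)  # 50 ppm free-running error
print(abs(offsets[-1]) < abs(offsets[10])) # offset shrinking toward zero
```

The integral term is what achieves syntonization (it learns the rate error), while the proportional term pulls the time offset in; losing the reference freezes `rate_adj`, which is essentially holdover.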

Scope boundary: this section explains timebase architecture and discipline. Detailed PTP message flows and network planning are not expanded here.

Figure: Clock discipline loop — a PTP servo (offset/rate control) drives a VCXO/DPLL to produce a disciplined clock feeding a local time counter with epoch (ToD) alignment, with 10 MHz, 1PPS, and ToD outputs. Discipline ties local time to a shared reference; epoch + counter yield comparable timestamps.

Trigger I/O design: sync I/O, GPIO gate, levels, isolation, fan-in/out

Reliable multi-card triggering is built from three elements: (1) a clear definition of a valid trigger (edge/level/pulse rules), (2) deterministic control of when triggers are allowed (gate + arm), and (3) predictable distribution and routing (fan-in/out). Long cables and field wiring require level compatibility and basic protection, without turning the trigger input into an EMC problem.

  • Valid-trigger rules: edge/level mode, polarity, minimum pulse width, windowing, and debounce for noisy sources.
  • Gate + Arm: gate controls the acceptance window; arm defines readiness and re-arm / re-trigger behavior.
  • Fan-out: star distribution keeps path matching simple and reduces card-to-card trigger skew.
  • Levels + field wiring: confirm logic levels and add basic protection (TVS/RC); consider isolation for ground shifts.
  • Fan-in: when multiple sources can trigger, define priority or logic (OR/AND) inside the trigger matrix.
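The valid-trigger rules above can be modeled in a few lines. The sketch below runs edge detection with a minimum-pulse-width check over a sampled input line, mimicking in software what the FPGA qualifier/debounce blocks would do; the 3-sample width threshold is an arbitrary illustration.

```python
# Sketch of trigger qualification on a sampled input: rising-edge detect
# plus a minimum-pulse-width (debounce) check. min_width is illustrative.

def qualified_triggers(samples, min_width=3):
    """Return indices where a rising edge is followed by >= min_width highs."""
    hits = []
    for i in range(1, len(samples) - min_width + 1):
        rising = samples[i - 1] == 0 and samples[i] == 1
        if rising and all(samples[i:i + min_width]):
            hits.append(i)
    return hits

line = [0, 1, 0,            # glitch: too short, rejected
        0, 1, 1, 1, 1, 0]   # valid pulse: accepted at index 4
print(qualified_triggers(line))  # → [4]
```

Gate and arm would wrap this as preconditions: the qualifier only fires when the gate window is open and the channel is armed, and re-arm rules decide when the next hit may be accepted.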

Scope boundary: this section covers trigger I/O qualification, gating/arming, and distribution with basic protection roles. Detailed surge/EMC design and high-speed link integrity belong to separate pages.

Figure: Trigger matrix with gate/arm qualification and star fan-out — edge/level trigger input passes gate, arm, qualifier, and debounce blocks into a trigger matrix, then trigger out fans out star-style to Cards A/B/C, with isolator and TVS/RC protection elements.

Timestamping methods: where to tag and what errors appear

Timestamp quality is primarily determined by where tagging happens. Tagging closer to the physical capture boundary reduces uncertainty from software, buffering, and transport. Tagging closer to the host simplifies integration but amplifies non-determinism. A usable design must also define how to handle counter rollover and clock-domain crossings.

  • At ADC (or capture boundary): minimal path ambiguity; error is dominated by quantization and rollover handling.
  • In FPGA / MCU: strong control of pipelines; watch CDC uncertainty and pipeline tagging offsets.
  • In host: simplest integration; exposed to buffering and bus/OS timing variability.
Common error terms to budget
  • Quantization: timestamp LSB step (resolution limit).
  • CDC uncertainty: event crosses clock domains before tagging.
  • Pipeline tagging error: tag happens after variable or configurable stages.
  • Buffering / bus jitter: queueing and transport variability (dominant near host).
  • Rollover: counter wrap handling and epoch/counter re-binding rules.
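Rollover handling is one of the few error terms that is purely a software contract. A minimal unwrap sketch is shown below; it assumes the counter wraps at `PERIOD` (2^32 ticks here, an illustrative choice) and that successive tags are never more than one period apart.

```python
# Sketch: unwrapping a rolling counter before computing deltas.
# PERIOD is the assumed wrap value; tags must be < one period apart.

PERIOD = 2**32

def unwrap(raw_counts):
    """Accumulate wraps so timestamps are monotonic across rollover."""
    out, wraps, prev = [], 0, None
    for c in raw_counts:
        if prev is not None and c < prev:   # counter wrapped since last tag
            wraps += 1
        prev = c
        out.append(c + wraps * PERIOD)
    return out

raw = [PERIOD - 2, PERIOD - 1, 1, 3]        # wrap between 2nd and 3rd tag
print(unwrap(raw))                          # monotonic after unwrapping
```

Epoch/counter re-binding after a restart is the harder case: unwrapping only works within one consistent epoch, so rebinds must be detected and treated as a break in the series.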

Scope boundary: this section focuses on tagging locations and error terms. Detailed PTP protocol flows and clock phase-noise-to-ENOB/SNR derivations are not expanded here.

Figure: Three tagging points and typical error terms — pipeline from ADC through FPGA/MCU and transport to host, with tag markers at each stage and their dominant error terms (ADC: quantization, rollover; FPGA: CDC, pipeline, quantization; host: buffering, bus jitter, OS). Tagging closer to capture means less system uncertainty; define the tag point and budget the error terms.

PTP-aligned multi-card timing: practical alignment patterns

Multi-card alignment requires a single time reference and a verifiable end-to-end mapping from trigger to timestamp. PTP provides time-of-day and disciplined time, while PPS and 10 MHz are often combined to harden epoch alignment and reduce drift. A successful implementation is validated by comparing timestamps for the same trigger event across cards, not by reading a single offset number.

  • PTP: disciplined time and ToD for comparable timestamps across hosts and cards.
  • PPS: epoch edge alignment for deterministic second boundaries.
  • 10 MHz: syntonization to reduce drift and improve holdover behavior.
  • End-to-end check: for one trigger event, compare multi-card timestamp deltas (mean and spread).
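How epoch and counter combine into a comparable absolute timestamp can be sketched directly. The 100 MHz tick rate below is an assumed example: ToD supplies whole seconds, a PPS-cleared counter supplies sub-second ticks, and two cards sharing both produce directly comparable tags.

```python
# Sketch: binding a PPS-aligned epoch to a fine counter (assumed 100 MHz).
# ToD gives whole seconds; the counter, cleared on each PPS edge, gives
# sub-second ticks; together they form an absolute timestamp.

TICK_HZ = 100_000_000  # 10 ns resolution

def to_nanoseconds(epoch_seconds: int, counter_ticks: int) -> int:
    """Absolute timestamp in ns from ToD seconds + sub-second counter."""
    return epoch_seconds * 1_000_000_000 + counter_ticks * (10**9 // TICK_HZ)

# Two cards tag the same trigger event; the tags are comparable only
# because both share the epoch (PPS/ToD) and the rate (10 MHz discipline).
card_a = to_nanoseconds(1_700_000_000, 12_345_678)
card_b = to_nanoseconds(1_700_000_000, 12_345_690)
print(card_b - card_a)  # Δt in ns → 120
```

If either card binds its counter to a different second boundary (epoch mismatch), the Δt jumps by whole seconds even though each card's local readings look healthy — which is one reason a good PTP offset readout is not an alignment proof.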
Why “PTP offset looks good” but data still misaligns
  • Trigger distribution paths are unmatched (fan-out skew dominates).
  • Tagging point differs across cards (ADC vs FPGA vs host).
  • Pipeline or CDC uncertainty shifts the effective tag boundary.
  • Host buffering and transport variability leaks into timestamps.
  • Epoch/counter binding differs during lock or restart transitions.

Scope boundary: this section does not expand PTP protocol details or network planning. It focuses on how trigger and timestamp land on a unified PTP time and how alignment is verified end to end.

Figure: PTP + PPS + 10 MHz hybrid timing tree — grandmaster through a PTP switch to two acquisition cards; each card uses the PTP clock for its counter, PPS for epoch alignment, and 10 MHz for syntonization, with an end-to-end Δt check (mean and spread) across the two tags.

Skew budget & calibration: cable delay, asymmetry, path matching

Timestamp deltas across cards can be decomposed into a mean offset (often calibratable) and a spread (variation to reduce). The largest contributors are usually trigger distribution path mismatch, cable delay differences, and cross-domain timing boundaries around tagging. PTP asymmetry can shift the absolute time reference and appear as a stable bias even when local offset readings look healthy.

Skew sources commonly seen in multi-card systems
  • Cable delay: length and medium differences create deterministic offsets.
  • Path matching: fan-out topology and routing differences add trigger skew.
  • I/O threshold: receiver threshold and conditioning shift effective edge timing.
  • CDC: clock-domain crossings add uncertainty near capture and tagging.
  • PTP asymmetry: directional delay imbalance biases epoch alignment.
Practical calibration loop
  • Inject a known event (test pulse or loopback) that reaches all cards through the same intended paths.
  • Measure timestamp deltas (Δt) and separate mean offset from spread.
  • Write per-card/per-channel compensation values, then re-run the Δt check to verify improvement.
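The calibration loop above reduces to a short computation: separate the mean offset (written into a compensation table) from the spread (tracked as a stability metric). The measured deltas below are fabricated illustrative values, not data from any real setup.

```python
# Sketch of the calibration loop's math: split mean offset (compensate)
# from spread (stability to reduce). Values are illustrative.
import statistics

def calibrate(deltas_ns):
    """Return (offset to write as compensation, residual spread) for one card."""
    offset = statistics.mean(deltas_ns)
    spread = statistics.stdev(deltas_ns)
    return offset, spread

measured = [41.8, 42.3, 41.9, 42.1, 41.9]     # Δt of Card B vs Card A (ns)
offset, spread = calibrate(measured)
compensated = [d - offset for d in measured]  # what the re-run Δt check should show
print(offset, spread)
```

After writing `offset` into the per-card table, the re-run Δt check should show a mean near zero with the same spread — compensation removes bias, not variation.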

Scope boundary: this section focuses on skew attribution and calibration using trigger and timestamp paths. It does not expand protocol-level PTP network design or detailed EMC/surge practices.

Figure: Skew sources map and calibration loopback — two parallel trigger-distribution and capture/tag paths annotated with cable delay, path matching, I/O threshold, CDC, and PTP asymmetry contributions, plus a test/loopback arrow for measuring Δt (mean offset + spread) and writing compensation values.

Verification: measuring trigger latency/skew and timestamp accuracy

Verification should produce repeatable outputs for both trigger and timestamp paths. Trigger testing focuses on latency (absolute delay) and skew (card-to-card difference). Timestamp testing focuses on offset (mean delta) and spread (variation over runs), using the same event observed by multiple cards. A good acceptance plan validates end to end rather than relying on a single PTP offset readout.

What to report
  • Trigger latency: Trigger In → capture start or Trigger Out.
  • Trigger skew: difference between cards for the same distribution event.
  • Timestamp offset: mean of multi-card timestamp deltas (calibratable bias).
  • Timestamp spread: run-to-run delta variation (stability to reduce).
Computing skew/offset from captured data
  • Pick a reference card (e.g., Card A) and align events by sequence number or the same trigger ID.
  • For each event i, compute deltas: ΔtB[i] = tB[i] − tA[i], similarly for other cards.
  • Report mean(Δt) as offset and a distribution metric (spread) such as standard deviation or percentile width.
  • If counters roll over, unwrap before computing deltas; if tag points differ (ADC/FPGA/host), deltas are not comparable.
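The steps above can be sketched as a small report function. It aligns events by a shared trigger ID, computes per-event deltas against reference Card A, and reports the mean as offset and the standard deviation as spread; the timestamp values are fabricated for illustration, and real inputs would be unwrapped first if the counter rolls over.

```python
# Sketch: offset/spread report from per-card timestamp tables.
import statistics

def delta_report(ts_a, ts_b):
    """ts_a/ts_b: {trigger_id: timestamp_ns}. Uses only shared events."""
    ids = sorted(ts_a.keys() & ts_b.keys())    # align by the same trigger ID
    deltas = [ts_b[i] - ts_a[i] for i in ids]  # ΔtB[i] = tB[i] − tA[i]
    return {
        "offset_ns": statistics.mean(deltas),  # calibratable bias
        "spread_ns": statistics.stdev(deltas), # stability to reduce
        "events": len(deltas),
    }

a = {1: 1000, 2: 2000, 3: 3000, 4: 4000}   # Card A tags (ns, illustrative)
b = {1: 1050, 2: 2048, 3: 3052, 4: 4050}   # Card B tags (ns, illustrative)
print(delta_report(a, b))
```

A percentile width (e.g. p95 − p5 of the deltas) can replace the standard deviation when the spread is non-Gaussian, which is common with host-side buffering effects.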

Scope boundary: this section describes practical validation setups and delta computations. Detailed oscilloscope metrology, EMC troubleshooting, and protocol-level PTP analysis are not expanded here.

Figure: Validation setups — three methods: (1) oscilloscope on two trigger outputs (CH1/2 Δt), (2) time-interval counter on start/stop signals, and (3) self-test loopback from trigger out to trigger in with timestamp comparison, reporting mean and spread.

Engineering checklist for trigger + timestamp systems

A robust trigger-and-timestamp design is easier to deliver when requirements, implementation details, calibration hooks, and validation steps are captured as a single checklist. The groups below are written as fillable engineering items for reviews and release gates.

Requirements
  • Max skew (card-to-card)
  • Max trigger latency
  • Timestamp resolution (tick)
  • Holdover time
  • Operating temperature range
Hardware
  • I/O levels (logic standard)
  • Isolation needed (yes/no)
  • Fan-out topology (star preferred)
  • Cable plan (length match, labeling)
  • Protection role (TVS/RC/limit)
Timebase
  • PTP role (clock source / client)
  • PPS usage (epoch align)
  • 10 MHz usage (syntonize)
  • Discipline mode (lock / holdover)
  • Asymmetry plan (measure / compensate)
Firmware / FPGA
  • Gate / arm logic (state machine)
  • Re-arm rules and trigger qualification
  • CDC handling (synchronizers)
  • Rollover handling (unwrap, rebind)
  • Tag point (ADC / FPGA / host) and format
Validation
  • Loopback tests (end-to-end Δt)
  • Cable skew test (instrument-based)
  • Temperature drift test (offset vs temp)
  • Long-run stability (mean/spread over time)
  • Restart / relock behavior (epoch binding)

Scope boundary: the checklist lists integration and verification items for trigger and timestamp behavior. It does not replace dedicated EMC/surge or protocol-level timing design documents.

Figure: Checklist pipeline — Spec → Implement → Calibrate → Verify → Deploy, with example items under each stage (skew/latency specs, gate and fan-out implementation, cable/offset calibration, Δt and temperature verification, monitoring and logs in deployment).

Applications: where trigger + timestamp actually matter

Trigger is used to start a shared capture window or define a common “event zero”. Timestamp is used to place samples and events onto a unified time axis so that multiple cards, nodes, or machines can merge, compare, and correlate data. The examples below describe only the timing roles, without expanding into full application systems.

  • Distributed DAQ: trigger aligns capture start across nodes; timestamps allow multi-node streams to merge into one timeline.
  • Radar / SDR capture: trigger marks burst / frame boundaries; timestamps align multi-card channels for coherent processing.
  • Power transient recorder: trigger freezes fast windows around events; timestamps correlate transients with external logs and instruments.
  • Time-correlated sensor fusion: trigger enables synchronous capture when needed; timestamps align heterogeneous sensors to a shared epoch.
Figure: Application archetypes — four tiles (distributed DAQ, radar/SDR capture, power transient recorder, time-correlated sensor fusion), each showing the trigger role (shared start) and the timestamp role (shared time axis).

IC selection logic: inquiry fields and decision rules (with example part numbers)

Selection should be driven by verifiable timing requirements rather than by a part number first. The most effective output for procurement and vendor communication is a structured list of specification fields plus pass/fail decision rules. Example part numbers below serve as capability anchors for inquiry comparisons and are not endorsements.

Trigger (I/O and control path) — fields to ask
  • Input type (edge / level / pulse width)
  • Qualifier / debounce support (presence and granularity)
  • Re-arm / re-trigger rules (minimum interval, lockout, latch)
  • Trigger matrix capabilities (fan-in, fan-out counts)
  • Propagation delay (typ/max) and channel-to-channel skew spec
  • Distribution topology support (star fan-out practicalities)
Timestamp (tagging and time representation) — fields to ask
  • Resolution (tick size) and update rate
  • Accuracy target (relative alignment vs absolute ToD)
  • Where-tag option (ADC boundary / FPGA / host)
  • Format (epoch + counter, nanoseconds, sequence ID)
  • Rollover behavior (period and unwrap method)
  • CDC handling expectations (deterministic vs best-effort)
PTP / timebase — fields to ask
  • IEEE 1588 profile support (declared compatibility)
  • Hardware timestamping availability (yes/no and location)
  • Holdover behavior (spec and mode)
  • PPS / 10 MHz I/O availability (ports and direction)
  • Asymmetry plan (measure / compensate / document)
System-level evidence — fields to demand
  • Multi-card alignment method (PTP only / PTP+PPS / PTP+PPS+10 MHz)
  • Calibration support (cable/path offsets, per-card tables)
  • Temperature drift metrics (offset vs temperature)
  • Long-run stability metrics (mean/spread over time)
  • Restart / relock behavior (epoch binding consistency)
Selection rules (pass/fail gates)
  • End-to-end Δt verification (mean + spread) must be feasible with the chosen tag point.
  • Tag point must be consistent across cards and documented (ADC boundary / FPGA / host).
  • Deterministic or bounded trigger latency must be supported for multi-card alignment.
  • Calibration hooks must exist (offset tables, loopback paths, measurable inputs).
  • Holdover mode must be specified for loss-of-lock conditions.
  • Test evidence must be available (loopback Δt, temperature drift, long-run stability, restart behavior).
Example part numbers for inquiry anchoring (not endorsements)
  • PTP / DPLL / disciplined clock: Renesas 8A34001 · ADI AD9545 / AD9548 · Microchip ZL30793
  • Clock / SYSREF distribution (example class): TI LMK04828 · ADI HMC7044
  • Low-skew fan-out buffer (example class): TI CDCLVD1212
  • PTP-capable Ethernet building blocks (example class): Microchip KSZ8463 · Microchip VSC8572
  • High-speed ADC with deterministic sync hooks (example class): TI ADC12DJ3200
Figure: Spec-to-part selection funnel — Need → Key specs (skew, latency, holdover) → Must-have features (where-tag, resolution, PPS I/O, calibration) → Test evidence (loopback Δt, temperature, long-run, restart), a workflow from requirements to proof.


FAQs: Trigger & Timestamp (multi-card / multi-node timing)

These FAQs focus on practical alignment: separating trigger vs timestamp responsibilities, reducing non-determinism, binding to a unified epoch, and verifying results end to end using measurable Δt (mean and spread).

1) What is the difference between trigger and timestamp? When is only trigger enough?
Trigger defines a shared event start (a common “time zero” for capture windows). Timestamp assigns time labels to samples/events so data from multiple cards or nodes can be merged on the same timeline. Trigger alone is sufficient when the system only needs a one-shot aligned start and downstream correlation is not required. Timestamp becomes mandatory when data must be aligned across devices after transport, buffering, restarts, or when time-of-day correlation is required.
2) What do gate, arm, qualifier, and debounce solve in practice?
Gate controls whether triggers are allowed to pass (often used to open/close acceptance windows). Arm is the state that enables a trigger to be accepted once preconditions are satisfied (useful for deterministic “ready to capture” behavior). Qualifier applies extra conditions (level, width, sequence, source selection) so that only valid events fire. Debounce rejects short glitches or bounce-like noise on slow or long-cable trigger lines. Together, these blocks prevent false triggers, reduce re-trigger chaos, and make the trigger path testable and bounded.
3) Why is software triggering often “not precise”? What makes latency vary on the same machine?
Software triggering crosses components that are not time-deterministic: OS scheduling, interrupt handling, driver queues, DMA setup, and transport buffering (PCIe/USB/Ethernet). Any of these stages can add variable waiting time depending on system load, contention, and driver behavior. Precision improves when triggering is moved to hardware (GPIO/FPGA trigger logic) where the delay is bounded and measured, and software only arms/configures the capture instead of “creating time.”
4) Skew vs latency vs jitter: how are they different for alignment work?
Latency is an absolute delay along one path (Trigger In → capture/tag). Skew is the difference between two paths for the same event (Card B minus Card A). Jitter is variation over repeated runs (a spread) rather than a fixed bias. For multi-card alignment, the most actionable outputs are (1) skew/offset as the mean delta (often calibratable) and (2) spread as stability (to be minimized). Avoid mixing these terms: a good mean offset does not guarantee small spread, and a small spread does not guarantee absolute time-of-day accuracy.
5) Timestamp “resolution” vs “accuracy”: why can high resolution still align poorly?
Resolution is the timestamp tick size (how fine the counter steps are). Accuracy is how close the tag represents the true event time on the intended timebase. A fine tick does not help if the tag is taken far from the physical event (e.g., in host software after buffering), or if cross-domain boundaries (CDC), pipeline boundaries, and transport variability dominate. Alignment improves by tagging closer to the event and using a disciplined, shared timebase with bounded capture paths.
6) Where should timestamps be taken: at ADC boundary, in FPGA, or in host?
Tagging closer to the physical event generally reduces system error. ADC-boundary tagging (or immediately adjacent capture logic) minimizes software/bus uncertainty. FPGA tagging is often the best balance when capture and gating are in FPGA, because the tag can be tied to deterministic internal state transitions. Host tagging is simplest but usually worst for precision alignment because buffering and OS scheduling add unpredictable delays. The chosen tag point must be consistent across cards, documented, and verifiable via loopback Δt tests.
7) What does “epoch” mean, and how do ToD and local counters relate?
Epoch is the agreed reference origin used to interpret counters (for example, a second boundary or a time-of-day anchor). A local counter provides fine-grain ticks, while ToD (time-of-day) provides the absolute label. A disciplined clock binds them: the counter runs from a stable frequency source, and epoch/ToD updates keep the counter aligned to the shared reference. Alignment problems occur when epoch binding differs across cards during lock/restart or when epoch updates are applied inconsistently.
8) How should PTP, PPS, and 10 MHz be combined for multi-card alignment?
A common robust pattern is: PTP provides time discipline and ToD, PPS aligns second boundaries (epoch alignment), and 10 MHz reduces drift (syntonization) so holdover is stable and counter rate matches across cards. “PTP offset looks good” can still produce misaligned data if trigger distribution is unmatched, tag points differ, epoch binding is inconsistent during restart, or capture/tag boundaries are not deterministic. Verification must be based on end-to-end Δt across cards.
9) How should external triggers be distributed to multiple cards? Why is “star fan-out” common?
Multi-card trigger distribution should minimize path mismatch. A star fan-out topology reduces accumulated skew compared with daisy chains, because each card sees a similar propagation path and fewer intermediate stages. Cable length differences create deterministic delays that appear as mean skew (often calibratable). Distribution design should include known propagation delay bounds, channel-to-channel skew specifications, and a calibration plan (measured offsets written into compensation tables).
10) How should alignment be verified: oscilloscope, time-interval measurement, or self-test loopback?
Oscilloscope checks physical I/O timing (trigger edges, fan-out skew) and is ideal for validating distribution and thresholds. Time-interval measurement targets precise Δt between two signals (useful for cable/path calibration). Self-test loopback validates the full chain end to end: trigger out → trigger in → capture/tag → timestamp compare, producing acceptance metrics (mean offset and spread) directly from captured timestamps. A complete plan typically uses loopback for sign-off plus instrument checks for root-cause isolation.
11) How are offset/skew/spread computed from recorded data? What about rollover and restarts?
First align the same events across cards (sequence ID, trigger ID, or matched capture index). Choose a reference card A and compute Δt for each event: ΔtB[i] = tB[i] − tA[i]. Report mean(ΔtB) as offset (calibratable bias) and a distribution metric as spread (stability). If counters roll over, unwrap timestamps before computing deltas. After restarts or relock transitions, verify epoch binding consistency; if epoch rebind occurs, compute deltas within consistent epochs and treat rebind as a separate verification case.
12) What should be asked in vendor inquiries to avoid “spec says yes, acceptance fails”?
Require fields that map to acceptance tests. For trigger: input type, qualify/debounce, re-arm rules, matrix fan-in/out, propagation delay bounds, and skew spec. For timestamp: resolution, relative vs absolute accuracy targets, tag location, format, rollover handling, and CDC expectations. For timebase/PTP: declared profile support, hardware timestamping location, holdover behavior, PPS/10 MHz I/O, and an asymmetry plan. For system sign-off: a loopback Δt report (mean + spread), temperature drift results, long-run stability, and restart/relock behavior evidence. Without evidence, alignment success is not predictable.