
PTP Hardware Timestamping for Industrial Ethernet & TSN


PTP hardware timestamping accuracy is determined by where the time is taken and by how it is delivered and corrected (one-step/two-step, E2E/P2P, residence time/correctionField).

This page provides a datasheet-to-bring-up playbook to build and verify a stable timestamp path (PHC/TSU matching, queues, association) and to close the error budget to ±X ns with measurable pass criteria.

H2-1 · Scope, Assumptions, and Success Criteria

This section locks the page boundary: only hardware timestamping paths (ingress/egress capture and delivery) and correction usage (correctionField, residence-time impact, and E2E/P2P measurement hooks) are covered. Anything outside that boundary is referenced only, never expanded.

In scope (this page covers)
  • Where time is taken: ingress/egress timestamp points across PHY/PCS/MAC/DMA stages.
  • How time is delivered: timestamp queues, descriptors, interrupts, mapping keys, and overflow behavior.
  • How time is used: one-step/two-step field updates, correctionField/residence-time relationship, and E2E vs P2P hook requirements.
  • What limits accuracy: quantization, jitter, queue sensitivity, domain crossings, and asymmetry (defined as error sources, not servo algorithms).
Out of scope (referenced only)
  • SyncE jitter templates and holdover loop design.
  • White Rabbit sub-ns calibration and bi-directional frequency lock details.
  • TSN scheduling (Qbv/Qci/GCL parameterization and time-slot tables).
  • Industrial stacks (PROFINET/EtherCAT/CIP) implementation and certification internals.
  • Protection co-design (TVS/CMC/surge return paths) device-level selection and layout recipes.
  • Security (MACsec/DTLS/TLS) key flows and cryptographic plumbing.
Boundary rule: this page may mention the items above only to state dependencies or handoffs—no deep dives.
Assumptions (engineering prerequisites)
  • Transport: PTP event messages run over L2 Ethernet or UDP (IPv4/IPv6). Matching rules must reflect the actual transport.
  • Timestamp source: timestamps are generated at a defined hardware stage (PHY/PCS/MAC/DMA) and delivered via a deterministic driver interface.
  • Time base: a PHC/TSU timecounter exists with known resolution and stable increment behavior (time alignment method is treated as a separate concern).
  • Network elements: boundary/transparent clocks may exist; this page only uses their effects on correction and residency, not their full control-plane behavior.
Success criteria (quantified, measurable)
  • End-to-end time error: offset ≤ ±X ns over a Y-second evaluation window (define window and statistic upfront).
  • Device timestamp jitter: RMS timestamp jitter < X ns (measured at the hardware delivery interface, not at user-space capture time).
  • Reliability: missing timestamp rate < X per 10⁶ event packets; timestamp queue overflow = 0 in steady state.
  • Metric definitions: offset/delay/wander must declare sampling window, aggregation (RMS/peak/p95), and time base used for comparisons.
Implementation note: algorithms (servo, filtering, BMCA) are intentionally excluded; only measurement interfaces and correctness gates are defined here.
Terminology & measurement definitions
  • PHC — Hardware time base used to timestamp PTP event packets. Lives in: MAC/SoC timer block (timecounter). Observed via: registers / driver clock API. Common pitfall: using system time as a proxy for PHC.
  • TSU — Timestamping unit that matches packets and captures ingress/egress time. Lives in: MAC/PCS sideband logic. Observed via: timestamp queue / descriptor. Common pitfall: timestamp captured but not delivered (queue overflow / mapping loss).
  • Ingress/Egress TS — Time taken when a frame enters/leaves a defined datapath stage. Lives in: PHY/PCS/MAC/DMA boundary. Observed via: driver event timestamp interface. Common pitfall: assuming a mirrored/SPAN capture time equals the ingress TS.
  • CF / Residence — Packet-field correction and per-hop time spent inside a device. Lives in: PTP header (correctionField) and device datapath. Observed via: packet decode + device counters. Common pitfall: mixing E2E vs P2P correction semantics without declaring the profile.
  • PDV — Delay variation caused by queueing and variable processing time. Lives in: network + device datapath. Observed via: latency histograms / timestamp deltas. Common pitfall: attributing PDV to clock drift instead of queue sensitivity.
Figure 1 · Scope map (what is covered vs referenced only)
[Diagram: center block — HW timestamping (PHC/timebase, TSU/match, ingress/egress capture, delivery via queue/descriptor/IRQ); surrounding blocks marked referenced-only — TSN scheduling, SyncE, White Rabbit, industrial stacks, protection, security.]

The center block defines the only deep-dive area: timestamp capture points, timestamp delivery, and correction usage. Surrounding domains are intentionally treated as external dependencies.

H2-2 · What Hardware Timestamping Really Means (Where the Time is Taken)

Core definition

Hardware timestamping records an event time at a defined ingress/egress point inside the datapath. It is not the time a CPU, sniffer, or mirror port happens to observe a packet.

Why the timestamp point sets the accuracy ceiling
  • Closer to the wire (PHY/PCS): less queue sensitivity and fewer software-induced delays, typically tighter determinism.
  • Closer to the CPU (DMA/driver): delivery is easier to implement, but queueing, batching, and backpressure can inflate variation.
  • Engineering rule: choose a timestamp point that stays stable under worst-case traffic, not just in an idle lab capture.
Ingress vs egress: two paths, different failure modes
  • Ingress (RX): time is taken when a frame crosses a selected boundary (PHY/PCS/MAC). The main risk is losing the timestamp on the way out (queue overflow or mapping loss).
  • Egress (TX): time is taken when a frame leaves a selected boundary. The main risk is variable queueing before the stamp point, plus conflicts with offloads that modify the frame.
  • Consistency gate: the stamp point must be declared and held constant across builds, tests, and field captures.
The 3-piece model (what to verify on every platform)
  1. Time base (PHC): resolution and increment behavior are known; reading PHC is distinct from reading system time.
  2. Match point (TSU): event packets are correctly classified (L2 EtherType or UDP port; VLAN/IPv6 awareness if used).
  3. Delivery (interface): timestamps are exported via queue/descriptor/IRQ with a stable mapping key and defined overflow behavior.
Boundary reminder: scheduling and shaping (TSN Qbv/Qci) affect queueing, but their parameterization is handled on the TSN Switch/Bridge page.
Timestamp Point Capability Matrix
  • PHY (MDI-side) — Determinism: highest (closest to wire). Queue sensitivity: low. Typical resolution: sub-ns to ns. One-step readiness: depends (needs a rewrite path). Delivery interface: sideband FIFO / registers. Common failure mode: TS present but not exported reliably.
  • PCS boundary — Determinism: high. Queue sensitivity: low. Typical resolution: ns. One-step readiness: often feasible. Delivery interface: event queue / descriptor. Common failure mode: mismatch with VLAN/IPv6 match rules.
  • MAC boundary — Determinism: medium–high. Queue sensitivity: medium. Typical resolution: ns. One-step readiness: common (but verify offloads). Delivery interface: descriptor + IRQ. Common failure mode: TX timestamp mis-mapped to the wrong frame.
  • DMA / driver boundary — Determinism: lowest. Queue sensitivity: high. Typical resolution: ns to tens of ns. One-step readiness: rare / risky. Delivery interface: software capture timestamps. Common failure mode: looks OK on the bench, breaks under load.

Use the matrix to decide what must remain stable under worst-case traffic: the chosen timestamp point and the delivery semantics (queue depth, mapping key, overflow behavior).

Figure 2 · RX/TX pipeline and candidate timestamp points
[Diagram: RX path PHY → PCS → MAC → FIFO → DMA → CPU and the mirrored TX path, with candidate TS capture points marked at each stage. Hint: closer to PHY/PCS usually means less queue sensitivity. Delivery matters: ensure a stable mapping key (sequence/descriptor), a defined queue depth, and a defined overflow behavior for event timestamps.]

The same packet can be “observed” at many points, but only one chosen ingress/egress event defines the hardware timestamp. Declare that point and keep it invariant across tests.

H2-3 · PTP Message Types and Timestamp Requirements (Sync / Follow_Up / Delay_*)

Each PTP exchange is only verifiable when every required hardware timestamp is accounted for and mapped to the correct packet instance. This section binds message types to t1–t4, then binds t1–t4 to hardware hooks (classification, capture point, delivery, and mapping keys).

Message taxonomy (why some packets must be stamped)
  • Event messages require hardware timestamps because they define the timing events (typical: Sync, Delay_Req).
  • General messages usually carry complementary data and primarily require correct association (typical: Follow_Up, Delay_Resp, optional Delay_Resp_Follow_Up).
  • Verification rule: if an event packet is observed on the wire, the platform must prove a matching timestamp was captured and delivered without ambiguity.
t1–t4 mapping (E2E baseline, no algorithm details)
  • t1: Master TX timestamp of Sync at the chosen egress capture point.
  • t2: Slave RX timestamp of Sync at the chosen ingress capture point.
  • t3: Slave TX timestamp of Delay_Req at the chosen egress capture point.
  • t4: Master RX timestamp of Delay_Req at the chosen ingress capture point.
Interpretation gate: each t-value must represent a hardware-defined event (ingress/egress point), not a CPU observation time or a mirrored capture timestamp.
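The t1–t4 set supports the standard E2E measurement relations (definitions only, not servo logic). A minimal sketch, assuming a symmetric path — asymmetry surfaces as a bias in the offset, exactly the error term covered in the budget section:

```python
def e2e_offset_and_delay(t1, t2, t3, t4):
    """Standard E2E relations from the four event timestamps (ns).

    Assumes symmetric path delay; any TX/RX asymmetry appears as a
    systematic bias in the computed offset.
    """
    ms = t2 - t1                      # master-to-slave measurement
    sm = t4 - t3                      # slave-to-master measurement
    mean_path_delay = (ms + sm) / 2
    offset = (ms - sm) / 2            # slave time minus master time
    return offset, mean_path_delay

# Slave 500 ns ahead of master, 1000 ns one-way delay:
# t1=0, t2=0+1000+500, t3=2000, t4=2000-500+1000
offset, delay = e2e_offset_and_delay(0, 1500, 2000, 2500)  # → 500.0, 1000.0
```

Each input must be a hardware-captured t-value at the declared point; feeding a CPU observation time into these relations silently converts software jitter into apparent clock offset.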
L2 vs UDP: what hardware must match (fields only)
L2 (EtherType)
  • EtherType: 0x88F7
  • VLAN awareness: single/double tag presence must be handled consistently.
UDP/IPv4/IPv6 (L3/L4 parse)
  • UDP ports: 319 (event), 320 (general)
  • IPv4 protocol / IPv6 next header: UDP (17)
  • VLAN/QinQ: classification must reach the UDP header through tags.
PTP header (optional deeper filter)
  • messageType (Sync / Delay_Req / Follow_Up / Delay_Resp)
  • domainNumber (multi-domain coexistence)
  • sequenceId (association + timestamp mapping key)
Common failure signature: PTP packets appear in captures, but timestamps are missing because the classifier matches the wrong transport (L2 vs UDP) or cannot parse VLAN/IPv6 layers to reach the expected fields.
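A software model of the classification path helps isolate parsing gaps before blaming hardware. The sketch below is illustrative, not a driver implementation: it shows the tag-walking a TSU parser must perform to reach the EtherType or UDP port (IPv6 is omitted for brevity):

```python
import struct

PTP_ETHERTYPE = 0x88F7    # L2 transport
PTP_EVENT_PORT = 319      # UDP transport, event messages

def is_ptp_event(frame: bytes) -> bool:
    """Walk VLAN/QinQ tags, then match L2 PTP or UDP dst port 319."""
    off = 12
    ethertype = struct.unpack_from("!H", frame, off)[0]
    while ethertype in (0x8100, 0x88A8):              # single tag or QinQ
        off += 4
        ethertype = struct.unpack_from("!H", frame, off)[0]
    off += 2                                          # start of payload
    if ethertype == PTP_ETHERTYPE:
        return True
    if ethertype == 0x0800:                           # IPv4
        ihl = (frame[off] & 0x0F) * 4                 # IP header length
        if frame[off + 9] != 17:                      # protocol must be UDP
            return False
        dport = struct.unpack_from("!H", frame, off + ihl + 2)[0]
        return dport == PTP_EVENT_PORT
    return False
```

Running the same model over "no VLAN → single VLAN → QinQ" test frames reproduces the failure signature above in software: a classifier that stops at the first EtherType misses every tagged event packet.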
Verification closure (what must be provable)
  • Hook correctness: event packets must hit the hardware filter with the intended transport and domain.
  • Capture correctness: each required t-value must be taken at the declared ingress/egress point (PHY/PCS/MAC).
  • Delivery correctness: timestamps must be exported through queue/descriptor/IRQ without loss under load.
  • Association correctness: timestamps and related general messages must be mapped using stable keys (sequenceId + domain + port + direction).
Message → Timestamp(s) → Hardware Hook (verification matrix)
  • Sync — Category: event. Direction: master TX / slave RX. Required TS: t1 (TX), t2 (RX). Timestamp point: declared PHY/PCS/MAC boundary. Hardware hook: L2 EtherType 0x88F7 or UDP port 319. Mapping key: sequenceId + domain + port + direction. Sanity check: no missing TS for Sync under steady load.
  • Follow_Up — Category: general. Direction: master TX → slave RX. Required TS: association to t1 (usually not stamped). Hardware hook: UDP port 320 (general) or PTP header filter. Mapping key: sequenceId + domain (+ sourcePortIdentity). Sanity check: Follow_Up count must match Sync count per stream.
  • Delay_Req — Category: event. Direction: slave TX / master RX. Required TS: t3 (TX), t4 (RX). Timestamp point: declared PHY/PCS/MAC boundary. Hardware hook: L2 EtherType 0x88F7 or UDP port 319. Mapping key: sequenceId + domain + port + direction. Sanity check: no missing TS for Delay_Req under steady load.
  • Delay_Resp — Category: general. Direction: master TX → slave RX. Required TS: association to t4 (usually not stamped). Hardware hook: UDP port 320 or PTP header filter. Mapping key: sequenceId + domain (+ requester identity). Sanity check: Delay_Resp count must match Delay_Req count per stream.
  • Delay_Resp_Follow_Up — Category: general (optional). Direction: master TX → slave RX. Required TS: association to Delay_Resp. Hardware hook: UDP port 320 or PTP header filter. Mapping key: sequenceId + domain (+ identity). Sanity check: enable only if the device advertises the feature.

Practical use: the matrix is a bring-up checklist. If any required event timestamp is missing or mapped incorrectly, the exchange becomes non-verifiable regardless of higher-level servo tuning.
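The association column of the matrix can be modeled directly. A hedged sketch (class and counter names are illustrative, not a real driver API): a bounded pending table keyed by (sequenceId, domain, port, direction), so a full table surfaces as a counted map miss instead of a silent wrong association:

```python
from collections import OrderedDict

class TsAssociator:
    """Associate delivered hardware timestamps to event-packet instances.

    The bounded table models a finite timestamp queue with a drop-oldest
    overflow policy; `map_miss` counts deliveries with no pending key.
    """
    def __init__(self, depth=16):
        self.pending = OrderedDict()      # key -> pending event packet
        self.depth = depth
        self.map_miss = 0

    def packet_sent(self, seq, domain, port, direction="tx"):
        if len(self.pending) >= self.depth:
            self.pending.popitem(last=False)   # drop-oldest on overflow
        self.pending[(seq, domain, port, direction)] = True

    def timestamp_delivered(self, seq, domain, port, direction="tx"):
        key = (seq, domain, port, direction)
        if key in self.pending:
            del self.pending[key]
            return True
        self.map_miss += 1
        return False
```

The design choice matters: keying on sequenceId alone (a weak mapping key) would let a second domain or port silently claim the wrong timestamp; the full tuple converts that failure into an observable counter.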

Figure 3 · Master/Slave exchange with t1–t4 annotations
[Diagram: Master/Slave timeline — Sync (t1 at master TX, t2 at slave RX), Follow_Up, Delay_Req (t3 at slave TX, t4 at master RX), Delay_Resp; event messages carry HW timestamps, general messages usually carry no TS.]

The exchange is verifiable only if all required event timestamps (t1, t2, t3, t4) are captured at declared ingress/egress points and delivered without loss or ambiguity.

H2-4 · One-Step vs Two-Step: What Changes in Hardware

The hardware distinction is simple but strict: one-step requires a deterministic on-the-fly frame update on the TX path, while two-step avoids TX rewrite and instead requires reliable TX timestamp delivery and correct Follow_Up association.

One-step (TX rewrite required)
  • What changes: the TX datapath rewrites timing fields (precise timestamp / correctionField) before the frame is finalized.
  • Hard requirement: rewrite must occur with deterministic latency and before link-layer finalization (CRC/FCS generation).
  • Compatibility gate: checksum/segmentation offloads must not conflict with the rewrite point and update order.
Two-step (no TX rewrite, mapping required)
  • What changes: Sync is sent first, then Follow_Up carries the precise origin timestamp derived from the TX timestamp.
  • Hard requirement: TX timestamp must be exported reliably and associated to the correct Sync instance.
  • Failure mode: queue overflow, reordering, or weak mapping keys cause incorrect associations even when packets are present on the wire.
Engineering pitfalls (detectable signatures)
  • Checksum offload conflict: one-step rewrite order vs checksum/TSO path produces sporadic integrity errors or inconsistent fields.
  • Cut-through datapaths: field updates and residence accounting must remain consistent across different forwarding behaviors.
  • Follow_Up mapping loss: Sync count ≠ Follow_Up count, or TX timestamp count ≠ Sync count under load.
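The Follow_Up mapping-loss signature reduces to counter arithmetic. A minimal bring-up sketch (counter names are illustrative; source them from whatever the driver and capture tooling expose):

```python
def two_step_sanity(sync_tx, follow_up_tx, tx_ts_delivered, tol=0):
    """Steady-state two-step check: the three counts must agree.

    sync_tx         -- Sync frames observed on the wire
    follow_up_tx    -- Follow_Up frames observed on the wire
    tx_ts_delivered -- TX timestamps exported by the hardware queue
    """
    issues = []
    if abs(sync_tx - tx_ts_delivered) > tol:
        issues.append("TX timestamp loss (queue overflow or match miss)")
    if abs(sync_tx - follow_up_tx) > tol:
        issues.append("Follow_Up mapping loss or stack-side drop")
    return issues
```

Run it per stream over a fixed window; a nonzero result under load but not at idle points at queue depth or IRQ batching rather than classification.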
One-step vs Two-step (capability / risk / dependency / best-fit)
  • TX frame modification — One-step: required (on-the-fly rewrite). Two-step: not required.
  • Primary dependency — One-step: deterministic rewrite point + finalization order. Two-step: reliable TX timestamp delivery + mapping keys.
  • Offload compatibility — One-step: sensitive (checksum/TSO/LSO interaction). Two-step: less sensitive (but mapping is still required).
  • Failure signature — One-step: field inconsistency / integrity errors under load. Two-step: Sync–Follow_Up mismatch / missing TX TS.
  • Best fit — One-step: deterministic TX paths that support rewrite. Two-step: platforms with strong timestamp queues and mapping.
Figure 4 · TX pipeline split: one-step rewrite vs two-step mapping
[Diagram: shared TX pipeline (CPU → DMA → FIFO → MAC → PCS → PHY) with TX timestamp capture; the one-step branch rewrites CF/ToD with a deterministic finalize order (determinism gate), the two-step branch maps the TX TS queue to Follow_Up (association gate). One-step risk: offload / rewrite order. Two-step risk: mapping / overflow.]

Decision rule: choose one-step only when the platform can guarantee a deterministic rewrite point compatible with offloads; choose two-step when timestamp delivery and sequence mapping remain lossless under worst-case traffic.

H2-5 · Master/Slave Roles and Clock Types (OC / BC / TC) — Only What Affects Timestamping

Clock roles matter here only through what must be timestamped, how correction is accounted, and how mapping is managed. This section keeps the scope on hardware-visible requirements and avoids full PTP network theory.

Scope boundary (timestamping-only)
  • Covers: ingress/egress timestamps, port-level mapping, residence time, and correctionField update points.
  • References only (no expansion): BMCA, servo/filter design, full topology planning, and TSN scheduling details.
Ordinary Clock (OC): endpoint completeness
  • Primary requirement: required event timestamps are never missing (capture + delivery remain lossless under load).
  • Consistency requirement: ingress/egress capture points remain stable across link speed, VLAN mode, and traffic class.
  • Mapping requirement: TX timestamps associate to the correct packet instance (sequenceId + domain + direction).
Verification cue: in a steady stream, Sync count, Follow_Up count (if used), and available TX TS count must remain consistent per stream.
Boundary Clock (BC): multi-port management
  • Port-scoped timestamps: each port must maintain independent ingress/egress timestamp integrity.
  • Mapping scope expands: mapping keys must include port identity to prevent cross-port mis-association.
  • Re-timestamp / re-send paths: egress timestamp capture must remain on the actual transmit path used under load.
Common pitfall: per-port queues share a timestamp FIFO or IRQ budget; under bursts this can produce overflow, reordering, or port mixing unless the driver binds queues strictly to ports.
Transparent Clock (TC): residence time into correction
  • Must measure: ingress event timestamp and egress event timestamp for the same packet instance.
  • Residence time: Δt = (egress TS − ingress TS) under the declared capture-point definition.
  • Correction update: correctionField must be incremented consistently using Δt, without breaking integrity checks or association.
Verification cue: the distribution of correction increments should be stable for a fixed configuration; strong load-coupled shifts often indicate a capture-point mismatch or an inconsistent forwarding path.
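The Δt-to-correction relationship is mechanical once the capture points are fixed. A sketch under the stated capture-point assumption; the correctionField carries scaled nanoseconds (units of 2⁻¹⁶ ns per IEEE 1588):

```python
CF_SCALE = 1 << 16   # correctionField unit: 2^-16 ns (IEEE 1588)

def residence_cf_increment(ingress_ts_ns, egress_ts_ns):
    """Residence time between the declared ingress and egress capture
    points, converted to a correctionField increment (scaled ns)."""
    dt_ns = egress_ts_ns - ingress_ts_ns
    if dt_ns < 0:
        # A negative residence is non-physical: capture-point mismatch
        # or timestamp pairing error, never a valid correction.
        raise ValueError("negative residence: capture-point mismatch?")
    return dt_ns * CF_SCALE

# 1500 ns spent inside the device:
inc = residence_cf_increment(1_000_000, 1_001_500)   # 1500 * 2**16
```

Logging `dt_ns` per packet and plotting its distribution is exactly the verification cue above: the histogram should be narrow and load-invariant for a fixed configuration.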
If the clock type is known, focus on these 3 items
OC (endpoint)
  • Event TS completeness (no missing TS under worst-case traffic).
  • Capture-point stability (ingress/egress definition does not drift).
  • TX TS association correctness (sequenceId + domain + direction).
BC (multi-port)
  • Port isolation in mapping keys (avoid cross-port mixing).
  • Timestamp FIFO/queue overflow behavior (bursts must not reorder mappings).
  • Re-send path determinism (timestamps must come from the actual egress path).
TC (forwarding + correction)
  • Ingress + egress TS for the same packet instance.
  • Residence-time definition (capture-point boundary must be explicit).
  • Consistent correction update point (ΔCF update must not conflict with integrity generation).
Clock type → required timestamps/correction → typical pitfalls
  • OC — Ingress TS: required. Egress TS: required. Residence: not used. Correction update: endpoint-only. Mapping scope: per stream. Must-have counters: TS missing / overflow / map miss. Typical pitfalls: wrong classifier, missing TS, weak association keys.
  • BC — Ingress TS: required (per port). Egress TS: required (per port). Residence: not used. Correction update: per-port re-send. Mapping scope: per port + stream. Must-have counters: per-port TS missing / overflow / reorder. Typical pitfalls: cross-port mixing, shared-FIFO overflow, path non-determinism.
  • TC — Ingress TS: required. Egress TS: required. Residence: required (Δt). Correction update: add residence to CF. Mapping scope: per hop. Must-have counters: CF update count / TS pair miss / map miss. Typical pitfalls: capture-point mismatch, inconsistent forwarding path, CF update timing conflicts.
Figure 5 · OC vs BC vs TC (what changes for timestamps and correction)
[Diagram: OC with ingress/egress TS feeding a TS queue on one PHC/TSU; BC with per-port TS (RX/TX), per-port queues, port mapping, and a re-send path; TC with ingress TS, egress TS, residence Δt, and a CF update (+ΔCF) on the packet flow. Focus: timestamp completeness (OC), port isolation (BC), residence + correction increment (TC).]

Transparent clock behavior is timestamp-sensitive: residence time requires a consistent ingress/egress capture-point definition and a stable correction update point on the forwarding path.

H2-6 · Delay Mechanisms: E2E vs P2P Corrections (What Hardware Must Support)

E2E and P2P differ most in where the timing responsibility lives. E2E places the burden on endpoints (Delay_Req/Resp), while P2P shifts critical requirements to each hop through peer-delay exchanges and (often) transparent-clock correction.

E2E (endpoint-centered)
  • Core exchange: Delay_Req / Delay_Resp (event timing must be captured reliably).
  • Required event TS: endpoint TX/RX timestamps that complete the E2E timing set (t3/t4 in addition to Sync path).
  • PDV reality: delay variation is observed as jittery measurements; the requirement here is consistent timestamp capture and export, not filtering theory.
P2P (hop-centered)
  • Core exchange: Pdelay_* messages (peer delay relies on timestamps at each hop).
  • Hardware dependency: switches/bridges often must support hop-level timestamping and stable correction accounting.
  • Correction coupling: peer delay and residence time are both timestamp-defined; inconsistent capture points create non-physical corrections.
Mixed network: downgrade principles (no topology theory)
  • Capability-gated choice: if a hop cannot provide the required peer-delay/timestamp features, prefer an E2E-valid configuration over a partially-verifiable P2P path.
  • Consistency first: avoid mixing incompatible correction definitions along a single critical path.
  • Observability priority: choose the mechanism with the strongest counters and timestamp visibility under worst-case load.
Verification gates (hardware metrics)
  • Gate A — Timestamp completeness: no missing event timestamps (Delay_* or Pdelay_*), even during bursts.
  • Gate B — Association integrity: sequenceId/domain/port mapping does not reorder or mix streams.
  • Gate C — Correction stability: correction increments remain consistent for a stable configuration (no unexplained load-coupled jumps).
E2E vs P2P (messages, timestamp sets, BC/TC dependency)
  • E2E — Core messages: Delay_Req / Delay_Resp. Required event TS: endpoint Delay_* event TS (t3/t4) + Sync path. Relies on BC: optional (multi-port endpoints). Relies on TC: not required. Failure signature: missing Delay_* TS, mapping loss under load. Best fit: endpoints with strong timestamp integrity.
  • P2P — Core messages: Pdelay_Req / Pdelay_Resp / Pdelay_Resp_Follow_Up. Required event TS: hop-level Pdelay_* event TS + stable per-hop mapping. Relies on BC: often (multi-port devices). Relies on TC: often (residence + correction). Failure signature: hop TS missing, non-physical CF increments. Best fit: networks with controlled switching hops.
Figure 6 · E2E vs P2P topology view (endpoint vs hop responsibility)
[Diagram: E2E path — Master → Switch → Switch → Slave with endpoint TS only (switch hops not required to stamp); P2P path — Master → TC (residence) → TC (residence) → Slave with per-hop Pdelay exchanges and hop correction.]

E2E remains verifiable when endpoints provide lossless event timestamping; P2P becomes verifiable only when each hop can support peer-delay timing and stable correction accounting.

H2-7 · Hardware Architecture: PHC/TSU, Clock Domains, and Timestamp Delivery

Hardware timestamping succeeds only when three pieces form a verifiable loop: a stable timebase (PHC), a deterministic capture trigger (TSU), and a lossless delivery + association path from hardware to the driver.

Scope (implementation-facing, not servo theory)
  • Included: PHC/timecounter capabilities, TSU match/capture rules, clock-domain crossing (CDC) jitter sources, timestamp queues, delivery methods, and overflow/association failure signatures.
  • Excluded: servo algorithms, profile tuning, BMCA, and full protocol stack implementation details.
PHC / timecounter (the timebase)
  • Resolution & increment: the tick/increment definition sets the quantization floor and determines wrap/rollover behavior.
  • Timescale alignment: capability to represent and align timescale (TAI/UTC) matters for system-wide consistency.
  • Step vs slew support: step changes risk discontinuities; slew support enables controlled rate adjustments without breaking monotonic time.
  • Capture I/O (if available): PPS in/out or capture pins are only useful when they share the same PHC domain and are documented with clear latency/precision.
Verification cue: monotonicity and continuity are properties to validate in hardware/driver behavior, not assumptions.
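Monotonicity can be validated with a simple sampling loop. A hedged sketch: `read_clock_ns` is whatever reads your PHC through the driver clock API; `time.monotonic_ns` stands in here only so the sketch is self-contained and runnable:

```python
import time

def check_monotonic(read_clock_ns, samples=1000, max_step_ns=None):
    """Sample a clock and count violations.

    Returns (backwards, jumps): backwards = reads that went back in
    time (monotonicity violation); jumps = forward steps larger than
    max_step_ns (possible step/discontinuity), if a bound is given.
    """
    prev = read_clock_ns()
    backwards = jumps = 0
    for _ in range(samples):
        now = read_clock_ns()
        if now < prev:
            backwards += 1
        elif max_step_ns is not None and now - prev > max_step_ns:
            jumps += 1
        prev = now
    return backwards, jumps

backwards, jumps = check_monotonic(time.monotonic_ns, samples=200)
```

Run the same loop before and after a controlled slew adjustment: slew should leave both counters at zero, while an unexpected step shows up as a `jumps` hit.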
TSU (trigger + capture)
  • Match rules: EtherType, UDP ports (319/320), VLAN tags/PCP, and L2/L3 mode selection define which frames become timestamp events.
  • Capture point: MAC/PCS/PHY-side capture determines the best-achievable precision and the sensitivity to queueing.
  • Event indexing: sequenceId/domain/port identity must be available to correlate captured timestamps back to the correct packet instance.
Failure signature: packets are visible in software, but hardware timestamps are missing → classification mismatch, wrong mode (L2 vs UDP), or VLAN parsing gaps.
Clock domains & CDC jitter sources
  • Typical domains: PHY/PCS, MAC, TSU/PHC, bus/DMA, and CPU/driver domains can differ.
  • CDC effect: the risk is not just added latency, but latency variation through async FIFOs, arbitration, and bus contention.
  • Correlation checks: timestamp delivery jitter that grows with CPU load or DMA traffic indicates CDC/bus-pressure coupling.
Timestamp delivery (export + association)
1) Capture
Latch behavior must be deterministic; event timestamps must exist for every matched frame.
2) Queue
Event queue depth and overflow policy (drop oldest/newest) must be understood and measurable.
3) Association
Mapping keys (sequenceId, port, domain) must survive reordering and descriptor reuse.
4) Reporting
Interrupt batching and polling can delay visibility; delays must not cause association timeouts or misses.
Practical rule: a “correct” timestamp is not useful unless it is delivered and associated to the correct event without loss.
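The overflow policy in step 2 determines *which* timestamps survive a burst, not just how many. A small simulation sketch (policy names are illustrative; match them to what the datasheet actually specifies):

```python
def run_queue(events, depth, policy):
    """Model a timestamp event queue under a burst.

    drop_oldest keeps the newest events (old associations time out);
    drop_newest keeps the oldest events (new packets lose their TS).
    """
    q, dropped = [], 0
    for ev in events:
        if len(q) < depth:
            q.append(ev)
        elif policy == "drop_oldest":
            q.pop(0)
            q.append(ev)
            dropped += 1
        else:                          # drop_newest
            dropped += 1
    return q, dropped

# Burst of 6 events into a depth-4 queue:
survivors_old, d1 = run_queue(range(6), 4, "drop_oldest")   # [2, 3, 4, 5]
survivors_new, d2 = run_queue(range(6), 4, "drop_newest")   # [0, 1, 2, 3]
```

Both policies drop the same number of timestamps, but they fail different packets; the association layer must be tested against the policy the hardware actually implements.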
Bring-up: minimal verification loop (hardware metrics)
  • Gate A — Match hits: TSU hit counters scale with the intended PTP event rate; miss counters remain explained (e.g., non-PTP traffic).
  • Gate B — No missing timestamps: event TS missing < X per 10⁶ events (placeholder).
  • Gate C — No association errors: mapping misses/reorders remain < X per 10⁶ events (placeholder).
  • Gate D — Stress stability: under CPU/DMA load and VLAN/priority changes, counters remain stable and no overflows occur.
Datasheet must-check list (selection + driver readiness)
  • PHC · Timebase frequency, increment, resolution — Why: sets the quantization floor and wrap/rollover behavior. Verify: read the PHC; confirm monotonic increments; check wrap documentation.
  • PHC · Step/slew capability — Why: continuity control and system-time alignment behavior. Verify: apply controlled adjustments; confirm no unexpected discontinuities.
  • TSU · Match rules (L2 EtherType, UDP port, VLAN) — Why: ensures event frames are actually timestamped. Verify: compare hit/miss counters against a known event rate.
  • TSU · Capture-point location — Why: determines best-case precision and queue sensitivity. Verify: check documentation; validate jitter under controlled load.
  • Delivery · Event queue depth + overflow policy — Why: prevents missing TS during bursts. Verify: stress traffic; confirm overflow counters and behavior.
  • Delivery · Export method (descriptor/FIFO) + mapping keys — Why: correct association under reorder/reuse conditions. Verify: track map-miss/reorder counters; verify per-port/domain isolation.
Figure 7 · Hardware timestamp path (PHC → TSU → queues → driver)
[Diagram: PHY/PCS domain (RX/TX SerDes) → MAC/TSU domain (MAC pipeline, TSU match, TS latch, event queue with depth/overflow, PHC/timecounter) → CPU/driver domain (DMA/descriptor, PTP event API, association, counters); "!" markers indicate common variability points (CDC, bus pressure, batching).]

A complete design exposes hit/miss/overflow/association counters alongside timestamps, enabling deterministic bring-up and fault isolation.

H2-8 · Accuracy & Error Budget for Hardware Timestamping (From ns to System Offset)

The fastest path to stable system offset is an explicit error budget: break the result into measurable, hardware-linked terms, then isolate the dominant contributor using counters and controlled tests.

Measurement scope (define the accounting)
  • Offset: evaluate over a declared window and statistic (RMS / p99 / peak-to-peak) with placeholders: ±X ns / ±Y ns.
  • Delay variation (PDV): track distribution (p99/p999) rather than averages; do not hide tail behavior.
  • Wander: long-window drift metric using fixed sampling intervals; keep the window definition explicit.
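The declared statistics are cheap to compute once the window is fixed. A sketch assuming nearest-rank percentiles; declare whichever percentile interpolation your tooling actually uses, since p99 values differ between methods:

```python
import math

def window_stats(samples_ns):
    """RMS, nearest-rank p99, and peak-to-peak over one evaluation
    window of offset or delay samples (ns)."""
    n = len(samples_ns)
    rms = math.sqrt(sum(x * x for x in samples_ns) / n)
    srt = sorted(samples_ns)
    p99 = srt[min(n - 1, math.ceil(0.99 * n) - 1)]   # nearest-rank
    p2p = srt[-1] - srt[0]
    return rms, p99, p2p
```

Reporting all three per window keeps tail behavior visible: an RMS that looks fine while p99 and peak-to-peak grow under load is itself a diagnostic signature.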
1) Timestamp quantization (resolution floor)
  • Set by PHC increment and timestamp format.
  • Dominates when resolution is coarse relative to the target offset requirement.
  • Mitigation: higher-resolution timebase, closer-to-line capture point, or better PHC granularity.
2) Timestamp jitter (capture variability)
  • Introduced by capture-point uncertainty, CDC, arbitration, and internal pipeline variability.
  • Dominates when jitter grows with load, DMA pressure, or multi-queue activity.
  • Mitigation: consistent capture point, reduce CDC hops, isolate traffic classes, and validate under stress conditions.
3) Queueing / delivery variation (visibility + association)
  • May not change the timestamp value, but can cause missing timestamps or wrong association.
  • Dominates when event queues near full, overflows occur, or batching increases association misses.
  • Mitigation: sufficient queue depth, conservative batching, deterministic mapping keys, and robust counters.
4) Asymmetry (systematic bias)
  • Caused by non-identical TX/RX paths, media differences, and PHY internal delay asymmetry.
  • Dominates when offset shows stable bias rather than noise, especially across media or cable types.
  • Mitigation: explicit calibration/accounting boundaries and consistent physical configurations for validation.
5) Residence time estimation error (TC / switch path)
  • Occurs when ingress/egress capture points are inconsistent, or forwarding paths change under load.
  • Dominates when correction increments vary strongly with traffic load in an otherwise static configuration.
  • Mitigation: explicit capture-point definition, stable forwarding behavior, and correction increment distribution checks.
Error budget template (hardware-focused; fill with measured values)
  • Quantization — Mechanism: PHC increment / timestamp format. Measure: read the PHC; compute the LSB; validate the floor in quiet tests. Typical magnitude: X ns RMS (placeholder). Dominance trigger: tight offset targets with a coarse LSB. Mitigation: higher resolution / closer capture point. Pass criteria: LSB < X ns (placeholder).
  • Capture jitter — Mechanism: latch/CDC/arbitration variability. Measure: controlled load sweep; correlate jitter with CPU/DMA pressure. Typical magnitude: X ns p99 (placeholder). Dominance trigger: load / multi-queue / VLAN. Mitigation: reduce CDC hops, isolate the traffic class. Pass criteria: p99 < X ns (placeholder).
  • Delivery variation — Mechanism: queue near-full, batching, map misses. Measure: track overflow/map-miss counters under bursts. Typical magnitude: X missing events per 10⁶ (placeholder). Dominance trigger: burst traffic / IRQ batching. Mitigation: increase queue depth, reduce batching. Pass criteria: missing < X per 10⁶ (placeholder).
  • Asymmetry — Mechanism: non-identical TX/RX delays. Measure: swap direction / cable type; measure the bias shift. Typical magnitude: X ns bias (placeholder). Dominance trigger: media/cable/PHY changes. Mitigation: calibration + consistent media. Pass criteria: bias within ±X ns (placeholder).
  • Residence error — Mechanism: capture-point mismatch / path change. Measure: check the correction-increment distribution vs load. Typical magnitude: X ns p99 (placeholder). Dominance trigger: load changes / gate changes. Mitigation: stable capture points + forwarding. Pass criteria: ΔCF stable within X (placeholder).
Figure 8 · Offset as an accumulation of error terms (concept view)
[Diagram: error terms — quantization, capture jitter, delivery variation, asymmetry, residence error — accumulating into system offset (RMS / p99 / peak-to-peak, target ±X ns); identify the dominant term via counters and sweeps.]

The budget is actionable when each term has a measurement method, a dominance trigger, and a mitigation knob tied to hardware capability or driver behavior.
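One common accounting convention, sketched here as an assumption rather than a rule: independent random terms combine root-sum-square, while systematic biases (asymmetry) add linearly. Independence must be justified per platform — capture jitter and delivery variation often share a CDC/bus-pressure cause:

```python
import math

def combine_budget(random_terms_ns, bias_terms_ns):
    """RSS the random terms, sum the biases, return a conservative
    total bound (RSS + |bias|). Assumes the random terms are
    independent, which must be argued per platform."""
    rss = math.sqrt(sum(t * t for t in random_terms_ns))
    bias = sum(bias_terms_ns)
    return rss, bias, rss + abs(bias)

# Example: quantization 8 ns, capture jitter 6 ns, delivery 3 ns,
# plus a 10 ns asymmetry bias:
rss, bias, total = combine_budget([8, 6, 3], [10])
```

If the combined bound exceeds the ±X ns target, the table above identifies which single term to attack first via its dominance trigger.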

H2-9 · Design Hooks & Pitfalls (PHY/MAC/Switch Interactions, VLAN/QoS, Offloads)

The fastest way to debug hardware timestamping is to collapse failures into three observable classes—missing, wrong, or noisy—then map each class to the first hardware check and the smallest corrective action.

Quick classification (start here)
  • Timestamp missing: PTP frames exist in software, but hardware event timestamps are absent or incomplete.
  • Timestamp wrong: timestamps exist, but association or on-wire field updates (one-step) are incorrect.
  • Timestamp noisy: average looks acceptable, but tail jitter (p99/p999) grows under load or topology changes.
Offloads vs one-step (field updates on TX)

One-step requires deterministic on-wire field updates at transmit time. Offloads can shift where checksums/segmentation occur, creating invisible mismatches between “what software thinks it sent” and “what actually went on the wire.”

Must-disable / must-verify checklist
  • Must-disable by default (one-step validation): segmentation/large-send style TX offloads (TSO/LSO/GSO class), and any TX checksum path that is incompatible with post-edit fields.
  • Must-verify if kept: RX checksum/LRO-style receive aggregation features that may alter classification visibility or event association timing.
  • Pass criteria: TX event timestamp missing < X per 10⁶ events; checksum/CRC-related error counters remain 0 (or < X) during one-step operation.

First check: compare TSU “TX event captured” counters with the expected Sync rate; then validate association correctness under identical traffic with offloads toggled.
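That first check is a ratio test. A hedged sketch, where the function and its parameters are illustrative rather than any vendor API:

```python
def tx_capture_ratio(tx_event_captured, sync_rate_hz, window_s):
    """Fraction of expected TX event timestamps actually captured.

    A ratio well below 1.0 with offloads enabled (and ~1.0 with them
    disabled) points at an offload/one-step conflict rather than a
    match-rule gap.
    """
    expected = sync_rate_hz * window_s
    return tx_event_captured / expected if expected else 0.0

# Example: 8 Sync/s over a 60 s window -> 480 expected captures
ratio = tx_capture_ratio(478, 8.0, 60.0)
```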

VLAN / QinQ / QoS tagging (classification failures)
  • Common pitfall: the TSU parser matches EtherType/UDP ports, but VLAN insertion (or QinQ double tags) changes parsing expectations and breaks match rules.
  • First check: TSU hit/miss counters must track the known PTP event rate; test “no VLAN → single VLAN → QinQ” to isolate parsing gaps.
  • Fix principle: explicitly cover VLAN presence and QinQ presence in match rules; keep PTP traffic VLAN/PCP consistent with QoS policy and validate across all intended tag modes.
  • Pass criteria: hit counter slope remains linear vs event rate; miss counters are explainable (non-PTP traffic) and stable.
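The "hit counter slope remains linear" pass criterion can be checked mechanically. This sketch fits a through-origin least-squares slope and flags any point whose residual exceeds a tolerance; the 2% default is a placeholder for the X threshold.

```python
def slope_is_linear(event_rates, hit_counts, tol=0.02):
    """Check that TSU hit count tracks the offered PTP event rate.

    Fits hits = a * rate through the origin; any point whose relative
    residual exceeds tol indicates a tag mode the match rules miss.
    """
    a = (sum(r * h for r, h in zip(event_rates, hit_counts))
         / sum(r * r for r in event_rates))
    return all(abs(h - a * r) <= tol * max(h, 1)
               for r, h in zip(event_rates, hit_counts))
```

Run it once per tag mode (no VLAN, single VLAN, QinQ): a mode whose points fall off the fitted line is the one with the parsing gap.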
Store-and-forward vs cut-through (residence + jitter tails)
  • Observable behavior: store-and-forward tends to add latency with more stability; cut-through reduces latency but often increases sensitivity to congestion and internal arbitration.
  • First check: under controlled load sweeps, monitor correction increment distribution width and timestamp jitter p99/p999.
  • Fix principle: treat congestion as a mandatory validation scenario; verify tails, not only averages.
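Verifying tails rather than averages means computing percentiles explicitly. A minimal nearest-rank sketch, with the p99 limit standing in for the X placeholder:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile (p in (0, 100]) of jitter samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100.0 * len(s)) - 1)
    return s[k]

def tails_within(samples_ns, p99_limit_ns):
    """Gate on the tail, not the mean: p99 must stay inside the budget."""
    return percentile(samples_ns, 99) <= p99_limit_ns
```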
Multi-port and bridge cases (domain/port/sequence association)
  • Common pitfall: association keys do not include enough identity (port + domain + sequence), causing timestamps to “cross” between ports or domains.
  • First check: ability to break counters per port and per domain; association misses/reorders should cluster to the problematic port if isolation exists.
  • Fix principle: isolate event queues per port when supported; ensure mapping keys include at least port identity and sequenceId, and keep domain separation explicit.
  • Pass criteria: per-port missing and map-miss counters remain < X per 10⁶ events (placeholder).
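The fix principle (a strict association key including port identity) can be sketched as a small associator. Names and structure are illustrative; a real driver would key on whatever index or hash the hardware delivers alongside the timestamp.

```python
class TwoStepAssociator:
    """Pair TX event timestamps with Follow_Up by a strict key.

    Key = (port, domain, sequence_id): dropping any component is exactly
    the pitfall that lets timestamps "cross" ports or domains.
    """
    def __init__(self):
        self.pending = {}      # key -> hardware timestamp (ns)
        self.map_miss = 0

    def on_tx_timestamp(self, port, domain, seq, ts_ns):
        self.pending[(port, domain, seq)] = ts_ns

    def on_follow_up(self, port, domain, seq):
        ts = self.pending.pop((port, domain, seq), None)
        if ts is None:
            self.map_miss += 1  # count it; never best-effort map
        return ts
```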
Pitfall → Symptom → First check → Fix (quick table)
Pitfall | Symptom | First check | Fix | Pass criteria
TX offloads with one-step | Wrong fields on wire / sporadic association | Compare TSU TX hit vs expected Sync rate; toggle offloads | Disable segmentation-class TX offloads; validate checksum path | missing < X/10⁶
VLAN/QinQ parser gaps | Timestamp missing only with tags | Hit/miss counters vs tag mode sweep | Explicit match coverage for VLAN presence/QinQ | hit tracks rate
Cut-through under congestion | Tail jitter explodes (p99/p999) | Load sweep + p99/p999 monitoring | Validate congestion cases; tune buffering/priority | p99 < X ns
Port/domain association gaps | Wrong timestamps on some ports | Per-port counters; map-miss clustering | Queue isolation; include port+seq in mapping | map-miss < X/10⁶
Figure 9 · Failure tree: missing vs wrong vs noisy (first checks)
[Failure tree: Missing → match rules, VLAN/QinQ parse, queue overflow, IRQ/batching; Wrong → one-step modify, checksum offload, mapping key, port identity; Noisy → CDC/bus pressure, cut-through load, queue near-full, parser mode.] Use counters + controlled sweeps to confirm the first check before changing multiple knobs.

A reliable workflow isolates one branch at a time: verify match → verify capture → verify queue → verify association → then validate tails under load.

H2-10 · Validation & Measurement Plan (How to Prove Your HW Timestamping Works)

Validation must be staged and repeatable: lab proof, production reproducibility, and field diagnosability all rely on the same counters, the same accounting windows, and explicit pass thresholds.

Four-stage validation path (gated)
  • Stage 1 — Bring-up: existence + completeness (hit, missing, association correctness).
  • Stage 2 — Accuracy: prove stability vs a reference method using explicit windows and statistics.
  • Stage 3 — Stress: load sweeps and congestion to expose tail jitter and overflow risks.
  • Stage 4 — Environmental + Production: temperature/power disturbance and a minimal production test set.
Stage 1 · Bring-up (existence + integrity)
  • Completeness: RX/TX event timestamps present; missing < X per 10⁶ events.
  • Association: sequence-based mapping stable; map-miss/reorder < X per 10⁶.
  • Classification: hit counters track known event rate under L2 and UDP modes (as applicable).
Stage 2 · Accuracy (method without brand lock-in)
  • Two-device exchange: fixed roles; evaluate offset using declared statistics (RMS/p99/peak-to-peak) over a declared window.
  • PPS/ToD alignment: verify PHC alignment behavior and stability trends (capability/consistency, not servo tuning).
  • Interval checks: validate that timestamp intervals match expected periodicity and that the quantization floor is visible in quiet runs.
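The declared statistics for Stage 2 can be computed in one pass over a window of offset samples: RMS, nearest-rank p99, and peak-to-peak, exactly as named above.

```python
import math

def offset_stats(offsets_ns):
    """RMS / p99 / peak-to-peak over one declared accounting window."""
    s = sorted(offsets_ns)
    rms = math.sqrt(sum(x * x for x in s) / len(s))
    p99 = s[max(0, math.ceil(0.99 * len(s)) - 1)]
    return {"rms": rms, "p99": p99, "p2p": s[-1] - s[0]}
```

Declaring the window length alongside the statistic matters: a 10 s window and a 1 h window over the same link rarely produce the same p99.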
Stage 3 · Stress (load sweeps + congestion)
  • Load sweep: 10% → 50% → 90% → congestion (placeholder) while tracking p99/p999 jitter and queue watermarks.
  • Counter focus: near-full, overflow, late delivery, map-miss, reorder—record together with throughput and CPU/DMA pressure.
  • Pass criteria: tails remain within X ns (placeholder) and missing/overflow remain 0 (or < X).
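The Stage-3 pass criteria combine a missing-rate budget, an overflow gate, and a tail limit. A sketch with placeholder thresholds standing in for the X values:

```python
def stress_gate(missing, total_events, overflow, p99_ns,
                missing_per_1e6=10.0, p99_limit_ns=100.0):
    """Stage-3 gate: overflow zero, missing rate and tails in budget.

    The default thresholds are the X placeholders from the text -- set
    them from the design budget before the run, not after seeing data.
    """
    rate = missing * 1e6 / total_events if total_events else float("inf")
    return overflow == 0 and rate <= missing_per_1e6 and p99_ns <= p99_limit_ns
```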
Stage 4 · Environmental + Production (repeatability)
  • Temperature: low/room/high sweeps while tracking PHC stability trend and missing rate.
  • Power disturbance: ripple/transients (principle-level) while monitoring overflow and late delivery counters.
  • Production minimal set: select a small scenario set that triggers known failure modes (VLAN, batching, burst, thermal).
Test matrix (scenario × observables × pass threshold)
Scenario | Setup knob | Observables | Pass criteria | Failure signature
Baseline (no tags) | Offloads known-good; batching minimal | hit/miss, missing, map-miss, offset RMS/p99 | missing < X/10⁶ | Missing
VLAN / PCP mode | Single VLAN; fixed PCP | hit slope, miss stability, missing | hit tracks rate | Missing
QinQ | Double tags on/off sweep | miss counters, missing | missing < X/10⁶ | Missing
High load + congestion | 90%→congestion; batching sweep | queue near-full, overflow, p99/p999 jitter | p99 < X ns | Noisy
Temperature sweep | Low/room/high (placeholder) | PHC stability trend, missing, offset stats | missing < X/10⁶ | Noisy / Missing
Figure 10 · Validation flow: bring-up → accuracy → stress → production
[Flow: Bring-up (hit/miss, missing, mapping) → G1 → Accuracy (offset RMS, p99, window) → G2 → Stress (p99/p999, near-full, overflow) → G3 → Production (minimal set, repeatable logs). Each gate requires explicit pass thresholds: missing < X/10⁶, map-miss < X/10⁶, offset target ±X ns, tail jitter p99/p999 < X ns.]

A field-ready system logs port/domain/sequence identity and exports counters so lab failures can be reproduced and isolated in production and service.

H2-11 · Engineering Checklist (Design → Bring-up → Production)

Goal: turn hardware timestamping into a gated, evidence-based workflow. Every checklist item has an owner, a method, required evidence, and a pass criterion (X placeholder).

How to use · Use gates to prevent late-stage surprises
  • Design Gate: confirm PHC/TSU/match/queue/offload constraints in the datasheet before committing to PCB + software architecture.
  • Bring-up Gate: prove t1–t4 (or peer-delay set) is complete, ordered, and correctly associated under realistic load.
  • Production Gate: enforce minimal self-test + black-box logging so field failures remain diagnosable.
  • Evidence rule: if an item has no saved evidence, treat it as NOT verified.
Design Gate · capability confirmation (datasheet-first)

Freeze these items at design time to avoid “hardware cannot do it” failures later:

  • PHC baseline: resolution, increment model, step/slew controls, and “time scale” (TAI/UTC/free-run) interface.
  • TSU match rules: what packet fields can be matched in hardware (L2 EtherType / UDP ports / VLAN presence / messageType cues).
  • Timestamp storage: event queue depth, overflow counters, and whether timestamps are keyed by sequenceId / hash / descriptor index.
  • TX path constraints: one-step field update path vs two-step association path; checksum offloads interaction must be documented.
  • Delay mode support: E2E only vs P2P (peer delay events) impacts what needs to be timestamped and where.
  • External timing hooks (optional): PPS/ToD/capture pins and their relationship to PHC for correlation tests.

Must-check datasheet fields (minimal set)

Field | Why it matters | Evidence to save | Pass criteria (X)
PHC resolution / increment | sets quantization floor | register map + config note | ≤ X ns
TSU match fields | prevents missed timestamps | match rule table snapshot | hit-rate ≥ X%
Event queue depth | caps loss under load | queue size + overflow bit | overflow = 0
One-step TX field update path | requires in-flight update | datasheet section + constraints | tx-update deterministic
Two-step association key | prevents Follow_Up mismatch | sequenceId mapping note | mismatch ≤ X / hour
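The "PHC resolution / increment sets quantization floor" row has a direct arithmetic check: one tick of the timestamp counter clock bounds the best-case resolution. The 125 MHz clock below is an assumption for illustration, not a datasheet value.

```python
def quantization_floor_ns(phc_clock_hz):
    """Timestamp resolution implied by the PHC/TSU counter clock.

    One counter tick = 1e9 / f_clk nanoseconds; sub-tick interpolation,
    if the part supports it, can only improve on this floor.
    """
    return 1e9 / phc_clock_hz

# e.g. a 125 MHz timestamp counter -> 8 ns quantization floor
floor = quantization_floor_ns(125e6)
```

If the computed floor already consumes most of the ±X ns target, the part fails the Design Gate regardless of every other feature bullet.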

Example parts to cross-check “Design Gate” fields (verify in datasheet)

  • PTP-capable PHY examples: TI DP83640, TI DP83630, TI DP83869HM, Microchip LAN8814, NXP TJA1103 (automotive).
  • PTP-capable switch examples: Microchip KSZ9477, NXP SJA1105P/Q/R/S.
  • PTP-capable NIC/controller example: Intel I210 family.

Note: examples are for capability reference and field names, not a recommended shopping list.

Bring-up Gate · completeness + association + stress safety
  1. Timestamp completeness: capture the required set (e.g., t1–t4 for E2E, or peer-delay set for P2P) with zero “missing” events in a controlled run.
  2. Ordering sanity: per-port timestamps must be monotonic and consistent with RX/TX direction (no swapped ingress/egress).
  3. Two-step association: validate sequenceId mapping; record mismatch counters and verify mapping under loss/retry.
  4. One-step constraints: verify offloads that can break in-flight field update (checksum/segmentation); document required on/off settings.
  5. Queue stress: raise traffic + interrupt batching; ensure no overflow and jitter stays within X (p99/p999).
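Item 2 (ordering sanity) reduces to a per-port monotonicity check over delivered timestamps. A sketch with an illustrative record shape of (port, timestamp_ns) pairs in delivery order:

```python
def monotonic_per_port(records):
    """Records are (port, ts_ns) tuples in capture-delivery order.

    Per-port timestamps must never run backwards; a violation usually
    means swapped ingress/egress capture points or queue-index confusion.
    """
    last = {}
    for port, ts in records:
        if ts < last.get(port, float("-inf")):
            return False
        last[port] = ts
    return True
```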
Production Gate · minimal self-test + black-box fields
  • Self-test: enable loopback / event-trigger sanity; confirm timestamps are present and plausible.
  • Counters: log per-port hit/miss/overflow/mismatch counts (plus drops/CRC if available) with a trace-id.
  • Environment tags: log temperature, supply status, and link mode to correlate jitter tails.
  • Pass/fail: treat any non-zero overflow or persistent mapping mismatch as a hard fail (threshold X if needed).

Checklist table (Owner / Method / Evidence / Pass criteria)

Gate | Item | Owner | Method | Evidence | Pass (X)
Design | PHC resolution confirmed | HW | datasheet + register map | field snapshot | ≤ X ns
Design | TSU match fields cover target PTP mode | HW + Driver | match rule review | rule table | no unsupported fields
Design | Event queue depth & overflow counters exist | Driver | register discovery | reg notes | overflow visible
Bring-up | Required timestamps present (t1–t4 / peer set) | Driver + QA | controlled traffic run | pcap + log | missing = 0
Bring-up | Two-step association stable (seqId mapping) | Driver | loss/retry injection | mismatch counters | ≤ X/hr
Bring-up | Offload settings documented for PTP mode | Driver | A/B compare | config diff | no regressions
Bring-up | Queue stress: overflow remains zero | QA | high throughput + batching | overflow log | overflow = 0
Production | Timestamp sanity self-test enabled | Factory | loopback + event check | test report | pass
Production | Black-box fields logged with trace-id | FW | log schema audit | schema snapshot | required fields present
Figure 11 — Three-Gate Checklist Flow (Design → Bring-up → Production)
[Flow: Design Gate (PHC, TSU match, queue, 1-step/2-step) → Bring-up Gate (t1–t4 / peer, association, offloads, overflow = 0) → Production Gate (self-test, counters, trace-ID, env tags). Evidence required at every gate.]

H2-12 · Applications & IC Selection Logic (What to Choose, Not a Shopping List)

Keep selection deterministic: start from application requirements, map to hardware capabilities, then verify with measurable hooks (timestamps present, association stable, overflow=0, jitter within X).

Application buckets · only what directly affects hardware timestamping

Industrial control loop (determinism-first)

  • What matters: low timestamp jitter tails under load.
  • Must-have: deep event queue + overflow counters + stable association.
  • Verify: p99/p999 jitter within X; overflow=0 at target throughput.

Power / grid sync (continuity + stability)

  • What matters: PHC continuity (step/slew behavior) and clean observability.
  • Must-have: clear PHC controls + capture hooks (optional PPS/ToD) + robust logging.
  • Verify: holdover/step events recorded; drift bounded within X over Y.

Gateway / switch / multi-port (BC/TC aware)

  • What matters: per-port timestamp domains + residence/correction correctness.
  • Must-have: per-port event handling + stable port identity + TC/BC hooks.
  • Verify: per-hop correction/residence monotonic; mismatch ≤ X/hr.

Requirement → Needed hardware capability (with verification hook)

Requirement | Needed HW capability | How to verify | Pass (X)
Low ns-level floor | PHC resolution + stable increment | read PHC config + measure quantization | ≤ X ns
No missing timestamps under load | deep event queue + overflow counters | stress traffic + check overflow | overflow = 0
Two-step association robust | sequenceId mapping key + counters | loss/retry + mismatch counter | ≤ X/hr
One-step required | TX in-flight update path + offload constraints | A/B offloads + verify correction update | no checksum conflict
P2P / TC-heavy topology | peer-delay events + residence/correction support | per-hop verification + correction sanity | bounded error X

Example IC part numbers (capability reference; validate mode support)

  • PHY-level PTP timestamping examples: TI DP83640, TI DP83630, Microchip LAN8814, NXP TJA1103 (automotive).
  • Industrial GbE PHY often used in real-time systems (check TS support path): TI DP83869HM.
  • PTP/TSN-aware switch examples: Microchip KSZ9477, NXP SJA1105P/Q/R/S.
  • PTP-capable Ethernet controller example: Intel I210 family (hardware timestamping support depends on SKU/driver path; validate with test matrix).

Selection rule: never trust a “PTP supported” bullet alone—require queue depth, overflow counters, match rules, and a reproducible verification plan (H2-10).

Figure 12 — Selection Decision Tree (Requirements → Capability Packs)
[Decision tree: from requirements, branch on "need one-step?", "P2P / TC-heavy?", "no-miss under load?", and "need external PPS/ToD?" into capability packs: Pack A (P2P + TC hooks, per-port domain, residence sanity), Pack B (deep queue, overflow counters, stress-proof delivery), Pack C (one-step TX path, offload constraints, update determinism).]


H2-13 · FAQs (10–12) — Hardware Timestamping Troubleshooting

Scope: only long-tail issues inside the hardware timestamping path (PHC/TSU matching, queues, delivery, association, and correction accounting). Format per FAQ: Likely cause / Quick check / Fix / Pass criteria (X placeholders).

Two-step is enabled, but the offset wanders more

Likely cause: sequenceId association mismatch, or timestamp delivery jitter caused by event-queue pressure/overflow.

Quick check: read TSU stats: assoc_mismatch_count, event_queue_overflow, and distribution of “timestamp_delivery_latency”.

Fix: enforce per-port association keys (portIdentity + sequenceId), verify driver reads the correct queue/index, and reduce IRQ batching or increase event queue depth if overflow is observed.

Pass criteria: assoc_mismatch_rate ≤ X/hour, event_queue_overflow = 0, offset_p99 ≤ X ns over Y s window.

One-step enabled → packets fail checksum / link stops working

Likely cause: checksum/segmentation offload conflicts with in-flight field update (correctionField / originTimestamp), or TX path updates do not recompute checksums consistently.

Quick check: A/B test with checksum offload and TSO/GSO disabled; verify whether the precise timestamp update still occurs and frames pass CRC/checksum.

Fix: keep conflicting offloads disabled for PTP event frames, or switch to two-step if hardware cannot guarantee consistent TX update + checksum handling.

Pass criteria: PTP frame error_rate ≤ X/10⁶ frames, correction update verified on-wire, no checksum/CRC failures in Y-minute run.

Timestamps are occasionally missing only under high traffic load

Likely cause: IRQ batching/coalescing delays timestamp readout, event queue depth is insufficient, or DMA backpressure delays delivery until entries are dropped/overwritten.

Quick check: monitor event_queue_level, event_queue_overflow, and driver ring occupancy while sweeping traffic load and IRQ coalesce settings.

Fix: reduce batching for event frames, prioritize event queue servicing, increase queue depth (if configurable), and separate PTP event handling from bulk traffic completion paths.

Pass criteria: ts_missing_rate ≤ X/10⁶ events at target throughput, event_queue_overflow = 0, jitter_p99 ≤ X ns.

t1/t2 exist, but t3/t4 are intermittently missing

Likely cause: Delay_* event classification rules do not match VLAN/Q-in-Q, IPv6, or the selected transport (L2 vs UDP).

Quick check: dump TSU match rules and confirm coverage for: VLAN present, EtherType (L2), UDP ports (IPv4/IPv6), and messageType mapping for Delay_* frames.

Fix: extend match rules to include VLAN/IPv6 variants, or force a single transport mode consistently across the system (and reflect it in TSU rules + software config).

Pass criteria: t3/t4 missing = 0 over Y minutes, TSU hit_rate ≥ X%, no rule coverage gaps for active mode.

E2E is stable, but P2P becomes unstable

Likely cause: peer-delay event frames are not recognized by hardware, or TC/residence support is partial/inconsistent across hops.

Quick check: verify Pdelay_Req/Resp/_Follow_Up timestamps are produced at each hop; compare per-hop correction/residence behavior for monotonicity and consistency.

Fix: ensure all intermediate devices support P2P + TC correctly; otherwise, downgrade to E2E (or constrain topology) to avoid mixed-mode instability.

Pass criteria: peer_delay_missing = 0, per-hop correction delta stable within X ns, offset_p99 ≤ X ns in P2P mode.

Replacing a switch shifts offset by a constant amount

Likely cause: residence/correction accounting differs (implementation or configuration), or forwarding mode (cut-through vs store-and-forward) changes fixed latency.

Quick check: measure whether the shift is load-independent (fixed bias) or load-dependent (jitter tail); compare correction/residence stats per hop.

Fix: standardize switch configuration and timing mode across deployments; include “switch model + forwarding mode” as a required variable in the validation matrix.

Pass criteria: offset bias change ≤ X ns after switch swap, residence stats consistent within X ns across Y-minute run.

Jitter increases after temperature rises

Likely cause: PHC clock source drift increases, clock-domain crossing adds extra timing uncertainty, or timestamp delivery tails worsen under thermal stress.

Quick check: log phc_drift_ppb (or equivalent), jitter_p99/p999, and delivery latency distribution at two temperatures.

Fix: lock PHC to a lower-drift reference if available, reduce cross-domain sensitivity (shorten CDC path / avoid extra buffering for event frames), and enforce thermal test as a gate item.

Pass criteria: jitter_p99 increase ≤ X ns across temperature range, phc_drift_ppb ≤ X, ts_missing_rate ≤ X/10⁶ events.

PTP packets are visible in capture, but hardware timestamps are always empty

Likely cause: TSU event matching never hits (wrong mode: L2 vs UDP), or match fields/ports are not configured for the active PTP transport.

Quick check: compare active transport (L2 EtherType vs UDP ports) against TSU match rules; check tsu_hit_count vs tsu_miss_count.

Fix: configure TSU rules to match the chosen PTP mode and VLAN/IPv6 variants; keep the entire system on a single consistent transport mode.

Pass criteria: tsu_hit_rate ≥ X%, event timestamps non-empty for all required message types over Y-minute run.

Follow_Up exists, but its timestamp does not match the Sync

Likely cause: sequenceId reuse/reordering across ports, wrong queue/index readout in the driver, or domain/port identity mixing in multi-port bridge setups.

Quick check: correlate by (portIdentity, sequenceId) and verify monotonic association; audit which event queue each port uses and whether indices overlap.

Fix: enforce per-port queues/domains, use a strict association key, and add a mismatch counter in logs (do not silently “best-effort” map events).

Pass criteria: assoc_mismatch_rate ≤ X/hour, reorder events = 0, offset_p99 ≤ X ns.

Port-mirror capture “timestamps” look inconsistent / untrustworthy

Likely cause: mirrored packets are observed after forwarding/mirroring delay, not at the true ingress/egress timestamp point.

Quick check: confirm the measurement point: “mirror port capture time” vs “hardware event timestamp delivered via descriptor/sideband FIFO”.

Fix: use hardware-provided timestamp readout paths (descriptor/PHC/TSU event FIFO) for validation; treat mirrored captures as protocol visibility only.

Pass criteria: validation uses true ingress/egress timestamps; mirror-port capture not used for timestamp accuracy claims.

Two devices both behave like “master” and timing becomes unstable

Likely cause: role/state churn causes inconsistent timestamp sample sets and sudden correction/offset discontinuities (do not expand BMCA details here).

Quick check: log role state transitions and correlate them with step-like changes in offset and timestamp delivery patterns.

Fix: enforce a single timing domain master policy (static roles where required) and ensure only one active master per domain during validation.

Pass criteria: role_flaps = 0 over Y minutes, offset_step_events ≤ X/day, offset_p99 ≤ X ns.

Offset looks fine, but real-world triggering is not accurate

Likely cause: application reads system clock instead of PHC, or PPS/ToD alignment uses the wrong time scale/epoch (TAI/UTC/offset applied incorrectly).

Quick check: verify clock source in the trigger path (PHC vs system time), and validate ToD scale/epoch mapping used by the application.

Fix: bind triggering to PHC time, standardize the time scale/epoch conversion once, and log (phc_time, system_time, applied_offset) for every trigger event.

Pass criteria: trigger_error_p99 ≤ X ns, clock_source = PHC, ToD alignment validated against PPS within X ns.
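The logging fix can be sketched as a single record per trigger event. The 37 s TAI-to-UTC offset is correct as of current leap-second tables but must be maintained against future updates, and the field names are illustrative.

```python
TAI_UTC_OFFSET_S = 37  # valid since 2017; confirm against leap-second tables

def trigger_record(phc_time_ns, system_time_ns):
    """One log entry per trigger event, per the fix above.

    Binding the trigger to PHC (TAI) and logging both clocks plus the
    applied offset makes scale/epoch mistakes visible in a single line.
    """
    return {
        "phc_time_ns": phc_time_ns,
        "system_time_ns": system_time_ns,
        "applied_offset_ns": TAI_UTC_OFFSET_S * 1_000_000_000,
        "clock_source": "PHC",
    }
```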

Structured data (FAQPage JSON-LD) is included below for SEO. Keep the visible answers and JSON-LD answers identical when editing.