
Industrial Ethernet Slave/Master (EtherCAT/PROFINET/SercosIII)


Industrial Ethernet slave/master controllers exist to make cyclic process-data exchange deterministic: bounded forwarding, hardware timestamps, and a disciplined timebase let multi-node systems hit tight latency/jitter budgets in real cabinets.

This page turns “determinism” into measurable, testable acceptance criteria (p99 offset/jitter, per-hop delay, recovery time) and maps them to the controller architecture, host DMA/IRQ strategy, and dual-port switching behaviors.

Definition & Scope: Industrial Ethernet Slave/Master (What it is / isn’t)

Intent: Define the canonical boundary: focus on determinism, dual-port forwarding, hardware timestamping, and synchronization inside the slave/master controller—exclude PHY, TSN switching/bridging, and protection parts.
What is an “industrial Ethernet slave/master controller” (engineering definition)

An industrial Ethernet slave/master controller is the real-time engine above MAC/PHY and below the host CPU/SoC that hardens deterministic behavior: it implements dual-port forwarding (line/ring topologies), hardware timestamp capture and timebase discipline, process-data staging (process image / DPRAM), and diagnostic hooks that keep control cycles predictable across load, topology changes, and production variance.

Dual-port switching: what it means in real systems (line/ring determinism)
  • Forwarding path control: predictable per-node added delay (fast path vs buffered path) and bounded queueing under load.
  • Topology visibility: port role/state, link-change timestamps, and ring/line recovery evidence for root-cause analysis.
  • Diagnostics-first behavior: counters and watermarks that correlate burst jitter and stalls to congestion or recovery state transitions.
Hardware timestamps: what they must guarantee (sync error budget, control-loop impact)
  • Capture-point consistency: ingress/egress timestamp locations must be consistent across ports and test setups to avoid “same DUT, different topology → different accuracy”.
  • Timebase discipline: time correction, drift control, and restart behavior must be observable (logs/counters), not assumed.
  • Budgetable residuals: quantization + capture point + queueing + discipline residuals should roll up into a measurable p99 offset/jitter target.
  • Control consequence: sync instability manifests as sampling-phase variance, which translates to output noise or oscillation risk in tight servo loops.
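The residual roll-up above can be sketched as a small budget helper. This is a minimal sketch, not a device figure: the choice between RSS (independent, zero-mean residuals) and worst-case sum, and the example nanosecond values, are assumptions.

```python
import math

def sync_offset_budget_ns(components_ns, method="rss"):
    """Roll residual error components (ns) into one offset budget.

    "rss" assumes independent zero-mean contributors; "sum" is the
    pessimistic worst case where all residuals align in phase.
    """
    if method == "rss":
        return math.sqrt(sum(c * c for c in components_ns))
    return float(sum(components_ns))

# Hypothetical residuals (ns): TS quantization, capture-point skew,
# queueing variance, timebase-discipline residual.
residuals = [8.0, 10.0, 25.0, 15.0]
rss_budget = sync_offset_budget_ns(residuals, "rss")
worst_case = sync_offset_budget_ns(residuals, "sum")
print(f"RSS budget: {rss_budget:.1f} ns, worst case: {worst_case:.1f} ns")
```

Either number is then compared against the measured p99 offset; if the measurement exceeds the worst-case roll-up, a component (typically queueing or capture-point skew) is missing from the budget.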
What this page delivers (engineering outputs)
  • Determinism accounting templates (latency/jitter decomposition with pass criteria placeholders).
  • Dual-port forwarding behavior checklist (fast path vs buffered, queueing, recovery evidence).
  • Timestamp capture-point validation checklist (ingress/egress correlation and station-to-station alignment).
  • Process-image/DMA/IRQ realism notes (how host contention becomes cycle jitter).
  • Bring-up → production hooks (what to log and how to gate pass/fail).
Not in scope (link out to sibling pages)
  • Ethernet PHY / Magnetics: signal integrity, EEE, magnetics selection → Ethernet PHY
  • TSN Switch/Bridge: 802.1AS/Qbv/Qbu scheduling and switching fabric → TSN Switch / Bridge
  • Protection (ESD/Surge/EMI): IEC paths, TVS/CM chokes → Protection & Test
Measurable targets (placeholders + how to validate)
Cycle time target
Target: X µs / X ms
Quick validation: measure cycle period error on the master (p50/p99) from “cycle start” to “process-image commit”. Pass criteria: p99 period error < X and no missed cycles within X minutes.
Allowed end-to-end jitter
Target: X ns / X µs
Definition discipline: specify the jitter type (cycle-to-cycle, timestamp-offset, or sampling-phase) and keep capture points identical. Pass criteria: p99 jitter < X and burst duration < X cycles.
Per-node forwarding delay
Target: X ns (or X µs)
Quick validation: measure per-hop added delay under a stated payload load (e.g., X% line rate) and log queue watermarks. Pass criteria: p99 per-hop delay < X with no queue watermark overflow.
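The cycle-time quick validation above can be sketched directly, assuming "cycle start" timestamps are available in microseconds; treating any period beyond 1.5× nominal as a missed cycle is an illustrative threshold, not a protocol rule.

```python
def validate_cycle(timestamps_us, nominal_us, p99_limit_us):
    """Pass/fail a cycle-time run: p99 period error and missed cycles.

    timestamps_us: observed 'cycle start' times; a period longer than
    1.5x nominal is counted as a missed cycle (assumed threshold).
    """
    periods = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
    errors = sorted(abs(p - nominal_us) for p in periods)
    p99 = errors[min(len(errors) - 1, int(0.99 * (len(errors) - 1)))]
    missed = sum(1 for p in periods if p > 1.5 * nominal_us)
    return {"p50": errors[len(errors) // 2], "p99": p99,
            "missed": missed,
            "pass": p99 < p99_limit_us and missed == 0}
```

The same p50/p99-plus-missed-cycle shape applies to the jitter and per-hop targets once the capture points are fixed.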
[Diagram: Scope Boundary Map — central controller scope (HW timestamp, dual-port forwarding, process image/DPRAM, diagnostics counters) surrounded by four link-out boundary blocks (Ethernet PHY, TSN Switch/Bridge, Protection ESD/Surge/EMI, Magnetics).]
Scope boundary map: this page focuses on controller determinism (dual-port forwarding, hardware timestamps, process image, diagnostics). PHY, TSN switching/bridging, magnetics, and protection are linked out.

System Placement: Where the Controller Sits in the Stack

Intent: Clarify responsibilities across PHY/MAC/controller/host/app, and show where determinism is won or lost—this is not a protocol software tutorial.
Stack view (who owns what)
  • PHY/MAC: electrical signaling and frame I/O (linked out for details).
  • Controller core: deterministic forwarding, timestamp capture, timebase discipline, process-data staging, and diagnostic evidence.
  • Host/CPU: configuration, application logic, firmware update, logging, and non-real-time services.
Reference architectures (determinism implications)
MCU/SoC-hosted
Strength: high integration and cost efficiency. Risk: IRQ/DMA contention, bus arbitration, and cache effects can convert host load spikes into cycle jitter.
FPGA-assisted
Strength: tightly controlled timing and fixed-phase data movement. Risk: higher verification and maintainability cost; determinism is earned through test rigor, not assumptions.
Discrete controller/ESC + host
Strength: real-time paths are hardened in silicon, reducing host-induced jitter. Risk: host interface bandwidth and driver behavior can become the bottleneck if sizing is incorrect.
Two data paths that must not be mixed up
  • Fast path (forwarding): port RX → forwarding engine → port TX. Goal: bounded and predictable latency/jitter, minimally impacted by host load.
  • Host path (process/config): port RX → process image/DPRAM → DMA/IRQ → host CPU → application. Goal: consistent process-data semantics, diagnostics, and maintainability.

Engineering rule: if real-time behavior depends on host-path timing, the design must prove bounded jitter under worst-case host load.

Where determinism is won or lost (quick checks)
  • Capture-point mismatch: verify ingress/egress timestamp points are identical across ports and testers.
  • Queueing variance: log queue watermarks and classify stalls as congestion vs recovery states.
  • DMA/IRQ contention: correlate cycle jitter bursts with host interrupts, DMA bursts, and memory bandwidth saturation.
  • Restart phase jumps: check timebase initialization and latch phase after warm reset (offset “step” behavior).
Sizing placeholders (bandwidth + line-rate context)
Host interface bandwidth
Target: X Mbps / X MB/s
Practical sizing: Required_BW ≈ (ProcessDataBytes_per_cycle × Cycles_per_second) × X_overhead, where X_overhead ≈ 1.2…1.5 to cover protocol/driver overhead and diagnostics. Pass criteria: bandwidth margin ≥ X% under worst-case host load.
End-to-end payload rate requirement
Target: X% of line rate
Test context must be stated: node count N, process data size X bytes, cycle time X, and background traffic (mailbox/acyclic). Pass criteria: at X% line rate, p99 jitter < X and no determinism-window violations.
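The sizing rule can be checked numerically. This sketch uses an illustrative 1.35 overhead factor (a midpoint of the stated 1.2…1.5 range) and a hypothetical 256 B / 250 µs workload on a 100 Mbps host link; none of these values come from a specific device.

```python
def host_if_bandwidth(bytes_per_cycle, cycle_time_us, overhead=1.35,
                      link_capacity_mbps=None):
    """Required host-interface bandwidth per the sizing rule above.

    overhead covers protocol/driver overhead and diagnostics
    (1.35 is an illustrative midpoint of 1.2...1.5).
    """
    cycles_per_s = 1e6 / cycle_time_us
    required_mbps = bytes_per_cycle * 8 * cycles_per_s * overhead / 1e6
    out = {"required_mbps": required_mbps}
    if link_capacity_mbps:
        out["margin_pct"] = 100 * (1 - required_mbps / link_capacity_mbps)
    return out

# Hypothetical: 256 B process data per 250 us cycle over a 100 Mbps link.
print(host_if_bandwidth(256, 250, link_capacity_mbps=100))
```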
Not in scope for this section

Protocol software stacks, configuration tooling, and full TSN scheduling details are intentionally excluded here. This section is strictly about system placement and deterministic data paths.

[Diagram: Stack & Data Path — PHY/MAC blocks feed a controller core (fast path: forwarding; host path: process image/DPRAM/mailbox) connected via the host interface (SPI/QSPI/PCIe…) to the host CPU/app (config, logic, logs).]
Placement view: deterministic forwarding happens in the controller fast path; process data and configuration traverse the host path via the process image (DPRAM) and host interface.

Protocol Timing Models (EtherCAT / PROFINET / SercosIII) — Determinism View Only

Intent: Explain how deterministic behavior is created via cycle windows, process-data windows, synchronization, and bounded jitter—without message formats, object models, or configuration tooling.
One common timing abstraction (shared across protocols)
  • Cycle window: the repeat interval that defines control-loop phase and allowable latency budget.
  • Process-data window: the reserved portion where real-time I/O must be consistent and bounded.
  • Async/acyclic window: background traffic (diagnostics/parameters) that must not steal determinism margin.
  • Synchronization residual: remaining offset/jitter after time discipline; this is what shows up as sampling-phase variance.
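The shared abstraction above reduces to simple arithmetic: the process-data window, async window, and synchronization residual must fit inside the cycle with margin. A sketch, using a hypothetical 10% guard band as the determinism margin:

```python
def cycle_budget_ok(cycle_us, process_window_us, async_window_us,
                    sync_residual_us, guard_pct=10.0):
    """Check that the windows plus sync residual fit the cycle.

    Window sizes and the guard band are placeholders to be replaced by
    measured values; this only encodes the shared timing abstraction.
    """
    used = process_window_us + async_window_us + sync_residual_us
    margin = cycle_us - used
    return {"used_us": used, "margin_us": margin,
            "ok": margin >= cycle_us * guard_pct / 100.0}
```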

Engineering rule: comparisons are only meaningful if the same cycle definition, capture points, and statistics (p50/p99 + burst duration) are used.

EtherCAT: frame-on-the-fly + distributed clocks (timing-path view)
  • On-the-fly data path: forwarding is coupled to in-flight processing, turning “per-node added delay” into a measurable hop contribution.
  • Sync impact chain: timebase discipline → cycle boundary alignment → process-data commit phase → control-loop sampling phase.
  • Common determinism breakers: queueing variance under load, recovery-state transitions, and inconsistent timestamp capture points across setups.
PROFINET: RT/IRT as a window/priority model (without TSN expansion)
  • RT vs IRT concept: determinism comes from how strictly a predictable process-data window is protected inside each cycle.
  • Window ownership: background traffic is allowed only if it cannot violate the reserved process-data phase.
  • Common determinism breakers: uncontrolled async traffic, station-to-station measurement mismatch, and recovery behavior that shifts cycle phase.

Scope guard: details of switching fabrics and TSN scheduling belong to the TSN Switch/Bridge page; this section stays at timing-model level.

SercosIII: cyclic communication + synchronization (cycle/jitter view)
  • Cyclic phase discipline: the value is in stable phase relationships between communication cycles and process-data update points.
  • Jitter translation: residual sync error becomes sampling-phase variation, which directly affects high-bandwidth motion control stability.
  • Common determinism breakers: phase jumps after warm resets, topology changes that alter hop delays, and hidden queueing variance.
Five selection questions (cycle, jitter, topology, redundancy, diagnostics)
1) Cycle
Ask: What is the target cycle time under N nodes and X% line-rate payload?
Why: cycle time sets the phase budget for forwarding, host updates, and control stability.
Evidence: p50/p99 cycle error + “missed cycle” counters over X minutes.
2) Jitter
Ask: Which jitter definition is used (cycle-to-cycle, timestamp offset, or sampling-phase), and where are capture points?
Why: mismatched definitions make “good results” non-transferable between labs and systems.
Evidence: offset histogram (p99) + burst duration in cycles.
3) Topology
Ask: What is the per-hop added delay (p99) for line/ring, and how does it change under load?
Why: hop-to-hop delay accumulation can silently consume cycle margin.
Evidence: per-hop latency logs + queue watermark traces.
4) Redundancy
Ask: What is the recovery time target after link flap/cable pull, and what state evidence is available?
Why: slow or unstable recovery manifests as burst jitter and control faults.
Evidence: link-state timestamp logs + recovery-state snapshots.
5) Diagnostics
Ask: Which counters and “last-fault snapshots” exist (CRC, queue watermarks, offset alarms, restart phase)?
Why: determinism claims are only actionable if failures can be correlated to measurable evidence.
Evidence: counter set + timestamped event log fields agreed for production.
Measurable placeholders (timing classes + sync accuracy)
Typical cycle time class (three-line placeholder)
  • EtherCAT: X µs … X ms
  • PROFINET: X µs … X ms
  • SercosIII: X µs … X ms

Validation discipline: fix node count N, payload X bytes, background traffic rules, then report p50/p99 cycle error and burst duration.

Synchronization accuracy target
Target: X ns / X µs

Measurement must specify capture points and statistics: p99 offset, temperature drift, and re-lock time < X ms after disturbances.

Not in scope (kept out to prevent topic sprawl)

Message formats, object dictionaries, configuration tooling, and full TSN scheduling are intentionally excluded here. This section only covers timing-model concepts that determine cycle and jitter behavior.

[Diagram: Cycle Timeline Comparison (Concept) — three swimlanes (EtherCAT: on-the-fly process data + async/diagnostics; PROFINET: window/priority process data + async/non-critical; SercosIII: cyclic data + async/service) inside a repeating cycle window with sync-phase, process-data, and async/acyclic window markers.]
Concept view: determinism is created by protecting the process-data window inside a repeating cycle and controlling synchronization residuals; protocol specifics are intentionally excluded.

Hardware Architecture Deep Dive: Dual-Port Controller Block Diagram

Intent: Provide a practical mental model of the controller internals that maps directly to determinism budgets, timestamp validation, process-data staging, and production diagnostics.
Ports & forwarding engine (fast path)
  • 2× MAC ports: dual-port designs serve line/ring topologies and enable per-hop delay accounting.
  • Forwarding modes: cut-through (lowest latency) vs store-and-forward (buffered integrity) must be measurable and selectable by policy.
  • Queue visibility: watermarks and drop/throttle behavior must be observable to classify burst jitter as congestion vs recovery.
Time engine (clock domains + timestamps + sync timer)
  • Clock domain control: defines timestamp stability and restart behavior across temperature and brownout.
  • Timestamp unit: capture points (ingress/egress) must be explicit and consistent across ports.
  • Sync timer: anchors cycle boundaries and provides the reference for process-data commit phase.
Process image (DPRAM / shared memory / mailbox)
  • Process data staging: separates real-time semantics from host timing noise.
  • Mailbox/acyclic channel: isolates non-critical traffic and prevents determinism window violations.
  • Buffering discipline: size headroom and commit phase are part of the determinism budget (not optional details).
Host interface + IRQ/DMA (where jitter leaks in)
  • Interfaces (category view): SPI/QSPI, parallel bus, PCIe, and other high-throughput links (no vendor naming).
  • IRQ latency budget: p99 interrupt latency must be bounded under worst-case host load.
  • DMA burst shaping: burst size and coalescing settings must be chosen to avoid cycle-phase drift.
Safety & diagnostic hooks (production evidence)
  • Counters: CRC/error counters, queue watermarks, offset alarms, restart events.
  • Error capture: “last fault snapshot” fields to correlate failures with timebase/queue/recovery states.
  • Watchdog + safe-state pins: deterministic fault containment for field reliability and compliance narratives.
Measurable placeholders (map to diagram blocks)
Forwarding latency
Cut-through: X ns  |  Store-and-forward: X µs
Measure per-hop added latency (p99) under X% line rate and log queue watermarks to separate congestion from recovery behavior.
Timestamp resolution
Target: X ns
Verify ingress/egress capture points and report p99 offset; include warm reset and temperature sweeps to expose phase jumps.
Process image size
Target: X KB
Size from bytes/cycle × cycles/sec × buffering; enforce headroom ≥ X% to avoid post-deployment expansion failures.
Interrupt latency budget
Target: X µs
Measure p99 IRQ latency under worst-case host load; correlate jitter bursts with IRQ/DMA activity to validate bounded behavior.
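The process-image sizing placeholder above ("bytes/cycle × buffering, headroom ≥ X%") can be sketched as follows; the double-buffered assumption and the 30% headroom are illustrative, not device requirements.

```python
def process_image_size_kb(bytes_per_cycle, buffers=2, headroom_pct=30.0):
    """Size the process image from bytes/cycle x buffering plus headroom.

    buffers=2 models a double-buffered (A/B) image; headroom_pct is the
    expansion reserve the text calls 'headroom >= X%' (30% illustrative).
    """
    raw = bytes_per_cycle * buffers
    return raw * (1 + headroom_pct / 100.0) / 1024.0

# Hypothetical: 1.5 KB of cyclic I/O, double-buffered, 30% headroom.
print(round(process_image_size_kb(1536), 2))
```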
[Diagram: Controller Macro Block Diagram — Port A/B MACs feed a forwarding engine (cut-through / buffered) in the controller core; time engine (TS + sync timer) on top, process image (DPRAM/mailbox) on the bottom, host interface (SPI/QSPI/parallel/PCIe, IRQ/DMA) on the right, plus diagnostics/safety hooks (counters, snapshots, watchdog, safe-state).]
Macro block model: deterministic forwarding runs in the forwarding engine (cut-through or buffered), time engine stabilizes cycle phase, process image isolates host timing noise, and diagnostics provide production-grade evidence.

Determinism Budget: Latency, Jitter, and Forwarding Delay Accounting

Intent: Turn “determinism” into a measurable budget with explicit delay components, jitter categories, and a repeatable measurement order—using card lists (not tables).
Budget anatomy (end-to-end delay accounting)

End-to-end determinism margin is consumed by fixed latency (cycle margin) and variable latency (jitter/bursts). A practical budget splits the path into measurable segments with consistent capture points.

End-to-end segments (listed, not expanded)
  • PHY/MAC (listed only): link layer contribution and capture-point definition.
  • Controller forwarding: fast path (cut-through / buffered) with per-hop accounting.
  • Host copy: DMA/interrupt and memory traffic shaping.
  • Application loop: scheduling + compute + I/O commit timing.

Scope guard: PHY implementation details belong to the Ethernet PHY page; this section only keeps them as named budget items.

Latency decomposition (what to measure per segment)
Controller forwarding
Measure: per-hop added delay using consistent RX/TX capture points (p50/p99 + burst duration).
Signature: p99 rises with load; bursts correlate with queue watermarks.
Host copy (DMA/IRQ)
Measure: IRQ latency distribution (p50/p99) under worst-case CPU/memory load and DMA burst settings.
Signature: cycle-to-cycle jitter bursts align with IRQ/DMA activity peaks.
Application loop
Measure: cycle boundary → compute → commit timing (p99) with the same cycle definition used for network results.
Signature: jitter increases with scheduler contention even if forwarding remains stable.

Engineering rule: averages are insufficient; determinism validation must use p99 and burst duration under defined load and topology.
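The p99-plus-burst statistic the rule demands can be sketched on a per-cycle latency trace; "burst" here is assumed to mean a run of consecutive cycles above the threshold, with the longest run reported.

```python
def p99_and_bursts(trace, threshold):
    """p99 of a per-cycle latency trace plus the longest exceedance burst.

    A 'burst' is a run of consecutive cycles above `threshold`; the
    longest run is exactly the tail behavior that averages hide.
    """
    s = sorted(trace)
    p99 = s[min(len(s) - 1, int(0.99 * (len(s) - 1)))]
    longest = run = 0
    for v in trace:
        run = run + 1 if v > threshold else 0
        longest = max(longest, run)
    return p99, longest
```

Acceptance then reads as: `p99 < limit` and `longest <= X cycles` under the defined load and topology.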

Jitter taxonomy (root-cause categories that must be separable)
Clock / timebase
Drift and phase jumps translate into offset instability; validate with temperature sweeps and warm-reset comparisons.
Queue contention
Load-dependent queueing raises per-hop p99; must be correlated with watermark/counter evidence.
DMA / IRQ
Host-side bursts appear as cycle-phase variance; classify using IRQ latency p99 and DMA burst traces.
Cache / memory contention
Shared-memory pressure can dominate host copy and application loop timing even when forwarding looks stable.
Ring re-convergence
Recovery behavior produces burst jitter and re-sync settling; validate with disturbance events and settle-time metrics.
Measurement order (repeatable escalation)
Step 1 — Single node baseline
Establish host/timebase baseline without forwarding; record p99 cycle error, p99 IRQ latency, and p99 offset stability.
Step 2 — Two nodes (one hop)
Measure per-hop added delay p50/p99 with defined payload and load; capture queue watermark correlation.
Step 3 — Ring (N nodes)
Validate delay accumulation and recovery behavior; measure re-sync settling in cycles and milliseconds.
Step 4 — Disturbance injection
Inject controlled background load, link flaps, temperature ramps, and host CPU peaks; validate that p99 and burst durations remain bounded.

Repeatability requirement: lock node count (N), payload (X bytes), background rules, and statistics window length before comparing runs.

Measurable placeholders (acceptance-ready)
Per-hop added delay
p50 / p99: X / X
Define capture points (RX ingress → TX egress) and report burst duration when exceeding the p99 threshold.
Cycle-to-cycle jitter
p99: X
Use the same cycle definition across labs; report p99 and burst duration over a fixed observation window.
Re-sync settle time after disturbance
X cycles / X ms
Define “settled” as returning inside p99 offset/jitter limits for a continuous window of X cycles.
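The "settled" definition above translates directly into a detector over an offset (or jitter) trace; `window` plays the role of the "continuous window of X cycles".

```python
def settle_time(offsets, limit, window):
    """First cycle index at which the trace is 'settled'.

    'Settled' = |offset| stays inside `limit` for `window` consecutive
    cycles; returns the first cycle of that window, or None if the
    trace never settles within the observation.
    """
    run = 0
    for i, off in enumerate(offsets):
        run = run + 1 if abs(off) <= limit else 0
        if run >= window:
            return i - window + 1
    return None
```

The same detector serves the re-lock time placeholder in the HW timestamping section, since "locked" uses the identical continuous-window definition.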
Not in scope (kept out to prevent cross-page overlap)

Detailed PHY behavior, protection components, and TSN scheduling mechanisms are intentionally excluded. This budget focuses on measurable end-to-end delay and jitter accounting for controller-based forwarding systems.

[Diagram: Latency Budget Waterfall — chain of additive latency boxes (port RX ingress, fast-path forwarding, port TX egress, host copy via DMA/IRQ, app loop compute), each with a Δt = X placeholder, summing to ΣΔt = X; validate p99 + burst under defined load/topology.]
Budget view: treat each segment as a measurable block with consistent capture points; acceptance should reference p99 and burst duration, not averages.

Hardware Timestamping & Synchronization: What “HW TS” Must Guarantee

Intent: Define practical HW timestamp guarantees: consistent capture points, stable timebase discipline, bounded offset stability, and measurable re-lock behavior—while keeping TSN/802.1AS strictly out-of-scope (link out).
Timestamp capture points (ingress/egress consistency)
  • Ingress vs egress: the capture boundary must be specified (before/after queues) and kept identical across ports.
  • Comparability: results must remain consistent across testers and fixtures; otherwise “good offset” numbers are not transferable.
  • Queue separation: capture strategy must allow distinguishing queueing delay from timebase residuals.
Timebase discipline (calibration, drift, temperature)
  • Local clock stability: drift and warm-reset behavior must be characterized, not assumed.
  • Temperature sensitivity: drift across temperature must be measured and budgeted (ppm placeholder).
  • Phase jumps: brownout/restart should be tested for phase discontinuities that masquerade as “random jitter.”
Synchronization loop behavior (update, filtering, burst handling)
  • Update interval: controls how quickly residual offset can be corrected under drift and disturbances.
  • Filtering: must be selected to avoid slow settling while keeping noise bounded.
  • Burst errors: disturbances should trigger bounded re-lock behavior with measurable settle time.
Not in scope (link out)

802.1AS and TSN scheduling (Qbv/Qbu) are intentionally excluded here. They belong to the TSN Switch/Bridge topic page; this section focuses on HW timestamp guarantees that remain valid regardless of TSN policy.

Measurable placeholders (HW TS acceptance-ready)
TS capture granularity
X ns
Capture-point definition must accompany this number; otherwise granularity cannot be compared across devices.
Offset stability (p99)
X ns
Report p99 offset and burst duration under defined load/topology; averages are insufficient for determinism claims.
Drift across temperature
X ppm
Validate drift with temperature sweeps and document the re-lock behavior after thermal steps.
Re-lock time
X ms
Define “locked” as returning inside p99 offset limits for a continuous window of X cycles.
[Diagram: Timestamp Capture Points — two ports, each with ingress and egress timestamp points on the RX/TX paths, feeding a timebase (clock + discipline) and correction loop anchored to the sync timer's cycle boundary.]
Capture-point discipline: define ingress/egress timestamp boundaries consistently across ports, then validate timebase correction behavior via p99 offset stability, drift across temperature, and bounded re-lock time after disturbances.

Dual-Port Switching Behaviors: Cut-through vs Store-and-forward, Queues, and Priorities

Intent: Describe dual-port switching as chip-level behavior: forwarding mode selection, queue/priority rules, and congestion/flow-control checkpoints that keep determinism bounded under load.
Forwarding modes (chip-level semantics)
Cut-through
Starts forwarding before full-frame validation completes. Minimizes forwarding latency and can reduce per-hop delay variance when queues remain bounded.
  • Best when: low error rate, bounded congestion, strict cycle budgets.
  • Risk surface: error/CRC knowledge arrives late; policy must define how errors are handled.
Store-and-forward
Buffers the full frame before forwarding. Enables integrity gating and explicit buffering rules, but introduces buffer-dependent latency and queueing tails under load.
  • Best when: integrity gating is required or error containment is prioritized.
  • Risk surface: queue depth variability can dominate p99 delay unless isolated/limited.
Error / CRC boundary (why policies matter)

Full-frame validation completes late in the receive path, so deterministic behavior under errors depends on the controller’s mode and policy. When error counters rise, a robust design must prove whether forwarding stays bounded or transitions to a defined containment mode.

Required checkpoints
  • Mode transition evidence: does error growth correlate with cut-through → buffered switching (or other containment)?
  • Determinism protection: does p99 queueing delay remain bounded under error bursts?
  • Loss reporting: is loss behavior explicit (drop/throttle/alarm) with thresholds?
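A hedged sketch of such a mode/containment policy, with illustrative thresholds (errors/sec and queue fill) standing in for whatever signals and limits a real controller actually exposes:

```python
def select_mode(errors_per_s, queue_fill_pct,
                err_limit=10.0, fill_limit=75.0):
    """Policy sketch: pick forwarding mode and a containment action.

    Thresholds and actions are illustrative placeholders, not a
    specific device's behavior: rising error rate forces the buffered
    path (integrity gating); deep queues trigger throttle, then
    drop+alarm, so loss behavior stays explicit.
    """
    mode = "store-and-forward" if errors_per_s > err_limit else "cut-through"
    if queue_fill_pct > 90.0:
        action = "drop+alarm"
    elif queue_fill_pct > fill_limit:
        action = "throttle"
    else:
        action = "none"
    return mode, action
```

The checkpoint evidence then becomes: log every (mode, action) transition with a timestamp so error-burst correlation is possible after the fact.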
Queueing model (principles that preserve determinism)
Priority isolation (minimum requirement)
  • Strict priority: protects cycle traffic, but needs explicit controls to avoid starvation tails.
  • Reserved resources: dedicated queue/buffer share for cyclic process data reduces p99 inflation under background load.
  • Acyclic containment: diagnostics/config traffic must be isolated or rate-limited to prevent tail bursts.
Determinism metric (do not use averages)

Queueing must be validated using p99 delay and burst duration under defined load/topology. Average delay can look acceptable while burst tails break control-loop stability.

Congestion behavior (explicit and testable)
Define the outcome under worst-case load
  • Drop: which class drops first, and how is it reported?
  • Throttle: which ingress is rate-limited, and does it preserve cycle traffic bounds?
  • Alarm: which watermark/error thresholds generate alarms for diagnosis and recovery sequencing?
Required pass evidence
  • Bounded p99 queueing delay: remains below X µs with background load.
  • Burst control: exceedance bursts are bounded to X cycles.
  • Recovery behavior: after congestion relief, settle time returns inside limits within X ms.
Backpressure / flow control (determinism checkpoints)

Flow-control events can introduce hidden tail latency and head-of-line blocking. Determinism validation must confirm whether cycle traffic remains bounded when backpressure triggers.

Quick checks
  • Correlation: do jitter bursts align with flow-control counters or pause/backpressure events?
  • Isolation: is cyclic traffic placed in an isolated class/queue that remains bounded during flow-control?
  • Containment: does background traffic throttle first, with explicit thresholds and alarms?
Data placeholders (acceptance-ready)
Internal buffering
X KB
Specify whether the number is total, per-port, or per-queue; validate with watermark p99 under defined background load.
Worst-case queueing delay under load
X µs
Report p99 + burst duration (X cycles) at a fixed load model (payload, background rate, and topology).
Loss behavior
drop / throttle / alarm threshold = X
Define the trigger (watermark or errors/sec) and the required determinism behavior during and after the event.
Not in scope (kept out to prevent cross-page overlap)

TSN scheduling details and full protocol specifications are intentionally excluded. This section focuses on controller forwarding behavior, queueing principles, and explicit congestion/flow-control checkpoints.

[Diagram: Forwarding Mode Selector (controller view) — Port A/B RX ingress feed a policy block (mode rules driven by errors, load, priority) that selects the cut-through path (Δt low) or buffered path (Δt varies) toward Port A/B TX, with drop/throttle/alarm actions defining loss behavior.]
Controller-level view: forwarding mode is policy-driven (errors, load, priority). Determinism requires bounded queueing delay and explicit loss/flow-control behavior under worst-case load.

Topology & Redundancy Hooks: Line/Ring, Bypass, and Fast Recovery (Controller View)

Intent: Provide controller-focused topology and redundancy hooks that keep control loops stable: line/ring delay implications, bypass/relay options, and measurable detect→isolate→recover sequences without expanding full redundancy protocols.
Topology constraints (line vs ring)
Line
  • Determinism is dominated by per-hop delay accumulation (p99 + burst).
  • Forwarding policies must keep queueing tails bounded under background traffic.
Ring
  • Determinism must be validated across normal, fault, and recovery states.
  • Control-loop stability depends on bounded recovery time and bounded re-sync settle time after disturbances.
Redundancy hooks (controller view)
  • Port isolate: isolate a failing segment to prevent error propagation and uncontrolled retries.
  • Bypass/relay (if supported): define how traffic passes through during power loss or fault states.
  • Fast detection signals: link-down and error-rate spikes must map to time-bounded events.
  • Diagnostic snapshots: capture counters/watermarks and timestamps at fault entry for correlation.
Bypass / relay behavior (what must be defined)
  • Trigger: which fault or power condition engages bypass, and what evidence confirms engagement?
  • Delay impact: does bypass introduce additional variable delay that affects cycle budgets?
  • Failure modes: define what happens if bypass fails (alarm and safe-state behavior).
Recovery sequence (detect → isolate → recover → settle)
Link flap / cable swap / port isolate events
  • Detect: time-bounded event generation (link state, errors/sec, timeouts).
  • Isolate: port disable or segment isolation to stabilize the topology.
  • Recover: restore a valid forwarding path with explicit mode and queue policy.
  • Settle: offset/jitter and forwarding delay return inside limits within X cycles.
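The sequence above can be modeled as a tiny state machine for test harnesses; state and event names mirror the bullets (detect → isolate → recover → settle), and a real harness would wrap it with the timing assertions (detection p99, settle window).

```python
class RecoverySM:
    """Minimal detect -> isolate -> recover -> settle state machine.

    Sketch only: states/events mirror the recovery sequence above;
    unknown events leave the state unchanged so harness bugs surface
    as a stuck state rather than an undefined transition.
    """
    TRANSITIONS = {
        ("normal", "fault_detected"): "isolated",
        ("isolated", "path_restored"): "recovering",
        ("recovering", "inside_limits"): "normal",
    }

    def __init__(self):
        self.state = "normal"

    def on(self, event):
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```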
Not in scope (protocol specifics)

Full MRP/HSR/PRP mechanisms are intentionally excluded. This section only lists controller-level hooks and the measurable questions required to bound recovery time and determinism impact.

Data placeholders (topology & recovery acceptance-ready)
Link-down detection time
X ms
Specify p99 detection time under defined load and temperature; verify event generation and log correlation fields.
Recovery time target
X ms / X cycles
Separate “path restored” time from “settled inside p99 determinism limits” time to avoid false confidence.
Alarm thresholds
errors/sec = X
Define false-positive tolerance and required actions (alarm vs isolate vs bypass) with measurable outcomes.
[Diagram: Ring with Dual-Port Nodes (controller view) — four-node ring (Nodes A–D) with normal forward arrows, a fault marker, a port-isolation symbol, and a dashed bypass/recovery path.]
Ring validation must include normal, fault, and recovery states. Acceptance should bind detection time, isolate action timing, path-restore time, and settle time back inside p99 determinism limits.

Process Data Exchange Model: DPRAM/Process Image, Mailbox, DMA/IRQ (Engineering Reality)

Intent Turn “process data” into a designable system: coherent snapshots, bounded CPU/DMA/IRQ behavior, and isolation rules that prevent host-side jitter from corrupting deterministic cycles.
Process image (I/O regions) and coherency

The process image is a memory-mapped view of cyclic inputs and outputs. Determinism depends on coherent snapshots: the host must not read a mixed-state image that spans different cycle phases.

Minimum coherency goals
  • Single-cycle snapshot: inputs read correspond to one cycle boundary.
  • Atomic commit: outputs written become active at a defined latch/commit point.
  • Bounded visibility delay: host read/write windows do not drift across cycles.
Coherency mechanisms (double-buffer / latch / versioning)
Double-buffer + latch point

Maintain two images (A/B). While the controller updates one image, the host reads the other. A latch/swap point flips the active pointers at a defined phase, creating a clean “cycle snapshot”.

  • Latch trigger: cycle tick / sync event / internal timer (implementation-defined).
  • Host rule: read/write only inside the allocated window, not continuously.
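A minimal model of the A/B scheme, assuming the host only ever dereferences the active index (a sketch, not a driver):

```python
class DoubleBuffer:
    """Two process-image copies: the controller writes the shadow image,
    the host reads the active one; latch() models the swap point."""
    def __init__(self):
        self.images = [{}, {}]
        self.active = 0                          # index the host may read

    def controller_update(self, inputs):
        self.images[1 - self.active].update(inputs)   # write shadow only

    def latch(self):
        self.active = 1 - self.active            # flip at the cycle tick

    def host_read(self):
        return dict(self.images[self.active])    # coherent snapshot

buf = DoubleBuffer()
buf.controller_update({"ai0": 42})
snap_before = buf.host_read()   # update not yet visible: old image
buf.latch()
snap_after = buf.host_read()    # visible only after the latch point
```

Because writes go only to the shadow image, the host can never observe a half-updated cycle, which is the whole point of the latch.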
Optional integrity check (version counter)

A lightweight “version” field can detect mid-update reads: read version → read payload → read version again. If versions differ, the snapshot is not coherent and must be retried within the window.
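The version-check loop reads naturally as a seqlock-style retry. A sketch with a simulated DPRAM region whose version bumps mid-read (the torn first attempt is deliberate; names are illustrative):

```python
def coherent_read(read_version, read_payload, max_retries=3):
    """Version -> payload -> version; retry when the version changed
    mid-read, give up (None) when the retry window is exhausted."""
    for _ in range(max_retries):
        v1 = read_version()
        data = read_payload()
        v2 = read_version()
        if v1 == v2:
            return data          # snapshot is coherent
    return None                  # caller must retry in the next window

class TornImage:
    """Simulated region whose version bumps once, mid first read."""
    def __init__(self):
        self.version, self.payload, self.reads = 0, [0, 0], 0

    def read_version(self):
        self.reads += 1
        if self.reads == 2:                  # update lands mid-read
            self.version, self.payload = 1, [7, 7]
        return self.version

img = TornImage()
snap = coherent_read(img.read_version, lambda: list(img.payload))
# the torn first attempt is discarded; the retry returns the new payload
```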

Mailbox / acyclic channel (use and isolation)

Mailbox traffic carries non-cyclic functions (configuration, diagnostics, and service operations). It must be contained so bursts cannot inflate cyclic p99 delay or disturb host update phase.

Engineering rules
  • Isolation: mailbox work must not share the critical cyclic queue/IRQ path without limits.
  • Rate limits: cap mailbox DMA/IRQ frequency to avoid jitter tails.
  • Correlation: prove that mailbox bursts do not change cyclic p99 jitter or burst duration.
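A token bucket is one way to realize the rate-limit rule; the tokens-per-cycle and bucket-depth values here are illustrative tuning knobs, not recommendations:

```python
class TokenBucket:
    """Cap mailbox service rate so acyclic bursts cannot crowd the
    cyclic path: `rate` tokens refill per cycle, `burst` bounds depth."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst

    def tick(self):                          # called once per control cycle
        self.tokens = min(self.burst, self.tokens + self.rate)

    def allow(self):
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                         # defer mailbox work to a later cycle

tb = TokenBucket(rate=1, burst=2)
served = []
for cycle in range(4):
    tb.tick()
    # three mailbox requests arrive every cycle; only bounded work runs
    served.append(sum(tb.allow() for _ in range(3)))
# the initial burst is clipped to the bucket depth, then service settles
# at the steady refill rate
```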
DMA vs IRQ (how host-side jitter leaks into cycles)
DMA (bandwidth-friendly, but burst-sensitive)
  • Benefit: reduces per-cycle CPU work, enabling fixed CPU budgets.
  • Risk: large bursts can contend for memory/bus bandwidth and shift update phase.
  • Control: bound burst size and schedule transfers in fixed-phase windows.
IRQ (responsive, but preemption-sensitive)
  • Benefit: fast event notification for latch/commit coordination.
  • Risk: interrupt storms and uncontrolled preemption inflate p99 cycle jitter.
  • Control: use IRQ coalescing and prioritize cyclic handlers above acyclic work.
Practical anti-jitter hooks
  • IRQ coalescing: merge events into a bounded window so update phase stays stable.
  • Fixed-phase update: read/commit inside a defined cycle window, not continuously.
  • Snapshot integrity: align DMA-complete and latch events to avoid mixed-state reads.
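The coalescing hook can be modeled as "fire on the first event, suppress followers inside the window"; the window length corresponds to the IRQ-coalescing placeholder (X µs), and the event times below are illustrative:

```python
def coalesce(event_times_us, window_us):
    """One host interrupt per window: the first event fires, later
    events inside `window_us` are merged into it."""
    fired, window_end = [], -1.0
    for t in sorted(event_times_us):
        if t > window_end:                   # outside the current window
            fired.append(t)
            window_end = t + window_us
    return fired

# six raw events collapse to two host interrupts with a 25 µs window
irqs = coalesce([0, 5, 10, 30, 35, 48], window_us=25)
```

Widening the window cuts the IRQ rate further, but at some point it pushes the cyclic commit outside the update window, which is the trade-off the placeholder table bounds.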
Host interface impact on jitter (principles only)

Host interface choice changes transaction granularity, interrupt rate, DMA burst behavior, and bus contention. Determinism requires the interface to support bounded copy time and stable update phase.

Selection questions
  • Does the interface sustain the required copy volume within the cycle update window?
  • Can DMA burst size be bounded to avoid memory/bus stalls?
  • Can IRQ frequency be bounded via coalescing or event grouping?
  • Can cyclic and mailbox paths be separated or rate-limited?
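The first question reduces to arithmetic once copy volume and interface bandwidth are known. A sketch with illustrative values (note that 1 MB/s equals 1 byte/µs, which makes the unit conversion trivial):

```python
def copy_fits_window(image_bytes, mailbox_bytes, bw_mb_s, window_us):
    """Does one cycle's copy volume (process image + bounded mailbox
    overhead) complete inside the update window at this bandwidth?"""
    total_bytes = image_bytes + mailbox_bytes
    copy_time_us = total_bytes / bw_mb_s     # 1 MB/s == 1 byte/µs
    return copy_time_us <= window_us, copy_time_us

# 1024 B image + 128 B mailbox at 100 MB/s -> 11.52 µs, fits a 20 µs window
ok, t_us = copy_fits_window(image_bytes=1024, mailbox_bytes=128,
                            bw_mb_s=100, window_us=20)
```

This ignores per-transaction overhead and bus contention, so in practice the measured copy-completion time (p99, under background traffic) is what the acceptance criteria should bind.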
Data placeholders (engineering acceptance-ready)
  • Update period: X µs. Host-visible coherent snapshot cadence, aligned to the latch/commit phase.
  • CPU budget per cycle: X%. Budget includes cyclic ISR/DMA completion and commit handling; evaluate p99, not average.
  • DMA burst size: X bytes. Bound to avoid bus stalls; verify phase stability under background traffic and mailbox bursts.
  • IRQ coalescing: X µs. The coalescing window must reduce IRQ rate without pushing cyclic commit outside the update window.
Not in scope (kept out to prevent cross-page overlap)

Object dictionaries, detailed message formats, and configuration tooling are intentionally excluded. This section focuses on coherent process images and host-side DMA/IRQ behavior that affects determinism.

Diagram — Process image double-buffer (coherent snapshot): buffers A and B in DPRAM with a latch/swap point that flips the active pointer at the cycle tick; DMA and IRQ connect the region to the host CPU and must stay phase-bounded.
Double-buffer with a latch/swap point prevents mixed-state reads. Determinism requires bounded DMA bursts and bounded IRQ behavior aligned to a fixed update phase.

Engineering Checklist: Design → Bring-up → Production (Must-have hooks)

Intent A production-minded checklist that defines must-have hooks, logging fields, and pass criteria placeholders, enabling consistent bring-up, station correlation, and field diagnostics.
Design checklist (before layout freeze)
  • Clocking: define reference source, domain boundaries, and cycle phase anchor.
  • Power: bound ripple/transients; verify ground return paths for noisy cabinets.
  • Reset & straps: deterministic boot state for port roles and safe-state behavior.
  • Host bandwidth: interface sustains process-image copies inside update window.
  • Thermal: verify junction margins at target load with realistic airflow.
Bring-up checklist (first power-on to stable cycles)
  • Link establishment: role/port mapping correct; no unexpected flaps.
  • Timestamp sanity: ingress/egress capture points consistent across ports.
  • Cycle stability: measure p99 jitter and burst duration at steady load.
  • Isolation quick test: enable background traffic; prove p99 remains bounded.
  • Snapshots: log offsets, error rates, watermarks at event entry.
Production checklist (repeatable pass/fail + correlation)
  • Station correlation: same DUT across stations/cables/switch peers yields consistent metrics.
  • Logging schema: firmware version, temperature, supply, offsets, jitter, CRC/sec, watermark.
  • Threshold decisions: define re-test rules and gray-zone handling (bounded retries).
  • Yield guard: detect drift via environment fields (temp/humidity/supply) before blaming silicon.
Field diagnostics (minimum must-have hooks)
  • Error rate counters: CRC/sec, drop/throttle/alarm events with timestamps.
  • Last desync snapshot: record offset/jitter + port state at the moment of loss.
  • Port visibility: optional mirror/trace hooks for correlation and root-cause capture.
  • Recovery timeline: detect → isolate → recover → settle timestamps for audits.
Pass criteria placeholders (copy into test specs)
  • Link stability: no flap in X min
  • Sync offset: p99 < X ns
  • Cycle jitter: p99 < X
  • Error counters: CRC/sec < X
  • Thermal: junction < X °C at X load
Diagram — Engineering checklist flow (Design → Bring-up → Production): three columns of keyword boxes (Design: clocking, power, reset, host BW; Bring-up: link role, timestamp sanity, p99 jitter; Production: correlation, thresholds, logging, yield guard) with a field-diagnostics band (error rate, last snapshot, recovery timeline).
A checklist is only production-ready when every item has a measurable pass criterion and the system logs enough fields to correlate failures across stations and in the field.

Applications (Concrete Industrial Ethernet Slave/Master Use)

This section stays within the Slave/Master controller boundary: dual-port forwarding, hardware timestamps/synchronization, deterministic cycle updates, process-image exchange, and field diagnostics. It does not teach packet formats, object dictionaries, or configuration tools.

Scope guard (what belongs here)
  • Included: dual-port line/ring forwarding, deterministic timing budget, HW timestamps, process-image update model, diagnostics hooks.
  • Linked-out: Ethernet PHY/magnetics, TSN switch/bridge scheduling (802.1AS/Qbv/Qbu), ESD/surge/TVS, common-mode chokes.

Motion control (drives/servos/synchronous I/O)

  • Primary requirement: cycle-to-cycle determinism so control loops see a stable update cadence (no bursty delivery).
  • Controller focus: bounded per-hop forwarding delay, stable queueing under load, and a predictable “update window.”
  • Diagnostics focus: capture “last desync snapshot” (offset, port state, CRC rate) to correlate timing alarms with network events.

Modular remote I/O (distributed sensors/actuators)

  • Primary requirement: stable process-image coherency (inputs/outputs latched per cycle) as node count increases.
  • Controller focus: deterministic handling of background traffic (acyclic/diagnostic) without stealing time from cyclic I/O.
  • Field reality: error counters (CRC/sec, link flap) must be easy to log and threshold in production tests.

Robotics / factory cell (multi-axis sync + fast root-cause)

  • Primary requirement: tight sync domain (timestamp consistency + stable timebase discipline).
  • Controller focus: bounded re-lock time after disturbance and explicit port-state visibility for maintenance workflows.
  • Operational focus: deterministic recovery targets (in cycles/ms), not “eventually recovers.”

Gateway / bridge (role boundary only)

Slave/Master controllers are often used as deterministic “edges” around a control domain. This page only covers role-driven constraints: cycle timing, dual-port forwarding behavior, and diagnostic artifacts (not protocol translation details).

  • Master-side need: coherent scheduling/logging (timestamps + counters) and consistent cycle generation.
  • Slave-side need: stable process-image exchange and bounded forwarding latency in line/ring topologies.
  • Hard rule: keep deterministic traffic isolated from opportunistic traffic (acyclic/config/diagnostics).

Application sizing placeholders (fill-in for requirements)

  • Node count (in the same sync domain): X
  • Update rate / cycle time: X µs … X ms (add p99 jitter ≤ X)
  • Topology constraint: line / ring; max segment length X; recovery target X ms / X cycles
Diagram — Factory cell topology (Master → dual-port Slaves; line/ring; sync domain)
A master controller (host CPU plus diagnostics log: CRC/sec, link flap) in the control cabinet drives dual-port slave controllers for a drive, remote I/O, a sensor, and an actuator along the deterministic cyclic path, with an optional ring-return closure inside one sync domain.

IC Selection Notes (Key Specs & Decision Flow)

This section targets the selection stage: turn requirements into questions and hard thresholds. It avoids generic “feature lists” and keeps focus on deterministic dual-port behavior, hardware timestamps, process-image exchange, and diagnostics.

1) Protocol support & certification artifacts

  • Single-protocol vs multi-protocol: is one hardware platform required to cover multiple product lines?
  • Conformance evidence: provide artifacts (reports, versioned logs, test IDs) required by the target ecosystem.
  • Role boundary: IO-Device/Slave vs Controller/Master expectations (cycle control + diagnostics).

2) Timing: timestamp resolution, offset stability, re-lock

  • Timestamp capture consistency: ingress/egress points must be defined and repeatable.
  • Stability: specify p99 offset (with a stated measurement window) and worst-case re-lock after disturbance.
  • Timebase drift: quantify ppm across temperature and define calibration/discipline method (high level only).

3) Switching: cut-through, buffering, queues, congestion behavior

  • Cut-through vs store-and-forward: define when the device degrades to buffered mode (CRC, errors, policy).
  • Worst-case queueing delay under load: provide a bound, not “typical.”
  • Loss strategy under overload: drop vs throttle vs alarm; require thresholds.

4) Host interface: bandwidth, drivers, update determinism

  • Host BW must fit within the cycle update window (process image + mailbox overhead).
  • IRQ/DMA behavior must be predictable (coalescing options; fixed-phase update strategy).
  • Driver/SDK maturity matters more than raw peak BW for production maintainability.

5) Robustness (keep metrics here; details linked out)

  • Temperature grade: X (industrial/extended).
  • EMI/ESD/surge strategy: specify targets, then link to protection/TVS/magnetics pages for implementation details.
  • Field diagnostics: require error counters and “last failure snapshot” visibility.

6) Package & thermal (board-level reality)

  • Thermal resistance target: θJA ≤ X °C/W (or define allowed junction rise at X load).
  • Heat path plan: copper + vias + airflow; define acceptance at ambient X °C.
  • Production: thermal margin must be validated under worst-case background traffic.

Hard thresholds (copy/paste into PRD)

  • Timestamp resolution ≤ X ns (define capture point: ingress/egress)
  • p99 sync offset ≤ X ns (measurement window: X s, steady-state)
  • Per-hop added latency (p99) ≤ X ns (background load: X% line rate)
  • Worst-case queueing delay ≤ X µs (define overload scenario)
  • Host BW ≥ X MB/s (must complete within cycle update window)
  • Re-lock time ≤ X ms after disturbance (or ≤ X cycles)
  • Production diagnostics: CRC/sec ≤ X; link flap = 0 in X min; snapshot present
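Once the X placeholders are filled in, this list becomes machine-checkable. A sketch of a threshold gate (the metric keys and limits are illustrative):

```python
def check_thresholds(measured, limits):
    """Return the list of failing metric keys (empty list = pass).
    All metrics here are 'smaller is better' p99/worst-case values."""
    return [key for key, limit in limits.items()
            if key in measured and measured[key] > limit]

limits = {"sync_offset_ns_p99": 100,
          "hop_latency_ns_p99": 500,
          "relock_ms": 10}
measured = {"sync_offset_ns_p99": 85,
            "hop_latency_ns_p99": 620,
            "relock_ms": 4}
fails = check_thresholds(measured, limits)
# only the per-hop latency bound fails in this example
```

Keeping the limits as data (rather than hard-coded conditions) also gives production stations a single config artifact to hash and correlate.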

Concrete material numbers (reference examples; verify suffix/package/availability)

These examples are widely used Industrial Ethernet controller/ASIC references for deterministic slave/master designs. Some devices integrate PHY/switch functions; selection should still follow the hard-threshold checklist above.

  • EtherCAT slave controllers (ESC): LAN9252, LAN9253-I/R4X, LAN9255-I/ZMX018 (Microchip).
  • EtherCAT ASIC ESC: ET1100, ET1200-0000-NNNN (Beckhoff).
  • PROFINET IO ASIC (2-port switch class): 6ES7195-0BH02-0XA0 (Siemens ERTEC 200P Step 2).
  • Industrial Ethernet communication controller (accelerator-style): MC-10287BF1-HN4-M1-A (Renesas R-IN32M3-EC).
  • Multiprotocol Industrial Ethernet controller SoCs (fieldbus + Real-Time Ethernet class): NETX 90 (2270.100 / 2270.200 / 2270.300 / 2270.400), NETX 52 (2232.001) (Hilscher).
Quick mapping (how to use these references)
  • Pure ESC/ASIC style: deterministic slave front-end with strong HW timestamp/forwarding behavior.
  • Multiprotocol SoC style: one hardware platform + firmware stacks to cover multiple industrial networks.
  • PROFINET ASIC style: IO-device focused deterministic timing + 2-port behavior (integration varies by device).
Diagram — Selection flow (Yes/No) for deterministic Slave/Master controllers
Yes/No gates cover multi-protocol need, dual-port forwarding, cut-through under load, the sync-accuracy target, the per-hop latency bound, and host BW fitting the update window; a NO at any gate loops back to re-scoping requirements or topology, and the hard-threshold checklist is always required.


FAQs (Troubleshooting Close-Out)

Format Each question uses a fixed 4-line, data-structured answer: Likely cause / Quick check / Fix / Pass criteria (threshold placeholders).
Same DUT, different switch topology → sync offset changes a lot: first correlation check?

Likely cause: timestamp capture points or statistical windows are not consistent; topology changes shift queueing/forwarding phase.

Quick check: lock the measurement window (X s) and compare offset histogram (p50/p99) + cycle variance before/after topology; log queue watermark and per-hop latency proxy.

Fix: align capture points (ingress/egress), enforce fixed-phase update window, and bound worst-case queueing under background load.

Pass criteria: p99 sync offset < X ns over X s; per-hop added latency < X ns; queue watermark < X%.
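The p50/p99 comparison in the quick check is a few lines with the standard library; note how p99 exposes a tail the mean barely registers (sample values illustrative):

```python
import statistics

def offset_percentiles(samples_ns):
    """p50/p99 of a sync-offset log; acceptance criteria bind p99,
    not the mean, because tails are what break determinism."""
    cuts = statistics.quantiles(samples_ns, n=100, method="inclusive")
    return {"p50": cuts[49], "p99": cuts[98]}

# 95 clean samples plus a 5-sample disturbance tail: the mean only moves
# from 10 to 19.5 ns, but p99 jumps straight to the tail value
samples = [10.0] * 95 + [200.0] * 5
pct = offset_percentiles(samples)
```

Computing the same histogram before and after a topology change, over a locked measurement window, is what makes the correlation check reproducible.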

Cycle is stable on bench, jitter bursts in cabinet: impulsive noise vs IRQ/DMA contention?

Likely cause: host-side contention (DMA bursts / IRQ storms) or cabinet-induced impulsive disturbance causing phase slip and bursty recovery.

Quick check: log IRQ rate, DMA burst size, and bus watermark together with jitter burst timestamps; toggle mailbox/background traffic and see if burst rate changes.

Fix: cap DMA burst to X bytes, enable IRQ coalescing X µs, and enforce fixed-phase update; add cabinet logging for temperature/supply if bursts persist.

Pass criteria: p99 cycle jitter < X; burst duration < X cycles; IRQ rate < X/s; DMA copy completes within X µs window.

Dual-port line works, ring occasionally “stalls”: queueing vs recovery state machine?

Likely cause: ring recovery/isolate state entered intermittently, or queueing under load hits a watermark that throttles forwarding.

Quick check: capture recovery timeline (detect→isolate→recover) and queue watermarks at stall time; compare per-hop latency p99 in line vs ring.

Fix: tighten state transitions (avoid oscillation), raise diagnostics on watermark crossing, and bound background traffic so cyclic queue cannot starve.

Pass criteria: recovery+settle < X ms; worst-case queueing delay < X µs; queue watermark < X%.

HW timestamp looks fine, but the control loop oscillates: capture point mismatch?

Likely cause: timestamps are consistent locally but taken at different semantic points (ingress vs egress), creating phase bias that destabilizes loop timing.

Quick check: run A/B capture at ingress and egress and compare offset histograms; verify that host applies the same reference and update phase each cycle.

Fix: standardize capture point definition, align latch/commit phase, and constrain host update to the same cycle window.

Pass criteria: p99 sync offset < X ns; cycle phase drift < X; oscillation indicator (loop variance) returns within X cycles.

After warm reboot, offset “jumps one step”: timebase re-init vs latch phase?

Likely cause: timebase is re-initialized with a different correction state, or latch/commit phase starts at a different alignment after reboot.

Quick check: compare pre/post reboot: initial offset, re-lock time, and phase-window start; log timebase state, temperature, and supply at reboot.

Fix: define a deterministic re-init sequence: reset correction state, re-acquire lock, then enable latch/commit; avoid partial enable during settling.

Pass criteria: offset step ≤ X ns after warm reboot; re-lock time ≤ X ms; p99 offset ≤ X ns over X s.
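The fix above is essentially a three-step gate: reset correction state, wait for lock, and only then enable latch/commit. A sketch of that ordering (the lock probe and cycle counts are hypothetical):

```python
def reinit_sequence(lock_ok, max_wait_cycles=100):
    """Deterministic warm-reboot order: reset correction state first,
    re-acquire lock, then enable latch/commit, never during settling.
    `lock_ok(cycle)` is a hypothetical timebase lock probe."""
    state = {"correction": 0, "latch_enabled": False}   # explicit reset
    for cycle in range(max_wait_cycles):
        if lock_ok(cycle):
            state["latch_enabled"] = True   # enabled only after lock
            return state, cycle
    return state, None                      # flag: never settled

# lock is re-acquired at cycle 8; latch stays disabled until then
state, relock_cycle = reinit_sequence(lambda c: c >= 8)
```

Logging `relock_cycle` (or `None`) at every warm reboot gives the pre/post comparison the quick check asks for.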

CRC errors are near zero, yet slaves desync: what to log first?

Likely cause: determinism is broken by timing variance (offset tails or cycle jitter bursts), not by link-layer corruption.

Quick check: log offset histogram (p50/p99), cycle variance, DMA/IRQ rates, and queue watermark at desync; capture last desync snapshot fields.

Fix: bound host contention (DMA/IRQ) and queueing; ensure coherent snapshot window is not violated by mailbox/asynchronous work.

Pass criteria: p99 offset ≤ X ns; p99 cycle jitter ≤ X; deterministic window violation count = 0 over X min.

Host CPU load spikes → jitter rises: DMA burst sizing / IRQ coalescing first fix?

Likely cause: copy/interrupt work slips outside the fixed update phase, causing cycle-to-cycle phase drift and jitter tails.

Quick check: record DMA burst (bytes), IRQ rate (/s), and copy completion time relative to update window; identify whether bursts correlate with mailbox/background traffic.

Fix: cap DMA burst to X bytes and apply IRQ coalescing X µs; move non-cyclic work out of the cycle window.

Pass criteria: copy completes within X µs window; IRQ rate < X/s; p99 cycle jitter < X; CPU per-cycle budget < X%.

Two boards with same BOM show different latency: most common station-to-station mismatch?

Likely cause: station environment or configuration mismatch (firmware build, clock source, host copy settings, traffic load) rather than silicon.

Quick check: perform station correlation: same DUT across stations/cables/peer; log firmware version, temperature, supply, host BW, DMA burst, IRQ coalescing, and background load.

Fix: freeze configuration profiles, enforce identical traffic patterns, and require a shared logging schema for every station.

Pass criteria: per-hop added latency variation across stations ≤ X ns; p99 latency ≤ X ns under load X%; config hash match = true.

Ring recovery is slow: link-down detect threshold too conservative?

Likely cause: detection threshold or debounce time delays isolation, or recovery enters a long settling phase before enabling cyclic windows.

Quick check: log detect time, isolate time, recover time, and settle time separately; correlate with error-rate counters and port state transitions.

Fix: tighten detect thresholds where safe, avoid state oscillation, and enable cyclic windows only after timebase/phase is stable.

Pass criteria: detect ≤ X ms; recover+settle ≤ X ms; cycle jitter returns to p99 ≤ X within X cycles.

Mailbox traffic causes deterministic window violation: how to isolate async channel?

Likely cause: mailbox DMA/IRQ work competes with cyclic copy or queue resources and pushes commit outside the update window.

Quick check: run A/B with mailbox disabled vs enabled; measure copy completion time, IRQ rate, and cycle jitter tails; log mailbox burst size and frequency.

Fix: rate-limit mailbox, move it to non-cyclic phase windows, and hard-cap DMA burst/IRQ frequency for async paths.

Pass criteria: window violation count = 0 over X min; mailbox enabled does not change p99 jitter by > X; copy completes within X µs.

Temperature sweep passes throughput but fails sync: drift/compensation logging plan?

Likely cause: timebase drift or compensation behavior changes with temperature; throughput is not sensitive to the same error mode as synchronization.

Quick check: log temperature, supply, offset p99, drift proxy (ppm placeholder), and re-lock time at each step; compare offset histogram tails across temperatures.

Fix: define compensation policy (update interval, filtering) and require stable latch/commit enable only after settling; improve clock source if drift dominates.

Pass criteria: p99 offset ≤ X ns over X s at each temperature; drift across temp ≤ X ppm; re-lock ≤ X ms.

“Works at 1G but not at 100M” (or vice versa): which internal clocking assumption is wrong?

Likely cause: rate-dependent timebase, divider, or update-window assumption changes; host copy/interrupt cadence no longer aligns to the cycle phase.

Quick check: compare timestamp resolution, copy completion time, IRQ rate, and queue watermark at both rates; confirm latch/commit phase is invariant across rate modes.

Fix: re-derive update window and host scheduling per rate mode; ensure the timebase discipline and capture points are consistent across port rates.

Pass criteria: p99 offset ≤ X ns at both rates; copy completes within X µs window; watermark ≤ X%; deterministic window violations = 0.