Industrial Ethernet slave/master controllers exist to make cyclic process-data exchange deterministic: bounded forwarding + hardware timestamps + disciplined timebase, so multi-node systems hit tight latency/jitter budgets in real cabinets.
This page turns “determinism” into measurable, testable acceptance criteria (p99 offset/jitter, per-hop delay, recovery time) and maps them to the controller architecture, host DMA/IRQ strategy, and dual-port switching behaviors.
Definition & Scope: Industrial Ethernet Slave/Master (What it is / isn’t)
Intent
Define the canonical boundary: focus on determinism, dual-port forwarding, hardware timestamping, and synchronization inside the slave/master controller—exclude PHY, TSN switching/bridging, and protection parts.
What is an “industrial Ethernet slave/master controller” (engineering definition)
An industrial Ethernet slave/master controller is the real-time engine above MAC/PHY and below the host CPU/SoC that hardens deterministic behavior:
it implements dual-port forwarding (line/ring topologies), hardware timestamp capture and timebase discipline, process-data staging (process image / DPRAM),
and diagnostic hooks that keep control cycles predictable across load, topology changes, and production variance.
Dual-port switching: what it means in real systems (line/ring determinism)
Forwarding path control: predictable per-node added delay (fast path vs buffered path) and bounded queueing under load.
Topology visibility: port role/state, link-change timestamps, and ring/line recovery evidence for root-cause analysis.
Diagnostics-first behavior: counters and watermarks that correlate burst jitter and stalls to congestion or recovery state transitions.
Hardware timestamps: what they must guarantee (sync error budget, control-loop impact)
Capture-point consistency: ingress/egress timestamp locations must be consistent across ports and test setups to avoid “same DUT, different topology → different accuracy”.
Timebase discipline: time correction, drift control, and restart behavior must be observable (logs/counters), not assumed.
Budgetable residuals: quantization + capture point + queueing + discipline residuals should roll up into a measurable p99 offset/jitter target.
Control consequence: sync instability manifests as sampling-phase variance, which translates to output noise or oscillation risk in tight servo loops.
What this page delivers (engineering outputs)
Determinism accounting templates (latency/jitter decomposition with pass criteria placeholders).
Protection (ESD/Surge/EMI): IEC paths, TVS/CM chokes → Protection & Test
Measurable targets (placeholders + how to validate)
Cycle time target
Target: X µs / X ms
Quick validation: measure cycle period error on the master (p50/p99) from “cycle start” to “process-image commit”.
Pass criteria: p99 period error < X and no missed cycles within X minutes.
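The validation above can be sketched as a small analysis routine. The timestamps, the 1.5× missed-cycle rule, and all numeric values below are illustrative assumptions, not values from any specific device.

```python
# Sketch: compute p50/p99 cycle period error from master-side cycle-start
# timestamps (microseconds). Inputs and the missed-cycle rule are assumptions.

def cycle_period_error(cycle_starts_us, nominal_period_us):
    """Return (p50, p99) absolute period error and the count of missed cycles."""
    periods = [b - a for a, b in zip(cycle_starts_us, cycle_starts_us[1:])]
    errors = sorted(abs(p - nominal_period_us) for p in periods)
    p50 = errors[int(0.50 * (len(errors) - 1))]
    p99 = errors[int(0.99 * (len(errors) - 1))]
    # "Missed cycle" here: period longer than 1.5x nominal (assumed rule).
    missed = sum(1 for p in periods if p > 1.5 * nominal_period_us)
    return p50, p99, missed

# Example: 250 us nominal cycle with one slightly late sample.
p50, p99, missed = cycle_period_error([0, 250, 500, 752, 1000, 1250], 250)
```

Reporting p50 and p99 together, rather than an average, matches the statistics discipline used throughout this page.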
Allowed end-to-end jitter
Target: X ns / X µs
Definition discipline: specify the jitter type (cycle-to-cycle, timestamp-offset, or sampling-phase) and keep capture points identical.
Pass criteria: p99 jitter < X and burst duration < X cycles.
Per-node forwarding delay
Target: X ns (or X µs)
Quick validation: measure per-hop added delay under a stated payload load (e.g., X% line rate) and log queue watermarks.
Pass criteria: p99 per-hop delay < X with no queue watermark overflow.
Scope boundary map: this page focuses on controller determinism (dual-port forwarding, hardware timestamps, process image, diagnostics). PHY, TSN switching/bridging, magnetics, and protection are linked out.
System Placement: Where the Controller Sits in the Stack
Intent
Clarify responsibilities across PHY/MAC/controller/host/app, and show where determinism is won or lost—this is not a protocol software tutorial.
Stack view (who owns what)
PHY/MAC: electrical signaling and frame I/O (linked out for details).
Integrated controller (MAC/controller inside the host SoC)
Strength: high integration and cost efficiency.
Risk: IRQ/DMA contention, bus arbitration, and cache effects can convert host load spikes into cycle jitter.
FPGA-assisted
Strength: tightly controlled timing and fixed-phase data movement.
Risk: higher verification and maintainability cost; determinism is earned through test rigor, not assumptions.
Discrete controller/ESC + host
Strength: real-time paths are hardened in silicon, reducing host-induced jitter.
Risk: host interface bandwidth and driver behavior can become the bottleneck if sizing is incorrect.
Two data paths that must not be mixed up
Fast path (forwarding):
port RX → forwarding engine → port TX.
Goal: bounded and predictable latency/jitter, minimally impacted by host load.
Host path (process/config):
port RX → process image/DPRAM → DMA/IRQ → host CPU → application.
Goal: consistent process-data semantics, diagnostics, and maintainability.
Engineering rule: if real-time behavior depends on host-path timing, the design must prove bounded jitter under worst-case host load.
Where determinism is won or lost (quick checks)
Capture-point mismatch: verify ingress/egress timestamp points are identical across ports and testers.
Queueing variance: log queue watermarks and classify stalls as congestion vs recovery states.
DMA/IRQ contention: correlate cycle jitter bursts with host interrupts, DMA bursts, and memory bandwidth saturation.
Restart phase jumps: check timebase initialization and latch phase after warm reset (offset “step” behavior).
Practical sizing:
Required_BW ≈ (ProcessDataBytes_per_cycle × Cycles_per_second) × X_overhead
where X_overhead ≈ 1.2…1.5 to cover protocol/driver overhead and diagnostics.
Pass criteria: bandwidth margin ≥ X% under worst-case host load.
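The sizing rule above can be sketched directly. The function names, the 1.3× overhead default, and the 30% margin floor are illustrative assumptions standing in for the page's X placeholders.

```python
# Sketch of: Required_BW = (bytes/cycle x cycles/second) x overhead,
# with overhead in the 1.2...1.5 range for protocol/driver cost and diagnostics.

def required_host_bw_bytes_per_s(process_data_bytes, cycle_time_us, overhead=1.3):
    cycles_per_second = 1_000_000 / cycle_time_us
    return process_data_bytes * cycles_per_second * overhead

def bw_margin_ok(available_bw, required_bw, min_margin_pct=30):
    """Pass criterion: bandwidth margin >= min_margin_pct under worst-case load."""
    margin_pct = 100 * (available_bw - required_bw) / required_bw
    return margin_pct >= min_margin_pct

# Example: 512 bytes/cycle at a 250 us cycle with 1.3x overhead (~2.66 MB/s).
req = required_host_bw_bytes_per_s(512, 250, overhead=1.3)
```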
End-to-end payload rate requirement
Target: X% of line rate
Test context must be stated: node count N, process data size X bytes, cycle time X,
and background traffic (mailbox/acyclic).
Pass criteria: at X% line rate, p99 jitter < X and no determinism-window violations.
Not in scope for this section
Protocol software stacks, configuration tooling, and full TSN scheduling details are intentionally excluded here. This section is strictly about system placement and deterministic data paths.
Placement view: deterministic forwarding happens in the controller fast path; process data and configuration traverse the host path via the process image (DPRAM) and host interface.
Protocol Timing Models: Cycle Windows, Synchronization, and Bounded Jitter
Intent
Explain how deterministic behavior is created via cycle windows, process-data windows, synchronization, and bounded jitter—without message formats, object models, or configuration tooling.
One common timing abstraction (shared across protocols)
Cycle window: the repeat interval that defines control-loop phase and allowable latency budget.
Process-data window: the reserved portion where real-time I/O must be consistent and bounded.
Async/acyclic window: background traffic (diagnostics/parameters) that must not steal determinism margin.
Synchronization residual: remaining offset/jitter after time discipline; this is what shows up as sampling-phase variance.
Engineering rule: comparisons are only meaningful if the same cycle definition, capture points, and statistics (p50/p99 + burst duration) are used.
Common determinism breakers: queueing variance under load, recovery-state transitions, and inconsistent timestamp capture points across setups.
PROFINET: RT/IRT as a window/priority model (without TSN expansion)
RT vs IRT concept: determinism comes from how strictly a predictable process-data window is protected inside each cycle.
Window ownership: background traffic is allowed only if it cannot violate the reserved process-data phase.
Common determinism breakers: uncontrolled async traffic, station-to-station measurement mismatch, and recovery behavior that shifts cycle phase.
Scope guard: details of switching fabrics and TSN scheduling belong to the TSN Switch/Bridge page; this section stays at timing-model level.
Sercos III: cyclic communication + synchronization (cycle/jitter view)
Cyclic phase discipline: the value is in stable phase relationships between communication cycles and process-data update points.
Jitter translation: residual sync error becomes sampling-phase variation, which directly affects high-bandwidth motion control stability.
Common determinism breakers: phase jumps after warm resets, topology changes that alter hop delays, and hidden queueing variance.
Five selection questions (cycle, jitter, topology, redundancy, diagnostics)
1) Cycle
Ask: What is the target cycle time under N nodes and X% line-rate payload?
Why: cycle time sets the phase budget for forwarding, host updates, and control stability.
Evidence: p50/p99 cycle error + “missed cycle” counters over X minutes.
2) Jitter
Ask: Which jitter definition is used (cycle-to-cycle, timestamp offset, or sampling-phase), and where are capture points?
Why: mismatched definitions make “good results” non-transferable between labs and systems.
Evidence: offset histogram (p99) + burst duration in cycles.
3) Topology
Ask: What is the per-hop added delay (p99) for line/ring, and how does it change under load?
Why: hop-to-hop delay accumulation can silently consume cycle margin.
Evidence: per-hop latency logs + queue watermark traces.
4) Redundancy
Ask: What is the recovery time target after link flap/cable pull, and what state evidence is available?
Why: slow or unstable recovery manifests as burst jitter and control faults.
Evidence: link-state timestamp logs + recovery-state snapshots.
5) Diagnostics
Ask: Which counters and “last-fault snapshots” exist (CRC, queue watermarks, offset alarms, restart phase)?
Why: determinism claims are only actionable if failures can be correlated to measurable evidence.
Evidence: counter set + timestamped event log fields agreed for production.
Validation discipline: fix node count N, payload X bytes, background traffic rules, then report p50/p99 cycle error and burst duration.
Synchronization accuracy target
Target: X ns / X µs
Measurement must specify capture points and statistics: p99 offset, temperature drift, and re-lock time < X ms after disturbances.
Not in scope (kept out to prevent topic sprawl)
Message formats, object dictionaries, configuration tooling, and full TSN scheduling are intentionally excluded here. This section only covers timing-model concepts that determine cycle and jitter behavior.
Concept view: determinism is created by protecting the process-data window inside a repeating cycle and controlling synchronization residuals; protocol specifics are intentionally excluded.
Hardware Architecture Deep Dive: Dual-Port Controller Block Diagram
Intent
Provide a practical mental model of the controller internals that maps directly to determinism budgets, timestamp validation, process-data staging, and production diagnostics.
Ports & forwarding engine (fast path)
2× MAC ports: dual-port designs serve line/ring topologies and enable per-hop delay accounting.
Forwarding modes: cut-through (lowest latency) vs store-and-forward (buffered integrity) must be measurable and selectable by policy.
Queue visibility: watermarks and drop/throttle behavior must be observable to classify burst jitter as congestion vs recovery.
Time engine (clock domains + timestamps + sync timer)
Clock domain control: defines timestamp stability and restart behavior across temperature and brownout.
Timestamp unit: capture points (ingress/egress) must be explicit and consistent across ports.
Sync timer: anchors cycle boundaries and provides the reference for process-data commit phase.
Process image (DPRAM / shared memory / mailbox)
Process data staging: separates real-time semantics from host timing noise.
Mailbox/acyclic channel: isolates non-critical traffic and prevents determinism window violations.
Buffering discipline: size headroom and commit phase are part of the determinism budget (not optional details).
Host interface + IRQ/DMA (where jitter leaks in)
Interfaces (category view): SPI/QSPI, parallel bus, PCIe, and other high-throughput links (no vendor naming).
IRQ latency budget: p99 interrupt latency must be bounded under worst-case host load.
DMA burst shaping: burst size and coalescing settings must be chosen to avoid cycle-phase drift.
Error capture: “last fault snapshot” fields to correlate failures with timebase/queue/recovery states.
Watchdog + safe-state pins: deterministic fault containment for field reliability and compliance narratives.
Measurable placeholders (map to diagram blocks)
Forwarding latency
Cut-through: X ns | Store-and-forward: X µs
Measure per-hop added latency (p99) under X% line rate and log queue watermarks to separate congestion from recovery behavior.
Timestamp resolution
Target: X ns
Verify ingress/egress capture points and report p99 offset; include warm reset and temperature sweeps to expose phase jumps.
Process image size
Target: X KB
Size from bytes/cycle × cycles/sec × buffering; enforce headroom ≥ X% to avoid post-deployment expansion failures.
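One plausible reading of the sizing rule above is bytes-per-cycle times buffer count (A/B double buffering) plus headroom; the buffer count, 30% headroom, and all values below are illustrative assumptions.

```python
# Sketch: process image sizing with double-buffering and headroom.
# buffer_count=2 models an A/B image pair; 30% headroom is an assumed floor.

def process_image_bytes(bytes_per_cycle, buffer_count=2, headroom_pct=30):
    raw = bytes_per_cycle * buffer_count
    return int(raw * (1 + headroom_pct / 100))

# Example: 1024 bytes/cycle, A/B buffers, 30% headroom -> 2662 bytes.
size = process_image_bytes(1024, buffer_count=2, headroom_pct=30)
```

Enforcing the headroom at sizing time is what prevents the post-deployment expansion failures the card warns about.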
Interrupt latency budget
Target: X µs
Measure p99 IRQ latency under worst-case host load; correlate jitter bursts with IRQ/DMA activity to validate bounded behavior.
Macro block model: deterministic forwarding runs in the forwarding engine (cut-through or buffered), time engine stabilizes cycle phase, process image isolates host timing noise, and diagnostics provide production-grade evidence.
Determinism Budget: Latency, Jitter, and Forwarding Delay Accounting
Intent
Turn “determinism” into a measurable budget with explicit delay components, jitter categories, and a repeatable measurement order—using card lists (not tables).
Budget anatomy (end-to-end delay accounting)
End-to-end determinism margin is consumed by fixed latency (cycle margin) and
variable latency (jitter/bursts). A practical budget splits the path into
measurable segments with consistent capture points.
End-to-end segments (listed, not expanded)
PHY/MAC (listed only): link layer contribution and capture-point definition.
Controller forwarding: fast path (cut-through / buffered) with per-hop accounting.
Host copy: DMA/interrupt and memory traffic shaping.
Scope guard: PHY implementation details belong to the Ethernet PHY page; this section only keeps them as named budget items.
Latency decomposition (what to measure per segment)
Controller forwarding
Measure: per-hop added delay using consistent RX/TX capture points (p50/p99 + burst duration).
Signature: p99 rises with load; bursts correlate with queue watermarks.
Host copy (DMA/IRQ)
Measure: IRQ latency distribution (p50/p99) under worst-case CPU/memory load and DMA burst settings.
Signature: cycle-to-cycle jitter bursts align with IRQ/DMA activity peaks.
Application loop
Measure: cycle boundary → compute → commit timing (p99) with the same cycle definition used for network results.
Signature: jitter increases with scheduler contention even if forwarding remains stable.
Engineering rule: averages are insufficient; determinism validation must use p99 and burst duration under defined load and topology.
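The engineering rule above can be made concrete with two small metrics: a p99 percentile and the longest run of consecutive threshold exceedances (burst duration). Sample values and the threshold are illustrative.

```python
# Sketch: p99 plus burst duration (consecutive samples above a threshold).
# Units follow whatever the capture uses (ns, us, cycles).

def p99(samples):
    s = sorted(samples)
    return s[int(0.99 * (len(s) - 1))]

def max_burst_duration(samples, threshold):
    """Longest run of consecutive samples exceeding the threshold, in samples."""
    longest = run = 0
    for v in samples:
        run = run + 1 if v > threshold else 0
        longest = max(longest, run)
    return longest

lat = [10, 11, 10, 30, 31, 32, 10, 11]
tail, burst = p99(lat), max_burst_duration(lat, threshold=20)
```

An average of `lat` looks harmless, but the three-sample burst above threshold is exactly the tail behavior that breaks control loops; this is why acceptance references p99 and burst duration.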
Jitter taxonomy (root-cause categories that must be separable)
Clock / timebase
Drift and phase jumps translate into offset instability; validate with temperature sweeps and warm-reset comparisons.
Queue contention
Load-dependent queueing raises per-hop p99; must be correlated with watermark/counter evidence.
DMA / IRQ
Host-side bursts appear as cycle-phase variance; classify using IRQ latency p99 and DMA burst traces.
Cache / memory contention
Shared-memory pressure can dominate host copy and application loop timing even when forwarding looks stable.
Ring re-convergence
Recovery behavior produces burst jitter and re-sync settling; validate with disturbance events and settle-time metrics.
Measurement order (repeatable escalation)
Step 1 — Single node baseline
Establish host/timebase baseline without forwarding; record p99 cycle error, p99 IRQ latency, and p99 offset stability.
Step 2 — Two nodes (one hop)
Measure per-hop added delay p50/p99 with defined payload and load; capture queue watermark correlation.
Step 3 — Ring (N nodes)
Validate delay accumulation and recovery behavior; measure re-sync settling in cycles and milliseconds.
Step 4 — Disturbance injection
Inject controlled background load, link flaps, temperature ramps, and host CPU peaks; validate that p99 and burst durations remain bounded.
Repeatability requirement: lock node count (N), payload (X bytes), background rules, and statistics window length before comparing runs.
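The repeatability requirement can be enforced mechanically by freezing the run context before any measurement. The field names and example values are illustrative placeholders, not a standard schema.

```python
# Sketch: freeze the comparison context (N, payload, background rules,
# statistics window) so two runs are only comparable with identical contexts.
from dataclasses import dataclass

@dataclass(frozen=True)
class RunContext:
    node_count: int          # N
    payload_bytes: int       # X bytes of process data per cycle
    background_rule: str     # e.g. "mailbox <= 5% line rate" (assumed rule)
    window_cycles: int       # statistics window length

ctx = RunContext(node_count=8, payload_bytes=512,
                 background_rule="mailbox <= 5% line rate",
                 window_cycles=100_000)
# frozen=True makes any mid-campaign mutation raise an error, so a report
# can bind its p50/p99 numbers to one immutable context.
```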
Measurable placeholders (acceptance-ready)
Per-hop added delay
p50 / p99: X / X
Define capture points (RX ingress → TX egress) and report burst duration when exceeding the p99 threshold.
Cycle-to-cycle jitter
p99: X
Use the same cycle definition across labs; report p99 and burst duration over a fixed observation window.
Re-sync settle time after disturbance
X cycles / X ms
Define “settled” as returning inside p99 offset/jitter limits for a continuous window of X cycles.
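The "settled" definition above (inside limits for a continuous window of X cycles) can be sketched as a scan over an offset trace; the trace, limit, and window values are illustrative.

```python
# Sketch: find the first cycle after which |offset| stays within `limit`
# for `window` consecutive cycles; returns None if never settled.

def settle_cycle(offsets, limit, window):
    run = 0
    for i, off in enumerate(offsets):
        run = run + 1 if abs(off) <= limit else 0
        if run >= window:
            return i - window + 1   # first cycle of the settled window
    return None

# Example: disturbance decays; settled from cycle 3 onward (limit 10, window 4).
trace = [120, 80, 40, 9, 8, 7, 6, 5]
settled_at = settle_cycle(trace, limit=10, window=4)
```

Requiring a continuous in-limit window (rather than the first in-limit sample) is what distinguishes genuine settling from a trace that briefly dips inside the limit.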
Not in scope (kept out to prevent cross-page overlap)
Detailed PHY behavior, protection components, and TSN scheduling mechanisms are intentionally excluded. This budget focuses on measurable end-to-end delay and jitter accounting for controller-based forwarding systems.
Budget view: treat each segment as a measurable block with consistent capture points; acceptance should reference p99 and burst duration, not averages.
Hardware Timestamping & Synchronization: What “HW TS” Must Guarantee
Timebase discipline (correction-loop behavior)
Update interval: controls how quickly residual offset can be corrected under drift and disturbances.
Filtering: must be selected to avoid slow settling while keeping noise bounded.
Burst errors: disturbances should trigger bounded re-lock behavior with measurable settle time.
Not in scope (link out)
802.1AS and TSN scheduling (Qbv/Qbu) are intentionally excluded here. They belong to the TSN Switch/Bridge topic page; this section focuses on HW timestamp guarantees that remain valid regardless of TSN policy.
Measurable placeholders (HW TS acceptance-ready)
TS capture granularity
X ns
Capture-point definition must accompany this number; otherwise granularity cannot be compared across devices.
Offset stability (p99)
X ns
Report p99 offset and burst duration under defined load/topology; averages are insufficient for determinism claims.
Drift across temperature
X ppm
Validate drift with temperature sweeps and document the re-lock behavior after thermal steps.
Re-lock time
X ms
Define “locked” as returning inside p99 offset limits for a continuous window of X cycles.
Capture-point discipline: define ingress/egress timestamp boundaries consistently across ports, then validate timebase correction behavior via p99 offset stability, drift across temperature, and bounded re-lock time after disturbances.
Dual-Port Switching Behaviors: Cut-through vs Store-and-forward, Queues, and Priorities
Intent
Describe dual-port switching as chip-level behavior: forwarding mode selection, queue/priority rules, and congestion/flow-control checkpoints that keep determinism bounded under load.
Forwarding modes (chip-level semantics)
Cut-through
Starts forwarding before full-frame validation completes. Minimizes forwarding latency and can reduce per-hop delay variance when queues remain bounded.
Best when: low error rate, bounded congestion, strict cycle budgets.
Risk surface: error/CRC knowledge arrives late; policy must define how errors are handled.
Store-and-forward
Buffers the full frame before forwarding. Enables integrity gating and explicit buffering rules, but introduces buffer-dependent latency and queueing tails under load.
Best when: integrity gating is required or error containment is prioritized.
Error handling (explicit policy, not assumption)
Full-frame validation completes late in the receive path, so deterministic behavior under errors depends on the controller’s mode and policy.
When error counters rise, a robust design must prove whether forwarding stays bounded or transitions to a defined containment mode.
Required checkpoints
Mode transition evidence: does error growth correlate with cut-through → buffered switching (or other containment)?
Determinism protection: does p99 queueing delay remain bounded under error bursts?
Loss reporting: is loss behavior explicit (drop/throttle/alarm) with thresholds?
Queueing model (principles that preserve determinism)
Priority isolation (minimum requirement)
Strict priority: protects cycle traffic, but needs explicit controls to avoid starvation tails.
Reserved resources: dedicated queue/buffer share for cyclic process data reduces p99 inflation under background load.
Acyclic containment: diagnostics/config traffic must be isolated or rate-limited to prevent tail bursts.
Determinism metric (do not use averages)
Queueing must be validated using p99 delay and burst duration under defined load/topology.
Average delay can look acceptable while burst tails break control-loop stability.
Congestion behavior (explicit and testable)
Define the outcome under worst-case load
Drop: which class drops first, and how is it reported?
Throttle: which ingress is rate-limited, and does it preserve cycle traffic bounds?
Alarm: which watermark/error thresholds generate alarms for diagnosis and recovery sequencing?
Required pass evidence
Bounded p99 queueing delay: remains below X µs with background load.
Burst control: exceedance bursts are bounded to X cycles.
Recovery behavior: after congestion relief, settle time returns inside limits within X ms.
Backpressure / flow control (determinism checkpoints)
Flow-control events can introduce hidden tail latency and head-of-line blocking. Determinism validation must confirm whether cycle traffic remains bounded when backpressure triggers.
Quick checks
Correlation: do jitter bursts align with flow-control counters or pause/backpressure events?
Isolation: is cyclic traffic placed in an isolated class/queue that remains bounded during flow-control?
Containment: does background traffic throttle first, with explicit thresholds and alarms?
Data placeholders (acceptance-ready)
Internal buffering
X KB
Specify whether the number is total, per-port, or per-queue; validate with watermark p99 under defined background load.
Worst-case queueing delay under load
X µs
Report p99 + burst duration (X cycles) at a fixed load model (payload, background rate, and topology).
Loss behavior
drop / throttle / alarm threshold = X
Define the trigger (watermark or errors/sec) and the required determinism behavior during and after the event.
Not in scope (kept out to prevent cross-page overlap)
TSN scheduling details and full protocol specifications are intentionally excluded. This section focuses on controller forwarding behavior, queueing principles, and explicit congestion/flow-control checkpoints.
Controller-level view: forwarding mode is policy-driven (errors, load, priority). Determinism requires bounded queueing delay and explicit loss/flow-control behavior under worst-case load.
Topology & Redundancy Hooks: Line/Ring, Bypass, and Fast Recovery (Controller View)
Intent
Provide controller-focused topology and redundancy hooks that keep control loops stable: line/ring delay implications, bypass/relay options, and measurable detect→isolate→recover sequences without expanding full redundancy protocols.
Topology constraints (line vs ring)
Line
Determinism is dominated by per-hop delay accumulation (p99 + burst).
Forwarding policies must keep queueing tails bounded under background traffic.
Ring
Determinism must be validated across normal, fault, and recovery states.
Control-loop stability depends on bounded recovery time and bounded re-sync settle time after disturbances.
Redundancy hooks (controller view)
Port isolate: isolate a failing segment to prevent error propagation and uncontrolled retries.
Bypass/relay (if supported): define how traffic passes through during power loss or fault states.
Fast detection signals: link-down and error-rate spikes must map to time-bounded events.
Diagnostic snapshots: capture counters/watermarks and timestamps at fault entry for correlation.
Bypass / relay behavior (what must be defined)
Trigger: which fault or power condition engages bypass, and what evidence confirms engagement?
Delay impact: does bypass introduce additional variable delay that affects cycle budgets?
Failure modes: define what happens if bypass fails (alarm and safe-state behavior).
Fault sequence (detect → isolate → recover → settle)
Detect: link-down or error-rate events generate time-bounded signals with timestamps.
Isolate: port disable or segment isolation to stabilize the topology.
Recover: restore a valid forwarding path with explicit mode and queue policy.
Settle: offset/jitter and forwarding delay return inside limits within X cycles.
Not in scope (protocol specifics)
Full MRP/HSR/PRP mechanisms are intentionally excluded. This section only lists controller-level hooks and the measurable questions required to bound recovery time and determinism impact.
Data placeholders (topology & recovery acceptance-ready)
Link-down detection time
X ms
Specify p99 detection time under defined load and temperature; verify event generation and log correlation fields.
Recovery time target
X ms / X cycles
Separate “path restored” time from “settled inside p99 determinism limits” time to avoid false confidence.
Alarm thresholds
errors/sec = X
Define false-positive tolerance and required actions (alarm vs isolate vs bypass) with measurable outcomes.
Ring validation must include normal, fault, and recovery states. Acceptance should bind detection time, isolate action timing, path-restore time, and settle time back inside p99 determinism limits.
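The acceptance binding above can be sketched as a report derived from one timestamped event log. The event names and millisecond values are illustrative, not a defined log format.

```python
# Sketch: derive the four recovery timings from fault-entry onward, keeping
# "path restored" and "settled inside p99 limits" as separate numbers.

def recovery_report(events_ms):
    """events_ms: dict with 'fault', 'detected', 'isolated', 'path_restored',
    'settled' timestamps in milliseconds (assumed log fields)."""
    t0 = events_ms["fault"]
    return {
        "detection_ms": events_ms["detected"] - t0,
        "isolate_ms": events_ms["isolated"] - t0,
        "path_restore_ms": events_ms["path_restored"] - t0,
        "settle_ms": events_ms["settled"] - t0,  # back inside p99 limits
    }

log = {"fault": 1000, "detected": 1002, "isolated": 1005,
       "path_restored": 1020, "settled": 1060}
report = recovery_report(log)
```

In this example the path is restored at 20 ms but determinism is only re-established at 60 ms; conflating the two is the false confidence the placeholder card warns about.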
Process Data Exchange Model: DPRAM/Process Image, Mailbox, DMA/IRQ (Engineering Reality)
Intent
Turn “process data” into a designable system: coherent snapshots, bounded CPU/DMA/IRQ behavior, and isolation rules that prevent host-side jitter from corrupting deterministic cycles.
Process image (I/O regions) and coherency
The process image is a memory-mapped view of cyclic inputs and outputs. Determinism depends on
coherent snapshots: the host must not read a mixed-state image that spans different cycle phases.
Minimum coherency goals
Single-cycle snapshot: inputs read correspond to one cycle boundary.
Atomic commit: outputs written become active at a defined latch/commit point.
Bounded visibility delay: host read/write windows do not drift across cycles.
Double-buffering (A/B) with a latch/swap point
Maintain two images (A/B): while the controller updates one image, the host reads the other.
A latch/swap point flips the active pointers at a defined phase, creating a clean “cycle snapshot”.
Host rule: read/write only inside the allocated window, not continuously.
Optional integrity check (version counter)
A lightweight “version” field can detect mid-update reads: read version → read payload → read version again.
If versions differ, the snapshot is not coherent and must be retried within the window.
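The version-counter check above can be sketched as a seqlock-style read. A plain dict stands in for DPRAM, and the even-version convention (odd = writer mid-update) is an assumed detail for illustration.

```python
# Sketch: read version -> read payload -> read version again; retry if a
# mid-update write was observed. `image` models a DPRAM-mapped region.

def read_snapshot(image, max_retries=3):
    for _ in range(max_retries):
        v1 = image["version"]
        payload = dict(image["payload"])   # copy inside the read window
        v2 = image["version"]
        # Assumed convention: even version = stable, odd = writer mid-update.
        if v1 == v2 and v1 % 2 == 0:
            return payload
    return None  # not coherent within the window; caller must handle

shared = {"version": 4, "payload": {"ai0": 1023, "di": 0b1010}}
snap = read_snapshot(shared)
```

The retry stays inside the allocated host window, consistent with the rule that the host reads only within its window rather than continuously.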
Mailbox / acyclic channel (use and isolation)
Mailbox traffic carries non-cyclic functions (configuration, diagnostics, and service operations). It must be
contained so bursts cannot inflate cyclic p99 delay or disturb host update phase.
Engineering rules
Isolation: mailbox work must not share the critical cyclic queue/IRQ path without limits.
Rate limits: cap mailbox DMA/IRQ frequency to avoid jitter tails.
Correlation: prove mailbox bursts do not change cyclic p99 jitter and burst duration.
DMA vs IRQ (how host-side jitter leaks into cycles)
DMA (bandwidth-friendly, but burst-sensitive)
Benefit: reduces per-cycle CPU work, enabling fixed CPU budgets.
Risk: large bursts can contend for memory/bus bandwidth and shift update phase.
Control: bound burst size and schedule transfers in fixed-phase windows.
IRQ (responsive, but preemption-sensitive)
Benefit: fast event notification for latch/commit coordination.
Risk: interrupt storms and uncontrolled preemption inflate p99 cycle jitter.
Control: use IRQ coalescing and prioritize cyclic handlers above acyclic work.
Practical anti-jitter hooks
IRQ coalescing: merge events into a bounded window so update phase stays stable.
Fixed-phase update: read/commit inside a defined cycle window, not continuously.
Snapshot integrity: align DMA-complete and latch events to avoid mixed-state reads.
Host interface impact on jitter (principles only)
Host interface choice changes transaction granularity, interrupt rate, DMA burst behavior, and bus contention.
Determinism requires the interface to support bounded copy time and stable update phase.
Selection questions
Does the interface sustain the required copy volume within the cycle update window?
Can DMA burst size be bounded to avoid memory/bus stalls?
Can IRQ frequency be bounded via coalescing or event grouping?
Can cyclic and mailbox paths be separated or rate-limited?
Data placeholders (engineering acceptance-ready)
Update period
X µs
Host-visible coherent snapshot cadence (aligned to latch/commit phase).
CPU budget per cycle
X%
Budget includes cyclic ISR/DMA completion and commit handling; evaluate p99, not average.
DMA burst size
X bytes
Bound to avoid bus stalls; verify phase stability under background traffic and mailbox bursts.
IRQ coalescing
X µs
Coalescing window must reduce IRQ rate without pushing cyclic commit outside the update window.
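The constraint above reduces to a worst-case sum check: an event may wait a full coalescing window, then the IRQ latency, then commit handling, and the total must still fit in the update window. All numbers below are illustrative assumptions.

```python
# Sketch: verify a coalescing window still leaves the cyclic commit inside
# the update window under worst-case stacking of delays.

def coalescing_ok(coalesce_us, p99_irq_latency_us, commit_us, update_window_us):
    worst_case = coalesce_us + p99_irq_latency_us + commit_us
    return worst_case <= update_window_us

# Example: 20 us coalescing + 15 us p99 IRQ latency + 30 us commit
# fits a 100 us update window; a 60 us coalescing window would not.
ok = coalescing_ok(coalesce_us=20, p99_irq_latency_us=15,
                   commit_us=30, update_window_us=100)
```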
Not in scope (kept out to prevent cross-page overlap)
Object dictionaries, detailed message formats, and configuration tooling are intentionally excluded. This section focuses on coherent process images and host-side DMA/IRQ behavior that affects determinism.
Double-buffer with a latch/swap point prevents mixed-state reads. Determinism requires bounded DMA bursts and bounded IRQ behavior aligned to a fixed update phase.
Engineering Checklist: Design → Bring-up → Production (Must-have hooks)
Intent
A production-minded checklist that defines must-have hooks, logging fields, and pass criteria placeholders, enabling consistent bring-up, station correlation, and field diagnostics.
Design checklist (before layout freeze)
Clocking: define reference source, domain boundaries, and cycle phase anchor.
Power: bound ripple/transients; verify ground return paths for noisy cabinets.
Reset & straps: deterministic boot state for port roles and safe-state behavior.
A checklist is only production-ready when every item has a measurable pass criterion and the system logs enough fields to correlate failures across stations and in the field.
This section stays within the Slave/Master controller boundary: dual-port forwarding, hardware timestamps/synchronization,
deterministic cycle updates, process-image exchange, and field diagnostics. It does not teach packet formats, object dictionaries, or configuration tools.
Controller focus: bounded re-lock time after disturbance and explicit port-state visibility for maintenance workflows.
Operational focus: deterministic recovery targets (in cycles/ms), not “eventually recovers.”
Gateway / bridge (role boundary only)
Slave/Master controllers are often used as deterministic “edges” around a control domain. This page only covers role-driven constraints:
cycle timing, dual-port forwarding behavior, and diagnostic artifacts (not protocol translation details).
Selection Guide: Requirements → Questions → Hard Thresholds
This section targets the selection stage: turn requirements into questions and hard thresholds.
It avoids generic “feature lists” and keeps focus on deterministic dual-port behavior, hardware timestamps, process-image exchange, and diagnostics.
1) Protocol support & certification artifacts
Single-protocol vs multi-protocol: is one hardware platform required to cover multiple product lines?
Conformance evidence: provide artifacts (reports, versioned logs, test IDs) required by the target ecosystem.
Role boundary: IO-Device/Slave vs Controller/Master expectations (cycle control + diagnostics).
EMI/ESD/surge strategy: specify targets, then link to protection/TVS/magnetics pages for implementation details.
Field diagnostics: require error counters and “last failure snapshot” visibility.
6) Package & thermal (board-level reality)
Thermal resistance target: θJA ≤ X °C/W (or define allowed junction rise at X load).
Heat path plan: copper + vias + airflow; define acceptance at ambient X °C.
Production: thermal margin must be validated under worst-case background traffic.
Hard thresholds (copy/paste into PRD)
Timestamp resolution ≤ X ns (define capture point: ingress/egress)
p99 sync offset ≤ X ns (measurement window: X s, steady-state)
Per-hop added latency (p99) ≤ X ns (background load: X% line rate)
Worst-case queueing delay ≤ X µs (define overload scenario)
Host BW ≥ X MB/s (must complete within cycle update window)
Re-lock time ≤ X ms after disturbance (or ≤ X cycles)
Production diagnostics: CRC/sec ≤ X; link flap = 0 in X min; snapshot present
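The PRD list above can be encoded as data so acceptance becomes a mechanical check. The keys, limits, and measured values below are illustrative stand-ins for the X placeholders, not recommended numbers.

```python
# Sketch: PRD hard thresholds as data, checked against measured results.
# All limits are placeholder examples mirroring the list above.

THRESHOLDS = {
    "ts_resolution_ns":        ("<=", 8),    # define capture point separately
    "p99_sync_offset_ns":      ("<=", 100),  # steady-state window
    "p99_per_hop_latency_ns":  ("<=", 700),  # at stated background load
    "relock_time_ms":          ("<=", 50),   # after disturbance
}

def check_prd(measured):
    """Return the list of threshold keys that fail."""
    fails = []
    for key, (op, limit) in THRESHOLDS.items():
        value = measured[key]
        ok = value <= limit if op == "<=" else value >= limit
        if not ok:
            fails.append(key)
    return fails

result = check_prd({"ts_resolution_ns": 8, "p99_sync_offset_ns": 120,
                    "p99_per_hop_latency_ns": 650, "relock_time_ms": 40})
```

Here only the sync-offset measurement fails, which points the debug effort at timebase discipline rather than forwarding.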
Concrete material numbers (reference examples; verify suffix/package/availability)
These examples are widely used Industrial Ethernet controller/ASIC references for deterministic slave/master designs.
Some devices integrate PHY/switch functions; selection should still follow the hard-threshold checklist above.