Industrial TSN Edge Gateway & PTP Time-Sync to Cloud
An edge gateway must protect determinism at the field boundary: isolate real-time TSN traffic from cloud/IT variability while preserving time integrity (PTP/holdover) and predictable latency end-to-end. This page provides a practical architecture, budgeting mindset, and verification hooks to design, bring up, and service that boundary without turning the gateway into a single point of failure.
What This Gateway Page Covers (Scope + Reader Outcomes)
This page focuses on the edge gateway as a boundary system between a deterministic field network and an unpredictable IT/cloud uplink. Content stays on the gateway viewpoint: switching + timing + segmentation + observability, plus practical budgets and acceptance checks.
In scope:
- TSN switching at the boundary: classification → queues → shaping/window mapping.
- PTP gateway strategy: GM/BC/TC placement, holdover intent, asymmetry control.
- Segmentation: VLAN/QoS remap, multicast boundary, storm containment at the edge.
- Determinism budgeting: gateway-induced delay/jitter terms and acceptance targets.
- Observability: counters + black-box snapshots for field forensics and remote ops.
Out of scope (handoffs):
- TSN standards deep dive (Qbv/Qci/Qav/Qbu…): see TSN Switch / Bridge.
- PTP theory and BMCA details: see PTP Hardware Timestamping.
- Security protocol tutorials (MACsec/DTLS/TLS): see Security Offload.
- Stack certification mechanics (PROFINET/EtherCAT/CIP): see Industrial Ethernet Stacks.
What you will take away:
- A gateway reference architecture blueprint (data/control/time planes).
- A latency & determinism budgeting method (what to measure and accept).
- A bring-up and acceptance checklist (design → validation → production).
- A field troubleshooting FAQ (fixed-format, pass/fail oriented).
Use Cases & Topology Placement (Where the Gateway Sits)
Placement determines whether the gateway behaves as a determinism guardian or a latency amplifier. This section anchors the gateway in line/star/ring deployments, defines the boundary between field and IT/cloud domains, and separates real-time control from telemetry/diagnostics so uplink behavior cannot break the field cycle.
- Line: place the gateway at the aggregation end; protect the field cycle from uplink bursts via strict queue isolation.
- Star: treat the gateway as a policy boundary; enforce VLAN/QoS mapping and multicast containment at the edge.
- Ring: position the gateway at the OT/IT break; keep field redundancy and uplink redundancy independent.
- Dual uplink: use two independent egress paths for cloud/IT traffic; avoid coupling failover events into the real-time queues.
Line deployment:
- Placement: near motion control / remote I/O aggregation.
- Objective: deterministic cycle stays local; cloud uplink is a tap, not the main path.
- Risk: uplink bursts back-pressure shared buffers and inflate queueing jitter.
- Gateway hooks: bypass vs tap separation, strict queue partition, bounded buffers, stable time boundary.
Star deployment:
- Placement: at line-level switching and diagnostics aggregation.
- Objective: multi-stream concurrency with deterministic windows and strong observability.
- Risk: broadcast/multicast storms and diagnostics bursts steal time from gated queues.
- Gateway hooks: VLAN/multicast boundary, storm guards, per-queue watermarks, black-box snapshots.
Ring deployment:
- Placement: OT/IT boundary (policy and time boundary in one box).
- Objective: protect the field domain from IT/cloud variability while enabling remote operations.
- Risk: unstable uplink and time-source drift cause re-sync events and service interruptions.
- Gateway hooks: holdover intent, safe failover modes, config versioning, controlled recovery throttles.
Reference Architecture (Data Plane / Control Plane / Time Plane)
A gateway becomes easier to design, validate, and operate when responsibilities are separated into three planes. Later chapters will refer to these planes explicitly so decisions stay traceable and cross-domain tangents are avoided.
Data plane:
- Function: L2/L3 forwarding, queueing, shaping/window mapping, and bounded buffering.
- Key knobs: class-to-queue map, shaping limits, gate/window tables, per-queue watermarks.
- Observability: per-port drops/CRC, per-queue depth, gate-miss, congestion/backpressure counters.
- Failure signature: “works at low load” but jitter spikes at peak; selective drops on one class.
- Validation hook: loopback/PRBS + traffic profiles; verify bounded queue depth under worst-case bursts.
- Rule: keep real-time on a protected path; treat cloud/IT as an independent tap path.
Control plane:
- Function: configuration, policy boundary (VLAN/QoS/ACL), logging, and remote operations.
- Key knobs: versioned config bundles, staged rollouts, recovery throttles, safe change windows.
- Observability: config version, policy hits, event timeline, link/state transitions, audit trail.
- Failure signature: after maintenance, identity changes; policy updates create flaps or bursts.
- Validation hook: change simulation + rollback test; verify real-time budgets unchanged after updates.
- Rule: remote changes must never share critical time budgets with gated queues.
Time plane:
- Function: time inputs (PTP/SyncE/local), timestamp path, and holdover intent at the boundary.
- Key knobs: role choice (GM/BC/TC), sync source priority, holdover thresholds, re-sync pacing.
- Observability: lock state, offset trend, step events, re-sync counters, holdover duration.
- Failure signature: “PTP seems up” but application timestamps drift; periodic re-sync causes spikes.
- Validation hook: force source loss → verify holdover stability; measure asymmetry sensitivity with swap tests.
- Rule: timestamp tap points must be explicit, verifiable, and unchanged across firmware updates.
Determinism & Latency Budget (End-to-End, Not Just a Switch)
Determinism is not a single feature; it is the result of an end-to-end system budget. Every module between field ingress and uplink egress contributes delay, jitter, and buffer risk. A correct budget separates the protected real-time path from the tap/telemetry path so cloud bursts cannot pollute the field cycle.
Budgeting principles:
- Path separation: real-time budgets must not share buffer and retry behavior with telemetry/cloud traffic.
- Tail-first thinking: worst-case jitter and queue tails matter more than average throughput numbers.
- Observability-first: each budget item must map to a counter, timestamp, or measurable state.
- Change impact: any policy/firmware update must re-validate the same budget checkpoints.
Ingress classification:
- Delay: classification + filtering cost.
- Jitter: bursty arrivals and micro-congestion.
- Buffer risk: uncontrolled buffering before queue assignment.
Time-aware gating:
- Delay: intentional scheduling offsets.
- Jitter: window misalignment, gate misses.
- Buffer risk: queue tails when windows are too tight.
Switch forwarding:
- Delay: lookup + internal pipeline.
- Jitter: contention across ports/classes.
- Buffer risk: head-of-line blocking.
Edge compute (tap path):
- Delay: ingest + processing time.
- Jitter: scheduling and resource contention.
- Buffer risk: backpressure into the gateway if not isolated.
Uplink egress:
- Delay: egress scheduling + shaping.
- Jitter: uplink congestion and retries.
- Buffer risk: burst absorption vs tail latency inflation.
Acceptance targets:
- Real-time jitter: cycle-to-cycle variation ≤ X under profile Y for duration Z.
- Queue tails: critical queue watermark never exceeds X; tail latency remains bounded.
- Gate integrity: gate-miss counter = 0 (or ≤ X/hour) during worst-case load.
- Isolation: uplink congestion changes real-time jitter by < X (tap path cannot pollute protected path).
- Time stability: lock/holdover transitions do not introduce step events beyond X.
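The acceptance targets above can be checked with a simple additive worst-case model: sum per-stage delay and jitter contributions, then compare against the budget. The sketch below is illustrative only; stage names, numbers, and limits are placeholder assumptions, not measured values.

```python
# Hypothetical budget sketch: stage names and numbers are placeholders,
# not measured values. Sums per-stage worst-case delay and jitter on the
# protected path and checks them against acceptance limits.

# (stage, worst-case delay in us, worst-case jitter in us)
PROTECTED_PATH = [
    ("ingress_classify", 2.0, 0.5),
    ("gate_schedule",    10.0, 1.0),   # intentional offset + window alignment
    ("switch_forward",   3.0, 1.0),
    ("egress_shape",     2.0, 0.5),
]

def budget(stages, delay_limit_us, jitter_limit_us):
    """Return (total_delay, total_jitter, pass) for an additive model.
    Summing jitter terms is conservative: real stages rarely all peak at once."""
    delay = sum(d for _, d, _ in stages)
    jitter = sum(j for _, _, j in stages)
    return delay, jitter, delay <= delay_limit_us and jitter <= jitter_limit_us

d, j, ok = budget(PROTECTED_PATH, delay_limit_us=20.0, jitter_limit_us=5.0)
print(f"delay={d:.1f}us jitter={j:.1f}us pass={ok}")  # delay=17.0us jitter=3.0us pass=True
```

The additive model intentionally over-estimates; if even this conservative sum passes, the deployment has margin for the tail effects the section warns about.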
TSN at the Gateway (Time-Aware Scheduling Without Over-Explaining TSN)
This section focuses on how a gateway applies time-aware scheduling in practice: classify traffic, shape it into predictable patterns, and validate that protected control timing stays stable even when telemetry and background loads surge. Standard parameter-by-parameter explanations belong to the dedicated TSN switching page (handoff only).
In scope:
- Multi-service flow pipeline: class → queue → shape → gate (engineering order).
- Which flows must remain faithful across the boundary vs which may degrade.
- Queue isolation, watermarks, and counters used for field validation.
Out of scope (handoff):
- Clause-by-clause TSN parameter definitions and standard tables.
- Detailed explanations of specific TSN features beyond gateway usage.
Step 1 · Classify:
- Goal: lock each service into a stable class (Control / Sync / Telemetry / Background).
- Inputs: endpoint list, VLAN/priority policy, port roles, and boundary rules.
- Outputs: class-map + fixed queue-map (no runtime ambiguity).
- Pass criteria: misclassification ≤ X per Y minutes; per-class counters stable.
Step 2 · Shape:
- Goal: convert bursts into predictable envelopes so windows cannot be flooded at open.
- Inputs: per-class rate ceiling, burst allowance, and worst-case load profiles.
- Outputs: shaping limits + per-queue watermark thresholds (observable).
- Pass criteria: critical watermark ≤ X; tail latency remains bounded under profile Y.
Step 3 · Validate:
- Goal: prove determinism survives uplink and telemetry stress without polluting control timing.
- Inputs: stress profiles (telemetry bursts, background floods), time sync state, and window tables.
- Outputs: evidence set: gate-miss, queue depth, drops, and timing deltas.
- Pass criteria: gate-miss = 0; real-time jitter increase ≤ X for Y minutes at Z load.
Must stay faithful across the boundary:
- Control: keep protected queue + windows intact.
- Sync-adjacent: prevent queue interference that amplifies timing noise.
- Rule: never remap to best-effort under congestion.
May degrade under congestion:
- Telemetry: stronger rate-limits, batching, wider windows.
- Background: lowest priority, opportunistic windows only.
- Rule: degradation must not increase control jitter beyond X.
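One concrete check behind the shaping and gating steps is window feasibility: a gate window must be wide enough for the largest frame of its class plus a guard band, or the frame misses its window. A minimal sketch, assuming 1 GbE line rate and placeholder frame sizes and guard bands:

```python
# Hypothetical window-feasibility check. Values are placeholders; the
# point is the arithmetic, not the specific numbers.

LINK_BPS = 1_000_000_000          # assumed 1 GbE link
PREAMBLE_IFG_BYTES = 8 + 12       # preamble/SFD + inter-frame gap on the wire

def frame_time_ns(frame_bytes, link_bps=LINK_BPS):
    """Wire time for one frame including preamble/SFD and IFG overhead."""
    wire_bytes = frame_bytes + PREAMBLE_IFG_BYTES
    return wire_bytes * 8 * 1e9 / link_bps

def window_ok(window_ns, max_frame_bytes, guard_ns=0):
    """A window passes if the largest frame plus guard band fits inside it."""
    return window_ns >= frame_time_ns(max_frame_bytes) + guard_ns

# Control class: 128-byte frames in a 2 us window with a 500 ns guard band.
print(window_ok(2_000, 128, guard_ns=500))   # 1184 ns frame time -> True
print(window_ok(1_500, 128, guard_ns=500))   # does not fit -> False
```

The same check, run over every (class, window) pair in the schedule table, is a cheap pre-deployment gate before the stress validation in Step 3.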
PTP Gateway Strategy (GM/BC/TC Placement + Asymmetry Control)
This section treats the gateway as a time boundary. The focus is role placement and verification: choosing Grandmaster, Boundary Clock, or Transparent Clock based on stability and isolation needs; maintaining time quality during uplink instability (holdover); and controlling asymmetry using field-friendly calibration methods. PTP message mechanics and BMCA deep dives belong to the PTP page (handoff only).
In scope:
- Role selection logic: GM vs BC vs TC at the gateway boundary.
- Holdover when uplink is unstable: trigger, behavior, duration, and recovery pacing.
- Asymmetry control methods: swap tests, path comparison, and reference-segment checks.
Out of scope (handoff):
- PTP packet formats, field-by-field interpretation, and BMCA internals.
- Algorithm-level explanations beyond verification and placement choices.
Gateway as Grandmaster (GM):
- Best when: field requires an independent master; uplink cannot be trusted as a stable time source.
- Cost: local time source quality and health monitoring become mandatory engineering items.
- Pass criteria: after uplink loss, offset drift ≤ X over Y minutes; no step events beyond X.
- Risk note: recovery must be paced to avoid re-sync storms that disturb gated schedules.
Gateway as Boundary Clock (BC):
- Best when: the field domain must be insulated from uplink timing noise and topology changes.
- Cost: added configuration and verification surface at the boundary (role + source priority).
- Pass criteria: uplink timing disturbance does not increase field jitter by > X under load Y.
- Risk note: unclear timestamp tap points can mask drift until application failures appear.
Gateway as Transparent Clock (TC):
- Best when: upstream is stable; the goal is to minimize boundary-induced timing error.
- Cost: requires hardware timestamp coverage aligned with the real forwarding path.
- Pass criteria: offset increment ≤ X as load varies; time health counters remain stable.
- Risk note: hidden buffering or queue contention can appear as “time error” during peaks.
Holdover playbook:
- Trigger: lock loss or offset beyond X for Y seconds.
- Behavior: keep protected windows stable; throttle re-sync events.
- Duration: maintain drift within X for Y minutes (target).
- Recovery: rejoin gradually to prevent step-induced jitter spikes.
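A quick way to size the holdover duration target is a worst-case linear drift model: time error grows at the oscillator's frequency error. The sketch below uses assumed numbers (a 50 ppb oscillator, a 100 µs error budget) purely for illustration.

```python
# Hedged holdover-budget sketch: a worst-case linear drift model. The
# oscillator stability and error budget below are illustrative assumptions.

def max_holdover_s(drift_ppb, max_error_us):
    """Seconds of holdover before worst-case error exceeds the budget.
    Linear model: error_us(t) = drift_ppb * 1e-3 * t_seconds."""
    return max_error_us * 1_000 / drift_ppb

# A 50 ppb oscillator against a 100 us time-error budget:
print(max_holdover_s(drift_ppb=50, max_error_us=100))  # 2000.0 s (~33 min)
```

Real oscillators also drift with temperature and aging, so a measured holdover test (force source loss, log offset) should always back up this back-of-envelope bound.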
Asymmetry control methods:
- Swap test: swap ports/paths; offset shift should be ≤ X if asymmetry is controlled.
- Path A/B: compare parallel paths; long-term bias should stay within X.
- Reference segment: insert a known stable segment; calibration should converge within X.
- Rule: record calibration version so drift changes are traceable after maintenance.
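The swap test rests on simple arithmetic: the two-step offset formula hides path asymmetry as a bias of (d_ms - d_sm)/2, and reversing the asymmetric segment flips that bias's sign, so two measurements separate bias from true offset. A sketch with illustrative timestamps:

```python
# Hedged sketch of the two-step offset math and a swap-test estimate.
# All numbers are illustrative, not captured field data.

def ptp_offset(t1, t2, t3, t4):
    """Two-step offset estimate: assumes symmetric path delay. Any
    asymmetry appears as a hidden bias of (d_ms - d_sm) / 2."""
    return ((t2 - t1) - (t4 - t3)) / 2

def swap_test(offset_orig, offset_swapped):
    """If a port/path swap reverses the asymmetric segment, the bias flips
    sign: bias ~= (orig - swapped)/2, true offset ~= (orig + swapped)/2."""
    bias = (offset_orig - offset_swapped) / 2
    true_offset = (offset_orig + offset_swapped) / 2
    return bias, true_offset

# Example: 40 ns apparent offset before the swap, -10 ns after
# -> estimated bias 25 ns, estimated true offset 15 ns.
bias, true_offset = swap_test(40e-9, -10e-9)
print(bias, true_offset)
```

The bias estimate is only valid if the true offset stays constant between the two measurements, which is why the section pairs the swap test with a reference-segment check.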
Bridging & Segmentation (VLAN/QoS, Multicast, Policy Boundaries)
This section explains why an edge gateway must enforce domain isolation between a controlled field TSN domain and an uncontrolled IT/Cloud domain. The focus is on engineering levers that are measurable: VLAN/QoS mapping, broadcast/multicast containment, and policy placement that does not inject unpredictable cost into deterministic forwarding.
In scope:
- VLAN/QoS remap: preserve control/sync semantics across the boundary.
- Broadcast containment: stop unknown floods from crossing into the field domain.
- Multicast boundary: proxy/limit group replication at the gateway edge.
- Policy placement: segmentation first, filtering second; keep deterministic path predictable.
Out of scope (handoff):
- Full enterprise security architectures, layered L3 defenses, and deep network security design patterns.
- Extended security protocol tutorials (handoff to the Security page).
Failure modes without a boundary:
- Broadcast / unknown-unicast floods traverse upstream switches during peak events.
- Multicast replication grows without a boundary, multiplying load unpredictably.
- QoS marking is rewritten upstream, collapsing control priority into best-effort.
- Policy updates occur during production, creating transient micro-outages.
- Mixed cell/line traffic shares the same segment, causing cross-coupled failures.
- Maintenance introduces loops or mirror storms that escape local containment.
Field symptoms:
- Control jitter rises under “unrelated” telemetry peaks (tail latency inflation).
- Queue watermarks remain elevated, creating hidden timing debt.
- Gate-miss / late-tx events appear even when average utilization is low.
- Multicast storms amplify CPU/forwarding contention, masking root cause.
- Fault isolation becomes non-local: field symptoms originate upstream.
- Recovery retries cause secondary storms, extending outage windows.
Engineering levers (with pass criteria):
- Segmentation first: per-cell VLAN boundary with explicit allow-lists.
- QoS remap: preserve Control/Sync classes; forbid remap to best-effort.
- Broadcast containment: block or absorb broadcasts at the boundary (pass: cross-domain broadcast count = 0).
- Multicast boundary: proxy/limit replication (pass: replication factor ≤ X).
- Storm guard: rate thresholds + deterministic protection (pass: control jitter increase ≤ X during storms).
- Policy versioning: changes are auditable and reversible (rollback ≤ X, impact window ≤ Y).
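The storm-guard lever can be as simple as a per-port token bucket with an observable drop counter, so the protection is deterministic and auditable. The rates and burst sizes below are placeholder assumptions, not recommended settings.

```python
# Hypothetical broadcast storm guard: a token bucket per boundary port.
# rate_pps and burst are placeholders; the point is that the limit is
# deterministic and the drop count is observable, not the numbers.

class StormGuard:
    def __init__(self, rate_pps, burst):
        self.rate, self.burst = rate_pps, burst
        self.tokens, self.last_t, self.dropped = float(burst), 0.0, 0

    def allow(self, now_s):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now_s - self.last_t) * self.rate)
        self.last_t = now_s
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        self.dropped += 1          # counted, so pass criteria are checkable
        return False

guard = StormGuard(rate_pps=100, burst=10)
passed = sum(guard.allow(0.0) for _ in range(50))  # a 50-frame burst at t=0
print(passed, guard.dropped)  # 10 pass, 40 dropped
```

In hardware this is a policer per port/class; the sketch only shows why the pass criterion "cross-domain broadcast count = 0" is directly measurable from the counter.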
Edge Compute Pipeline (Ingest → Buffer → Compute → Publish)
This section treats compute as a non-blocking observability pipeline. Deterministic forwarding must remain predictable and typically bypasses compute. Compute consumes tapped data through bounded buffers and publishes uplink payloads without back-pressuring protected control timing.
In scope:
- Two-path design: bypass for deterministic forwarding vs tap for observability.
- Buffer strategy: store-and-forward vs drop policy vs internal backpressure.
- Resource binding principles: compute caps and isolation to protect forwarding tail latency.
- Failure containment: compute/uplink issues must not leak into deterministic timing.
Out of scope (handoff):
- MQTT/OPC UA/REST/streaming deep tutorials and implementation walkthroughs.
- OS-level step-by-step guides (only principles are provided here).
Bypass path (deterministic forwarding):
- Purpose: keep control timing predictable under worst-case background load.
- Touches: time-aware queues, gates/shapers, minimal forwarding pipeline.
- Must never: wait on compute scheduling, logging bursts, uplink congestion, or storage stalls.
- Pass criteria: gate-miss = 0; control jitter increase ≤ X for Y minutes at Z load.
Tap path (observability):
- Purpose: extract and process data without feeding unpredictable load back into forwarding.
- Touches: tap/metadata, bounded buffers, compute budget, publish scheduling.
- Must never: backpressure deterministic queues; overflow handling must remain local.
- Pass criteria: tap drops are bounded and logged; publish stalls do not change deterministic counters.
Buffer strategy options:
- Store-and-forward: keep completeness; pay in buffer and publish latency.
- Drop policy: keep freshness; drop older samples first (bounded loss).
- Internal backpressure: limit compute/publish ingestion; never propagate into deterministic forwarding.
- Pass criteria: overflow action is deterministic; event logs include policy version and watermark peak ≤ X.
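The drop-policy option can be sketched as a bounded, drop-oldest tap buffer that never blocks the producer and counts its losses. Capacity and payloads here are illustrative only.

```python
# Sketch of a "freshness-first" tap buffer: bounded, drop-oldest, never
# blocks the producer. Capacity and sample values are illustrative.

from collections import deque

class TapBuffer:
    def __init__(self, capacity):
        self.q = deque(maxlen=capacity)  # deque drops from the head when full
        self.capacity = capacity
        self.dropped = 0                 # bounded loss, logged for the record

    def push(self, sample):
        if len(self.q) == self.capacity:
            self.dropped += 1            # oldest sample is about to be evicted
        self.q.append(sample)            # never blocks, never backpressures

    def drain(self):
        out = list(self.q)
        self.q.clear()
        return out

buf = TapBuffer(capacity=4)
for i in range(7):
    buf.push(i)
print(buf.drain(), buf.dropped)  # newest 4 kept: [3, 4, 5, 6], 3 dropped
```

The key property matches the pass criteria above: overflow behavior is deterministic (oldest-first), local to the tap path, and the drop count is available for the event log.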
Resource binding principles:
- Cap resources: set a hard ceiling so compute cannot inflate forwarding tail latency.
- Prioritize deterministic: forwarding retains priority; compute yields under contention.
- Keep time semantics: maintain timestamp/sequence consistency for reliable correlation.
- Pass criteria: under compute overload, deterministic KPIs stay within spec; only observability degrades.
Failure containment:
- Compute overload: observability drops or reduces sampling; deterministic unaffected.
- Uplink down: publish pauses or buffers internally; deterministic unaffected.
- Storage full: keep metadata/events, not raw flood; avoid blocking forwarding.
- Update/restart: deterministic path remains predictable; rejoin is paced (pass: impact ≤ X).
Reliability & Resilience (Failover, Holdover, Update Safety)
An edge gateway sits on the boundary between a controlled field domain and an unstable uplink domain. The design must provide degradation modes that keep deterministic forwarding predictable when uplink, time source, or software lifecycle events occur. This section focuses on measurable behaviors and pass criteria, not protocol deep dives.
In scope:
- Uplink down: local autonomy, bounded buffering, controlled recovery.
- Time source lost: holdover strategy, drift bounds, re-sync safety.
- Update & rollback: upgrades must not break determinism; define impact windows.
- Pass criteria: deterministic KPIs remain within limits during failures.
Out of scope (handoffs):
- MRP/HSR/PRP mechanism details (handoff to the Ring Redundancy page).
- PTP/BMCA frame-level explanations (handoff to the PTP page).
Each mode below is defined by a trigger, a bounded degrade action, protected invariants for deterministic traffic, and pass criteria that can be validated during bring-up and field service.
Uplink down:
- Trigger: link loss persists for ≥ X ms or X consecutive failures.
- Degrade action: isolate uplink publish; switch observability to drop/store policy.
- Protected invariants: deterministic path does not backpressure or re-class.
- Pass criteria: gate-miss = 0; control jitter increase ≤ X over Y min.
Uplink flapping:
- Trigger: up/down cycles ≥ X within Y minutes.
- Degrade action: rate-limit reconnection; publish uses paced rejoin.
- Protected invariants: deterministic queues remain stable; no retry storms.
- Pass criteria: queue watermark ≤ X; recovery avoids burst amplification.
Time source lost:
- Trigger: time state leaves locked for ≥ X ms (loss-of-sync window).
- Degrade action: enter holdover; freeze time-critical knobs; log snapshot.
- Protected invariants: timestamp semantics remain consistent for correlation.
- Pass criteria: drift ≤ X over Y; holdover duration ≥ T without instability.
Time source recovery:
- Trigger: time source returns and remains stable for ≥ X seconds.
- Degrade action: paced convergence; avoid abrupt time jumps into control loops.
- Protected invariants: deterministic scheduling remains predictable during convergence.
- Pass criteria: settle time ≤ X; transient offset peak ≤ Δ.
Update / rollback:
- Trigger: planned maintenance or safety-driven patch event.
- Degrade action: staged update; policy/version pinned; controlled restart window.
- Protected invariants: deterministic KPIs must not regress beyond thresholds.
- Pass criteria: impact window ≤ X; rollback ≤ Y; KPI delta ≤ Z.
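The trigger / degrade / paced-recovery pattern behind the uplink modes can be sketched as a small state machine. State names, thresholds, and the `UplinkSupervisor` class are assumptions for illustration, not a product API.

```python
# Hedged sketch of trigger -> degrade -> paced rejoin for the uplink.
# Thresholds (500 ms down, 30 s stable) are placeholder assumptions.

class UplinkSupervisor:
    def __init__(self, down_ms=500, stable_s=30):
        self.state = "UP"
        self.down_ms, self.stable_s = down_ms, stable_s
        self._down_for = 0
        self._stable_for = 0

    def tick(self, link_up, dt_ms):
        if self.state == "UP":
            self._down_for = 0 if link_up else self._down_for + dt_ms
            if self._down_for >= self.down_ms:
                self.state = "DEGRADED"   # isolate publish; tap path drops/stores
                self._stable_for = 0
        elif self.state == "DEGRADED":
            self._stable_for = self._stable_for + dt_ms if link_up else 0
            if self._stable_for >= self.stable_s * 1000:
                self.state = "REJOINING"  # paced rejoin, rate-limited publish
        elif self.state == "REJOINING":
            self.state = "UP" if link_up else "DEGRADED"
        return self.state

sup = UplinkSupervisor()
for _ in range(5):
    sup.tick(False, 100)                  # 500 ms of continuous link loss
print(sup.state)                          # DEGRADED
```

Two details carry the determinism guarantee: the deterministic path never appears in this machine (it is untouched by every transition), and recovery requires a sustained-stability timer so flapping cannot cause retry storms.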
Observability & Field Service (Counters, Black-Box, Remote Ops)
Observability must close the loop: field issues should be attributable to which port, which queue, which gate window, and which time state. This section defines the minimum counters and black-box fields needed for repeatable triage and remote operations.
In scope:
- Required counters: CRC/drop, queue depth/watermark, gate-miss, clock state, temp/power events.
- Black-box fields: timestamp, configuration versions, snapshot at the moment of alarm.
- Remote ops: discovery/asset view, snapshot pull, and bounded mitigation knobs.
Out of scope (handoffs):
- Physical cable diagnostics methods (TDR/return-loss/SNR) details (handoff to Cable Diagnostics page).
- LLDP protocol tutorial (LLDP is used only for discovery and asset mapping here).
Keep the record minimal but sufficient for replay and correlation. Fields are grouped to avoid noise and to preserve attribution.
Attribution:
- Device ID
- Port ID
- Ingress/Egress
- VLAN / Class
- Queue ID
Time context:
- Event timestamp
- Time state
- Offset summary
- Holdover flag
Version context:
- Policy version
- TSN schedule ver
- QoS map ver
- FW / build ID
Counter deltas:
- CRC/error delta
- Drop delta
- Queue depth
- Watermark peak
- Gate-miss count
- Temp state
- Power event
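The four field groups above total 20 fields, one complete snapshot per event. A minimal record sketch (field names are illustrative, not a defined schema):

```python
# Minimal black-box record sketch matching the field groups above.
# Field names and sample values are illustrative, not a defined schema.

from dataclasses import dataclass, asdict

@dataclass
class BlackBoxRecord:
    # attribution
    device_id: str; port_id: int; direction: str; vlan_class: str; queue_id: int
    # time context
    event_ts_ns: int; time_state: str; offset_ns: int; holdover: bool
    # version context
    policy_ver: str; tsn_sched_ver: str; qos_map_ver: str; fw_build: str
    # counter deltas at the moment of alarm
    crc_delta: int; drop_delta: int; queue_depth: int; watermark_peak: int
    gate_miss: int; temp_state: str; power_event: bool

rec = BlackBoxRecord("gw-01", 3, "ingress", "control", 7,
                     1_700_000_000_000, "holdover", 120, True,
                     "p42", "s7", "q3", "fw-1.2.3",
                     0, 5, 18, 31, 2, "ok", False)
print(len(asdict(rec)))  # 20 fields -> one complete, replayable snapshot
```

Keeping the record flat and fixed-width like this makes it cheap to store in a ring buffer and trivial to diff across events during triage.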
The flow below narrows the fault domain without guesswork and preserves deterministic operation while collecting evidence.
Triage flow:
- Time state first: locked / holdover / re-sync; check recent transitions.
- Port attribution: identify ingress/egress port with the highest error deltas.
- Queue attribution: compare queue depth and watermarks by class/queue ID.
- Gate attribution: validate gate-miss / window overrun counters for time-aware flows.
- Config correlation: map to policy/TSN/QoS versions; detect change windows.
- Export evidence: export 20 fields + counter snapshots; apply bounded mitigation if needed.
Bounded mitigation knobs:
- Reduce observability sampling rate; keep deterministic path unchanged.
- Switch buffer policy (drop/store) within the observability pipeline only.
- Enable per-port loopback/PRBS test window during maintenance mode.
- Freeze policy version during recovery; resume after stability timer.
Engineering Checklist (Design → Bring-up → Production)
This checklist is structured as three gates. Each gate has Do, Don’t, and Pass criteria so the gateway can be validated as an end-to-end deterministic system, not a “switch-only” feature.
Gate A · Design (Architecture + HW/PHY + Time path)
Do:
- Split paths: deterministic control path (bypass) vs observability/cloud path (tap).
- Make time explicit: define where HW timestamping happens (ingress/egress) and what modules must be time-aware.
- Budget first: allocate latency/jitter budgets per stage (gate/shaper, forwarding, buffering, compute, uplink).
- Contain the field: VLAN/QoS mapping and multicast/broadcast boundaries are defined at the gateway.
- Plan holdover: define local time source + acceptance test when uplink/PTP source is lost.
- Design for service: per-port loopback/PRBS hooks, counter snapshots, and versioned configuration logging.
Don’t:
- Don’t route deterministic traffic through variable compute (containers/VMs) without a hard bypass.
- Don’t rely on “default queues” for mixed traffic; classify and pin critical flows to known queues/windows.
- Don’t allow uplink broadcast domains to leak into the field domain (storm risk + timing jitter).
- Don’t treat timestamping as a software-only feature; verify the hardware timestamp path and tap points.
- Don’t skip thermal and power-event logging; “random” jitter often correlates with throttling or brownouts.
Pass criteria:
- Deterministic path: P99 end-to-end latency ≤ X µs under background load; P99.9 jitter ≤ X µs.
- Time: PTP lock ≤ X s; steady-state offset (P99) ≤ X ns over Y min; holdover drift ≤ X ppb/ppm over Y min.
- Containment: broadcast rate in field domain ≤ X pps; multicast replication stays within configured boundary.
- Serviceability: a single counter snapshot identifies (port/queue/window) root domain within X minutes.
Gate B · Bring-up (Link → Time → Traffic → Counters)
Do:
- Start with PHY sanity: link, polarity, pair-swap, EEE off/on matrix, loopback/PRBS.
- Time first-class: validate HW timestamping, verify GM/BC/TC role behavior, and check asymmetry calibration method.
- Traffic profile: run deterministic load + background load together; verify gate misses and queue depth behavior.
- Counter discipline: record CRC/drop/queue depth/gate miss/clock state/temp/power events per port.
- Golden config: version every TSN schedule + QoS mapping; store a snapshot with a single command.
Don’t:
- Don’t validate TSN windows without a background stressor (bursts, multicast, management traffic).
- Don’t accept “PTP locked” without checking timestamp tap point consistency across ports.
- Don’t tune queue/shaper in production first; lock a test matrix and keep traceable revisions.
Pass criteria:
- PRBS/loopback: BER ≤ X over Y minutes per port; zero unexpected link down events.
- TSN schedule: gate miss count = 0 under the target profile; queue depth stays below X frames.
- Time: offset distribution stable; re-sync after cable/port changes ≤ X seconds.
- Thermal: no throttling during profile; temperature delta ≤ X °C across ambient range.
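The PRBS/BER pass criterion needs a test duration grounded in statistics: with zero observed errors, the "rule of 3" gives an upper confidence bound of roughly 3/N on BER for N bits tested. A sketch of the required test length, with the target BER as an assumed example:

```python
# Hedged sketch: how many bits (and how long at line rate) a PRBS run
# must cover to claim a BER bound with zero observed errors.

import math

def bits_needed(ber_target, confidence=0.95, errors=0):
    """Bits to test to claim BER <= ber_target at the given confidence,
    assuming zero errors observed: N >= -ln(1 - confidence) / ber_target
    (the classic rule of 3 is the 95% case, -ln(0.05) ~= 3)."""
    if errors:
        raise NotImplementedError("use a chi-square bound for errors > 0")
    return math.ceil(-math.log(1 - confidence) / ber_target)

# To claim BER <= 1e-12 at 95% confidence with zero errors:
n = bits_needed(1e-12)
print(f"{n:.3e} bits, ~{n / 1e9:.0f} s at 1 Gbps")  # roughly 3e12 bits, ~50 min
```

This is why per-port BER gates belong in bring-up, not in a quick field check: a meaningful 1e-12 bound at gigabit rates needs close to an hour per port.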
Gate C · Production (Robustness + Update Safety + Field Recovery)
Do:
- Fail-safe updates: A/B images, signed artifacts, deterministic rollback test with traffic running.
- Degraded modes: define uplink-loss behavior (local autonomy + caching + publish later) without breaking field determinism.
- Holdover acceptance: verify drift over time and the re-sync behavior (no oscillation, no storm).
- Forensics: black-box ring buffer with event triggers (clock state change, queue saturation, power dip).
- Remote ops: LLDP for discovery/asset only; secure remote configuration with versioned diffs.
Don’t:
- Don’t allow update-time traffic reshaping without validation; schedule drift is a production outage multiplier.
- Don’t let cloud retry storms back-propagate into the field domain; isolate and rate-limit at the boundary.
- Don’t erase last-known-good schedules; keep a recoverable “golden TSN/PTP config”.
Pass criteria:
- Update: upgrade + rollback complete within X minutes; deterministic KPIs remain within thresholds.
- Uplink loss: field deterministic service continues for X hours; backlog policy prevents storage exhaustion.
- Holdover: drift ≤ X ppb/ppm; recovery to locked state ≤ X seconds without flapping.
- Security: measured boot verified; keys protected; remote ops uses authenticated sessions.
Applications & IC Selection (for Gateway Builders)
Selection on this page is organized as capability bundles. The goal is to choose a set of TSN switching, timestamp/time, PHY/port, security, and service hooks that matches the target determinism and deployment risk—without turning into a shopping list.
Selection inputs:
- Determinism target: cyclic latency / jitter (P99/P99.9), and worst-case background load.
- Node scale: number of endpoints, multicast/diagnostics intensity, and ring/dual-uplink needs.
- Time accuracy: PTP offset target (ns/µs) + holdover duration when uplink time is lost.
- Uplink behavior: “trusted & stable” vs “unstable/untrusted” (cloud/IT jitter, storms, retries).
- Environment: EMC/ESD/surge, temperature range, connector strategy (RJ45/M12/SPE).
- Edge compute load: store-and-forward needs, encryption, AI/NPU, storage endurance.
Capability bundle examples:
- TSN switch SoC examples: Microchip LAN9668 / LAN9662 (TSN switch with integrated CPU options).
- Industrial GbE PHY examples: TI DP83869HM; ADI ADIN1300; Microchip KSZ9031RNX.
- Clock/jitter (optional): SiLabs Si5341 (jitter attenuator/clock generator class).
- Security baseline: Infineon OPTIGA™ TPM SLI 9670 (TPM2.0) or Microchip ATECC608B.
- ESD example (Ethernet diff pair class): Nexperia PESD2ETH1G-T (low-cap ESD device class).
- System synchronizer examples: Renesas 8A34001 (PTP/SyncE system synchronizer class); Microchip ZL30771/72/73 family class.
- Network synchronizer examples: Microchip ZL30732 / ZL30772 class (PTP/SyncE timing components).
- TSN switch examples: Microchip LAN9668/LAN9662 (field aggregation), plus an uplink-facing scheduler/shaper policy in firmware.
- Industrial PHY examples: TI DP83869HM (TSN-friendly low-latency PHY class) and equivalent industrial-grade PHYs.
- TSN-capable compute examples: TI Sitara AM64x family (industrial processors with TSN-capable ports class).
- TSN switch SoC examples: Microchip LAN9668/LAN9662 (for multi-port field aggregation + policy boundary).
- Secure boot / keys: Infineon SLI 9670 TPM2.0 or Microchip ATECC608B secure element.
- Service hooks: PHY loopback/PRBS + timestamp visibility + per-port counter snapshots.
- Protection examples: low-cap ESD device class such as PESD2ETH1G-T for differential lines; magnetics/CM suppression matched to connector strategy.
Shortlist by function:
- Microchip: LAN9668, LAN9662 (TSN switch SoCs).
- NXP (TSN switch class): SJA1110 family (commonly referenced for TSN switch SoC class).
- TI: DP83869HM (GbE PHY; TSN-friendly indications class).
- Analog Devices: ADIN1300 (industrial GbE PHY class).
- Microchip: KSZ9031RNX (GbE PHY with loopback/diagnostic hooks class).
- Renesas: 8A34001 (system synchronizer for IEEE 1588 class).
- Microchip: ZL30772, ZL30732 class timing ICs.
- Clock/jitter: Si5341 (jitter attenuator / clock generator class).
- TPM2.0: Infineon OPTIGA™ TPM SLI 9670 (aka “9670” class).
- Secure element: Microchip ATECC608B (hardware key storage class).
- Low-cap ESD device class: Nexperia PESD2ETH1G-T (example).
- GbE magnetics example: Pulse H5007NL (example transformer module P/N class).
- WE-LAN class: Würth Elektronik WE-LAN family (choose P/N by PoE + speed + temperature).
FAQs (Field Troubleshooting, Fixed 4-Line Answers)
Notation used in the answers:
- X = threshold value (latency/jitter/offset/drop rate/watermark, etc.).
- Y = measurement window (minutes/hours) under the defined traffic profile.
- P99/P99.9 = tail metric; do not accept average-only checks.
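The "no average-only checks" rule is easy to demonstrate: two traffic profiles with identical averages can have very different tails. A small sketch using synthetic samples and a simple nearest-rank percentile:

```python
# Why tail metrics matter: same average, very different P99.
# Sample data is synthetic; percentile uses the nearest-rank definition.

def percentile(samples, p):
    """Nearest-rank percentile (simple field-tool definition)."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

steady = [10.0] * 95 + [11.0] * 5    # avg 10.05, tame tail
spiky  = [9.0] * 95 + [30.0] * 5     # avg 10.05, bad tail
for name, data in (("steady", steady), ("spiky", spiky)):
    avg = sum(data) / len(data)
    print(name, round(avg, 2), percentile(data, 99))
# steady 10.05 11.0
# spiky 10.05 30.0
```

An average-only check accepts both profiles; the P99 check correctly rejects the spiky one, which is exactly the failure mode the pass/fail answers are written to catch.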