
Automotive Ethernet TSN Switch and Router


This page is where I turn abstract TSN capabilities into concrete specification and RFQ fields. Instead of memorising standard numbers, I map real ADAS traffic—camera, radar, LiDAR, brake and steering control, logging, diagnostics and OTA—into traffic classes and features that my switch IC must support if the network is going to behave deterministically under load and during faults.

Scope: IC-level roles and selection for TSN switches/routers in ADAS networks. Time-sync topologies, OTA strategy and gateway functions are handled on their own pages.

What problem am I solving with an Automotive Ethernet TSN switch?

When I start an ADAS network design, it is tempting to drop in a “generic automotive Ethernet switch” and rely on VLANs and simple priority queues. That works while the system is lightly loaded, but it breaks down as soon as multiple cameras, radars and ECUs all become active at the same time. I can no longer explain, in numbers, what the worst-case latency and packet-loss risk really look like.

My ADAS network rarely carries just one type of traffic. At the same time I can have forward and surround cameras streaming video, radar or LiDAR publishing dense point clouds, brake and steering controllers exchanging fast cyclic messages, and background logging or diagnostics pushing data into a central recorder. All of this often converges on the same uplink or pair of uplinks into an ADAS domain controller or central compute node.

On a best-effort switch, those flows simply compete. A burst of high-rate video can fill queues and push control messages to the back, just when the vehicle is making a critical manoeuvre. A misbehaving ECU can flood the fabric with multicast or broadcast traffic, and the only visibility I get is a handful of per-port counters. From a safety and diagnostics point of view, this is very uncomfortable: I cannot write down clear, justified bounds for delay and loss in my safety case or design report.

An automotive Ethernet TSN switch or router is the point where I change the question. Instead of “does it usually work under typical load?”, I ask “can I classify each traffic type, assign it a deterministic place in time and bandwidth, and prove that the worst-case behaviour is still acceptable?”. The answer lives in a small set of TSN capabilities: time-aware scheduling so that critical traffic has exclusive time windows on key links, bandwidth reservation and shaping so that video cannot starve control flows, per-stream filtering and policing so a faulty ECU cannot flood the network, and frame preemption so short control frames can cut ahead of long video frames on congested segments.

In other words, the problem I am solving with an Automotive Ethernet TSN switch is not “how do I plug everything together,” but “how do I make the ADAS network behave like an engineered resource with predictable latency, isolation and diagnosable failures?”. Once I think about it this way, the TSN feature set becomes less of a protocol checklist and more of a set of levers that I map directly to my ADAS safety and performance requirements.

Figure: From best-effort Ethernet to TSN-based deterministic ADAS network.
Comparing a best-effort automotive Ethernet switch with an Automotive Ethernet TSN switch that assigns ADAS traffic to deterministic resources and exposes better diagnostics.

Traffic classes and TSN feature set I actually care about

TSN only becomes useful when I translate real ADAS traffic into explicit classes and rules. Instead of saying “the network must support TSN,” I write down which sensor and control flows exist on each link, what their latency and jitter limits are, and where occasional loss is acceptable. From there I decide which TSN tools I really need and which ones are optional for my project phase and vehicle segment.

I usually start by listing my high-bandwidth sensor streams. Cameras and surround-view nodes might run anywhere from tens to hundreds of megabits per second per link, with different modes for parking, highway, or high-dynamic-range capture. Radar and LiDAR often send periodic bursts of dense point clouds. These streams are sensitive to jitter and sustained loss, but they can tolerate the occasional dropped frame as long as the control loop stays stable. For these flows, I care about reserved bandwidth and controlled jitter more than absolute zero loss.

Brake, steering and chassis control messages sit in a different class in my head. Their payload is small but their timing is tight: cyclic periods in the 1–10 ms range, plus event-driven messages when the driver or autonomy stack demands a fast response. For these flows I need strong guarantees on worst-case latency and a clear strategy for what happens if a packet is delayed or dropped. They are the first candidates for dedicated high-priority queues and, in many cases, for time-window protection so they never sit behind long sensor frames on a congested link.

Logging, event data recorders and fleet analytics sit somewhere in between. I want the vehicle to keep rich history around a critical event, but I do not need that history in real time. I typically assign logging flows to classes that have reserved average bandwidth and a reasonable maximum latency, but they are allowed to back off under transient overload. Diagnostics and OTA behave similarly: I do not want them to consume critical-path resources, yet I cannot let them starve indefinitely or my field-service story falls apart.

Once I have these traffic classes laid out, the TSN feature set turns into a list of checkboxes on my switch IC specification. Time-aware scheduling is the first one: I need the ability to define time windows on critical links where only safety-relevant control traffic can transmit. That is how I stop a burst of camera or LiDAR frames from delaying a brake or steering message beyond the limits in my safety analysis. The exact standard name matters less to me than the capability: do I get reliable, per-cycle windows for my most critical flows?
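
The window arithmetic behind this requirement can be sketched in a few lines. This is a minimal, illustrative gate-control-list model, not a vendor configuration; the cycle time, slot lengths and queue numbers are assumptions for this example.

```python
# Sketch: a per-cycle gate control list (802.1Qbv-style) for one egress port.
# Slot lengths and queue numbers are illustrative assumptions, not vendor values.
CYCLE_US = 1000  # 1 ms cycle, matching the fastest control period

# (slot duration in us, set of queues whose gates are open during the slot)
gate_control_list = [
    (100, {7}),           # exclusive window: control/safety queue only
    (700, {6, 5}),        # sensor streams: camera and radar/LiDAR queues
    (200, {3, 2, 1, 0}),  # background: logging, diagnostics, OTA
]

def schedule_fills_cycle(gcl, cycle_us):
    """The slot durations must add up to exactly one cycle."""
    return sum(duration for duration, _ in gcl) == cycle_us

# Worst-case wait for a control frame that arrives just after its exclusive
# window closes: it waits out the rest of the cycle until the next window.
worst_case_wait_us = CYCLE_US - gate_control_list[0][0]  # 900 us
```

Checking that the slots fill the cycle, and reading off the worst-case wait, is exactly the kind of number I need for the safety analysis rather than a standard name.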

Bandwidth reservation and shaping come next. I want per-class and often per-stream bandwidth guarantees so that video streams cannot slowly expand and steal capacity from control and logging flows over time. In practice that means asking whether the switch lets me define committed and peak rates, and whether it can shape egress queues in a way that preserves both high utilisation and predictable behaviour. This is where I translate abstract TSN credit-based or shaped queues into simple questions: can I guarantee a minimum bandwidth for control and logging, and cap camera flows during heavy load?

Per-stream filtering and policing solve a different problem. At some point in the vehicle lifetime, a faulty ECU, misconfigured camera or compromised node will send too much traffic or traffic with the wrong headers. I do not want that node to have the power to flood the fabric. So I look for TSN implementations that support per-stream admission control, filtering rules and policing thresholds. In my specification I phrase this as “the switch must be able to enforce per-stream rate and conformance checks so that a single faulty node cannot block ADAS-critical flows.”

Finally, there is frame preemption. Long frames, especially aggregated or tunnelled traffic, can occupy a link for a surprisingly long time. Without preemption, a short, high-priority control packet may have to wait until the entire long frame has been transmitted. With preemption, the long frame is suspended mid-transmission so the control frame can pass, and the remaining fragment is sent afterwards. When I am sizing my safety margins, I pay close attention to whether the TSN switch supports preemption on the links that carry mixed traffic. If it does, my worst-case latency budget becomes noticeably easier to meet.
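
The effect on the latency budget is easy to quantify. A back-of-the-envelope comparison at 1 Gbit/s, where the residual blocking under preemption is an assumed bound that I would confirm against the silicon documentation:

```python
# Sketch: worst-case blocking of a high-priority frame on a 1 Gbit/s link,
# with and without preemption. The residual fragment size under preemption is
# an assumed bound, to be confirmed against the device documentation.
LINK_BPS = 1_000_000_000

def tx_time_us(frame_bytes):
    """Serialisation time of a frame on the link, in microseconds."""
    return frame_bytes * 8 / LINK_BPS * 1e6

blocking_no_preemption_us = tx_time_us(1522)  # one max-size frame: ~12.2 us
blocking_preemption_us = tx_time_us(155)      # last non-preemptable chunk: ~1.2 us
```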

By the end of this exercise, the TSN feature set I care about is no longer a theoretical list from a standard: it is a set of concrete requirements tied to real flows. Camera and radar streams map to classes with reserved bandwidth and controlled jitter. Brake and steering messages map to high-priority queues and protected time windows. Logging, diagnostics and OTA map to background classes with enough guaranteed capacity to stay useful without threatening safety. The switch IC that I finally select is the one that can implement this map reliably, with the counters and diagnostics I need to prove in the field that the network is doing what the safety case says it will do.
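
That map can go straight into a machine-readable requirements table. A sketch with illustrative queue numbers, latency limits and loss policies, which I would tune per project:

```python
# Sketch: the traffic-class-to-feature map as an RFQ checklist. Queue numbers,
# latency limits and loss policies are assumptions for one example design.
TRAFFIC_CLASSES = {
    "control":     {"queue": 7, "features": ["time_windows", "preemption"],
                    "max_latency_ms": 2,    "loss": "none tolerated"},
    "camera":      {"queue": 6, "features": ["reserved_bandwidth", "shaping"],
                    "max_latency_ms": 10,   "loss": "occasional frame ok"},
    "radar_lidar": {"queue": 5, "features": ["reserved_bandwidth", "shaping"],
                    "max_latency_ms": 10,   "loss": "occasional burst ok"},
    "logging_edr": {"queue": 2, "features": ["reserved_avg_bandwidth"],
                    "max_latency_ms": 100,  "loss": "backs off under overload"},
    "ota_diag":    {"queue": 1, "features": ["rate_cap", "no_starvation"],
                    "max_latency_ms": 1000, "loss": "retried by application"},
}

def required_features(classes):
    """Union of TSN features the switch IC must implement for this design."""
    return sorted({f for spec in classes.values() for f in spec["features"]})
```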

Figure: Mapping ADAS traffic classes to TSN features in the switch.
Each ADAS traffic class maps onto specific TSN features and queues in the switch so that camera, radar, control, logging and OTA flows share the same fabric without losing deterministic behaviour.

Typical ADAS TSN topologies this switch has to support

When I plan an Automotive Ethernet TSN switch for an ADAS domain, I do not treat it as an isolated chip on a schematic. I picture the full domain topology: an ADAS domain controller SoC at the centre, multiple camera ECUs around the vehicle, radar and LiDAR ECUs at the corners, and at least one uplink into a central gateway or zone controller. Only after I have drawn that picture do I decide which topologies the switch must support and where I am willing to accept single points of failure.

The simplest pattern is a single TSN switch in a star. Every forward and surround-view camera ECU, plus radar and LiDAR ECUs, connects back to this switch over point-to-point Automotive Ethernet links. One or two uplinks then connect the switch to the ADAS domain controller and, often, to a central gateway or zone controller. For lower-channel-count Level 1 or Level 2 projects, this star is attractive: the harness is easy to reason about, diagnostics are straightforward and every flow passes through one place that I can instrument and control.

As channel counts grow, star topologies start to strain the harness. To reduce cable length and weight, I often see ECU chains or rings, especially for cameras and sensors mounted along the sides of the vehicle. Several camera ECUs may daisy-chain their Ethernet links, with the last node returning to the TSN switch, or a ring may close back into the domain. From the switch’s point of view, this means that some ports aggregate traffic from multiple sensors, not just one. My TSN configuration and uplink budgeting have to reflect that, or my worst-case utilisation estimates will be wrong.

For higher-assurance Level 2+ and Level 3 systems, I also need to think about dual-homing and dual-switch architectures. A critical ECU, such as the primary forward camera or a long-range radar, can be connected to two separate TSN switches with independent power and harness routes. The ADAS domain controller may itself have dual Ethernet interfaces, one into each switch. In that world, the question becomes: which ECUs are dual-homed, which stay single-homed, and which failures I am willing to tolerate without degrading the ADAS function below its safety goals.

Dual-switch topologies also affect how I handle uplinks. I might choose to give each TSN switch its own uplink into the central gateway or a zone controller, or I might concentrate traffic and rely on redundancy only at the sensor layer. Both options have cost and availability implications. A dual-uplink design improves resilience but consumes more gateway ports and harness resources. A single-uplink design is cheaper but may leave parts of the domain without a clean fallback path if the uplink segment fails.

At the end of this exercise, I categorise my topologies by both safety and cost. A single TSN star is cost-effective and simple to debug, but switch, supply or harness failures can knock out many sensors at once. Chains and rings reduce harness cost and make it easier to reach distant sensors, but they increase the complexity of failure analysis and rerouting. Dual-homed ECUs and dual-switch architectures improve availability and safety margins, at the expense of port count, uplink capacity and significant wiring complexity. The TSN switch I choose has to support the topologies I intend to deploy, not just the ones that look clean in a simplified lab setup.

Figure: Typical ADAS topologies around an Automotive Ethernet TSN switch.
Typical ADAS topologies around an Automotive Ethernet TSN switch, showing star, chain, dual-switch and dual-homed variants between sensors, the ADAS domain controller and the central gateway or zone controller.

Port count, speed and queue planning

Once I have settled on a topology, the next step is to turn that sketch into concrete numbers: how many ports I need, which speeds they must support, what uplink capacity is required and how many queues I need per port. This is where I stop thinking in generic “multiport TSN switch” terms and start writing specific combinations like “six 1 Gbit/s camera ports, two 100 Mbit/s radar ports and two multi-gig uplinks with at least four hardware queues per port.”

I start with downlink ports. For each camera ECU, I look at the maximum resolution, frame rate and encoding mode to derive a worst-case bitrate, including burst modes such as HDR or high-frame-rate capture. If I have four cameras that each peak near 200 Mbit/s, I treat them as 800 Mbit/s of potential load, not the average values quoted in marketing material. Radar and LiDAR ports get the same treatment, even if their average load is lower. In my selection criteria, I record not just the number of ports but how many must operate at 100BASE-T1, how many at 1000BASE-T1 and whether I need the flexibility to mix and match speeds as sensor generations change.

Uplink sizing is where people often under-specify. An uplink that looks sufficient on paper with average bitrates can be dangerously close to saturation when every camera and radar is simultaneously active during a safety-relevant manoeuvre. I budget uplink capacity from the worst-case downlink sum, plus control, diagnostics and logging overhead, and then add margin so that my target utilisation under worst-case load stays well below one hundred percent. That margin gives me room for future software updates, added sensors and real-world traffic patterns that are more bursty than my early models assumed.
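
The budgeting step is simple arithmetic, but writing it out keeps me honest. A sketch with assumed per-sensor peak rates and an assumed utilisation target:

```python
# Sketch: worst-case uplink budgeting with explicit margin. Per-sensor peak
# rates and the utilisation target are assumptions for this example.
downlink_peaks_mbps = {
    "front_cameras":    2 * 200,  # each peaks near 200 Mbit/s in HDR mode
    "surround_cameras": 4 * 150,
    "radars":           4 * 20,
    "lidar":            300,
}
overhead_mbps = 50            # control, diagnostics and logging on the uplink
TARGET_UTILISATION = 0.7      # keep worst case well below saturation

worst_case_mbps = sum(downlink_peaks_mbps.values()) + overhead_mbps  # 1430
required_uplink_mbps = worst_case_mbps / TARGET_UTILISATION          # ~2043
# A single 1 Gbit/s uplink is already insufficient at the worst case;
# this points at dual 1G uplinks or a 2.5GBASE-T1 uplink.
```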

In many ADAS designs, this analysis quickly pushes me beyond a single 1 Gbit/s uplink. If a switch is aggregating several 1 Gbit/s camera ports and additional radars, I often prefer either dual 1 Gbit/s uplinks with well-defined flow placement or a higher-speed uplink such as 2.5GBASE-T1 on the path towards the ADAS domain controller. The TSN switch I select must therefore support the mix of downlink speeds I need and either multiple uplinks or a multi-gig uplink with the right automotive-grade PHY options.

Queue planning is just as important as raw bandwidth. Each physical port must have enough hardware queues to accommodate my traffic classes without collapsing them into one generic “high” and one generic “low” class. If my switch only offers two queues per port, it is difficult to give control traffic, sensor streams and logging distinct behaviours. With four or more queues, I can map brake and steering control into the highest-priority queue with time-window protection, assign camera and radar flows to shaped queues, and reserve a lower-priority but still guaranteed queue for diagnostics and logging.

Per-queue shaping and policing turn those queues into real tools. I look for the ability to configure minimum and maximum rates per queue, so that my control and safety queues have reserved capacity that cannot be stolen by camera bursts, while logging and OTA queues are capped to avoid creeping expansion. On some links I might allow camera queues to use unused capacity opportunistically, but I never want them to be able to push control or safety-critical flows into unpredictable delay.

Store-and-forward versus cut-through switching is another dimension I have to decide explicitly. Store-and-forward switching waits for the full frame and verifies its integrity before forwarding, which adds per-hop latency but keeps corrupted frames from propagating further into the network. Cut-through forwarding starts sending a frame as soon as the header is received, which reduces per-hop latency at the cost of potentially forwarding corrupted data. On short, high-quality ADAS domain links where every microsecond counts, cut-through can help my latency budget. On longer or noisier harness segments, or where diagnostics are more important than a few microseconds of latency, I may prefer store-and-forward.
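
The per-hop difference is straightforward to estimate. A sketch assuming a 1 Gbit/s link and equal internal processing delay in both modes (so it cancels and is ignored here):

```python
# Sketch: per-hop forwarding latency in the two switching modes, assuming a
# 1 Gbit/s link and identical internal processing delay (ignored here).
LINK_BPS = 1_000_000_000

def store_and_forward_us(frame_bytes):
    """The whole frame must arrive (and be checked) before forwarding."""
    return frame_bytes * 8 / LINK_BPS * 1e6

def cut_through_us(header_bytes=64):
    """Forwarding starts once the header has been received."""
    return header_bytes * 8 / LINK_BPS * 1e6

# A 1500-byte camera frame adds ~12 us per store-and-forward hop, versus
# ~0.5 us per cut-through hop; across several hops the difference compounds.
sf = store_and_forward_us(1500)  # 12.0 us
ct = cut_through_us()            # 0.512 us
```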

For that reason, I treat switching mode as a configuration requirement, not a footnote. In my RFQ and design notes, I specify where I expect store-and-forward behaviour and where cut-through is acceptable, and I ask whether the TSN switch allows per-port or per-class control. The IC that gives me this level of control makes it much easier to balance safety, diagnostics and latency systematically instead of accepting whatever default behaviour happens to ship with the device.

By the time I finish port, speed and queue planning, I have a concrete picture of what my TSN switch must look like: a precise downlink port mix, uplinks sized for worst-case and future headroom, enough queues per port to reflect my traffic classes, and explicit control over shaping, policing and switching mode. That set of requirements is far more actionable than a vague “TSN-capable Ethernet switch,” and it directly supports the ADAS safety and performance targets that I am responsible for.

Figure: Port mix, uplink sizing and queue mapping for an ADAS TSN switch.
Downlink ports for cameras, radar, LiDAR and logging feed TSN queues inside the switch, which then aggregate into a sized uplink with reserved capacity, per-class shaping and explicit headroom.

Time sync and timestamping in the switch

I only start worrying about time sync inside the TSN switch when I care about more than just moving packets. In an ADAS network, my cameras, radar and LiDAR are all watching a moving world, and the ADAS domain controller wants their data aligned to a single timebase. If the switch does not help preserve or refine that timebase, it quietly becomes another source of error between the grandmaster and each sensor. So when I select an Automotive Ethernet TSN switch, I treat time sync and timestamping as a first-class requirement, not a background detail.

The first question I ask is what role the switch is expected to play in the gPTP (IEEE 802.1AS) domain. In some designs, the switch stays out of the timing domain and simply forwards PTP messages transparently while a dedicated time-sync IC or the ADAS SoC itself runs the grandmaster and boundary clock logic. In others, the switch acts as a transparent clock, measuring and compensating for its own residence time so that multi-hop paths do not accumulate uncontrolled delay. In more advanced domains, the switch may host a boundary clock and participate directly in the 802.1AS topology. I do not need to decide every detail in this page, but I do need to state clearly which of these roles I expect the switch silicon to support.

Hardware timestamping is the next layer. I care about where the switch can apply timestamps and how precise those timestamps are. On a basic level I want ingress and egress timestamps on PTP event messages, at least in the microsecond range and preferably in the sub-microsecond or nanosecond range on critical links. I also want to know how those timestamps are exposed to the rest of the system: are they attached to descriptors, stored in per-port FIFOs, or made visible through dedicated registers that my PTP stack can read without racing the data path? Without clear answers, my time sync implementation will be guessing about what really happened in the switch.

Beyond pure PTP, I think about how timestamps flow into the sensor-fusion pipeline. If my cameras and radar send their data through the TSN switch into the ADAS SoC, the SoC needs either precise timestamps attached to the sensor streams or a tightly bounded delay model inside the switch so that software can reconstruct the timing. If I intend to trigger multiple cameras off a shared timebase, the switch has to forward the timing messages and schedule TSN windows in a way that keeps exposure and readout aligned. Otherwise, the apparent time alignment in my fusion algorithms will be an illusion.

In practice, I phrase these needs as requirements rather than as protocol names. I write down that the switch must preserve and, where applicable, correct time information for multi-hop ADAS sensor paths, provide hardware timestamps on PTP events at a given resolution, and make those timestamps available to the SoC without excessive software overhead. I also clarify whether I expect transparent or boundary clock behaviour and on which ports. The more explicit I am in my specification, the less likely I am to discover late in the project that my chosen switch cannot meet the fusion timing budget.

I do not try to solve the entire time-sync problem on this page. Detailed time-sync topology, grandmaster placement and error budgeting across PHYs, cables and ECUs are a separate exercise. I keep that work on its own Time Sync for Sensor Fusion page and let this section focus on what the TSN switch itself must support. That separation keeps my selection process clean: this page defines what I demand from the switch, and the time-sync page explains how I build a complete timing architecture around it.

Figure: Time sync roles and timestamping around an Automotive Ethernet TSN switch.
Time sync and timestamping around an Automotive Ethernet TSN switch, with a gPTP or 802.1AS grandmaster, camera and radar ECUs, and an ADAS SoC that relies on hardware timestamps for sensor fusion.

Redundancy, health monitoring and diagnostics

Redundancy and health monitoring are the parts of the TSN switch that I only appreciate fully when something goes wrong. A link that flaps intermittently, a camera ECU that starts flooding the network, or a harness that slowly degrades over time can all turn a clean lab demo into an intermittent field failure. If my switch cannot detect, localise and contain these problems, the whole ADAS domain becomes difficult to debug and painful to operate. So I treat redundancy, health monitoring and diagnostics as core selection criteria, not as optional extras.

I think about redundancy at three levels. The first is topology, where I decide whether I am building a single-switch star, a chain or ring of ECUs, or a dual-switch architecture with dual-homed sensors and uplinks. The second is protocol behaviour: which forms of link aggregation, redundancy and fast failover are actually supported and how they interact with TSN scheduling. The third is the switch’s own fault behaviour: what it does when a link goes bad, a port storms the network or a switch in a redundant pair fails. A device that looks attractive on a block diagram but ignores the second and third levels is a poor fit for a serious ADAS project.

At the protocol level, dual uplinks and dual-switch topologies are the most visible features. If my TSN switch has two uplinks toward an ADAS domain controller or central gateway, I need clear behaviour when one uplink fails. Does the switch advertise the failure quickly to the redundancy protocol? Does it drain and reroute flows within a bounded time? Similarly, in a dual-switch architecture, I must understand how the chosen protocols and the switch hardware interact when one device goes offline. The silicon does not have to solve redundancy by itself, but it must expose the right signals and controls so that my redundancy strategy actually works.

Per-port health monitoring is just as important. I rely on per-port counters for link up/down events, link flaps and a range of error conditions such as CRC errors, symbol errors and alignment errors. Those counters let me decide whether I am dealing with a marginal harness, a noisy environment, a faulty ECU or a misconfigured TSN schedule. Without them, every failure looks the same in the field: “network unreliable,” with no clear path to root cause. I also expect to see storm-control information so that I can detect and limit broadcast or multicast floods before they take down the entire domain.

Loop detection and storm control deserve special attention. During service or late changes, it is easy for a harness to be miswired and create an unexpected loop. In a pure Ethernet world, that can saturate links, overload queues and disrupt time-sensitive traffic. A TSN switch that can detect loops, limit storm traffic and, if necessary, shut down or rate-limit the offending port gives me a controlled failure instead of a chaotic one. When I write my requirements, I state explicitly that I need loop detection and storm control with thresholds and actions I can configure.

I also think about “safe defaults” and isolation behaviour. When a port or a connected ECU misbehaves, I do not want the switch to oscillate randomly between states. I want it to apply clear rules: count errors, raise warnings, and if thresholds are exceeded, isolate or downgrade that port in a predictable way. Blacklisting a chronically faulty node, reducing its speed or confining its traffic to a limited set of queues are all useful tools, as long as the switch gives me enough control to document and justify the behaviour in my safety and service documentation.
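
One way to make "safe defaults" concrete is a small, monotonic threshold policy per port. The thresholds here are placeholders that I would justify in the safety and service documentation:

```python
# Sketch: a deterministic per-port fault policy with escalating actions.
# Thresholds are placeholders, to be justified in the safety documentation.
def port_action(crc_errors_per_s, link_flaps_per_min):
    """Map observed per-port health counters to one predictable action."""
    if crc_errors_per_s > 1000 or link_flaps_per_min > 10:
        return "isolate"     # quarantine or shut down the port
    if crc_errors_per_s > 100 or link_flaps_per_min > 2:
        return "rate_limit"  # downgrade the port and raise an alarm
    if crc_errors_per_s > 0 or link_flaps_per_min > 0:
        return "warn"        # count and log, no traffic impact yet
    return "ok"
```

Because the mapping is a pure function of the counters, the same inputs always produce the same action, which is exactly the predictability I want to document.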

None of this matters if the information stays locked inside the switch. For redundancy and diagnostics to be useful, the ADAS domain controller and any logger or event data recorder must be able to see the relevant statistics and events. I plan how the switch exposes its counters and alarms through a management interface, whether that is a dedicated management Ethernet port, an internal host interface or a sideband control bus. Then I define how the ADAS software will poll or subscribe to those signals and how it will write them into logs so that field engineers can reproduce and understand failures after the fact.

In my specification, I translate all of this into concrete expectations. I ask for per-port counters and clear link-flap reporting, configurable storm control and loop detection, and a way to isolate misbehaving nodes without disturbing the rest of the domain. I require that the switch support the redundancy topologies I intend to deploy and that it expose its health and fault information cleanly to my ADAS domain controller and logging system. A TSN switch that meets these criteria gives me a network that not only behaves deterministically in normal operation but also fails and recovers in ways that I can explain, test and support over the lifetime of the vehicle.

Figure: Redundancy, health monitoring and diagnostics for an ADAS TSN switch.
Redundancy and health monitoring around dual TSN switches in an ADAS network, with dual-homed critical sensors, dual uplinks, per-port diagnostics and clear failover paths that feed the ADAS controller and logger.

Power, thermal and layout notes

When I select an Automotive Ethernet TSN switch for an ADAS domain, I try not to treat it as a simple “bigger Ethernet PHY.” At the board level it is a dense digital and mixed-signal device with multiple rails, SerDes blocks, internal PLLs and sometimes integrated packet processing engines. If I do not plan power, thermal and layout details early, I can easily end up with a switch that meets the functional specification on paper but fails to deliver deterministic latency or long-term reliability in the vehicle.

On the power side, I first map out the rails. A TSN switch typically has at least a core rail, one or more I/O rails, SerDes rails and PLL or reference rails. I want to know which rails can share a regulator, which demand their own low-noise supply and whether any rails have special sequencing requirements. The rail map goes into my power-tree planning and will later drive the selection of a safety PMIC or discrete regulators; the detailed PMIC design lives on my separate Safety PMIC for ADAS Compute page, but here I still record what the switch itself expects.

I also look at how typical power scales with port count, line rate and TSN feature usage. A bare switch with a few 100 Mbit/s ports enabled is not the same as a fully loaded device aggregating several 1 Gbit/s cameras, radar and LiDAR streams while running time-aware scheduling and per-stream policing. I ask for numbers or estimates that reflect the worst realistic configuration for my project, not just a “typical” low-activity setting. Those numbers feed directly into my thermal budget and my PMIC current planning, especially if the ADAS domain controller and TSN switch share parts of the power tree.
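That scaling question can be captured as a simple first-order estimate. The sketch below is a hypothetical model with illustrative per-port coefficients and a flat TSN-engine uplift; none of these numbers come from a datasheet, so they must be replaced with vendor figures for the actual worst-case configuration:

```python
# Hypothetical first-order power estimate for a TSN switch configuration.
# All coefficients are illustrative assumptions, not datasheet values.

def estimate_switch_power_w(ports_100m, ports_1g, uplinks_2g5,
                            tsn_overhead=0.15, base_w=0.8):
    """Rough worst-case power: base logic plus per-port costs, with a
    percentage uplift for time-aware scheduling and policing engines."""
    per_port_w = {"100M": 0.10, "1G": 0.35, "2.5G": 0.70}  # assumed values
    port_w = (ports_100m * per_port_w["100M"]
              + ports_1g * per_port_w["1G"]
              + uplinks_2g5 * per_port_w["2.5G"])
    return round((base_w + port_w) * (1.0 + tsn_overhead), 2)

# Worst realistic configuration for the project, not a "typical" idle case:
worst_case = estimate_switch_power_w(ports_100m=2, ports_1g=6, uplinks_2g5=1)
```

Even a crude model like this forces the conversation with the supplier toward "power at my planned port mix and TSN load" instead of a single typical number.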

Thermal design is the next step. Whether the package is QFN or BGA, I want to understand how much heat I am being asked to move into the PCB and eventually into the vehicle structure. For QFN devices, that usually means a generously sized exposed pad with a dense grid of thermal vias into a solid inner layer. For BGA, it means enough copper area in the core layers and, if needed, local copper floods on the outer layers to spread heat. During layout I reserve this area up front so it does not get eaten by late routing changes or nearby high-speed interfaces.

In higher-port-count or higher-speed designs, I also decide whether the TSN switch needs additional help, such as a small heatsink, a thermal interface pad to a shield or chassis, or enforced clearance from other hot components on the same board. I have learned that it is easier to plan mounting points and copper keep-outs early than to retrofit cooling late in the design. Whatever solution I choose must survive the vehicle’s ambient range while keeping the switch inside its specified junction temperature, because excessive temperature drift can translate into timing variation and reduce margins for TSN scheduling and timestamp accuracy.
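The junction-temperature check behind these decisions is a one-line steady-state estimate, Tj = Ta + P × θJA. The example values below (ambient, power, effective θJA with thermal vias) are assumptions for illustration:

```python
def junction_temp_c(ambient_c, power_w, theta_ja_c_per_w):
    """First-order steady-state estimate: Tj = Ta + P * theta_JA."""
    return ambient_c + power_w * theta_ja_c_per_w

def thermal_margin_c(tj_max_c, ambient_c, power_w, theta_ja_c_per_w):
    """Margin left against the specified maximum junction temperature."""
    return tj_max_c - junction_temp_c(ambient_c, power_w, theta_ja_c_per_w)

# Assumed example: 4 W device, effective theta_JA of 8 degC/W with a good
# via field, 85 degC ambient, 125 degC junction limit:
margin = thermal_margin_c(tj_max_c=125, ambient_c=85, power_w=4.0,
                          theta_ja_c_per_w=8.0)
```

If the margin comes out small or negative at the hot ambient corner, that is the trigger to plan a heatsink, chassis contact or extra copper before layout starts.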

On the layout side, my first focus is on the Ethernet pairs themselves. Differential pairs to the magnetics or direct-connect PHY interfaces need controlled impedance, tight pair matching and limited skew between pairs that carry related signals. Within reason, I keep critical TSN paths as short and symmetrical as the mechanical constraints allow, both to control signal integrity and to avoid unnecessary variation in link delay. Around the magnetics and common-mode chokes I reserve clean reference planes and avoid routing noisy digital signals that might couple into the Ethernet lines and raise my EMI or jitter.

I also pay attention to how the TSN switch sits relative to other high-speed blocks such as SerDes lanes, DDR interfaces and the ADAS SoC itself. I try to keep noisy DDR fly-by routing and large return-current loops away from the switch’s PLL and SerDes supply regions, and I avoid routing long aggressor traces under or immediately adjacent to the most sensitive clock and reference pins. If the board density forces me to share layers, I rely on solid reference planes and careful layer stack-up to keep coupling under control. The goal is not perfection, but a layout where TSN timing and jitter are dominated by the Ethernet links themselves, not by self-inflicted board noise.

Finally, I include a few EMC and transient notes in my selection criteria. When the ADAS domain powers up or transitions between modes, the TSN switch can draw significant inrush current as its internal logic and SerDes blocks start. I want to know whether the device has any built-in soft-start behaviour, whether it expects a particular sequence between its rails and how it reacts to short supply dips or line transients. Those points feed back into my PMIC and protection device design and help me avoid scenarios where a harmless transient becomes a domain-wide reset or a long recovery event. By capturing these power, thermal and layout notes alongside the functional requirements, I make it much more likely that the TSN switch I select will behave as intended in a real ADAS vehicle, not just on a lab bench.

Figure: Power, thermal and layout considerations around an Automotive Ethernet TSN switch — the switch IC (multi-rail, SerDes, PLL, Ethernet ports) at the centre, surrounded by notes on power rails and the safety PMIC, the thermal path (pad and vias, copper spreading, optional heatsink or chassis contact), layout and isolation (Ethernet pair matching, clean reference planes, distance from DDR and other aggressors), EMC and transients (inrush, dips, PMIC coordination), and the resulting impact on TSN behaviour.
Power, thermal and layout considerations around an Automotive Ethernet TSN switch, showing how rails, PMIC, thermal paths, layout isolation and EMC planning all support deterministic behaviour in an ADAS design.

IC selection mapping

By the time I reach the IC selection step, I already have a clear picture of what my Automotive Ethernet TSN switch has to do in the ADAS domain: how many cameras and radars it aggregates, which flows need deterministic latency, what redundancy I expect and how tight my power and thermal budgets are. At this point I stop thinking about a single “perfect” device and instead build a small family of acceptable IC series types. That way I can map different vendor product trees into the same set of expectations without locking myself into one brand too early.

The first axis in that mapping is port count. A small camera-only leaf switch for a limited L2 feature set does not need the same number of ports as a full ADAS domain switch that aggregates multiple forward and surround cameras plus radar and LiDAR. I roughly separate devices into classes such as low-port-count switches for three to five sensors, mid-range parts for mixed camera and radar clusters and high-port-count devices for domain aggregation with dual uplinks. When I write this into my spreadsheet, I do not write arbitrary numbers; I tie each port count range to a clear role in the vehicle so the mapping stays meaningful.

Maximum line rate is the next axis. Some IC families are optimised for 100 Mbit/s only, which might still work for legacy radar or small control domains but will not carry modern high-resolution video. Others are pure 1 Gbit/s devices that handle new camera and LiDAR ECUs but offer no simple way to connect older 100 Mbit/s nodes. In many ADAS designs I need a mixed portfolio: some ports at 100 Mbit/s for legacy devices, others at 1 Gbit/s for newer sensors, and at least one uplink that can run at a higher rate such as 2.5 Gbit/s when I aggregate multiple streams. I capture all of this as a port-mix profile for each IC family rather than just writing “1G capable.”

TSN feature completeness is where the series begin to separate. For simple ADAS functions with modest traffic, a basic TSN implementation with prioritised queues and coarse shaping might be enough. For more demanding L2 and L2+ systems I start to require time-aware scheduling, per-class shaping and tighter control over how control, sensor and logging traffic share the fabric. At the top end, for dense sensor domains and more ambitious automation levels, I look for devices with deep per-stream policing, frame preemption support and rich hardware hooks for time sync and timestamping. In my own mapping I label these as “basic,” “medium” and “full” TSN feature levels so I can group vendor series into the right buckets.

Safety and diagnostics depth is another selection dimension that does not show up in a simple port-count filter. Some switch families provide only basic per-port error counters and simple link up/down information. That may be enough for a small, non-safety-critical subsystem, but it is not sufficient for a central ADAS domain where I want to monitor link health, detect flapping ports, distinguish between symbol errors and CRC errors and enforce storm control policies. Other families offer deeper diagnostics, better logging granularity and clearer hooks to feed events into an ADAS domain controller and logger. I capture that difference as a “diagnostic depth” level in my mapping so I know which series are suitable for which roles.

Packaging, temperature range and qualification form the last group of technical axes. I note whether a given IC family offers multiple packages with different sizes or ball maps, which ambient and junction temperature ranges are officially supported and what AEC-Q100 grade the devices target. I prefer series that offer several package and temperature variants while keeping the same fundamental architecture and programming model, because that allows me to reuse software and a large part of the validation work across platforms. In the spreadsheet these become simple columns, but they carry a lot of weight when I think about long-term platform planning.

To keep all of this organised, I build an IC selection matrix that does not mention any vendor names. Each row represents an abstract IC family type that I am willing to use in my ADAS networks and each column captures one dimension of capability. Later, when I evaluate specific vendor products, I map them into one or more of these abstract rows. That way the technical discussion with the rest of the team can stay vendor-neutral, and we only bring brand choices into the conversation when we already know which class of device we are looking for.

A simple version of that matrix can be captured in an Excel sheet with fields like the ones below. I use English field names so they can be copied directly into tools or shared with suppliers:

  • IC_family_alias – my internal name for this abstract series type.
  • Role_in_ADAS_domain – camera leaf switch, radar cluster switch, full ADAS domain switch, and so on.
  • Total_port_count – total Ethernet ports on the device.
  • Downlink_100M_ports, Downlink_1G_ports, Uplink_1G_ports, Uplink_multigig_ports.
  • TSN_feature_level – Basic / Medium / Full, based on scheduling, shaping and policing depth.
  • Supports_time_aware_scheduling, Supports_per_stream_policing, Supports_frame_preemption – simple yes/no flags.
  • Diag_depth_level – Basic / Deep / Safety_ready, reflecting how rich the counters and logs are.
  • Per_port_error_counters, Link_flap_detection, Storm_control_loop_detection – availability of key health features.
  • Event_logging_granularity – low, medium or high, depending on how detailed the events can be.
  • Core_rails_count, IO_rails_count, SerDes_PLL_rails_count – for power tree planning.
  • Typical_power_at_target_config – power estimate at my planned port mix and TSN usage.
  • Package_type, Package_size_mm, Temp_range, AEC_Q100_grade.
  • Target_ADAS_level_fit – which automation levels this family is comfortable with.
  • Topology_fit – star, chain, ring, dual-switch or mixed topologies it suits best.
  • Notes_for_this_project – free text where I record trade-offs or platform-specific comments.
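One row of that matrix can be modelled directly in code so the same filter logic is reused across projects. The sketch below is a minimal, vendor-neutral version covering a subset of the fields above; the example family alias, level names and thresholds are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class SwitchFamily:
    """One abstract, vendor-neutral row of the IC selection matrix.
    Field names mirror a subset of the spreadsheet columns above."""
    ic_family_alias: str
    role_in_adas_domain: str
    total_port_count: int
    downlink_100m_ports: int
    downlink_1g_ports: int
    uplink_multigig_ports: int
    tsn_feature_level: str   # "Basic" / "Medium" / "Full"
    diag_depth_level: str    # "Basic" / "Deep" / "Safety_ready"
    aec_q100_grade: int

def fits_role(fam, min_1g_downlinks, tsn_level, diag_level):
    """Filter candidate families against a role's minimum requirements;
    ordered level lists make 'at least Medium' style checks trivial."""
    tsn_levels = ["Basic", "Medium", "Full"]
    diag_levels = ["Basic", "Deep", "Safety_ready"]
    return (fam.downlink_1g_ports >= min_1g_downlinks
            and tsn_levels.index(fam.tsn_feature_level)
                >= tsn_levels.index(tsn_level)
            and diag_levels.index(fam.diag_depth_level)
                >= diag_levels.index(diag_level))

# Hypothetical abstract family for the full ADAS domain switch role:
domain_switch = SwitchFamily("DOMAIN_AGG_A", "full ADAS domain switch",
                             10, 2, 6, 2, "Full", "Safety_ready", 2)
```

Later, concrete vendor series are mapped into rows like `domain_switch`, and the same `fits_role` check documents why a family was or was not acceptable for a given role.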

When I keep this mapping vendor-neutral, I can revisit it across projects and reuse the same structure for new platforms. Later, on BOM and brand mapping pages, I simply link concrete IC series into these abstract categories so that procurement and design reviews can see both the technical fit and the sourcing options without having to rebuild the logic from scratch.

BOM & procurement notes

When I send out a request for quotation, I have learned that writing “Automotive TSN switch” as a single line item is not enough. Suppliers will quite reasonably respond with a wide range of devices, many of which are optimised for very different markets: simple industrial nodes, in-vehicle infotainment or small gateways that do not need the determinism and diagnostics depth of an ADAS domain. To avoid weeks of back-and-forth, I now treat the BOM and RFQ fields for the TSN switch as part of the design work, not as an afterthought.

The first group of fields I fill in describes the switching capability and TSN features I actually need. Instead of just quoting a generic bandwidth number, I specify the required switching capacity or aggregate bandwidth in terms of “non-blocking support for N one-gigabit sensor ports plus M additional 100 Mbit/s ports and at least one high-speed uplink.” I also state which TSN feature set I expect. If the project requires time-aware scheduling, per-class shaping and per-stream policing to keep camera, radar, control and logging traffic under control, I say so explicitly. That way the supplier knows that a very basic TSN implementation will not be sufficient.
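The non-blocking requirement itself is simple arithmetic: every port counted at full line rate must fit within the device's aggregate switching capacity. A minimal sketch, with an assumed example port mix:

```python
def required_capacity_gbps(n_1g_ports, m_100m_ports, uplink_gbps):
    """Aggregate bandwidth needed for non-blocking forwarding across the
    planned port mix, counting each port at full line rate."""
    return n_1g_ports * 1.0 + m_100m_ports * 0.1 + uplink_gbps

def is_non_blocking(datasheet_capacity_gbps, n_1g, m_100m, uplink_gbps):
    """True if the quoted switching capacity covers the full port mix."""
    return datasheet_capacity_gbps >= required_capacity_gbps(
        n_1g, m_100m, uplink_gbps)

# Assumed example: 6 x 1G sensor ports + 4 x 100M legacy ports + one
# 2.5G uplink needs roughly 8.9 Gbit/s of non-blocking capacity.
need = required_capacity_gbps(6, 4, 2.5)
```

Writing the RFQ line as "non-blocking for this exact port mix" rather than quoting the number alone lets the supplier check the calculation against their own architecture.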

The next group of fields captures the port mix and speed. Here I list how many downlink ports I need at 100 Mbit/s, how many at 1 Gbit/s and what kind of uplinks I expect, including whether I prefer multi-gig uplinks for aggregation. I also state what redundancy mechanisms I need: dual uplinks to the ADAS domain controller or gateway, dual-switch architectures for critical sensors, and any failover behaviour that must be supported within a bounded time. That information turns vague requirements like “redundancy needed” into actionable design constraints suppliers can respond to.

Diagnostics and health monitoring need their own line in the RFQ as well. I explicitly ask whether the device provides per-port error counters, link flap detection, storm control and loop detection, and how these statistics and events are exposed to the host. If I want to integrate the TSN switch into a health monitoring and logging framework for the ADAS domain, I explain that I need more than a simple “link up/down” view. I want enough detail to distinguish marginal harnesses from faulty ECUs and to correlate field failures with network conditions.

Functional safety expectations also need to be captured, even if the switch is not itself a safety element in the strictest sense. I describe the role the device plays in the overall safety architecture, such as carrying safety-related sensor traffic or providing redundant paths for critical camera feeds. I then outline the level of diagnostic coverage I need and whether I expect safety-oriented documentation or design features. The detailed safety decomposition belongs elsewhere, but this high-level statement in the BOM tells suppliers whether they are dealing with a safety-relevant network or a purely comfort-oriented subsystem.

I also include project and lifecycle fields that are important for procurement but easy to forget during early design. I state the target SOP year, the expected project lifetime and any second-source strategy I have in mind. If the platform is expected to run for a decade or more, or if I know I will need multiple sourcing options, I say so clearly. I include my preferred package types, temperature range and AEC-Q100 grade so that suppliers do not waste time proposing devices that cannot survive the environment or fit the board.

All of these points can be translated into structured fields in my BOM and RFQ templates. A typical TSN switch section might include items like:

  • Switching_capacity / Aggregate_bandwidth – non-blocking requirement for the planned port mix.
  • TSN_feature_set – time-aware scheduling, shaping, policing and any other required TSN tools.
  • Port_mix_and_speed – counts of 100 Mbit/s and 1 Gbit/s downlinks and high-speed uplinks.
  • Redundancy_requirements – dual uplinks, dual switches, expected failover behaviour.
  • Diagnostics_requirements – per-port counters, link flap and storm detection, event export.
  • Functional_safety_expectations – how safety-related the carried traffic is and what diagnostic depth is needed.
  • Time_sync_dependency – whether the switch must support specific time sync roles or hardware timestamps.
  • Power_and_thermal_constraints – power budget at the intended configuration and available cooling options.
  • Temp_range / AEC_Q100_grade – environmental and qualification expectations.
  • Target_SOP_year and Planned_lifetime – project timing and duration.
  • Second_source_strategy – whether multiple compatible series are needed.

The most important lesson for me is to avoid hiding the determinism requirements behind one vague label: “TSN switch.” In my BOM and RFQ I now describe which traffic classes I have—camera, radar, LiDAR, control, diagnostics, logging, OTA—and which of them must be delivered with deterministic latency and bandwidth guarantees. That single change makes it much easier for suppliers to propose appropriate devices and for my own team to judge whether a suggested IC truly fits the ADAS domain we are building, rather than merely matching a generic buzzword.


FAQs — Automotive Ethernet TSN switch/router

When I plan an Automotive Ethernet TSN switch for an ADAS domain, I keep coming back to the same practical questions. I use this FAQ as a checklist when I decide whether I really need TSN, size links and uplinks, plan redundancy and health monitoring, and talk with suppliers about concrete IC options for my network.

When do I really need a TSN-capable switch instead of a basic automotive Ethernet switch in my ADAS design?
I start to treat a TSN-capable switch as mandatory when my ADAS network is carrying mixed camera, radar and control traffic and I need deterministic latency instead of best-effort Ethernet. If I only have a few low-bandwidth sensors and no strict timing constraints, a basic automotive switch can still be enough.
What TSN features are must-have for Level 2/2+ versus Level 3 ADAS architectures?
At Level 2 and 2+ I focus on prioritised queues, time-aware scheduling and basic shaping so camera and radar flows cannot starve safety-related control messages. For Level 3, I usually demand deeper per-stream policing, tighter bandwidth guarantees, frame preemption support and more robust time-sync hooks because the system’s safety and functional split is much more demanding.
How much headroom should I leave on each uplink port when sizing a TSN switch for multi-camera ADAS?
For multi-camera ADAS I rarely design for average bandwidth. I estimate the worst-case throughput for all active sensors on an uplink, then add margin for diagnostics, logging and future options. As a rule of thumb I like to keep at least twenty to thirty percent headroom per uplink once TSN shaping has been planned.
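That headroom rule is easy to encode as a design-time check. The flow rates and uplink speed below are assumed example numbers, not recommendations:

```python
def uplink_utilisation(flows_mbps, uplink_mbps):
    """Worst-case utilisation of one uplink when all listed flows
    (cameras, radar, logging, diagnostics) are active simultaneously."""
    return sum(flows_mbps) / uplink_mbps

def headroom_ok(flows_mbps, uplink_mbps, min_headroom=0.25):
    """Rule of thumb from the text: keep 20-30 % of the uplink free
    after TSN shaping has been planned (25 % assumed here)."""
    return uplink_utilisation(flows_mbps, uplink_mbps) <= 1.0 - min_headroom

# Assumed example: four cameras at 300 Mbit/s plus 200 Mbit/s of
# radar and logging traffic sharing a 2.5 Gbit/s uplink:
ok = headroom_ok([300, 300, 300, 300, 200], 2500)
```

The same check re-run with the planned end-of-life sensor set shows whether the uplink survives future options, not just the launch configuration.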
How do I map camera, radar, LiDAR, control and logging traffic into TSN classes and queues without starving diagnostics and OTA?
I start by listing every traffic type in the domain: forward cameras, surround cameras, radar, LiDAR, control loops, diagnostics, logging and OTA. Then I map each type into a TSN class with minimum bandwidth and latency expectations and keep a separate, protected budget for diagnostics and OTA so fault handling and software updates never get starved.
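That listing-and-mapping step can be written down as a small table plus a sanity check that the guaranteed minimums still fit the link. The class names, priorities and budgets below are assumptions for this sketch, not values from any standard TSN profile:

```python
# Illustrative mapping of ADAS traffic types onto TSN classes; priority
# and bandwidth/latency budgets are assumed example values only.
TRAFFIC_CLASSES = {
    "control":     {"priority": 6, "min_bw_mbps": 20,   "max_latency_ms": 0.5},
    "camera":      {"priority": 5, "min_bw_mbps": 1200, "max_latency_ms": 5.0},
    "radar_lidar": {"priority": 5, "min_bw_mbps": 400,  "max_latency_ms": 5.0},
    "logging":     {"priority": 2, "min_bw_mbps": 100,  "max_latency_ms": 50.0},
    # Separate, protected budget so fault handling and software updates
    # never get starved by sensor bursts:
    "diag_ota":    {"priority": 1, "min_bw_mbps": 50,   "max_latency_ms": 100.0},
}

def reserved_budget_fits(classes, link_mbps):
    """Check that the sum of guaranteed minimum bandwidths, including the
    protected diagnostics/OTA budget, still fits on the link."""
    return sum(c["min_bw_mbps"] for c in classes.values()) <= link_mbps
```

If the reserved minimums no longer fit, something has to give explicitly, in the table, rather than implicitly in a queue at runtime.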
How do gPTP and TSN scheduling interact inside the switch, and when do I need a dedicated time-sync design?
Time sync and TSN scheduling solve different parts of the same problem. gPTP or 802.1AS gives my network a common timebase and accurate timestamps, while TSN scheduling controls when different classes of traffic are allowed onto each link. When my timing budget is tight I design the time-sync architecture explicitly instead of treating it as an afterthought.
How do I evaluate whether a switch’s cut-through mode is safe enough for my safety goals?
Cut-through switching reduces latency by forwarding frames before the full packet is received, but it can also propagate corrupted frames with fewer checks. I only use it on carefully analysed paths where microseconds really matter and the remaining error detection is still sufficient for my safety case. Otherwise I prefer store-and-forward with well understood delay bounds.
How do I plan redundancy between dual-homing, ring topologies and dual-switch architectures, and what does my TSN switch IC need to support in each case?
I pick a redundancy pattern by matching it to my topology and safety needs. Dual-homing gives critical sensors two switch paths, rings give me alternative routes with less cabling and dual-switch architectures can survive the loss of a whole device. Whatever I choose, I check that the switch supports fast link detection and clean failover behaviour.
Which diagnostics counters and health metrics do I need my TSN switch to expose for field debugging and fleet monitoring?
For field debugging and fleet monitoring I need more than a simple link status bit. I rely on per-port error counters, symbol and CRC error counts, link flap statistics, storm and loop detection, and clear timestamps on significant events. With those metrics I can distinguish bad harnesses, noisy environments and failing ECUs instead of guessing.
How should I integrate a TSN switch’s health and fault information into my ADAS domain controller and event logger workflows?
I treat the switch’s health information as another sensor in my ADAS domain. I plan how the ADAS controller polls counters or subscribes to events, what gets forwarded to the logger and how long I keep that history. When I do this early, I can answer fleet questions about intermittent issues with data instead of speculation.
What are the main power, thermal and layout traps when I place a high-port-count TSN switch close to my ADAS compute module?
Placing a high-port-count TSN switch next to ADAS compute is convenient for routing but dangerous if I ignore power, thermal and noise. Shared rails can inject jitter, hot spots can push the switch out of its timing corner and DDR routing can couple into Ethernet paths. I reserve area, copper and isolation rules before layout starts.
How do I translate my ADAS TSN requirements into an IC family mapping so I am not locked into a single part number?
Instead of hunting for a single perfect part number, I describe families of acceptable devices using the same axes I used in the design: port count, port mix, TSN feature level, diagnostics depth, power, thermal limits and automotive qualification. Then I map candidate series into those buckets so I can swap devices without rewriting the whole architecture.
How should I write my BOM and RFQ so suppliers propose the right TSN switch instead of any generic TSN-capable device?
In my BOM and RFQ I describe the ADAS role, traffic types and which flows require deterministic latency and bandwidth, not just that I want a TSN-capable switch. I specify port mix, switching capacity, TSN features, redundancy, diagnostics, safety expectations and project lifetime. That level of detail helps suppliers propose parts that genuinely fit my network.