Robot Controller and Motion Controller Card Design
This page explains how the robot controller or motion card serves as the real-time computation core in an industrial robot cell. It manages trajectory planning, TSN/PTP timing, safety monitoring, and local power rails — while its servo drive, encoder AFE, cabinet power and safety PLC counterparts are detailed in the sibling subpages. Each section below dives into the decision points engineers face when selecting architecture, synchronization methods, data throughput, and safety hooks.
Role of the Robot Controller / Motion Card in an Industrial Cell
The controller card acts as the real-time “brain” of a robot cell. Upstream it connects to line-level PLCs, cell gateways or cloud/MES scheduling platforms through EtherCAT, PROFINET or TSN networks. Downstream it coordinates multi-axis servo drives, encoder feedback modules, force/torque sensors, machine vision interfaces, remote I/O and teach pendants. Its key responsibilities center on loop timing (1–4 kHz or higher), synchronization accuracy in the sub-microsecond range, and deterministic decision-making under safety constraints.
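To make the loop-timing and synchronization figures concrete, here is a minimal budgeting sketch. The numbers (4 kHz loop, 1 µs sync error) are illustrative assumptions, not values from a specific design:

```python
# Rough cycle-budget sketch: how much of the control period a given
# synchronization error consumes. All numbers are illustrative.
def cycle_budget_us(loop_rate_hz: float, sync_error_ns: float) -> dict:
    """Return the loop period and the fraction of it eaten by sync error."""
    period_us = 1e6 / loop_rate_hz
    sync_us = sync_error_ns / 1e3
    return {
        "period_us": period_us,
        "sync_error_us": sync_us,
        "sync_fraction_pct": 100.0 * sync_us / period_us,
    }

# A 4 kHz loop gives a 250 µs window; 1 µs of sync error is 0.4 % of it.
print(cycle_budget_us(4000, 1000))
```

The point of the exercise: at 1 kHz a microsecond of skew is noise, but as loop rates climb toward 4 kHz and beyond, the same skew becomes a visible slice of the execution window.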
Detailed power-stage design, shunt AFEs, NTC sampling, gate-drive protection and thermal management are treated in the multi-axis servo drive subpage. Likewise, advanced encoder front-ends or force/torque sensing are covered in the feedback subpages. Here, we focus on the real-time computation, communication and safety backbone — how the controller manages MCU/SoC resources, schedules tasks with TSN/PTP timing, maintains watchdogs and ensures each storage and power rail is monitored.
MCU, SoC and FPGA Partitioning for Multi-Axis Robot Control
Selecting the right architecture depends on axis count, minimum control cycle, interpolation complexity and the need for onboard vision or AI support. Simpler systems often rely on a single safety MCU plus a small FPGA, suitable for basic pick-and-place robots at 1 kHz cycles. In contrast, higher-end or collaborative robots may combine an application SoC with a real-time MCU or dual-core SoC to separate scheduling from communication and diagnostics.
For large payload or high-speed robots with advanced spline or dynamic path planning, a larger FPGA or SoC-FPGA combination becomes valuable. PWM generation, encoder capture, custom protocols and ultra-fast protection channels typically reside in the FPGA, while trajectory planning, inverse kinematics, network stacks and UI services fit better in the MCU/SoC domain. Detailed FOC algorithms are handled in the multi-axis servo drive subpage, allowing this controller topic to stay focused on the “brain” level tasks.
Real-Time Scheduling, PTP/TSN Timing and Network Integration
The controller card operates as a time-driven computing node. Its local oscillator and PLL maintain the basic real-time loop, while a TSN-based PTP master supplies the unified plant-wide time reference. In standalone robots, the local clock is sufficient. However, collaborative or line-level robots require sub-microsecond alignment to prevent trajectory jitter and unstable torque commands.
The control loop typically follows a fixed execution window: feedback → interpolation → torque/position calculation → command output. Timing windows from TSN switches (based on 802.1AS/Qbv/Qbu) dictate when data can be transmitted or received, meaning that the loop must align with the network’s schedule. The controller therefore works in both computing and network domains, becoming a deterministic node rather than a free-running MCU.
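The fixed execution window described above can be sketched as a phase table checked against an 802.1Qbv-style gate window. The phase offsets and gate times below are assumptions for illustration, not a real switch schedule:

```python
# Sketch of aligning a fixed control cycle to a TSN transmit window
# (802.1Qbv-style gating). All offsets are illustrative assumptions.

CYCLE_US = 250.0            # 4 kHz loop period
PHASES = [                  # (name, start offset in µs, duration in µs)
    ("feedback_capture",   0.0,  40.0),
    ("interpolation",     40.0,  60.0),
    ("torque_position",  100.0,  80.0),
    ("command_output",   180.0,  30.0),
]
TX_GATE_OPEN_US = 210.0     # switch opens the motion-traffic gate here
TX_GATE_CLOSE_US = 240.0

def tx_fits_gate(phases, gate_open, gate_close):
    """Command output must finish before the gate opens, so the frame
    is ready to transmit inside the gate window."""
    _name, start, dur = phases[-1]
    ready = start + dur
    return ready <= gate_open and gate_open < gate_close

print(tx_fits_gate(PHASES, TX_GATE_OPEN_US, TX_GATE_CLOSE_US))
```

In a real design this check runs at schedule-configuration time: if the computation phases cannot finish before the gate opens, either the loop rate or the Qbv schedule has to move.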
Physical-layer TSN configuration, PHY selection, switch topology and fieldbus migration strategies are developed in the Industrial Ethernet Switch with TSN and Fieldbus Transceivers subpages. Here we focus solely on how timing influences the control loop and architectural planning on the motion card itself.
Functional Safety and Watchdog Strategy on the Controller Card
While safety PLCs and servo drives provide certified safety outputs such as STO, the controller card must maintain internal self-monitoring. A lockstep safety MCU, ECC, voltage and clock monitors, and dual watchdogs help ensure that the real-time loop does not continue running under a fault. The controller’s role is to diagnose, time-stamp, and route safety events—rather than execute every safety function by itself.
Safety I/O pins connect to emergency-stop signals, safety relays and STO enable lines, with the controller providing heartbeat or plausibility checks. External safety PLCs handle logic arbitration and system-level resets, typically via PROFIsafe or FSoE. Meanwhile, secure diagnostic channels such as DoIP, HTTPS or TLS are routed to the robot cell gateway. This division of tasks lets the controller concentrate on real-time computation, while still contributing to an ASIL/SIL-capable architecture.
Standard calculation methods and certification procedures are handled in the separate subpage Safety PLC / Safety Controller. Here we focus only on which safety hooks the controller must expose for diagnostic, timing and computational support within the hierarchy.
Memory, Storage & Power-Rail Planning for the Controller Card
A robot controller must balance real-time performance, power integrity and logging capability. This section explains how to structure memory layers and sequence power rails to support multi-axis control, diagnostics and firmware redundancy.
Memory & Storage Layers
The controller card typically separates firmware, runtime data and high-speed buffer into three storage levels:
- Firmware Storage — internal Flash or external NOR for boot and update redundancy.
- Runtime Data & Logs — eMMC / SD / SPI-NAND for diagnostics and trace logs.
- High-Speed Buffer — DDR3L / DDR4 / LPDDR4(X) used by SoC or FPGA for interpolation and kinematics.
Storage size can be estimated with a formula: axes × trajectory buffer + diagnostic logs + event trace + firmware images. Typical values: 64 MB to 256 MB for a 6–8 axis controller.
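The formula above can be instantiated directly. The per-block sizes here are assumed example values chosen to land inside the stated 64–256 MB range:

```python
# Direct instantiation of: axes × trajectory buffer + logs + event trace
# + firmware images. All sizes are assumed example values.
def storage_mb(axes, traj_buf_mb_per_axis, logs_mb, trace_mb,
               fw_image_mb, fw_images=2):
    """Estimate total storage in MB; fw_images=2 keeps one backup image."""
    return (axes * traj_buf_mb_per_axis
            + logs_mb + trace_mb
            + fw_images * fw_image_mb)

# 6-axis example: 6*8 + 32 + 16 + 2*16 = 128 MB.
print(storage_mb(axes=6, traj_buf_mb_per_axis=8,
                 logs_mb=32, trace_mb=16, fw_image_mb=16))
```

Keeping the estimate parametric makes it easy to re-run when the axis count or log-retention policy changes during the project.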
Power-Rail Requirements
Power for the controller card is built on a sequenced multi-rail architecture. Only the high-level rail strategy is covered here — detailed surge protection or EMI filtering belongs to the 24 V Industrial Front-End PSU and Cabinet EMC pages.
- 24 V cabinet input → local DC-DC / PMIC.
- Sequenced rails for: SoC core + DDR (1.1 V / 1.35 V).
- Monitored rails for FPGA + high-speed PHYs.
- Common rails (3.3 V / 5 V) for IO, AFEs and sensors.
Typical requirement: “tightly sequenced rails for SoC and DDR, plus monitored rails for FPGA and PHY.”
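One way to make the sequencing requirement testable is to check power-good assertion times against the intended bring-up order. The rail names and timestamps below are assumptions for illustration:

```python
# Sketch of verifying bring-up order from power-good timestamps.
# Rail names and timings are illustrative assumptions.
RAIL_ORDER = ["soc_core", "ddr", "fpga", "phy", "io_3v3"]

def sequence_ok(pgood_times_ms: dict) -> bool:
    """Each rail's power-good must assert no earlier than its predecessor's."""
    times = [pgood_times_ms[r] for r in RAIL_ORDER]
    return all(a <= b for a, b in zip(times, times[1:]))

# Correct bring-up: core first, I/O last.
print(sequence_ok({"soc_core": 0.0, "ddr": 1.2, "fpga": 2.0,
                   "phy": 2.5, "io_3v3": 3.0}))
```

The same check, run against timestamps captured on the bench, turns “tightly sequenced rails” from a phrase in a requirement into a pass/fail criterion.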
Interfaces to Drives, Feedback Modules, HMIs and Gateways
The controller card sits at the center of a dense interface map. It must provide deterministic links to servo drives and feedback modules, while also exposing uplinks to HMIs and robot cell gateways. This section focuses on which ports the card should reserve and how they group into downstream and upstream connections, without diving into the analog or AFE details that are handled in sibling pages.
Downstream Interfaces to Drives and Feedback Modules
On the drive side, the controller typically exposes one or more high-performance industrial Ethernet ports such as EtherCAT, PROFINET or SERCOS, or proprietary LVDS-style links for tightly coupled servo stages. These ports carry command and status frames at the control-loop rate and must be budgeted for real-time bandwidth and latency. The physical-layer design and TSN feature set are covered in the dedicated network subpages.
For feedback, the controller card aggregates master ports for EnDat/BiSS encoders, resolver excite-and-sense channels via motor feedback AFEs, SPI links for six-axis force/torque modules, and LVDS or CSI-like links for laser profilers and vision modules. From the card’s perspective, the design work is to reserve the right mix of ports, clock domains and interrupt routes; the analog front-end, EMC and excitation circuitry live in the Absolute Encoder Interface, Motor Feedback AFE and 6-Axis Force/Torque Sensor Module pages.
Upstream Interfaces to HMIs and Robot Cell Gateways
Upstream, the controller card connects to teach pendants and HMIs using Ethernet, USB and, in some cases, display links such as LVDS or embedded DisplayPort. These ports must support configuration, jogging, recipe loading and diagnostic viewing, but they do not need the same ultra-low latency as the servo control links.
A second class of uplink targets the robot cell gateway or TSN backbone. Here, the controller exposes time-aware Ethernet ports that integrate into a TSN or fieldbus network and route diagnostics to higher-level PLCs, gateways and cloud services. The controller page therefore focuses on which logical ports and bandwidth classes to provision, while the physical, protocol and security details move to the Industrial Ethernet Switch with TSN, Fieldbus Transceivers and Robot Cell Gateway subpages.
IC and Module Selection Checklist (MCU/SoC, FPGA, Power, Timing)
This section turns the previous architecture decisions into a sourcing-oriented checklist. Instead of listing specific part numbers, it highlights the parameters you should confirm for each device class before sending RFQs to suppliers or comparing vendor reference designs for the robot controller card.
MCU / SoC Selection Checklist
For the main MCU or SoC, selection should reflect axis count, control-loop rate, communication load and safety level:
- Cores, MHz, floating-point support, TCM and cache size.
- Real-time peripherals: timers, PWM, capture units, DMA and interrupt response.
- Industrial Ethernet capability: number of EtherCAT / TSN ports and time-stamping units.
- Safety features: lockstep options, ECC coverage, diagnostic registers and self-test.
- External interfaces: QSPI NOR, eMMC / SD, DDR type and lane count.
- Industrial grade: operating temperature range, supply tolerance and lifecycle status.
FPGA / CPLD Selection Checklist
For the FPGA or CPLD that offloads encoder capture, custom links or safety logic:
- Logic cells / LUTs and BRAM for interpolation, buffering and safety shadows.
- High-speed I/O: number of LVDS / SerDes channels and supported voltage standards.
- Clocking resources for deterministic capture and alignment with PTP / TSN domains.
- Safety-oriented options such as triple voting, CRC blocks or lockstep slices.
- Industrial temperature grades and package options compatible with your PCB stack-up.
Timing & Clocking Checklist
Timing and clock devices must support the required PTP / TSN profile and jitter budget:
- PTP-capable PHY or switch ports with hardware time stamping.
- Clock generators and PLLs with jitter compatible with your servo loop and link speeds.
- Holdover or redundancy strategy if the TSN grandmaster is temporarily unavailable.
Power and PMIC Checklist
For the power tree and PMICs, the focus is on rail count, sequencing and observability:
- Number of rails for SoC core, DDR, FPGA, PHY and I/O domains.
- Sequencing capability and power-good outputs for SoC and DDR bring-up.
- Telemetry, fault flags and temperature monitoring for cabinet-level diagnostics.
Memory Combination Checklist
Memory planning should confirm the NOR + eMMC + DDR combination and interface headroom:
- Firmware NOR size for bootloader, application and at least one backup image.
- eMMC or SD size for logs, event trace and configuration snapshots.
- DDR bandwidth and density based on axis count, buffer depth and vision or AI use.
FAQs – Robot Controller / Motion Card Planning & Selection
These twelve questions are how I sanity-check a robot controller or motion card before I lock the architecture or send RFQs. Each answer compresses one decision area into a short note I can reuse in design reviews, supplier discussions and structured FAQ data without rewriting everything from scratch.
When is a simple MCU plus small FPGA enough, and when do I need a SoC plus large FPGA?
I stay with a simple MCU and small FPGA when I have up to six or eight axes, moderate loop rates and only a few standardized encoder or network links. Once I add high axis counts, sub-millisecond loops, custom high-speed links or on-board vision and safety logic, I move to a SoC plus larger FPGA.
How do control period and bandwidth planning change for 6-axis, 12-axis and 20+ axis robots?
For a 6-axis arm I can often run a 1 kHz loop with modest frame sizes and still have margin. At 12 axes I start budgeting every bit of EtherCAT or TSN bandwidth and CPU time. Beyond 20 axes, I treat network shaping, frame packing and FPGA offload as mandatory, otherwise the loop timing collapses under load.
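The scaling in this answer is easy to see in a rough cyclic-payload calculation. The 32 bytes per axis and 4 kHz rate are assumptions, and the figure ignores frame overhead and acyclic traffic, so real utilization is higher:

```python
# Rough cyclic-payload utilization on a 100 Mbit/s link.
# Bytes per axis and loop rate are assumptions; frame overhead is ignored.
def link_utilization_pct(axes, bytes_per_axis, loop_hz, link_mbps=100):
    bits_per_s = axes * bytes_per_axis * 8 * loop_hz
    return 100.0 * bits_per_s / (link_mbps * 1e6)

for axes in (6, 12, 20):
    print(axes, "axes:", round(link_utilization_pct(axes, 32, 4000), 1), "%")
```

At 6 axes the payload is a few percent of the link; by 20+ axes, once protocol overhead and diagnostics are added, shaping and frame packing stop being optional.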
What PTP or TSN sync accuracy do I need so that time error does not limit trajectory precision?
I want PTP or TSN sync error comfortably below my allowed trajectory jitter. For tightly coordinated motion between robots I aim for sub-microsecond error so time skew never dominates interpolation error. For single, non-synchronized cells I accept looser timing, but I still ensure the controller is not the worst element in the timing chain.
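The “time skew must not dominate interpolation error” rule reduces to a one-line estimate: position error contributed by skew is roughly tool speed times time error. The speeds below are assumed examples:

```python
# Position error contributed by time skew: err ≈ tool speed × sync error.
# Conveniently, m/s × µs gives the result directly in µm.
def skew_position_error_um(tool_speed_m_s: float, sync_error_us: float) -> float:
    return tool_speed_m_s * sync_error_us

# At 2 m/s tool speed, 1 µs of skew adds about 2 µm of path error.
print(skew_position_error_um(2.0, 1.0))
```

Comparing this number against the cell's accuracy budget tells you immediately whether sub-microsecond sync is a hard requirement or a nice-to-have.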
How do I avoid scheduling conflicts when several controller cards share the same TSN network?
I start by describing each controller card as a traffic source with a clear loop rate, frame size and priority class. Real-time motion traffic gets its own queues, while HMI and diagnostics are rate-limited or placed in lower classes. With that definition, my TSN switch configuration can allocate time slots instead of letting every card compete blindly for bandwidth.
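Describing each card as a traffic source can be as literal as a small data structure the TSN configuration is derived from. The class names, rates and priority values here are my own illustrative convention:

```python
# Sketch of modeling controller cards as TSN traffic sources.
# Names, rates and priority codes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrafficSource:
    name: str
    loop_hz: int
    frame_bytes: int
    priority: int        # higher value = more time-critical queue

    def bits_per_second(self) -> int:
        return self.loop_hz * self.frame_bytes * 8

sources = [
    TrafficSource("motion_axis_group", loop_hz=4000, frame_bytes=256, priority=7),
    TrafficSource("hmi_diagnostics",   loop_hz=10,   frame_bytes=1500, priority=2),
]

# Allocate gate time to high-priority sources first; rate-limit the rest.
for s in sorted(sources, key=lambda s: -s.priority):
    print(s.name, s.bits_per_second(), "bit/s")
```

Once every card on the network is described this way, the switch's time-slot allocation is a deterministic computation instead of a tuning exercise.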
When should I place a safety MCU on the controller card versus relying on an external safety PLC?
I place a safety MCU on the controller when the card must supervise internal loops, cross-check trajectories or host safety-critical voting that a remote PLC cannot see in time. If the project already has a strong safety PLC and the controller only needs basic monitoring and diagnostics, I keep the card simpler and delegate most safety logic to that external PLC layer.
How do I layer watchdogs, brown-out and clock monitors without over-designing the safety concept?
I usually keep one internal watchdog and clock monitor in the MCU and add one external supervisor that can cut power or force a safe reset. If a monitoring block does not cover a new failure mode or cannot be tested, I resist stacking it. Instead, I focus on a few well understood layers that have clear self-test paths and diagnostic reporting.
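The internal layer of that scheme amounts to a deadline watchdog the control loop must kick every iteration. This sketch shows the pattern only; timeouts are illustrative, and a real design backs it with an external supervisor that can cut power:

```python
# Deadline-watchdog sketch: expires if the loop stops kicking it.
# Timeout is illustrative; a real safety concept pairs this with an
# external supervisor that forces a safe reset.
import time

class DeadlineWatchdog:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        """Called once per healthy control-loop iteration."""
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        return time.monotonic() - self.last_kick > self.timeout_s

wd = DeadlineWatchdog(timeout_s=0.02)
wd.kick()
print(wd.expired())   # healthy loop: not expired right after a kick
```

The self-test path the answer calls for is exactly this `expired()` predicate: stop kicking in a test build and verify the supervisor actually reacts.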
How much DDR and external storage should I reserve for complex trajectories and online path planning?
I size memory from the workload, not from a random gigabyte number. I estimate how many seconds of trajectory buffer I need per axis, add space for online planning scratch data and allocate room for diagnostics and event logs. If the controller also hosts vision or AI, I increase DDR headroom so worst-case scenes do not stall the motion loop.
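Sizing from the workload looks like this in practice. All sample rates, record sizes and headroom figures below are assumed examples:

```python
# DDR sizing from workload, not a round gigabyte. All figures are
# assumed examples: seconds of buffer, sample rate, bytes per record.
def ddr_mb(axes, buffer_s, sample_hz, bytes_per_sample,
           planner_mb, logs_mb, vision_mb=0):
    traj_mb = axes * buffer_s * sample_hz * bytes_per_sample / 1e6
    return traj_mb + planner_mb + logs_mb + vision_mb

# 12 axes, 2 s buffer at 4 kHz, 64 B/sample ≈ 6.1 MB of trajectory data;
# planner scratch, logs and vision headroom dominate the total.
print(round(ddr_mb(12, 2, 4000, 64, planner_mb=256, logs_mb=64,
                   vision_mb=512), 1))
```

The striking result is usually that raw trajectory data is small; it is the vision and planning headroom that pushes the DDR choice up a density class.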
When I integrate machine vision or AI inference, how much compute stays on the controller card?
I keep time-critical inference that directly gates motion, such as simple part detection or safety checks, on the controller card. Heavy models, training and fleet analytics move to gateways or servers. That way, the card only hosts the minimum accelerator or SoC performance needed to keep the real-time loop deterministic while larger workloads run upstream.
How do I plan interface counts and types for servo drives, feedback AFEs and the HMI or teach pendant?
I start from how many drives, encoder chains, force sensors and cameras must be online at the same time. That drives the number of industrial Ethernet ports, encoder masters, SPI or LVDS links and HMI connections I reserve. Only after I have a complete interface map do I shortlist MCUs, SoCs, FPGAs and PHYs with enough high-speed and real-time capable ports.
For remote maintenance and diagnostics, which debug, trace and secure channels must the controller card expose?
I always reserve at least one low-level debug path, such as JTAG or SWD, plus a serial or network log channel that can survive partial failures. For remote sites, I add a secure IP channel over TLS or a gateway protocol, so I can pull traces, update firmware and inspect states without visiting the cabinet or bypassing safety boundaries.
When migrating multiple legacy fieldbuses to a unified TSN network, what bridge role should the controller card play?
In brownfield projects, I first decide whether the controller card should terminate legacy buses directly or rely on dedicated gateways. If the card acts as a bridge, I budget CPU, memory and ports for buffering and translating CANopen or DeviceNet traffic into time-aware TSN streams. If not, I treat the gateway as another upstream device and keep the controller focused on motion tasks.
How do I turn all these decisions into BOM fields that help me talk to suppliers and compare RFQs?
I convert my decisions into concrete BOM fields: core type, clock rate and port count for the MCU or SoC; logic and I/O resources for the FPGA; memory sizes and interfaces; rail count and sequencing for the PMIC; and timing capabilities for PHYs and oscillators. With that list, my RFQs describe measurable needs instead of vague marketing phrases.
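A concrete way to hold those fields is a small structured record that flattens into one measurable line per device class for the RFQ. The field names and values here are my own convention, not an industry standard:

```python
# Sketch of RFQ-ready BOM fields. Field names and example values are my
# own convention for illustration, not a standard schema.
bom_fields = {
    "mcu_soc": {"cores": 2, "clock_mhz": 800, "ethercat_ports": 2, "lockstep": True},
    "fpga":    {"logic_cells_k": 85, "bram_mb": 4, "lvds_pairs": 24},
    "memory":  {"nor_mb": 64, "emmc_gb": 8, "ddr_type": "DDR4", "ddr_gb": 2},
    "pmic":    {"rails": 6, "sequencing": True, "power_good": True},
    "timing":  {"ptp_hw_timestamp": True, "holdover": "TCXO"},
}

def rfq_lines(fields: dict) -> list:
    """Flatten structured fields into one measurable line per device class."""
    return [dev + ": " + ", ".join(f"{k}={v}" for k, v in spec.items())
            for dev, spec in fields.items()]

for line in rfq_lines(bom_fields):
    print(line)
```

Keeping the fields in one structure also means the same data can feed design reviews, supplier comparisons and the structured FAQ notes mentioned above without retyping.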