Multi-Axis Sync & Timing for Motion Systems
This page explains how to build and verify a timing model for multi-axis motion systems: choosing between local clock trees and PTP or TSN synchronisation, placing the grandmaster, budgeting jitter and distributing sync pulses. It links these decisions to concrete IC roles for clocks, PHYs, TSN switches, sync I/O and safety monitors, so that axis-to-axis timing targets and component choices stay consistent.
What this page solves
This page focuses on multi-axis motion systems where several drives must move in a coordinated way: servo-based gantries, multi-station indexers, printing and packaging lines, and rotary tables. The goal is to make sure every axis shares a consistent timebase so start and stop events, control loops and sampled feedback stay aligned.
In these machines, small timing errors show up as visible defects and mechanical stress. Axes may start a move a few hundred microseconds apart, current and encoder samples may be taken at slightly different instants, and dual-drive gantries can twist because each side follows the trajectory on a different timeline.
The content here is structured to guide the timing design from the top down: first establishing a unified timebase, then choosing a PTP/TSN or fieldbus timing architecture, then selecting PLL and clock tree devices, and finally placing hardware timestamping and sync pulse distribution into the system. The result is a clear picture of where time is generated, how it propagates through motion controllers, switches and drives, and which IC building blocks are required to keep multi-axis systems synchronised.
- Clarify timing goals for multi-axis servo, gantry and indexing machines.
- Map out where the PTP/TSN or fieldbus timebase should live in the system.
- Identify PLL, clock tree and timestamping functions needed in motion controllers and drives.
Typical multi-axis sync problems & timing goals
Multi-axis timing issues usually fall into three categories. The first is start and stop alignment, where axes should begin and end moves together. The second is sampling alignment, where current and position measurements must be taken at consistent points in the control cycle. The third is trajectory alignment, where gantries and multi-axis paths must follow the same time scale so that spatial errors do not accumulate along the machine.
Start and stop problems appear when one drive sees the start command slightly earlier or later than the others. In simple conveyors this may only look like a small delay, but in high-speed packaging or printing machines it translates into misaligned cuts, labels or print marks. In dual-drive gantries and precision indexers, different stop times can generate extra mechanical stress and visible defects on the workpiece.
Sampling problems appear when current or encoder samples are not aligned to the same instant on each axis or within the PWM cycle. Even if the FOC and position loops are well tuned, asynchronous sampling creates phase errors that show up as torque ripple, acoustic noise and uneven motion quality. When several axes are meant to work together, sample-time skew makes one drive react to older information than the others.
Trajectory-level problems appear in systems such as dual-drive gantries, multi-station rotary tables and robots that coordinate several joints. The path planner may compute a single trajectory, but each drive executes its piece on a slightly different timebase. Over distance and speed this turns time skew into position error, visible as skewed edges, ripples or misaligned features on the product.
Different applications tolerate very different levels of skew and jitter. A simple conveyor may work with millisecond-level axis skew as long as motion is consistent. High-speed packaging and robots typically need axis-to-axis start skew in the tens to hundreds of microseconds range. Precision gantries and printing or semiconductor tools often push the requirement down into the single-digit microsecond range for critical axes and sampling clocks.
| Application | Typical axis start skew | Sampling alignment target |
|---|---|---|
| Simple conveyor or feeder | Milliseconds down to hundreds of microseconds | Coarse alignment, focus on repeatability |
| Packaging line or pick-and-place | Tens to hundreds of microseconds | Sampling within a few microseconds between axes |
| 6-axis robot or coordinated servo group | Below ~100 µs for coordinated moves | Sub-microsecond to a few microseconds, tied to control bandwidth |
| Dual-drive gantry, printing or semiconductor tool | Single-digit microseconds for critical axes | Sampling clocks aligned to better than a microsecond on key drives |
The rest of the page uses these timing targets as a reference when selecting PTP/TSN topologies, clocking devices and timestamping options. The objective is to match the complexity of the timing chain to the actual skew and jitter requirements of the machine instead of over- or under-designing the synchronisation scheme.
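As a rough rule of thumb, the skew ranges in the table above can be collapsed into a decision helper. The sketch below is illustrative only: the thresholds are assumptions drawn from those ranges, not hard specifications, and any real machine should be budgeted explicitly.

```python
# Illustrative mapping from a required axis-to-axis skew target to the
# kind of synchronisation infrastructure discussed later on this page.
# Thresholds are assumptions derived from the table above, not hard rules.

def suggest_sync_approach(skew_target_us: float) -> str:
    """Return a rough synchronisation approach for a skew target in microseconds."""
    if skew_target_us >= 500.0:
        return "shared controller clock and coarse fieldbus timing"
    if skew_target_us >= 20.0:
        return "fieldbus distributed clocks (e.g. EtherCAT DC) or MAC-level PTP"
    if skew_target_us >= 1.0:
        return "PTP/TSN with hardware timestamping in MAC or PHY"
    return "PHY-level timestamping plus dedicated sync/latch lines"

print(suggest_sync_approach(200.0))  # packaging line or pick-and-place
print(suggest_sync_approach(2.0))    # dual-drive gantry, critical axes
```

The helper only picks a starting point; the jitter budget in the next sections decides whether that starting point actually meets the target.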
System architectures for multi-axis timing
Multi-axis timing starts with a clear choice of system architecture. In some machines all axes live on one controller and share a local clock tree and sync lines. In others, an industrial PC or PLC connects to several distributed servo drives over an EtherCAT, PROFINET or TSN network, with PTP or distributed clock support. Many brownfield systems sit in between, mixing a fieldbus time reference with separate hardwired sync or latch lines.
A local multi-axis controller works best when all drives sit in the same cabinet or backplane and axis counts and distances are modest. Start, stop and sampling events are aligned by a common oscillator, simple PLLs and one or more sync pulses. Network-based architectures become attractive when drives are distributed across a machine or line and a unified timebase must cross cabinets and wiring harnesses via Ethernet or fieldbus. Hybrid schemes combine both: the network carries control data and a coarse time reference, while critical events are still aligned by one or more dedicated sync lines.
The key design questions are where the master timebase resides and which devices must understand time. A grandmaster clock may run inside the PLC, inside a TSN switch, or in a dedicated timing module. Some nodes must behave as boundary or transparent clocks to contain and correct network delay. At the edge, each drive needs a way to map the global timebase into local PWM, ADC and encoder timing so the axis follows the same timeline as the rest of the system.
PTP/TSN timing chain & jitter budget
Once a network-based architecture is selected, the next step is to understand the timing chain. A grandmaster clock establishes reference time, boundary or transparent clocks in switches and couplers correct for network delays, and each drive recovers the timebase and maps it onto local PWM, ADC and encoder timing. Every segment adds some jitter and uncertainty, which accumulate into the final axis-to-axis skew and sampling alignment.
From a hardware perspective, the timing chain consists of a PTP or TSN-aware grandmaster, one or more switches or routers that implement boundary or transparent clock functions, and time-aware MAC or PHY devices in each endpoint. The quality of oscillators and PLLs used to discipline these nodes, and the placement of hardware timestamp units, strongly influence the achievable synchronisation accuracy for motion control applications.
Hardware timestamping can occur at different levels. PHY-level timestamping observes frames close to the wire and avoids most buffering effects, providing the lowest uncertainty. MAC-level timestamping works further up the stack and is acceptable for many motion networks that only require tens of microseconds of skew. Pure software timestamping depends on CPU scheduling and protocol stack latency and is usually not sufficient for tight multi-axis synchronisation on its own.
A simple jitter budget helps decide which devices must be PTP-capable and how clean the clock tree must be. The goal is to keep the combined jitter of the grandmaster, network hops, endpoint recovery and local PLL distribution within the axis-to-axis skew target defined earlier. This often leads to shortlisting PTP-capable PHY or MAC devices, TSN switches with boundary or transparent clock support and low-jitter PLLs that can discipline oscillators and feed motion-control timers and converters.
| Timing segment | Typical contribution | Design notes |
|---|---|---|
| GM oscillator and PLL | Tens to hundreds of picoseconds rms | Depends on TCXO/OCXO quality and PLL phase noise |
| One TSN or fieldbus switch hop | Hundreds of nanoseconds to a microsecond | Boundary/transparent clock implementation and load dependent |
| Drive-side PTP/DC recovery | Sub-microsecond to a few microseconds | Influenced by PHY/MAC timestamping and servo loop update rate |
| Local PLL to PWM/ADC clocks | Tens of picoseconds to a few nanoseconds | Depends on clock generator jitter and fan-out topology |
| Combined axis-to-axis skew | Single-digit microseconds (representative target) | Must be matched to the machine’s timing goals from earlier sections |
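The budget in the table above can be checked with simple arithmetic. The sketch below combines placeholder contributions root-sum-square, which assumes the errors are independent and roughly random; none of the numbers are vendor data, and systematic asymmetries (e.g. path delay asymmetry) would have to be added linearly on top.

```python
import math

# Illustrative jitter/skew budget for the timing chain described above.
# All contributions are placeholder values in seconds, not device data.
# Random contributions are combined root-sum-square (RSS), which assumes
# they are independent; systematic offsets would add linearly instead.
contributions_s = {
    "grandmaster oscillator + PLL": 200e-12,
    "TSN switch hop 1 (boundary clock)": 500e-9,
    "TSN switch hop 2 (boundary clock)": 500e-9,
    "drive-side PTP/DC recovery": 1.5e-6,
    "local PLL to PWM/ADC clocks": 2e-9,
}

axis_skew_target_s = 5e-6  # single-digit-microsecond target from the table

total_rss_s = math.sqrt(sum(c ** 2 for c in contributions_s.values()))
print(f"combined skew estimate: {total_rss_s * 1e6:.2f} us "
      f"(target {axis_skew_target_s * 1e6:.1f} us)")
assert total_rss_s < axis_skew_target_s, "budget exceeded: revisit timestamping level"
```

Running this kind of check early makes it obvious which segment dominates; here the drive-side recovery term dwarfs everything else, which is typically the argument for better endpoint timestamping rather than a more expensive oscillator.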
PLL & clock trees for motion systems
A motion control node usually carries several distinct clock domains. Typical examples include a high-frequency core clock for the FOC MCU or DSC, timer and PWM clocks that define switching frequency and dead-time resolution, sampling clocks for ADCs that observe phase currents, Ethernet or TSN reference clocks for MAC and PHY devices, and high-speed clocks for FPGA or encoder interfaces that perform position interpolation. These domains rarely share the same frequency, but the relative phase and jitter between them directly influence current measurement quality and position accuracy.
Clock tree design therefore starts with a clear map of which blocks must share a common timebase and which blocks only need local stability. Low jitter sources and PLLs are typically reserved for PWM, ADC and encoder-related clocks, where sampling instant and carrier phase have a visible impact on torque ripple and noise. Fan-out topology determines how a reference propagates across a multi-axis drive: differential signalling is preferred on long or noisy routes, while single-ended LVCMOS fan-out can be acceptable for short, well controlled traces. Skew between branches must remain within the timing goals defined earlier for axis-to-axis alignment.
Many drive and motion controllers use one stable oscillator feeding a low-jitter clock generator or PLL that produces several integer or fractional output frequencies. These outputs feed MCU cores, PWM and ADC timers, industrial Ethernet PHYs and encoder or FPGA logic through buffer or fan-out devices. Multi-axis systems often add clock distribution on daughtercards so that each group of axes receives a locally buffered and well matched copy of the reference. Where availability requirements justify the cost, redundant clock inputs are combined through glitchless multiplexers so the system can switch to a backup source without large phase steps.
From an IC selection point of view the key building blocks are low-jitter PLL and clock generator devices, clock buffers and fan-out ICs, and suitable oscillators such as XO or TCXO parts. These parts must be matched to the jitter needs of PWM and interpolation circuits rather than to RF or multi-gigabit SerDes requirements, which belong to other design domains. Focusing on the clocks that feed drive and motion functions keeps the tree compact, predictable and easier to verify against current sensing and multi-axis timing targets.
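A first sanity check on such a single-oscillator clock tree is whether every required output can be derived from one VCO frequency by an integer divider, or whether fractional synthesis (with its extra spurs and jitter) is needed. The frequencies below are illustrative assumptions, not tied to any specific device family.

```python
# Frequency-plan sketch: check which outputs derive from a hypothetical
# VCO by integer division. All frequencies are illustrative assumptions.

VCO_HZ = 2_400_000_000  # hypothetical 2.4 GHz VCO

required_outputs_hz = {
    "MCU core": 200_000_000,
    "PWM timer": 150_000_000,
    "ADC sampling clock": 80_000_000,
    "Ethernet PHY reference": 25_000_000,
    "RGMII clock": 125_000_000,
    "encoder interface": 100_000_000,
}

for name, f_out in required_outputs_hz.items():
    div, rem = divmod(VCO_HZ, f_out)
    kind = f"integer /{div}" if rem == 0 else "needs fractional divider"
    print(f"{name:24s} {f_out / 1e6:7.1f} MHz -> {kind}")
```

In this hypothetical plan everything divides cleanly except the 125 MHz RGMII clock (2400 / 125 = 19.2), which would argue for either a different VCO frequency or a separate output with fractional synthesis kept away from the PWM and ADC domains.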
Hardware timestamping & sync pulse distribution
Accurate multi-axis motion relies not only on a shared timebase but also on the ability to mark events against that timebase. Hardware timestamping captures when signals such as encoder index pulses, limit switches or protection comparators toggle, using counters that are aligned to one common reference. Typical implementations combine PTP or TSN-aware MAC and PHY devices with timer capture units inside the MCU or DSC, or with FPGA-based time-to-digital converters in higher performance drives.
In a networked drive the time reference often originates in a PTP or TSN clock module and is exposed as a free-running counter. Local timers in the drive are periodically corrected against this counter so that captured edge timestamps and network message timestamps sit on the same timeline. When encoder or sensor edges arrive, dedicated capture inputs latch the current timer value with sub-microsecond precision, forming the basis for later reconstruction of motion profiles and fault sequences across multiple axes.
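The periodic correction described above can be modelled as a simple servo. The toy model below applies a proportional correction at each sync interval; real implementations typically use a PI servo acting on clock rate rather than phase steps, so this is a sketch of the behaviour, not a reference design, and all numbers are assumptions.

```python
# Toy model of drive-side time recovery: a local counter with a fixed
# frequency error drifts between sync events and is pulled back toward
# the reference (PTP/DC) counter by a proportional correction.
# Gains, rates and the oscillator error are illustrative assumptions.

def discipline(freq_error_ppm: float, correction_interval_s: float,
               gain: float, steps: int) -> float:
    """Return the remaining time offset (s) after `steps` correction cycles."""
    offset_s = 0.0
    drift_per_interval_s = freq_error_ppm * 1e-6 * correction_interval_s
    for _ in range(steps):
        offset_s += drift_per_interval_s  # local clock drifts between syncs
        offset_s -= gain * offset_s       # proportional correction at each sync
    return offset_s

# 10 ppm oscillator error, corrected every 1 ms with gain 0.5:
residual = discipline(freq_error_ppm=10.0, correction_interval_s=1e-3,
                      gain=0.5, steps=1000)
print(f"steady-state offset ~ {residual * 1e9:.1f} ns")
```

The model converges to a steady-state offset of drift-per-interval × (1 − gain) / gain, here 10 ns, which illustrates why both the sync message rate and the servo gain appear in the jitter budget: halving the correction interval halves the residual offset for the same gain.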
Sync, latch and trigger signals complement this timestamping. A periodic sync pulse can align PWM carrier phases and ADC sampling instants across several drives, while shared latch signals allow all axes to sample encoder positions at exactly the same instant. These lines are usually distributed via isolated digital I/O, differential RS-422 or LVDS drivers so that edges remain clean in the presence of switching noise from power stages. Path lengths and device delays are kept as similar as practical so that skew between axes stays within the timing targets defined earlier.
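"As similar as practical" can be quantified up front. The sketch below assumes roughly 5 ns/m of cable propagation delay (a typical figure for twisted pair) plus placeholder part-to-part delay spreads for line drivers and isolators; all values are assumptions for illustration.

```python
# Back-of-envelope skew estimate for a hard-wired sync line: cable length
# mismatch plus part-to-part delay spread of drivers and isolators.
# The 5 ns/m figure and the spread values are illustrative assumptions.

PROP_DELAY_NS_PER_M = 5.0

def sync_skew_ns(cable_lengths_m, driver_delay_spread_ns=4.0,
                 isolator_delay_spread_ns=6.0):
    """Worst-case sync edge skew between the nearest and farthest axis, in ns."""
    cable_skew_ns = (max(cable_lengths_m) - min(cable_lengths_m)) * PROP_DELAY_NS_PER_M
    return cable_skew_ns + driver_delay_spread_ns + isolator_delay_spread_ns

# Four drives at different distances from the sync source:
print(f"worst-case sync skew ~ {sync_skew_ns([2.0, 5.0, 9.0, 12.0]):.1f} ns")
```

Even with a 10 m length mismatch the skew stays in the tens of nanoseconds, which is why hard-wired sync lines comfortably beat network-recovered time for sampling alignment; the budget only becomes interesting when slow optocouplers or long daisy chains enter the path.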
Together, hardware timestamping and sync pulse distribution create a closed loop between the abstract timebase and physical motion. Network and PTP time define the reference, capture units bind local events to that reference, and sync lines provide deterministic edges that align sampling and actuation across drives. Limit and home switch inputs that require precise time correlation can use the same capture and sync infrastructure and are covered in more detail in the dedicated limit and home switch section.
Robustness, diagnostics & failover
Time synchronisation in a motion system must remain trustworthy over configuration changes, network disturbances and partial hardware faults. Monitoring starts at the protocol level, where PTP or distributed clock instances provide offset, path delay, grandmaster identity and clock quality information. These values are watched against engineering limits so that loss of synchronisation or excessive jitter is detected before it degrades current control, position accuracy or safety-related functions that depend on time correlation.
Clock signal integrity is monitored through loss-of-signal and PLL lock indicators in clock generators, buffers and PHY devices. A loss of reference or loss of lock condition can cause PWM, ADC and encoder clocks to drift or jump, so these status pins and registers are routed into a monitoring function that can trigger alarms and initiate controlled reactions. Multi-output clock chips with per-channel status help distinguish between global timing problems and single-axis issues in a multi-drive system.
Failover strategies combine grandmaster switchover, holdover behaviour and safety integration. When a grandmaster changes, the new source is validated and the transition is smoothed to avoid sudden phase steps where possible. If network synchronisation is lost, local oscillators enter holdover and maintain operation for a defined window while time quality is marked as degraded. Exceeding holdover limits typically leads to derated operation or controlled stopping. Clock and synchronisation faults are also forwarded to a safety monitor so that time-source failures can be treated as a dedicated fault class and linked to safe torque off or safe stop functions in the protection group.
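The holdover window described above follows directly from oscillator stability. The sketch below uses generic stability classes; the ppm figures are assumptions for their oscillator class, and temperature and ageing effects are ignored, so real windows should come from the chosen part's datasheet.

```python
# Holdover sizing sketch: how long can a free-running oscillator keep
# accumulated time error below a budget after synchronisation is lost?
# Stability figures are generic class assumptions, not device data;
# temperature drift and ageing are ignored for simplicity.

def holdover_window_s(time_error_budget_s: float, stability_ppm: float) -> float:
    """Seconds until accumulated time error exceeds the budget."""
    return time_error_budget_s / (stability_ppm * 1e-6)

budget_s = 10e-6  # allow 10 us of accumulated error in degraded mode
for name, ppm in [("crystal oscillator (XO)", 50.0),
                  ("TCXO", 2.0),
                  ("OCXO", 0.05)]:
    print(f"{name:24s} -> holdover ~ {holdover_window_s(budget_s, ppm):8.1f} s")
```

The spread is large: under these assumptions a plain XO exhausts a 10 µs budget in a fraction of a second, a TCXO buys a few seconds, and an OCXO buys minutes, which is usually the deciding factor between "ride through a switch reboot" and "controlled stop".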
Relevant ICs include low-jitter clock generators and cleaners with integrated status flags, clock buffers and fan-out parts with loss-of-signal detection, small monitoring MCUs that supervise grandmaster and switch status, and safety monitors that expose time-source fault inputs. Focusing diagnostics on synchronisation offset, delay, clock quality and PLL health provides clear signals for system-level fault handling, while detailed safety reactions such as STO and safe stop remain in the protection and braking section.
Example architectures for multi-axis timing
Multi-axis timing can be realised with different combinations of fieldbus, TSN networks and local clock trees. The following examples illustrate how EtherCAT distributed clocks, TSN boundary clocks and purely local synchronisation are applied in typical motion control systems. Each case highlights the position of the time reference, the synchronisation mechanism and the role of local PLL and timestamp hardware so that new designs can be mapped to a closely related pattern.
Case A: EtherCAT-based 6-axis servo system
A central motion controller with an EtherCAT master and distributed clock function acts as the grandmaster for six servo drives. EtherCAT DC propagates a time reference along the bus, and each drive contains a slave controller that corrects its local timebase to the master. Local PLL and clock distribution feed PWM, ADC and encoder logic inside each drive so that phases and sampling instants remain aligned. This pattern targets applications such as electronic gearing, flying shear and high-speed pick-and-place where skew between axes must be held within a few microseconds or less.
- Time source: EtherCAT DC grandmaster in motion controller
- Sync mechanism: EtherCAT distributed clock, local offset correction
- Local timing ICs: PLL, clock fan-out, encoder and ADC timing
Case B: TSN-based gantry and conveyor line
In a TSN motion network a central controller and one or more TSN switches provide the time reference using IEEE 802.1AS. Switches often act as boundary clocks and correct timing at each hop so that gantry drives, conveyor drives and IO blocks all share a common timebase. Each endpoint combines a time-aware MAC or PHY with local timers and capture logic to time-stamp encoder, sensor and trigger signals. This structure suits longer production lines where multiple motion and IO nodes must remain phase aligned over tens of metres of cabling and several switch stages.
- Time source: TSN grandmaster with boundary clock switches
- Sync mechanism: IEEE 802.1AS time-aware bridges and endpoints
- Local timing ICs: TSN switch silicon, time-aware PHY and MAC
Case C: Single controller with four internal axes
A compact controller with four integrated axes often omits real-time industrial Ethernet and relies on a shared clock tree and sync wiring on a single board. A crystal or TCXO and a low-jitter PLL feed the MCU core, PWM timers, ADC and encoder front-ends, and a common sync signal aligns sampling instants across all axes. This arrangement is suitable for smaller machines, gantries or indexers where all axes reside on one PCB and skew is dominated by on-board routing and clock distribution rather than by network effects.
- Time source: Local XO or TCXO with PLL
- Sync mechanism: Shared clock tree and SYNC line per axis
- Local timing ICs: PLL, clock fan-out, MCU timers and capture
Design checklist & IC mapping
This checklist condenses the multi-axis timing topic into concrete items for design reviews and sourcing. Each group focuses on one aspect of the timing chain and highlights where specific IC types typically sit in the system, so that motion performance targets and component choices stay aligned.
System timing model
- Has a system timing diagram identified the grandmaster, boundary clocks and all time-aware endpoints? IC hooks: TSN switches with boundary clock, EtherCAT master cards, dedicated time cards or timing SoCs.
- Is the time reference location fixed and documented for the motion project? IC hooks: controller CPUs or timing modules that host the grandmaster and its oscillator.
- Are non-time-aware nodes labelled so they are not assumed to follow motion skew budgets? IC hooks: standard Ethernet PHYs, fieldbus transceivers that do not implement PTP or distributed clocks.
- Are timing requirements aligned with the mechanical layout of axes, gantries and conveyors? IC hooks: selection of where to deploy TSN or EtherCAT DC versus simpler fieldbus interfaces.
PTP / TSN capability
- Which nodes must support hardware timestamping in MAC or PHY to meet axis-to-axis skew targets? IC hooks: PTP-capable PHYs, TSN endpoints, industrial SoCs with integrated 1588 / TSN MACs.
- Which devices can remain as standard Ethernet or fieldbus nodes without time awareness? IC hooks: non-PTP Ethernet PHYs, RS-485 and IO-Link transceivers for non-critical links.
- Are PTP profiles and domain settings defined consistently for all time-aware nodes? IC hooks: switches and controllers that explicitly support the required PTP or IEEE 802.1AS profiles.
- Is the synchronisation message rate sized for jitter budgets and overall network load? IC hooks: TSN switches and EtherCAT controllers with sufficient processing headroom and buffer depth.
Clock tree and jitter budget
- Is a single clean reference oscillator defined for PWM, ADC and encoder-related domains? IC hooks: XO and TCXO devices, low-jitter clock generators and jitter cleaners.
- Has a jitter budget been allocated for PWM, ADC and encoder clocks based on torque and position noise targets? IC hooks: clock generators and PLLs with jitter performance matched to drive and interpolation IC needs.
- Is clock fan-out sufficient and routed with controlled skew across all axes and boards? IC hooks: LVDS or LVCMOS fan-out buffers, timing distribution and crosspoint switches.
- Are differential signalling and PCB constraints defined for long or noisy clock routes? IC hooks: clock generators and buffers with differential outputs, LVDS or LVPECL drivers and receivers.
- Is clock redundancy or holdover required, and is a glitchless multiplexer in place where needed? IC hooks: clock mux and switchover ICs, timing devices with holdover and hitless switching support.
Sync I/O, latch and trigger lines
- Are sync, latch and trigger signals defined separately from general-purpose I/O? IC hooks: MCU and DSC timer capture inputs, FPGA trigger and capture resources.
- Is fan-out for sync pulses to all motion-critical axes clearly documented? IC hooks: digital isolators, RS-422 or LVDS line drivers and receivers for sync distribution.
- Have isolation ratings and creepage / clearance for sync and latch lines been checked against system voltages? IC hooks: safety-rated digital isolators and optocouplers in the chosen voltage class.
- Is expected skew of sync lines across axes within the chosen timing budget? IC hooks: low-skew isolators and line drivers with specified propagation delay and matching.
- Are encoder index and limit switch signals connected to capture-capable inputs where time correlation is required? IC hooks: encoder interface ICs, MCU capture channels and input AFEs with proper filtering and protection.
Diagnostics and safety hooks
- Are PTP or distributed clock offset, delay and grandmaster identity monitored against defined limits? IC hooks: TSN and EtherCAT controllers exposing timing status to a monitoring MCU.
- Are clock loss-of-signal and PLL lock status flags wired into a monitoring function rather than left unused? IC hooks: clock generators and buffers with LOS / lock outputs, GPIO inputs on control or safety MCUs.
- Is there a holdover and degradation strategy defined for loss of synchronisation? IC hooks: timing ICs with holdover support and controller firmware that implements derated modes.
- Are timing-related faults reported to safety logic as a dedicated fault class? IC hooks: safety monitors and safety MCUs with inputs for time-source faults and safe-stop requests.
- Are logging and diagnostic channels sized to capture timestamped events during timing faults? IC hooks: edge MCUs with non-volatile memory, dedicated logging controllers and PdM storage devices.
FAQs on multi-axis sync & timing decisions
These twelve questions capture the decisions that repeatedly appear when multi-axis synchronisation moves from diagrams to real hardware. Each answer reflects a motion designer's perspective but is phrased neutrally, and lines up with the earlier sections on architectures, PTP or TSN timing chains, clock trees, sync I/O and diagnostics.
When do I really need PTP or TSN based time synchronisation instead of just sharing a local clock and a SYNC line?
For compact machines with only a few axes on one controller and short local wiring, a shared clock tree and SYNC line is usually sufficient. Once axes are split across separate drives, IO blocks or cabinets and the skew target enters the low microsecond range, a PTP or TSN based time reference becomes the more robust and scalable option.
How tight does axis-to-axis skew need to be for a gantry compared with a simple conveyor or indexing system?
Simple conveyors and indexers typically tolerate axis-to-axis skew in the tens of microseconds as long as product spacing stays within tolerance. A mechanically coupled gantry usually needs skew in the low microsecond range or better, because even small timing errors can translate into visible position mismatch, racking forces or increased bearing wear.
Where should I place the PTP grandmaster clock in a multi-axis motion system so that maintenance and failover stay manageable?
In many systems the grandmaster clock is placed in the main motion controller, because this node already concentrates configuration and maintenance. When the network core is built around TSN switches, a dedicated time card or master-capable switch can simplify failover, diagnostics and firmware updates, provided ownership and access are clearly defined.
Do the drives themselves need to be time-aware PTP or TSN endpoints, or can the controller carry all of the precise timing?
Drives can remain non-time-aware when the controller carries the critical timing and drives simply follow setpoints. When each drive closes fast current or position loops from local sensors and cross-drive phase alignment is required, implementing full PTP or TSN endpoints in the drives provides more deterministic timing and easier jitter budgeting.
Do I really need hardware timestamping in the PHY, or is MAC-level timestamping accurate enough for my drives and IO blocks?
For shorter links and moderate timing requirements, MAC-level timestamping is often accurate enough and can reduce BOM cost. In networks with several switch hops, asymmetric paths or sub-microsecond skew targets, PHY-level timestamping usually delivers lower uncertainty because it observes the frame closer to the wire and reduces variable latency inside the PHY.
How should I budget jitter from the clock source through the PLL and fan-out into my PWM and ADC sampling clocks?
A practical approach starts with the oscillator jitter, then adds the PLL contribution and fan-out buffer jitter to estimate total clock uncertainty. This value is compared with torque ripple and position error limits. When the budget becomes tight, dedicated low-jitter timing devices are preferred over generic PLLs or unqualified clock trees.
When is a simple onboard clock tree enough, and when does a dedicated low-jitter timing IC family start to make sense?
A simple onboard clock tree from a decent XO is often sufficient when all axes share a single PCB and loop bandwidths are modest. Dedicated low-jitter timing ICs start to pay off when designs span multiple boards, encoder rates increase, or current loops are pushed harder and phase noise margins shrink.
Which sync, latch and trigger signals must be hard-wired across axes instead of being sent over the fieldbus?
Signals that align sampling instants or position captures between axes are strong candidates for hard-wired SYNC or LATCH lines. Typical examples include ADC conversion triggers, encoder index capture and safety-relevant trigger events. Fieldbus transport suits slower status changes, while tight skew budgets favour dedicated wiring with isolation and delay control.
What should the motion system do when PTP synchronisation is lost or the clock PLL drops lock in one of the drives?
A well-behaved system raises an alarm, enters a controlled degradation mode and avoids abrupt behaviour. Typical actions include holding local oscillators in holdover, reducing speeds and limiting synchronised moves. If synchronisation or PLL lock is not restored within a defined time, the system should transition into a safe and predictable stop state.
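The degradation sequence described above can be sketched as a small state machine. The state names, the holdover limit and the transitions are illustrative, not a normative design; a real drive would add alarm latching, derating levels and safety-monitor handshakes.

```python
# Minimal sketch of a sync-loss reaction: SYNCED -> HOLDOVER on loss of
# synchronisation, HOLDOVER -> SAFE_STOP when the holdover window expires.
# States, limits and transitions are illustrative assumptions.

SYNCED, HOLDOVER, SAFE_STOP = "SYNCED", "HOLDOVER", "SAFE_STOP"

def next_state(state: str, sync_ok: bool, holdover_elapsed_s: float,
               holdover_limit_s: float = 5.0) -> str:
    if sync_ok and state != SAFE_STOP:
        return SYNCED                  # sync (re)acquired: resume normal operation
    if state == SYNCED and not sync_ok:
        return HOLDOVER                # raise alarm, mark time quality degraded
    if state == HOLDOVER and holdover_elapsed_s > holdover_limit_s:
        return SAFE_STOP               # controlled stop, latch the fault
    return state                       # SAFE_STOP is latching until reset

state = SYNCED
state = next_state(state, sync_ok=False, holdover_elapsed_s=0.0)  # -> HOLDOVER
state = next_state(state, sync_ok=False, holdover_elapsed_s=6.0)  # -> SAFE_STOP
print(state)
```

Making SAFE_STOP latching (only cleared by an explicit reset) matches the idea from the diagnostics section that time-source failures should be treated as a dedicated fault class rather than silently recovered.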
How much logging and timestamped data is really needed to diagnose timing faults and jitter problems in the field?
Effective diagnosis usually requires timestamped data from the controller, switches and drives covering several seconds around a disturbance. Useful logs record offset and delay values, PLL status, sync events and key motion signals to non-volatile storage. Simple live counters on an HMI screen rarely give enough context to solve intermittent timing problems.
Which of the example architectures comes closest to my machine, and what is the lowest-cost path to reach the timing targets from there?
A compact four-axis module on a single controller board typically matches the local clock tree example. A line or cell with distributed drives and IO stations resembles the TSN case, while high-performance servo systems with tight electronic gearing align naturally with the EtherCAT distributed clock architecture shown earlier in this page.
Where in the design is it most effective to map vendor timing, sync I/O and safety monitor IC families for long-term support?
High-value vendor timing and safety monitor IC families are most effective at points where replacement cost or downtime is highest: the grandmaster or TSN switch, the main clock tree feeding motion domains and the safety monitor that supervises time-source faults. Commodity PHYs, basic isolators and minor IO paths can usually stay with simpler parts.