Speed & Torque Observer Planning for Motor Drives
This page collects everything needed to plan and debug a speed/torque observer: how ADC paths, DSP compute and sensor-fusion hooks are chosen for the drive, how noise, quantization and delay budgets translate into robust estimates, and how IC roles and FAQs map those design choices into a practical checklist for real projects.
What this page solves
This page concentrates on one narrow but critical topic: how to build a speed and torque observer chain that stays stable across the full operating range of a servo or BLDC drive. The focus is not on FOC theory or sensor catalogs, but on the hardware and compute path that feeds the observer with usable data.
The content is organized around three design decisions. First, the sampling architecture: whether phase and bus currents are captured through on-chip SAR ADCs, external simultaneous ADCs, or sigma-delta modulators, and how sampling instants and noise performance affect observer robustness. Second, the compute budget and DSP offload: how much math the observer consumes and when a dedicated DSP core or accelerator is required to keep the loop deterministic. Third, the choice to stay fully sensorless or blend in encoders, resolvers or Hall sensors as optional sensor-fusion inputs.
Topics such as MCU core selection, flash sizing and PWM resources are handled by the FOC controller and motion MCU pages. Detailed coverage of shunt, Hall and isolated current sensors belongs to the phase and bus current sensing pages. This page focuses on the observer chain itself: how the sampling path, DSP resources and optional fusion signals come together to deliver trustworthy speed and torque estimates.
Where the speed/torque observer sits in the FOC loop
In a practical FOC drive, the speed and torque observer sits between the current sensing front-end and the speed and torque control loops. Phase and bus currents are acquired by the sensing AFE and ADC or sigma-delta paths, then handed to the observer. The observer generates estimated speed and torque that become the feedback variables for the speed and current loops, which in turn drive the SVPWM and power stage.
The position of the observer inside the loop makes it extremely sensitive to latency, bandwidth and noise. Total delay from sampling instant through ADC conversion, digital filtering and task scheduling shifts the effective poles and phase margin of the control loops. If this delay becomes excessive, low-speed operation tends to jitter, and torque response becomes sluggish or oscillatory even when the control law is correct in theory.
Bandwidth and noise of the measurement path are equally important. Filters or sigma-delta decimation that are too narrow hide fast changes in current and torque, while quantization and analog noise can be amplified by the observer structure, especially at low speed and light load. Many unstable or noisy observers originate not from the algorithm itself, but from sampling jitter, poorly aligned ADC paths and heavily loaded CPUs further down the loop. The block diagram below highlights where these sensitivities appear in the chain.
ADC topologies that keep the observer stable
The speed and torque observer relies entirely on the quality of current and voltage samples delivered by the ADC path. Any weakness in noise performance, sampling alignment or total delay is amplified by the observer structure and can loosen stability margins, especially at low speed and light load. This section compares common ADC topologies from the observer’s point of view and highlights how each one behaves in terms of effective resolution, alignment to PWM and group delay.
Three families of architectures are considered. On-chip multi-channel SAR ADCs are widely used in compact servo and BLDC drives and can operate in single-channel or simultaneous sampling modes. Sigma-delta modulators with on-chip sinc filters offer high dynamic range and isolation, at the cost of additional filter delay that must be accounted for in the control bandwidth. Standalone synchronous or isolated ADCs are common in multi-axis or higher power systems, where per-phase alignment, signal integrity and consistent ENOB are more critical than minimal BOM count.
This section focuses on the ADC and front-end path itself rather than the sensor element. Decisions such as shunt versus Hall or fluxgate current transducers are covered on the phase and bus current sensing pages. The emphasis here is on how each ADC topology influences observer robustness: the achievable torque resolution, the consistency of sampling instants with respect to PWM and the amount of group delay that must be built into the observer and control-loop design.
DSP co-processing and compute budgeting for observers
From an implementation standpoint, the speed and torque observer is a periodic computation pipeline that must complete within every control cycle. Clarke and Park transforms, the observer equations themselves and any low-pass filters or PLL structures all consume CPU cycles. If the combined workload for these blocks approaches the available budget for a given control period, timing jitter and extended execution time start to feed directly into observer latency and noise.
A practical design therefore considers how the workload is partitioned between the main CPU core, any DSP accelerators and, in higher-end drives, a second control core. The main core typically schedules the overall FOC loop, runs the speed and torque controllers and services communication stacks, safety logic and diagnostics. DSP co-processors or instruction extensions handle numerically heavy operations such as vector transforms, matrix updates, sigma-delta filtering or trigonometric functions. In multi-axis or very high-bandwidth systems, a dedicated control core may execute the current loop and observer for all axes, while the application core focuses on motion profiles and fieldbus traffic.
Compute budgeting for the observer starts from the control period and CPU clock rate, then allocates a fraction of the available cycles per axis to the observer pipeline and tightly coupled current-control math. Remaining margin is reserved for communication, diagnostics and housekeeping. As a rule of thumb, total utilization on a control core is often kept in the range of fifty to sixty percent, leaving headroom for occasional load peaks without disturbing observer timing. The FOC controller and motion MCU pages address device-level selection; this section focuses on how those resources are actually consumed by the observer and its supporting math.
How noise and quantization shape observer robustness
Robust speed and torque estimation starts with understanding how ADC resolution and noise translate into the smallest observable change in current, torque and speed. The effective number of bits and measurement range together define the minimum current step. Through the motor torque constant and the mechanical inertia, that step becomes a discrete jump in torque and a corresponding step in speed feedback. If this step size is too large compared with the control resolution expected at low speed, the observer output will inevitably show visible jitter even when the control law is tuned correctly.
White noise from the ADC, residual PWM ripple and common-mode interference all pass through Clarke and Park transforms and the observer structure. Depending on the chosen observer bandwidth and pole locations, certain parts of this spectrum are attenuated while others are amplified. A fast observer with aggressive bandwidth settings tends to pass more high-frequency noise into the estimated torque and speed, whereas a slower observer may hide real transients during start-up or rapid load changes. Fixed delay from filtering and computation shifts poles and reduces phase margin, while delay jitter from task scheduling and contention introduces time-varying disturbance that is often more harmful than a constant delay of the same size.
From a control perspective, observer robustness is the ability to keep estimates stable and predictable in the presence of limited resolution, non-ideal noise spectra and unavoidable delays. A robust design accepts that noise and quantization cannot be removed, but ensures that low-speed torque and speed do not deteriorate into chatter, and that phase margin remains adequate across the full operating range. The checkpoint list below helps triage typical low-speed jitter and instability symptoms back to noise, quantization or timing root causes.
- Verify that ADC effective resolution and measurement range provide a current LSB that maps to a torque step well below the required resolution at low speed.
- Confirm that sampling instants are aligned with PWM in relatively flat portions of the current waveform, not near dead-time edges or large ripple peaks.
- Check sigma-delta decimation and digital filter settings, and ensure the resulting group delay has been accounted for in observer and current-loop bandwidth targets.
- Examine observer pole and filter bandwidth choices; reduce bandwidth if high-frequency noise is clearly leaking into estimated torque and speed.
- Inspect execution-time jitter for the control task and observer update; make sure these blocks run with fixed priority and are not pre-empted by long communication or logging routines.
- Probe the analog front-end around the ADC input to identify intermittent spikes, saturation or common-mode disturbances that could corrupt samples.
- Evaluate the impact of friction, backlash and other mechanical nonlinearities at low speed, and confirm that observer tuning does not overreact to these effects as if they were electrical noise.
- Confirm that tuning and validation covered actual low-speed and high-load operating cases, not only mid-speed or lightly loaded bench conditions.
Sensor-fusion hooks: encoders, resolvers and PdM
A speed and torque observer can operate in a purely sensorless mode or as part of a sensor-assisted scheme. Pure sensorless operation keeps wiring and cost low and performs well in mid to high speed ranges, but is sensitive to parameter drift and low-speed nonlinearities. Sensor-assisted modes combine electrical estimates with position and speed information from encoders or resolvers, and in some designs with vibration or temperature indicators. These additional signals provide hard references for low-speed and start-up behaviour, and allow the observer to track long-term changes in the drive mechanics.
Several fusion patterns are common. A first pattern uses encoder or resolver feedback at low speed and during start-up, then gradually transitions to sensorless estimation above a defined threshold, with both paths overlapping across a hand-over region. A second pattern uses a high-resolution absolute encoder as an angle reference, while the observer provides filtered and predicted angle and speed between encoder updates, correcting drift whenever fresh encoder data arrives. A third pattern feeds observer estimates into simple predictive maintenance indicators, combining long-term torque, current and harmonic content with vibration and temperature channels to detect gradual changes in load, friction or imbalance.
Implementing these fusion schemes requires hardware hooks that align all measurements on a common time base. Capture and timestamp units are needed for incremental encoders and resolver-to-digital outputs so that angle and speed samples are correlated with current and voltage sampling. Synchronisation inputs and shared timing from the multi-axis sync and timing subsystem keep fusion logic consistent across axes. Interfaces for encoders, R/D devices and vibration AFEs are detailed on the feedback and sensing pages; the emphasis here is on how those signals are consumed by the observer and fusion blocks to stabilise low-speed operation and feed simple PdM metrics.
IC and subsystem mapping around the observer
Around the speed and torque observer, the MCU, ADC, AFE and encoder interface roles are best selected as a set. The mapping depends on power level, number of axes and maximum speed: compact single-axis drives can rely on an integrated FOC MCU with SAR ADC, while multi-axis robot and high-voltage traction platforms usually require sigma-delta current modulators, external synchronous or isolated ADCs and additional co-processing resources.
The combinations below focus on observer-related functions rather than specific part numbers. For each application band, the table aligns the FOC MCU or DSC role, the primary ADC topology, the current-sense front-end, the encoder and resolver interfaces and any observer co-processing or timing infrastructure. Feedback and sensing subpages describe each interface in detail; this section highlights how these roles are typically grouped by kW range, axis count and speed requirements.
This mapping helps narrow architecture choices early in a design. A single-axis compact servo can often place observer and current loop on a single core with on-chip SAR ADC, while a multi-axis robot drive may assign observer and current loops to a dedicated control core with sigma-delta inputs and multi-channel encoder capture. High-voltage traction drives usually combine safety MCUs, isolated sigma-delta chains, resolver-to-digital interfaces and FPGA or safety co-processors, all tied to a shared timing backbone.
Observer planning & debugging FAQs
This FAQ groups common planning and debugging questions around speed and torque observers into short, reusable answers. Each answer focuses on concrete choices for ADC topology, compute budgeting, sensor fusion and timing so that observer behaviour can be stabilised and low-speed issues can be traced to a clear set of design or tuning decisions.