Ultra-High Resolution 24–32-Bit Sigma-Delta ADCs
This page explains how to design, select and troubleshoot 24–32-bit sigma-delta ADCs so that real-world weighing and lab-grade DC measurement systems actually deliver microvolt-level resolution, low 0.1–10 Hz noise and long-term stability, rather than just a high nominal bit count on the datasheet.
What this page solves – Why 24–32-bit ΣΔ exists
This page focuses on ultra-high-resolution 24–32-bit sigma-delta ADCs: devices that sacrifice speed to deliver extreme DC accuracy, ultra-low 0.1–10 Hz noise and long-term stability. It explains when this resolution tier is required and how it fits into the overall ADC resolution–speed landscape.
Position in resolution & speed tiers. 24–32-bit sigma-delta converters sit at the far right of the resolution axis. Typical converters in this class offer native 24–32-bit code width, with noise levels in the tens of nV/√Hz and 0.1–10 Hz RMS noise in the few-µV range, at data rates of only a few samples per second to a few kSPS.
Typical application spaces. Ultra-high-resolution sigma-delta ADCs are used in weighing scales (bench scales, floor scales, batching and dosing systems), force and pressure sensors, as well as lab-grade instruments such as high-digit DMMs, precision power supplies, battery testers and precision RTD / thermocouple measurement modules.
Problem statement this page addresses. When a system requires ultra-low 0.1–10 Hz noise and long-term stability rather than bandwidth, the design moves into the 24–32-bit sigma-delta domain. This page shows how to understand this resolution tier, how to map datasheet noise numbers to real-world resolution, and how to design and select 24–32-bit ΣΔ ADCs for weighing and lab-grade DC measurement.
Engineers searching for reasons to use 24-bit or 32-bit ADCs, or for high-resolution sigma-delta ADCs for weighing scales and precision measurement equipment, can use this page as the entry point to the entire 24–32-bit resolution tier.
Core concepts – Resolution, noise-free bits and 0.1–10 Hz metrics
Ultra-high-resolution sigma-delta ADCs are specified with several different resolution and noise metrics. Understanding how nominal code width, ENOB, noise-free bits and 0.1–10 Hz noise relate to each other is essential before interpreting any 24–32-bit datasheet.
Nominal N-bit code width and theoretical LSB. An N-bit converter provides 2^N output codes across the usable input span, so the ideal LSB size is LSB = Vref / 2^N. For example, a 24-bit ADC with a 2.5 V reference has 2^24 ≈ 16.7 M codes and a theoretical LSB of about 0.15 µV. A 32-bit ADC with a 5 V reference provides 2^32 ≈ 4.29 G codes and an ideal LSB of about 1.16 nV. These numbers represent mathematical limits; real devices cannot deliver all nominal bits as stable resolution.
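As a quick sanity check, the ideal LSB figures above can be reproduced in a few lines of Python. This is a simplified sketch: it assumes a unipolar 0..Vref span and ignores PGA gain and bipolar input ranges, which scale the result.

```python
def ideal_lsb(vref: float, n_bits: int) -> float:
    """Ideal LSB size for an N-bit converter spanning 0..Vref."""
    return vref / (2 ** n_bits)

print(ideal_lsb(2.5, 24))  # ≈ 1.49e-07 V, i.e. ~0.15 µV
print(ideal_lsb(5.0, 32))  # ≈ 1.16e-09 V, i.e. ~1.16 nV
```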
ENOB, RMS noise, peak-to-peak noise and noise-free bits. Effective number of bits (ENOB) folds in broadband noise and distortion and is typically derived from the measured SINAD (signal-to-noise-and-distortion ratio). RMS noise describes the standard deviation of code variations within a given bandwidth. Peak-to-peak noise is commonly approximated as about 6.6 times the RMS noise for a Gaussian distribution. Noise-free bits represent the number of most significant bits that remain stable within ±0.5 LSB of noise; they are usually 1–3 bits lower than ENOB for high-resolution sigma-delta ADCs.
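These relationships can be written down directly. A minimal sketch, assuming a Gaussian noise distribution and the 6.6× peak-to-peak approximation from the text; the example span and noise figures are hypothetical:

```python
import math

def effective_resolution_bits(fs_span_v: float, rms_noise_v: float) -> float:
    """Effective resolution from RMS noise: log2(span / RMS noise)."""
    return math.log2(fs_span_v / rms_noise_v)

def noise_free_bits(fs_span_v: float, rms_noise_v: float) -> float:
    """Noise-free bits use peak-to-peak noise ≈ 6.6 × RMS (Gaussian)."""
    return math.log2(fs_span_v / (6.6 * rms_noise_v))

# Example: a 5 V span with 1 µVrms of in-band noise
print(round(effective_resolution_bits(5.0, 1e-6), 1))  # ≈ 22.3
print(round(noise_free_bits(5.0, 1e-6), 1))            # ≈ 19.5
```

The ~2.7-bit gap between the two figures comes entirely from the 6.6× factor, which is consistent with noise-free bits sitting a few bits below the effective resolution.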
0.1–10 Hz noise as a dedicated low-frequency metric. For weighing scales and slow DC instruments, low-frequency drift and flicker noise dominate the usable resolution. The 0.1–10 Hz noise specification isolates this band and quantifies the small, slow variations that appear when the input is held constant over several seconds. A device may show low broadband RMS noise but still have poor 0.1–10 Hz performance, which directly limits zero stability and long-term readability in precision DC systems.
With these concepts in place, the next sections connect LSB size, noise-free bits and 0.1–10 Hz noise to practical quantities such as minimum resolvable weight, voltage and temperature in real 24–32-bit sigma-delta designs.
How 24–32-bit ΣΔ achieve ultra-low noise – OSR and digital filtering
Ultra-high-resolution sigma-delta ADCs rely on oversampling and digital filtering to push most of the quantization noise out of the signal band. The sigma-delta modulator spreads noise over a wide spectrum, and a digital filter with a high oversampling ratio (OSR) removes out-of-band noise so that only a narrow in-band slice remains with very low RMS noise.
Increasing OSR reduces in-band noise at the cost of output data rate. Because high-resolution converters are usually thermal-noise-limited, halving the output bandwidth (doubling OSR) reduces RMS noise by a factor of √2, about 3 dB, which corresponds to roughly 0.5 additional effective bits. Datasheets show this trade-off in noise tables that list RMS noise or noise-free resolution against data rate and filter mode.
Each combination of data rate and digital filter mode maps to an effective bandwidth. Low data rate modes with heavy filtering deliver very narrow bandwidths, for example 2.5 Hz, 10 Hz or 50 Hz, and achieve the lowest RMS noise. Higher data rate modes widen bandwidth and increase in-band noise but improve step response and settling time.
As a practical rule, a 24-bit sigma-delta ADC that is intended to deliver 22 bits or more of noise-free resolution must be operated at low output data rates with high OSR and strong digital filtering. This section connects OSR, digital filter settings and noise tables so that resolution-versus-data-rate decisions can be made intentionally for high-resolution designs.
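For a thermal-noise-limited converter, the √2-per-octave rule can be turned into a quick estimator. A sketch using a hypothetical reference point (2 µVrms at 1 kSPS), not a specific device:

```python
import math

def noise_at_rate(noise_ref_v: float, rate_ref_sps: float,
                  rate_sps: float) -> float:
    """Scale RMS noise with sqrt(bandwidth) from a known reference point."""
    return noise_ref_v * math.sqrt(rate_sps / rate_ref_sps)

# Hypothetical device: 2.0 µVrms at 1 kSPS
for rate in (1000.0, 250.0, 62.5, 10.0):
    noise_uv = noise_at_rate(2e-6, 1000.0, rate) * 1e6
    print(f"{rate:7.1f} SPS -> {noise_uv:.3f} µVrms")
```

Each 4× reduction in data rate halves the RMS noise, i.e. buys one extra effective bit, which is why the quietest datasheet rows are always the slowest ones.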
Reading datasheets – interpreting 0.1–10 Hz noise, drift and stability
High-resolution sigma-delta ADC datasheets contain several DC precision metrics that determine real-world resolution and stability. The most important are 0.1–10 Hz noise, long-term drift and temperature coefficient. Together they indicate how stable zero and full-scale readings will be over seconds, hours and days.
0.1–10 Hz noise. The 0.1–10 Hz noise specification, given in nVpp or µVpp, quantifies slow code wander when the input is held constant. It includes flicker noise and very low-frequency components that directly limit zero stability in weighing scales and slow DC instruments. Lower 0.1–10 Hz noise means finer stable resolution at the scale output.
Long-term drift and temperature coefficient. Long-term drift, often specified in ppm/1000 h or µV/month, describes how offset or gain slowly change over time. Temperature coefficients, in ppm/°C, describe how readings shift with ambient temperature. Combined, these parameters indicate how far the zero point can move over a day or a week and how often a weighing system or lab instrument should be re-zeroed or recalibrated.
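To see how quickly ppm-level figures consume high-resolution codes, consider a hypothetical 5 ppm/°C reference tempco over a 10 °C ambient swing:

```python
def tempco_drift(tc_ppm_per_c: float, delta_t_c: float,
                 vref: float, n_bits: int) -> tuple[float, float]:
    """Full-scale drift in volts and in ideal LSBs for a given tempco."""
    drift_v = tc_ppm_per_c * 1e-6 * vref * delta_t_c
    lsb = vref / (2 ** n_bits)
    return drift_v, drift_v / lsb

drift_v, drift_lsb = tempco_drift(5.0, 10.0, 2.5, 24)
print(drift_v, drift_lsb)  # 125 µV of gain drift ≈ 839 ideal 24-bit LSBs
```

A drift of hundreds of LSBs from a modest temperature swing is why re-zeroing, temperature compensation or a better reference is usually needed long before the ADC's own resolution becomes the limit.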
Noise tables and minimum resolvable weight. Noise tables that list data rate versus RMS noise and noise-free resolution show how digital filter modes translate into usable bits. By combining 0.1–10 Hz noise with load cell sensitivity and excitation voltage, it is possible to estimate the minimum resolvable weight increment at full scale. This turns abstract microvolt noise numbers into grams or milligrams of resolution for a given weighing range.
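The conversion from microvolts to grams can be sketched directly. All numbers below are hypothetical: a 2 mV/V load cell, 5 V excitation (10 mV full-scale span), a 10 kg capacity and 100 nVpp of 0.1–10 Hz noise referred to the input:

```python
def min_weight_step(sensitivity_mv_per_v: float, excitation_v: float,
                    full_scale_kg: float, pp_noise_v: float) -> float:
    """Smallest stable weight step implied by peak-to-peak input noise."""
    full_scale_signal_v = sensitivity_mv_per_v * 1e-3 * excitation_v
    return full_scale_kg * pp_noise_v / full_scale_signal_v

print(min_weight_step(2.0, 5.0, 10.0, 100e-9))  # 1e-4 kg, i.e. 0.1 g
```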
This section uses simplified datasheet-style tables to illustrate how to read 0.1–10 Hz noise specifications, understand long-term drift, interpret noise-free resolution versus output data rate and relate ADC noise to the smallest measurable change in load or voltage in precision systems.
Reference and front-end constraints at µV / nV levels
At 24–32-bit resolution levels, the reference source and front-end often dominate noise and drift rather than the ADC core itself. Reference noise and temperature drift, front-end filtering and protection, and PCB layout can easily consume the noise budget that ultra-high-resolution sigma-delta ADCs make available.
Reference source constraints. The reference source contributes wideband noise density in nV/√Hz and low-frequency 0.1–10 Hz noise that are directly translated into output-code noise. Temperature coefficient of the reference, expressed in ppm/°C, multiplies the full-scale span and sets a floor on how much the output can drift with ambient temperature. Any reference buffer amplifier adds its own voltage noise, 1/f noise and stability limitations, which become visible as extra noise and temperature-correlated drift at the ADC output in µV and nV systems.
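Because these contributions are largely uncorrelated, they combine in root-sum-square, which makes a quick budget check easy. The µVrms figures below are hypothetical:

```python
import math

def rss_noise(*sources_v: float) -> float:
    """Root-sum-square combination of independent noise sources."""
    return math.sqrt(sum(n * n for n in sources_v))

adc, vref, buf = 0.5e-6, 0.4e-6, 0.3e-6  # hypothetical in-band Vrms figures
total = rss_noise(adc, vref, buf)
print(total)  # ≈ 0.707 µVrms, ~41% above the ADC alone
```

Here the reference plus buffer path (RSS 0.5 µVrms) contributes exactly as much as the ADC core, illustrating how easily the reference chain can dominate the budget.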
Front-end constraints. For bridge and load-cell inputs, the RC input filter must be chosen to reject 50/60 Hz hum and higher-frequency interference while preserving acceptable step response. Source impedance and filter values must avoid excessive thermal noise and allow the ADC input sampling network to settle. High common-mode interference combined with a small differential signal requires excellent common-mode rejection from the front-end and the ADC input stage. Input protection components such as TVS diodes, series resistors and clamp diodes introduce leakage and capacitance that can disturb bridge balance and modify the effective input filter if they are not carefully selected and placed.
Layout, grounding and shielding. At microvolt and nanovolt levels, PCB layout and grounding determine whether a design meets the expected 24–32-bit performance. Star-ground structures help keep high-current loops away from sensitive reference and input returns. The connection point between analog and digital grounds must be chosen to prevent digital return currents from flowing through analog measurement regions. Guard rings and shielded regions around high-impedance input nodes reduce leakage and electric-field coupling, but must be implemented with awareness of added capacitance and its impact on front-end bandwidth and settling behaviour.
Data rate, filter modes and dynamic response for weighing and lab instruments
For weighing systems and lab instruments, configuring data rate and digital filter modes is a balance between noise, 50/60 Hz rejection and dynamic response. The same 24-bit sigma-delta ADC can behave like a slow, ultra-quiet converter or a faster, noisier converter depending on filter settings and oversampling ratio.
Weighing applications. Typical weighing scales run effective update rates around 5–20 SPS to provide stable numbers to the operator while maintaining low noise. Digital filters such as SINC3 or SINC4 are configured to place notches at 50/60 Hz, combining with the analog RC input filter to reject mains hum. Higher filter order and lower data rate improve noise and interference rejection but increase settling time when the load changes.
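The notch behaviour can be illustrated with the idealised |sinc|^k magnitude response: a sinc-family filter with a 10 SPS first notch nulls all multiples of 10 Hz, covering both 50 and 60 Hz simultaneously. This is a simplified model that ignores the exact decimation structure of any particular device:

```python
import math

def sinc_k_response(f_hz: float, f_notch_hz: float, k: int = 3) -> float:
    """Idealised |sinc|^k magnitude response; first notch at f_notch_hz."""
    if f_hz == 0.0:
        return 1.0
    x = math.pi * f_hz / f_notch_hz
    return abs(math.sin(x) / x) ** k

print(sinc_k_response(50.0, 10.0))  # ~0: 50 Hz sits exactly on a notch
print(sinc_k_response(60.0, 10.0))  # ~0: 60 Hz as well
print(sinc_k_response(52.0, 10.0))  # small but nonzero off-notch leakage
```

The third line shows why mains frequency drift off the nominal 50/60 Hz degrades rejection, and why some devices offer widened or averaged notches.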
Lab instrument applications. Lab instruments such as DMMs, precision power supplies and battery testers often offer multiple operating modes such as fast, normal and high-resolution. Fast modes use higher data rates and lighter filtering to capture trends and dynamic behaviour, trading increased RMS noise for shorter settling times. High-resolution modes use lower data rates and heavier filtering to reveal microvolt-level changes, with longer group delay after step inputs. Normal modes sit between these extremes for everyday measurements.
Design strategies combine data rate selection, filter mode and averaging. Multiple-sample averaging at moderate data rates reduces random noise while keeping response acceptable. In weighing systems, a fast-capture mode can detect rapid load changes, followed by a slow, high-resolution mode that allows the filter to settle before presenting a stable reading. Understanding how output data rate affects RMS noise and settling time allows these modes to be placed at meaningful points on the noise–response trade-off curve.
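The averaging gain above follows the familiar √N rule, which only applies to uncorrelated noise; a short sketch:

```python
import math

def averaged_noise(rms_noise_v: float, n_samples: int) -> float:
    """Uncorrelated RMS noise remaining after averaging n_samples readings."""
    return rms_noise_v / math.sqrt(n_samples)

# 16 averages buy a 4x (2-bit) reduction in uncorrelated noise;
# drift and correlated 50/60 Hz pickup are NOT reduced this way.
print(averaged_noise(1e-6, 16))  # 1 µVrms -> 0.25 µVrms
```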
Application patterns – weighing scales and lab-grade measurement chains
Ultra-high-resolution 24–32-bit sigma-delta ADCs appear in a limited set of applications where microvolt or nanovolt-level resolution and long-term stability matter more than raw speed. Two representative patterns are weighing scales that use load cells and lab-grade DC measurement chains such as DMMs and precision source/measure instruments. In both cases, the 24–32-bit ADC sits in the middle of a carefully engineered signal chain that defines real system performance.
Weighing scales and load-cell systems
A typical weighing scale chain uses a bridge-based load cell excited from a precision reference, followed by a 24-bit sigma-delta ADC and a microcontroller that performs compensation and user interface functions. The bridge sensitivity in mV/V and the full-scale weight determine the bridge output span in millivolts, which is then mapped to the ADC input range. From this mapping and the ADC noise specifications, the minimum resolvable weight step can be calculated.
Zero and full-scale calibration are used to align the ADC codes to real-world weight. Zero calibration stores the empty-scale output as offset, while full-scale calibration uses a known reference weight to determine the gain. Temperature compensation and long-term drift control are added on top, using temperature sensors and periodic re-zero procedures to maintain stable readings over time and ambient changes.
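The two-point calibration described above can be sketched as a small closure; the raw code values and the 5 kg reference weight are hypothetical:

```python
def calibrate(code_zero: int, code_span: int, ref_weight_kg: float):
    """Build a code-to-weight mapping from a zero and a full-scale point."""
    gain = ref_weight_kg / (code_span - code_zero)
    def code_to_weight(code: int) -> float:
        return (code - code_zero) * gain
    return code_to_weight

# Hypothetical codes: 120_000 with the pan empty, 8_120_000 with a 5 kg
# reference weight on the scale
to_weight = calibrate(120_000, 8_120_000, 5.0)
print(to_weight(4_120_000))  # 2.5 (kg): halfway between the cal points
```

Temperature compensation then adjusts `code_zero` and `gain` (or the result) as functions of a measured temperature, on top of this linear mapping.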
Lab-grade DC measurement chains
Lab-grade DC measurement systems such as DMMs, precision voltmeters and source/measure units use a range-selection and front-end network in front of a 24–32-bit sigma-delta ADC. Relays or analog switches select different ranges, feeding gain or attenuation networks that keep the ADC input close to full-scale for each range. This approach maximises the effective use of available resolution across millivolt to hundreds-of-volts spans.
At microvolt and nanoamp levels, leakage and noise become critical. Range resistors and protection components must use low-leakage, low-thermal EMF technologies. Guard rings and shielded layouts are applied to keep surface leakage and capacitive coupling under control. The high-resolution ADC is then able to deliver the required resolution for DC voltage, current or resistance ranges when combined with careful range switching, front-end design and calibration.
Engineering checklist – designing with 24–32-bit ΣΔ ADCs
Designing around a 24–32-bit sigma-delta ADC requires checking more than just nominal resolution and pin count. A structured engineering checklist helps confirm whether the application, front-end, reference and layout really support ultra-high-resolution performance and whether calibration and self-test mechanisms are in place.
• Target resolution and physical quantity. Define the required noise-free bits and the minimum resolvable weight, voltage, current or temperature at the point of use. Map ADC noise and load-cell or front-end sensitivity into real-world units.
• Allowed update rate and response time. Specify acceptable display update rates and control-loop latency. Confirm whether low data rates and long settling times are acceptable or whether fast and high-resolution modes are both needed.
• Environmental interference and EMI. Evaluate exposure to 50/60 Hz mains, motor drives and switching supplies. Decide whether stronger digital notches, heavier analog filtering or improved shielding are required to protect the noise budget.
• Reference source quality. Check reference noise density, 0.1–10 Hz noise, temperature coefficient and long-term stability, including the reference buffer. Ensure that combined reference behaviour does not dominate overall system noise and drift.
• Front-end design. Review bridge excitation, input filter bandwidth, source impedance and protection elements. Confirm that leakage, thermal noise and protection capacitance are compatible with microvolt-level resolution.
• Temperature range and compensation. Determine the ambient temperature range and the required accuracy over that range. Plan gain and offset compensation, including temperature sensors and calibration models where necessary.
• Calibration strategy. Define how zero, full-scale and linearity calibration will be performed in production and in the field. Reserve storage for calibration coefficients and ensure that measurement firmware can use updated calibration data.
• Self-test and monitoring. Decide whether the system should monitor reference voltage, detect sensor open/short faults and run periodic self-test routines. These mechanisms are essential for maintaining confidence in 24–32-bit readings over years of operation.
IC selection logic – choosing between 24 and 32-bit ΣΔ ADCs
The labels “24-bit” and “32-bit” on sigma-delta ADC datasheets rarely mean that every code is noise-free. Typical 24-bit devices deliver 18–21 effective bits and about 17–20 noise-free bits in realistic low-speed modes. Modern 32-bit devices often use deeper oversampling and internal accumulation to extend noise-free performance into the 22–24-bit region at very low data rates, trading bandwidth and conversion speed for extra resolution and dynamic range.
In practice, a well-chosen 24-bit sigma-delta ADC with a clean reference, carefully designed front-end and proper calibration covers most weighing, load-cell and industrial DC measurement tasks. The step up to a 32-bit sigma-delta ADC is justified only when a system genuinely needs more than about 22 noise-free bits, very wide dynamic range and long-term stability that surpass what high-end 24-bit devices can provide, such as in lab-grade instruments, seismic acquisition or universal input modules.
Decision logic: start from resolution, rate and channel needs
The first decision is the required noise-free resolution in the target physical quantity. From the load-cell sensitivity or front-end gain and the ADC noise or 0.1–10 Hz noise specification, the minimum resolvable weight, voltage, current or temperature can be estimated. If the application only needs about 17–20 noise-free bits at its chosen data rate, a good 24-bit sigma-delta ADC is usually sufficient. Only when the requirement clearly demands 22 or more noise-free bits does a 32-bit class device become a realistic option.
The second decision is the allowed update rate. Ultra-high ENOB is obtained at low data rates such as 1–20 SPS for scales, or a few tens of SPS for precision DC measurements. At medium data rates in the 10–1000 SPS range, high-quality 24-bit ADCs still offer strong performance, while many 32-bit devices also start to give up some effective bits. If the system must run significantly faster than a few hundred SPS, improving front-end design and reference quality around a solid 24-bit device often brings more benefit than moving to a nominal 32-bit converter.
The third dimension is channel count and integration level. Bridge-based weighing and pressure systems typically need one or a few high-precision channels with integrated PGA, bridge excitation and diagnostics. Multi-sensor industrial nodes favour multi-channel precision sigma-delta ADCs with flexible input configurations, modest data rates and robust diagnostics. Lab instruments, seismic systems and universal input modules may justify 32-bit converters where extreme resolution and dynamic range outweigh cost and complexity.
Example device buckets and representative part numbers
24-bit, bridge-focused ΣΔ with integrated PGA / excitation. This category targets weighing scales and bridge sensors where a small number of high-performance channels and integrated front-end features matter most. Typical examples include devices such as TI ADS1232/ADS1234 and ADS122C04, and Analog Devices AD7195 and AD7799, which combine 24-bit sigma-delta cores with PGAs, bridge-friendly inputs and options such as AC excitation and integrated references for load-cell and pressure applications.
24-bit, multi-channel precision ΣΔ. This bucket suits mixed-sensor industrial measurement and generic precision inputs. Examples include ADI AD7124-4/AD7124-8 and AD4130-8, Microchip MCP3561/2/4, Nuvoton NADC24 and LTD2261-family devices, which provide multiple differential or single-ended channels, low noise, flexible digital filtering and diagnostics for RTDs, thermocouples, shunts, bridge sensors and other low-bandwidth signals on one ADC.
32-bit, lab-grade ΣΔ ADCs. Lab-grade and high-end industrial systems sometimes use nominal 32-bit converters such as ADI AD7177-2, TI ADS1262/ADS1263 and ADS1285/ADS1288. These devices employ deep oversampling and advanced digital filters to extend effective resolution at low data rates, and often add rich diagnostics and multi-channel support. They target metrology, seismic acquisition and universal input modules where dynamic range, long-term stability and fault detection justify their cost, power and design complexity.
RFQ essentials for 24–32-bit ΣΔ ADCs
When issuing a request for quotation or technical proposal, the following fields help vendors recommend suitable 24–32-bit sigma-delta ADCs:
- Target noise performance: required noise-free bits, 0.1–10 Hz noise (µVpp/nVpp) and acceptable RMS noise at the intended data rate.
- Bandwidth and update rate: required output data rate range, settling time after a step and any display or control-loop timing constraints.
- Input and reference conditions: bridge, fully differential, pseudo-differential or single-ended signals; expected full-scale input range; reference source and whether the ADC must excite a bridge or use an internal reference.
- Channel count and integration level: number of channels, need for integrated PGA, MUX, current sources, temperature sensor or AC excitation, and any simultaneous-sampling requirements.
- Environment and stability: operating temperature range, required gain and offset drift over temperature and time, and long-term stability expectations.
- System and compliance constraints: preferred interface (SPI, I²C or others), supply voltage and power limits, package and size constraints, and required industrial or automotive qualification levels.
FAQs – troubleshooting high-resolution ΣΔ designs
High-resolution 24–32-bit sigma-delta ADCs reveal issues that are invisible in lower-resolution systems. The following questions focus on noise, drift, layout and upgrade decisions that appear only when designs push microvolt and nanovolt-level performance.
1. Why is my 24-bit sigma-delta ADC only giving 18–20 noise-free bits?
Nominal resolution, effective number of bits (ENOB) and noise-free bits are different metrics. A 24-bit ADC has 2^24 codes in theory, but its ENOB and noise-free resolution are limited by internal noise, distortion and filtering. Datasheets usually show that even high-quality 24-bit sigma-delta converters deliver about 18–21 ENOB and 17–20 noise-free bits at practical data rates.
Noise-free bits depend strongly on the selected output data rate and digital filter mode. Higher oversampling ratios and slower data rates reduce noise, while fast modes trade noise for bandwidth. External factors such as reference noise, front-end resistor noise, PCB coupling and measurement time window further reduce the number of stable counts.
When a 24-bit sigma-delta ADC is configured for realistic data rates and measured on a real sensor and PCB, 18–20 noise-free bits is normal performance. To approach the best-case datasheet numbers, the design must use the lowest-noise reference and front-end possible, operate in high-OSR modes and average over longer intervals.
2. What is the difference between 0.1–10 Hz noise and 50/60 Hz noise, and how should they be treated together?
0.1–10 Hz noise is an intrinsic low-frequency noise metric that combines 1/f noise, slow drift and very low-frequency random fluctuations from the ADC, reference and front-end components. It defines how stable the reading is over seconds to minutes and is critical for scales and slow DC instruments. In contrast, 50/60 Hz noise comes from external mains fields, ground loops and coupling from power wiring and equipment.
The two problems must be handled with different tools. Reducing 0.1–10 Hz noise requires better devices, clean references, stable temperature and higher oversampling ratios. Reducing 50/60 Hz noise requires layout improvements, shielding, common-mode rejection, analog RC filters and digital notches at the mains frequency and its harmonics.
A practical approach is to first apply layout, shielding and filtering to minimise 50/60 Hz components, verified with spectral plots or notch settings. Once mains interference is under control, the remaining low-frequency noise can be evaluated against the 0.1–10 Hz specification to determine whether the ADC, reference and front-end meet the required stability.
3. Why does a weighing scale slowly drift over time even with no load?
Long-term zero drift in a weighing scale is usually dominated by the load cell and mechanical structure rather than by raw ADC resolution. Effects such as creep in the metal structure, stress relaxation in fasteners and adhesives and slow changes in mounting conditions can produce gradual changes in bridge output even with no applied load.
Temperature changes add further drift through the load cell gauge resistances, bridge imbalance, reference voltage and front-end amplifier offsets. In a 24–32-bit system, even small ppm-level shifts translate into visible counts. Moisture, contamination and PCB leakage paths can also introduce slow drift as surface resistance changes over time.
Practical designs combine mechanical optimisation, temperature monitoring and regular auto-zero or auto-tare routines. Long-term logging of output and temperature helps distinguish normal slow drift from abnormal behaviour, and maintenance procedures can reset offset and gain periodically to keep the scale within specification.
4. Why do readings change in steps when the temperature varies?
Step-like changes during temperature variation often come from the interaction of digital filtering, averaging and calibration rather than from a truly discontinuous sensor. Deep digital filters and moving averages output a smoothed value that only moves to the next code after accumulated drift exceeds a fraction of an LSB, creating an apparent staircase as temperature changes slowly.
Temperature compensation and look-up tables can also introduce steps if they use piecewise constants or coarse interpolation. As the measured temperature crosses a boundary between compensation regions, the applied gain or offset changes abruptly, producing a visible jump in the reading that tracks the temperature grid rather than the true physical drift.
Reducing averaging depth, examining raw ADC codes and plotting readings against temperature helps determine whether steps are caused by filtering or compensation logic. If the phenomenon is purely an artefact of digital processing and the application does not need to display micro-level changes, scaling or rounding the user display can hide the staircase while preserving high internal resolution for compensation.
5. How can I tell whether drift is caused by the reference or by the sensor?
The most direct method is to monitor a stable reference node alongside the sensor signal. Many high-resolution systems reserve one channel for a precision reference point or a known divider. If both the reference-monitor channel and the sensor channels drift together in the same direction and proportion, the reference or ADC gain is likely the dominant source of drift.
If the monitored reference channel stays relatively stable while one or more sensor channels drift, the problem is probably in the sensor, wiring, front-end or mechanical conditions. Correlating sensor drift with temperature, time after power-up and mechanical disturbances helps identify whether the source is a sensor property, mounting stress or environment.
A robust design treats reference monitoring as a standard feature. By periodically sampling an internal or external reference node and comparing its behaviour with sensor channels, the system can distinguish reference-driven gain drift from sensor- or wiring-related changes and apply the correct compensation or fault indication.
6. Why do different channels show different noise when using a multiplexed high-resolution ADC?
In multiplexed sigma-delta systems, each channel usually has different source impedance, RC filtering, protection components and wiring. These differences change how the sampling network settles and how external noise couples into the ADC input. Channels connected to low-impedance bridges often show lower noise than channels connected to high-impedance sensors or long cables.
Layout and grounding variations also matter. Longer traces, asymmetric routing or proximity to digital lines and power rails can expose some channels to stronger 50/60 Hz or switching noise. If the settling time between MUX steps is too short, channels with heavier filtering or higher source impedance may not fully settle and can show extra noise and apparent offset shifts.
A useful diagnostic step is to short all inputs to the same stable potential and compare channel noise. If differences shrink or disappear, front-end and layout differences are the root cause. Matching RC filters, reviewing protection networks and increasing settling time between channel conversions are common measures to align noise performance across channels.
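The shorted-input comparison is easy to automate: with every channel tied to the same quiet node, per-channel standard deviations should match, and outliers point at that channel's front end or layout rather than the ADC core. A sketch with hypothetical code samples:

```python
import statistics

def channel_noise_report(samples_by_channel: dict) -> dict:
    """Per-channel RMS noise (population std dev) of shorted-input codes."""
    return {ch: statistics.pstdev(s) for ch, s in samples_by_channel.items()}

readings = {                           # hypothetical shorted-input codes
    "ch0": [100, 101, 99, 100, 100],
    "ch1": [100, 104, 96, 100, 100],   # 4x noisier: suspect its RC / wiring
}
print(channel_noise_report(readings))
```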
7. Why is my prototype board noisier than the vendor evaluation board with the same ADC?
Vendor evaluation boards are usually laid out as reference designs with carefully partitioned analog and digital areas, clean ground planes, short reference paths and well-filtered supplies. A custom prototype frequently integrates the ADC next to switching regulators, digital processors, fast GPIO and long sensor wiring, which all introduce extra noise and coupling paths.
Common contributors to higher noise on prototypes include shared noisy supplies, poor ground return paths, insufficient decoupling on reference and analog supply pins, long and unshielded sensor traces and incomplete separation between high-current and sensitive measurement loops. Even small differences in component placement and routing can significantly change the effective noise at 24-bit and above.
Comparing the prototype layout against the evaluation board guidelines and temporarily powering both boards from the same clean supply and sensor can reveal layout-related gaps. Applying the vendor’s layout recommendations for reference routing, ground partitioning, decoupling and guard structures is essential to reproduce evaluation-board noise performance on a custom design.
8. What can be done if digital filter delay makes a 24-bit scale response too slow?
The low noise of high-resolution sigma-delta converters comes from high oversampling and deep digital filtering, which introduce group delay and long settling times. In a weighing scale, this can make readings feel sluggish when a load is applied or removed. The trade-off is between noise performance and dynamic response.
One common strategy is to use two operating modes. A fast mode with higher data rate and lighter filtering detects changes and tracks transients, while a high-resolution mode with lower data rate and stronger filtering is used only once the signal becomes stable. The firmware can switch between these modes based on simple stability criteria.
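The mode switch can hinge on a very simple stability criterion, such as the spread of the last few readings staying inside a band. A firmware-style sketch with assumed threshold values:

```python
def select_mode(recent_codes: list, stable_band: int = 50,
                stable_count: int = 5) -> str:
    """Pick 'high_res' once the last readings sit inside a stability band."""
    if len(recent_codes) < stable_count:
        return "fast"
    window = recent_codes[-stable_count:]
    return "high_res" if max(window) - min(window) <= stable_band else "fast"

print(select_mode([1000, 1004, 998, 1001, 1000]))   # high_res: settled
print(select_mode([1000, 1400, 1800, 2100, 2300]))  # fast: load changing
```

In practice the band and count would be tuned against the filter's settling time so that the scale never displays a high-resolution value before the filter has settled.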
If the application does not require the full theoretical resolution, selecting a moderate filter setting, shortening averaging windows and reducing displayed resolution can improve perceived responsiveness without sacrificing necessary accuracy. Mechanical damping and vibration control at the scale platform also help reduce the amount of low-frequency content that the filter must suppress.
9. How can 50/60 Hz ripple and power-supply noise be debugged in a high-resolution ADC system?
Debugging mains ripple and supply noise starts with isolating the ADC from noisy parts of the system. Powering the ADC and front-end from a clean test supply or battery and wiring the sensor with short, twisted and shielded leads can reveal how much noise is coming from the original power and wiring architecture. Significant improvement in this test indicates that layout, supply routing or cabling are major contributors.
Spectral analysis is very effective. A narrowband FFT or histogram of ADC codes highlights 50/60 Hz and harmonics as distinct peaks. Adjusting analog RC filters, digital notches and data rate settings and observing changes in those peaks helps separate analog coupling from digital artefacts. Measuring common-mode and differential voltages at the sensor terminals, reference pins and supplies with an oscilloscope further pinpoints coupling paths.
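A single-bin DFT is often enough to confirm or rule out a mains component without a full FFT. A self-contained sketch on synthesized codes (in a real system the samples would come from the ADC); the sample rate, record length and noise level are assumptions:

```python
import cmath
import math
import random

def tone_magnitude(samples, fs_sps: float, f_hz: float) -> float:
    """Amplitude of the component at f_hz via a single-bin DFT."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * f_hz * k / fs_sps)
              for k, s in enumerate(samples))
    return 2.0 * abs(acc) / n

random.seed(0)
fs = 1000.0  # SPS; 2000 samples = 2 s, so 50 Hz lands on an exact bin
codes = [0.5 * math.sin(2 * math.pi * 50.0 * k / fs)
         + 0.01 * random.gauss(0.0, 1.0) for k in range(2000)]
print(tone_magnitude(codes, fs, 50.0))   # ≈ 0.5: the injected 50 Hz pickup
print(tone_magnitude(codes, fs, 173.0))  # ≈ 0: only the noise floor
```

Re-running this check at 50/60 Hz and a few harmonics before and after a filter or layout change gives a direct, quantitative view of how much mains pickup each change removes.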
Long-term solutions include using differential routing with good symmetry, minimising loop areas, implementing star-grounding, separating high-current returns from the measurement ground, and adding appropriate LC or RC filters in the supply tree feeding the ADC and reference. These measures reduce both mains-related ripple and broadband supply noise seen in a high-resolution system.
10. When does it really make sense to move from a 24-bit to a 32-bit sigma-delta ADC?
Moving from a 24-bit to a 32-bit sigma-delta ADC only makes sense after the 24-bit design has been optimised. The existing system should operate close to the datasheet’s best noise and ENOB figures, with a low-noise reference, carefully designed front-end, clean layout and controlled environment. If, under these conditions, the measured noise-free resolution still falls well short of a clearly defined target, a 32-bit device may be justified.
Typical candidates for 32-bit converters include instruments that need more than about 22 noise-free bits, very wide dynamic range on a single range and long-term stability as a primary selling point, such as high-digit DMMs, reference instruments, seismic acquisition systems and universal input modules. In these cases, deeper oversampling and advanced filtering can provide tangible benefit.
If the application only requires 16–18 effective bits, or if the limitation comes from reference quality, sensor behaviour, mechanical stability or layout, upgrading the ADC alone will not solve the problem. In many industrial and weighing applications, investing engineering effort into a robust 24-bit design yields better cost, power and complexity trade-offs than adopting a nominal 32-bit solution.