
DC & Low-Frequency Precision Sampling with ADCs


This page explains how to choose and design precision ADC signal chains for DC to tens-of-hertz bandwidths, focusing on 0.1–10 Hz noise, drift, mains rejection, and latency trade-offs so that slow signals stay accurate, stable, and trustworthy over time.

What this page solves

This page focuses on DC and low-frequency precision sampling with ADCs, where the effective signal bandwidth extends from true DC up to a few tens of hertz. Typical targets include weigh scales, bridge sensors, precision temperature channels, and slow voltage or current monitors that demand stable, repeatable readings over minutes, hours, or even days.

In this regime, the primary challenge is not pushing the sampling rate, but controlling low-frequency noise, offset and gain drift, mains interference, and long-term stability. The goal is to turn datasheet specifications such as 0.1–10 Hz noise, offset error, temperature drift, and digital filter settings into a predictable error budget for real-world DC measurements.

The content is structured to help build and tune complete DC / low-frequency precision ADC signal chains, from sensor and front-end choices through architecture selection, oversampling and filtering, and calibration strategy.

  • Clarify what “DC / low-frequency precision ADC” means and which measurement problems fall into this band.
  • Interpret low-frequency noise, offset, drift, and filter-related specs on ADC datasheets for metrology-grade use.
  • Outline a robust precision signal chain from sensor to digital output, with attention to mains rejection and long-term stability.
[Figure: Sampling bands versus precision domains — frequency axis from DC through 10 Hz, 1 kHz and 10 MHz, with the DC / low-frequency precision band highlighted (weigh scales, bridge sensors, temperature channels, slow control loops) alongside the IF and RF sampling bands; see the IF / RF sampling pages for higher-bandwidth designs.]

Definition & positioning of the DC / low-frequency precision band

The DC / low-frequency precision band refers to measurements whose effective bandwidth spans from true DC up to roughly a few tens or a few hundred hertz. Signals in this range change slowly, and the priority is accurate averages and smooth trends over time rather than tracking fast transients or wideband content.

This band is distinct from mid- or high-frequency sampling, where SNR is dominated by aperture jitter, front-end bandwidth and driver distortion. It is also different from “ultra-high-resolution” discussions that focus purely on the number of bits. Here the emphasis is on low-frequency noise, offset and gain drift, long-term stability, and rejection of low-frequency interference such as 50/60 Hz mains.

Typical applications in this band include weigh scales and load cells, pressure and slow flow sensors, RTD and thermocouple temperature channels, battery and busbar current or voltage monitors, and slow process-control feedback loops. All of these require predictable accuracy over minutes to hours with minimal drift.

  • Clarify the frequency range that is considered “low frequency” for precision ADC designs.
  • Differentiate DC precision needs from high-speed ADC requirements dominated by jitter and bandwidth.
  • Map common metrology and sensing applications to this DC / low-frequency precision band.
[Figure: Definition of the DC and low-frequency precision band — frequency axis highlighting 0–100 Hz as the precision region (0.1–10 Hz noise, offset and drift, long-term stability), with an arrow toward the jitter-limited higher-frequency region covered by the IF / RF sampling pages.]

Low-frequency error sources: noise, offset and drift

DC and low-frequency precision measurements are limited by a mix of random noise, static errors and slow drift rather than by bandwidth or jitter. At very low frequencies, 0.1–10 Hz noise and 1/f behaviour dominate how stable the reading looks over seconds and minutes, while broadband noise sets the short-term RMS resolution for each conversion.

Offset, gain error and total unadjusted error define the static accuracy that remains even when measurements are averaged. Their temperature coefficients and long-term drift describe how this accuracy degrades as the system warms up, cools down or ages in the field. These parameters must be read carefully from the ADC datasheet and combined with sensor and reference contributions.

Low-frequency external interference such as 50/60 Hz mains, mechanical vibration and slow supply variation adds further error components. Averaging and digital filtering can reduce random noise, but they cannot fully cancel offset, drift or deterministic mains peaks, so a structured error budget that groups each contribution is essential for metrology-grade DC measurements.

The following signal-chain view highlights where these error sources arise and how they map onto the frequency domain, forming a basis for building a complete DC error budget.

[Figure: Error contributions versus frequency for DC precision — signal chain from sensor through front-end, ADC and digital filter to the output reading, mapped onto a frequency plot with 1/f and 0.1–10 Hz noise near DC, mains interference lines at 50/60 Hz, and broadband noise above.]

Chopping, auto-zero and low-frequency noise mitigation

Chopper-stabilized and auto-zero techniques are widely used in DC and low-frequency precision ADC signal chains to suppress offset and 1/f noise. Chopping modulates the input with a square wave so that, after amplification and demodulation, the amplifier's offset and 1/f noise are translated up to the chopping frequency, where filtering can attenuate them. Auto-zeroing periodically measures and subtracts the internal offset so that slow drift is reduced.

These techniques significantly improve the 0.1–10 Hz noise performance that dominates metrology readings, but they introduce trade-offs. Chopping and auto-zero consume time within each conversion cycle, add modulation ripple around the chopping frequency, and limit usable bandwidth. The surrounding sensor interface, filtering and power supplies must tolerate the switching activity without introducing extra artefacts.

For some DC precision designs a high-order sigma-delta ADC with well-chosen oversampling and digital filtering is sufficient. For the most demanding low-frequency noise targets, chopper-stabilized front-ends or auto-zero stages can be justified, provided that ripple, latency and implementation complexity are acceptable for the application.
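The chopping idea can be sketched numerically: modulate a DC input with a ±1 square wave, pass it through a modeled amplifier that adds an input-referred offset, then demodulate and low-pass filter. The offset ends up as ripple at the chop frequency and is removed by the filter, while the signal is recovered at DC. A minimal numpy sketch with illustrative numbers (1 mV input, 50 µV offset), not tied to any particular device:

```python
import numpy as np

fs = 100_000                      # simulation sample rate, Hz
f_chop = 1_000                    # chopping frequency, Hz
n_per = fs // f_chop              # samples per chop period
k = np.arange(10_000)

vin = 1e-3                        # 1 mV DC input
chop = np.where((k % n_per) < n_per // 2, 1.0, -1.0)  # +/-1 square wave

# Modeled amplifier: gain 100 with a 50 uV input-referred offset
gain, offset = 100.0, 50e-6
v_amp = gain * (vin * chop + offset)

# De-chop: the signal returns to DC, the offset is modulated up to f_chop
v_dechop = v_amp * chop

# A moving average over one chop period removes the offset ripple
v_out = np.convolve(v_dechop, np.ones(n_per) / n_per, mode="valid")

print(v_out.mean())   # ~0.1 V = gain * vin; the 5 mV output offset is gone
```

Without the chop/de-chop pair, the same amplifier would contribute a 5 mV output error (gain × offset) that no amount of averaging could remove.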

[Figure: Chopping and auto-zero concepts for low-frequency ADCs — chopper-stabilized front-end (chop, amplifier, de-chop) driving an ADC, with noise spectra comparing 1/f noise plus offset without chopping against flattened low-frequency noise plus a small ripple peak at the chopping frequency with chopping or auto-zero.]

Digital filters, oversampling and 50/60 Hz rejection

In DC and low-frequency precision systems, oversampling and digital filtering are key tools for reducing noise and suppressing mains interference. When a sigma-delta modulator runs at a high modulator clock and the output data rate is much lower, the oversampling ratio creates room for a digital filter to push quantization and broadband noise out of the low-frequency band of interest.

Simple moving-average filters act like boxcar integration; they lower random noise and offer evenly spaced zeros in the frequency response. Built-in SINC filters in sigma-delta ADCs extend this idea, providing deep notches at predictable frequencies related to the output data rate. More complex FIR filters implemented in a controller or FPGA can shape the passband and notches more precisely when required by demanding metrology specifications.

For weigh scales, bridge sensors and other slow sensors operating in environments with strong 50/60 Hz fields, the output data rate and filter order are chosen so that notches fall directly on mains frequencies and their harmonics. Poorly chosen sampling rates can leave shallow rejection, cause beat-like low-frequency envelopes or make the readings sensitive to small clock deviations. A clear view of the oversampling ratio, modulator clock and output rate avoids these pitfalls.
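The notch placement can be checked numerically. The response of a sincN decimation filter is well approximated (for frequencies far below the modulator clock) by sinc(f/fOUT)^N, with zeros at integer multiples of the output data rate, so running a sinc3 filter at 10 SPS places notches on both 50 Hz and 60 Hz. A minimal numpy sketch of that ideal envelope, for illustration only:

```python
import numpy as np

def sinc3_mag_db(f, f_out):
    """Ideal sinc^3 decimation-filter magnitude in dB, normalized to DC.
    Approximation valid for f << modulator clock; notches fall at
    integer multiples of the output data rate f_out."""
    x = np.sinc(np.asarray(f, dtype=float) / f_out)  # np.sinc = sin(pi x)/(pi x)
    mag = np.abs(x) ** 3
    return 20 * np.log10(np.maximum(mag, 1e-30))     # floor avoids log(0)

f_out = 10.0  # 10 SPS: notches at 10, 20, ..., 50, 60 Hz
for f in (1.0, 50.0, 60.0):
    print(f, "Hz:", round(float(sinc3_mag_db(f, f_out)), 1), "dB")
```

The same function also exposes the passband droop: at 1 Hz the response is already a fraction of a dB below DC, which matters when the measurement band extends to tens of hertz.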

The diagram below links a sigma-delta signal chain to an example magnitude response, highlighting how oversampling and digital filter design place notches on 50/60 Hz mains while keeping the DC passband flat.

[Figure: Oversampling and digital filter notches for DC precision — sigma-delta modulator, SINC/FIR digital filter and decimator feeding the output data stream at fOUT, with OSR = fMOD / fOUT controlling the filter notches, next to a magnitude response showing a flat DC passband and deep notches at 50/60 Hz.]

Choosing architectures for DC / low-frequency precision

Once the bandwidth is limited to DC and a few tens or hundreds of hertz, the main architectural choices for ADCs are medium-resolution SAR devices, high-resolution sigma-delta converters with integrated filters, and delta-sigma modulators that rely on external digital filtering. Each option occupies a different region of the trade-off space between precision, output rate, latency, channel count and design complexity.

SAR ADCs offer low latency, moderate to high output rates and relatively simple integration, which suits slow control loops, power-supply and motor-feedback monitoring, and many industrial channels that need good but not extreme metrology performance. High-resolution sigma-delta ADCs add on-chip oversampling and digital filtering with strong 50/60 Hz rejection, providing excellent DC precision for weigh scales, bridge sensors and precision temperature measurements at the cost of higher latency and lower bandwidth.

Delta-sigma modulators combined with external digital filters are attractive when isolated or multi-channel sensing is required and an FPGA or DSP is already present. They keep the DC precision advantages of sigma-delta conversion while allowing custom filter shapes and data rates, but place more responsibility on the system designer for clock planning, filter design and synchronization.

The architecture map below positions these three families in a two-dimensional view of effective bandwidth and precision, with typical DC applications indicated for each region.

[Figure: Architecture trade-off map for DC precision — bandwidth / output rate versus precision plane showing high-resolution sigma-delta ADCs (metrology focus: weigh scales, bridge and temperature), delta-sigma modulators with external filters (isolated, multi-channel sensing), and mid-resolution SAR ADCs (low latency: slow control, industrial monitoring).]

System-level design hooks for DC precision

DC and low-frequency precision performance is set by more than the ADC core. Sensor choice, front-end topology, reference quality, thermal environment, grounding and firmware routines all introduce error terms that are either difficult or impossible to remove later by filtering. Treating these points as explicit design hooks helps keep offset, drift and 0.1–10 Hz noise within the required budget.

On the analog side, bridge sensors, shunts and precision temperature elements define signal level and source impedance, which in turn dictate front-end gain, input buffering and input range selection. Low-noise, low-drift voltage references and their buffers set the stability of the full-scale conversion range, so their noise, temperature coefficient and long-term drift must match the target accuracy. Mechanical mounting and thermal gradients alter offset and gain over time, so warm-up behaviour and physical stress around sensitive components are part of the error model.

At the board level, shielding, differential routing, guard structures and star-ground schemes reduce low-frequency pickup from mains fields, motors, relays and long cables. On the digital side, built-in self-calibration commands and system-level tare or zero procedures provide controlled opportunities to re-centre offset and gain after temperature changes or mechanical adjustments. A consistent strategy for when these routines are invoked is essential for stable DC readings in long-term service.

The diagram below highlights a typical DC precision signal chain from sensor to processor and marks the main hooks where design decisions directly influence low-frequency accuracy.

[Figure: DC precision measurement signal chain hooks — sensor (bridge / RTD / shunt), front-end gain and impedance, ADC core, low-noise low-drift reference, and MCU/DSP averaging and calibration, surrounded by PCB and environment hooks: shielding, guard structures, star grounding and thermal design.]

Typical application patterns from the sampling-band view

Many DC and low-frequency applications share similar sampling-band requirements even when their physical quantities differ. By grouping them into a few patterns it becomes easier to select ADC architecture, output data rate and digital filter settings without starting from a blank page for every design.

Bridge and weigh-scale systems operate with very low-level signals and bandwidths below a few hertz, prioritising high resolution and strong 50/60 Hz rejection. Temperature and other slowly varying sensors move even more slowly, so long averaging windows are acceptable and drift and long-term stability dominate. Slow current and voltage monitors, such as battery or busbar measurements, must balance precision against moderate step-response speed. Precision slow control loops require accurate feedback within a limited bandwidth but are very sensitive to conversion delay.

The tiles below summarise four representative low-frequency patterns from the sampling-band perspective, highlighting typical bandwidth, filter strategy and accuracy focus for each.

[Figure: Low-frequency precision application tiles — four sensor-to-ADC-to-filter-to-output chains: bridge / weigh scale / load cell (BW < 5 Hz, high resolution, strong 50/60 Hz rejection); temperature and slow sensors (BW < 1 Hz, long averaging, low drift and long-term stability); DC current / voltage monitoring (up to tens of Hz, noise versus step-response balance); precision slow control feedback (loop-rate bandwidth, low latency and low noise).]

Engineering checklist for DC / low-frequency precision projects

Successful DC and low-frequency precision designs start with a clear set of requirements rather than with a favourite ADC part number. Before committing to hardware, every project should define its sampling band, target resolution, noise floor, environmental conditions, latency limits, channel structure and calibration plan. This checklist turns those topics into explicit questions that can be answered and shared across the team.

For bandwidth and dynamics, the first step is to specify the effective measurement band, for example “DC to 3 Hz” or “DC to 40 Hz”, and to decide whether step changes or short transients must be captured. Resolution and noise requirements follow from the physical range and smallest meaningful step, which translate into LSB size, effective bits and acceptable RMS and 0.1–10 Hz noise. Environmental questions cover temperature range and rate of change, mechanical vibration, nearby motors or transformers and the strength of 50/60 Hz mains fields and harmonics.

Latency and architecture questions determine whether sigma-delta converters with long digital filters are acceptable or whether low-latency SAR devices are required. The number of channels, need for isolation and any multi-board synchronisation drive choices between integrated ADCs and delta-sigma modulators feeding shared digital filtering. Finally, calibration strategy needs to be agreed early: factory calibration, field zero/tare procedures and periodic self-calibration routines all influence how much drift and long-term change the system can tolerate.
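The resolution questions above reduce to arithmetic: the full-scale range divided by the smallest meaningful step gives the required noise-free code count, and its base-2 logarithm gives the noise-free bits. A short sketch with hypothetical weigh-scale numbers (5 kg capacity, 0.1 g display step), including the usual peak-to-peak-to-RMS conversion factor of about 6.6 for Gaussian noise:

```python
import math

full_scale_kg = 5.0       # hypothetical weigh-scale capacity
min_step_kg = 0.0001      # smallest meaningful step: 0.1 g

counts = full_scale_kg / min_step_kg   # noise-free counts needed
bits = math.log2(counts)               # noise-free (peak-to-peak) bits

# Peak-to-peak noise ~ 6.6 x RMS for Gaussian noise, so the equivalent
# RMS (effective) resolution requirement is about log2(6.6) bits higher
rms_bits = bits + math.log2(6.6)

print(int(counts), round(bits, 1), round(rms_bits, 1))
```

The result — 50 000 counts, roughly 15.6 noise-free bits, roughly 18.3 effective bits RMS — shows why such channels end up on 24-bit sigma-delta converters even though the displayed resolution looks modest.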

The checklist card below groups these questions into six areas so that each new DC precision project can be started with a consistent engineering brief.

[Figure: DC precision design checklist card — six groups: bandwidth (band DC–? Hz, steps versus slow trends); resolution and noise (range, minimum step, ENOB / LSB / noise); environment and mains (temperature, vibration, 50/60 Hz and harmonics); latency and architecture (max delay or loop rate, SAR versus ΣΔ); channels and isolation (channel count, sync, isolation needed?); calibration strategy (factory versus field cal, self-cal intervals).]


FAQs for DC / low-frequency precision ADC design

This FAQ section answers common questions on low-frequency bandwidth, 0.1–10 Hz noise, mains rejection, architecture choice, calibration, and error budgeting so that DC precision designs can be specified and debugged with confidence.

What bandwidth counts as low-frequency for a precision ADC?
For precision ADC work, low-frequency usually means that the information of interest is concentrated from DC up to a few tens of hertz, sometimes a few hundred hertz at most. Many weigh-scale, bridge and temperature systems effectively operate below 1–10 Hz, while slow control and monitoring loops often sit in the 1–50 Hz band. Above this region, jitter, wideband driver distortion and front-end bandwidth become dominant concerns and are better treated as IF or RF sampling problems.
How is 0.1–10 Hz noise specified and why does it matter for DC precision?
The 0.1–10 Hz noise specification is typically measured by feeding the ADC or amplifier with a constant input, recording the output over many seconds and then band-limiting that data to 0.1–10 Hz. The resulting peak-to-peak or RMS noise directly represents the slow wander that appears on DC or very low-frequency readings. In applications such as weighing, bridge sensing and precision temperature control, this band dominates visible flicker and dictates how stable a reading looks on a display over time.
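The procedure can be reproduced offline: take a long constant-input capture, band-limit it to 0.1–10 Hz, and report the RMS and peak-to-peak of the result. A minimal sketch using an ideal FFT brick-wall mask on synthetic white noise (a real evaluation would use the ADC's own capture and a proper band-pass filter):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                        # output data rate, Hz
n = int(60.0 * fs)                # 60 s record at constant input
data = 1e-6 * rng.standard_normal(n)   # synthetic 1 uV RMS broadband noise

# Band-limit to 0.1-10 Hz with an ideal (brick-wall) FFT mask
spec = np.fft.rfft(data)
freqs = np.fft.rfftfreq(n, 1 / fs)
spec[(freqs < 0.1) | (freqs > 10.0)] = 0
band = np.fft.irfft(spec, n=n)

vrms = band.std()
vpp = band.max() - band.min()
print(f"{vrms * 1e9:.0f} nV RMS, {vpp * 1e9:.0f} nV p-p in 0.1-10 Hz")
```

Note that the peak-to-peak figure is several times the RMS figure and grows with record length, which is why datasheets state the measurement bandwidth and observation time alongside the number.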
How much oversampling is needed to gain one extra bit of resolution in DC measurements?
In the ideal case where noise is white and uncorrelated, a 4× increase in oversampling, combined with proper averaging or decimation, improves SNR by about 6 dB, or roughly one bit of effective resolution. In practice, low-frequency 1/f noise and drift limit how far this scaling holds. Oversampling is most effective when the dominant noise is broadband; once 0.1–10 Hz flicker or thermal drift dominates, additional samples bring diminishing returns and attention must shift to better devices, front-ends and layout.
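The 6 dB-per-4× rule is easy to verify with white noise: averaging groups of 4 samples should roughly halve the RMS noise, i.e. gain one bit. A quick numpy check (white noise only; 1/f noise and drift would break this scaling, which is exactly the practical limit described above):

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.standard_normal(1_048_576)   # white ADC noise, 1 LSB RMS

rms = {}
for osr in (1, 4, 16, 64):
    avg = noise.reshape(-1, osr).mean(axis=1)  # average groups of osr samples
    rms[osr] = avg.std()
    print(osr, round(rms[osr], 3))             # RMS roughly halves per 4x OSR
```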
How can both 50 Hz and 60 Hz mains be rejected with a sigma-delta ADC?
Sigma-delta ADCs use digital filters whose notches occur at well-defined multiples of the output data rate. To reject 50 Hz or 60 Hz, the output data rate is set so that these frequencies land on filter zeros, typically at integer multiples of fOUT; for example, a 10 SPS rate places notches at 10, 20, 30, 40, 50 and 60 Hz, covering both mains standards. Some devices offer preset 50/60 Hz rejection modes that fix modulator rate and filter shape internally. In global designs, a compromise mode may be chosen that provides adequate attenuation at both 50 and 60 Hz, or the firmware switches data-rate and filter settings when the mains standard is known.
How should output data rate and filter order be chosen for a stable low-noise reading?
Output data rate, filter order and stopband notches form a three-way trade-off. Lower data rates and higher-order filters reduce in-band noise and mains interference but increase settling time and group delay. For slow-moving quantities such as weight or temperature, a low data rate with a steep filter is often acceptable. For control loops and faster feedback, a moderate data rate and lower-order filter provide a balance between noise and latency, with any remaining noise handled by additional averaging in the processor.
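The latency side of this trade-off is simple arithmetic: a sincN filter needs N output periods to fully settle after a step, so filter order and data rate fix the step response directly. A tiny sketch with illustrative combinations:

```python
def sinc_settling_s(order, f_out_hz):
    """Full-settle time of an ideal sincN filter after an input step:
    N output data periods (order / output data rate)."""
    return order / f_out_hz

for order, f_out in ((3, 10.0), (3, 50.0), (1, 50.0)):
    t_ms = sinc_settling_s(order, f_out) * 1e3
    print(f"sinc{order} at {f_out:g} SPS settles in {t_ms:.0f} ms")
```

A sinc3 filter at 10 SPS thus needs 300 ms per fully settled reading, which is fine for a weigh scale but far too slow inside a control loop.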
When should a chopper-stabilized ADC or amplifier be chosen instead of a normal sigma-delta ADC?
Chopper-stabilized stages are most useful when offset, offset drift and 0.1–10 Hz noise dominate the error budget and the signal bandwidth is very low. In such cases, a chopped input buffer or precision zero-drift amplifier in front of a sigma-delta ADC can significantly reduce slow wander. The trade-offs are limited bandwidth, possible residual ripple and increased complexity. For many bridge and temperature applications, a well-designed sigma-delta ADC with good low-frequency noise is sufficient, and chopper techniques are reserved for the highest accuracy or most demanding stability requirements.
Can multiple SAR ADC readings be averaged to match a sigma-delta ADC’s noise performance?
Averaging SAR conversions can improve noise performance when the SAR’s noise is largely white and uncorrelated between samples. Under those conditions, averaging N readings reduces RMS noise by roughly √N, and oversampling with decimation can reclaim several bits of effective resolution. However, fixed offset, gain error, low-frequency 1/f noise and reference drift do not average away. A dedicated sigma-delta ADC usually integrates oversampling and filtering more efficiently and provides better low-frequency behaviour, especially in the 0.1–10 Hz band.
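The limits of averaging can be demonstrated directly: white noise shrinks as √N, but a fixed offset passes through untouched. A short numpy check with an assumed 0.5 LSB offset and 1 LSB RMS noise:

```python
import numpy as np

rng = np.random.default_rng(2)
offset = 0.5                                        # fixed offset, LSB
samples = offset + rng.standard_normal(256_000)     # 1 LSB RMS white noise

n = 256                                             # average groups of 256
avg = samples.reshape(-1, n).mean(axis=1)

print(round(samples.std(), 2), round(avg.std(), 3))  # noise: 1 -> ~1/16 LSB
print(round(avg.mean(), 2))                          # offset still ~0.5 LSB
```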
How to choose between SAR, sigma-delta ADC and delta-sigma modulator for low-frequency precision?
Architecture choice is driven by the required bandwidth, latency, resolution and channel structure. For single or few channels with very high resolution and relaxed latency, a precision sigma-delta ADC with integrated filters and mains rejection is usually preferred. For low-latency control loops with moderate resolution, a SAR ADC provides fast conversions and simple timing. When many isolated channels are needed, a delta-sigma modulator per channel feeding shared digital filters in an MCU, DSP or FPGA offers scalability, at the cost of more firmware or logic design.
Why do DC readings drift after power-up or warm-up?
After power-up, components such as sensors, voltage references, amplifiers and the PCB itself change temperature and mechanical stress as they warm. Offsets, gains and bridge resistances move until the system reaches thermal equilibrium, which can take minutes in precision gear. DC drift during this period is therefore normal. Designs usually allocate a warm-up time, perform an internal self-calibration and often require a user zero or tare action once the system has stabilised, especially in high-accuracy weigh and instrumentation applications.
How often should self-calibration be run on a precision ADC in DC applications?
Self-calibration frequency depends on the stability requirement, operating temperature range and mechanical environment. A common pattern is to run self-calibration at power-up, after significant temperature changes and at scheduled intervals such as every few hours or once per shift. Instruments with very tight accuracy specifications may also trigger calibration when internal temperature sensors detect a threshold change. The calibration plan should match the error budget: more aggressive schedules are justified when long-term drift must be kept to a small fraction of the total allowed error.
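Such a pattern can be expressed as a simple firmware policy. The sketch below is a hypothetical decision function — the interval and temperature threshold are illustrative placeholders, not values from any device:

```python
def should_self_calibrate(elapsed_s, temp_delta_c, just_powered_up,
                          interval_s=4 * 3600, temp_threshold_c=5.0):
    """Hypothetical self-calibration policy: recalibrate at power-up,
    after a significant internal temperature change, or on a fixed
    schedule, whichever triggers first."""
    return (just_powered_up
            or abs(temp_delta_c) >= temp_threshold_c
            or elapsed_s >= interval_s)

print(should_self_calibrate(0, 0.0, True))           # power-up   -> True
print(should_self_calibrate(600, 6.2, False))        # temp jump  -> True
print(should_self_calibrate(5 * 3600, 0.5, False))   # interval   -> True
print(should_self_calibrate(600, 0.5, False))        # otherwise  -> False
```

The same structure extends naturally to extra triggers such as a user tare action or a logged drift estimate exceeding its budget share.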
Why do step changes appear when the enclosure or cables are touched in low-frequency measurements?
Touching enclosures or cables often injects additional capacitive coupling to mains fields or disturbs shield and ground potentials. In high-impedance, low-frequency measurement chains, this can appear as step changes or slow shifts in the reading. The effect indicates that shielding, cable routing, reference points and guard structures are not yet robust enough. Improving shield connections, using true differential inputs, breaking ground loops and adding appropriate low-frequency filtering usually reduces touch sensitivity and other environmental artefacts.
How can total error (TUE) be estimated from datasheet parameters in low-frequency use?
Total error in DC applications combines several contributors: offset and gain error, integral nonlinearity, short-term noise and the drift of the ADC, reference and front-end over temperature and time. A practical approach is to convert each term into an equivalent percentage of full scale or number of LSBs over the specified operating range, then combine truly random contributions such as noise in RMS fashion and worst-case systematic terms by straight addition. Many datasheets provide a TUE figure over temperature that already packages several terms; remaining external sources such as reference drift and sensor behaviour must then be added on top to form a complete error budget.
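That combination rule is easy to mechanise once every term is expressed in a common unit. The sketch below uses hypothetical ppm-of-full-scale figures (not from any datasheet) to show the bookkeeping: systematic worst-case terms add linearly, uncorrelated random terms combine root-sum-square, and the two groups add:

```python
import math

# Hypothetical error terms in ppm of full scale over the operating range
systematic_ppm = {            # worst-case terms: add linearly
    "offset_after_cal": 10.0,
    "gain_drift": 25.0,
    "inl": 15.0,
    "ref_drift": 30.0,
}
random_ppm = {                # uncorrelated noise terms: combine RSS
    "adc_noise_rms": 8.0,
    "sensor_noise_rms": 6.0,
}

worst_case = sum(systematic_ppm.values())
rss = math.sqrt(sum(v * v for v in random_ppm.values()))
total = worst_case + rss

print(worst_case, round(rss, 1), round(total, 1))  # 80.0 10.0 90.0 ppm
```

Keeping the budget in a table like this makes it obvious which term to attack first: here the reference drift dominates, so a better reference buys more than a quieter ADC.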