
ADC Linearity & Errors: INL, DNL, Noise and Distortion


This page explains how ADC linearity, noise, distortion and drift metrics translate into a single system-level error budget, and shows how to use these metrics to select devices, design calibration and verify accuracy from prototype to production.

What this page solves

This page explains how ADC linearity and error metrics fit together into one accuracy picture. It focuses on how INL/DNL, THD/SFDR, SNR/ENOB and drift/aging combine into a usable error budget for real designs. The goal is to translate datasheet numbers into system-level accuracy, not to redesign the reference, clock or driver.

The content stays strictly on the side of metrics and error accounting. It notes that reference quality, clock jitter, input driver linearity and temperature gradients can worsen these metrics, but detailed design rules for those topics are covered in dedicated pages such as Reference & Buffering, Clocking & Jitter, Driver & Anti-Alias Filter and Thermal & Drift.

Readers looking for a quick translation of datasheet terms like “INL = ±1 LSB, SFDR = 90 dBc, SNR = 72 dB, ENOB = 11.5 bits, offset drift = 2 µV/°C” will find the definitions and trade-offs here. Readers who need a full error budget for metering, instrumentation or RF systems can use the later sections of this page as a step-by-step guide.

For hardware and system engineers, this page answers questions such as:

  • How linearity, noise, distortion and long-term drift each limit ADC accuracy.
  • How to read and compare linearity and error specs across different ADC families.
  • How these metrics are later combined into a DC or AC error budget.

Users seeking layout guidance, reference design or clock-tree design should move to the corresponding pages. Users who stay on this page will obtain a clear mental map of the main ADC error families before diving into detailed static, dynamic, noise and drift sections.

Figure: High-level ADC error families around the converter core — block diagram showing an ADC core surrounded by four error families: static linearity (INL/DNL), dynamic distortion (THD/SFDR), noise (SNR/ENOB), and long-term drift and aging.

Error taxonomy & metrics

ADC accuracy is shaped by several families of errors. Grouping them into a clear taxonomy helps engineers decide which metrics matter for DC precision, AC waveform quality or long-term stability, and where to find them in a datasheet.

The main families used on this page are:

  • Static transfer errors – offset, gain error, INL, DNL, monotonicity and missing codes that distort the code–voltage relationship and limit DC accuracy.
  • Dynamic distortion – THD, SFDR and SINAD derived from FFT testing with sine waves, describing spur and harmonic content in the frequency domain.
  • Noise-driven metrics – SNR, noise floor and ENOB that capture random variations from quantisation and analogue noise sources.
  • Drift & aging metrics – offset and gain tempco, long-term drift and aging figures that describe how errors grow with temperature and operating time.

Static transfer errors dominate DC metering and instrumentation. Dynamic distortion and SNR/ENOB drive AC waveform quality and communication links. Drift and aging determine whether a product can stay within tolerance over years without frequent recalibration. Later sections of this page expand each family into definitions, typical ranges and their role in an error budget.

Implementation details such as how capacitor mismatch generates a specific INL pattern or how pipeline residue amplifiers create distortion are left to the architecture pages (SAR, pipeline, flash, sigma-delta, and hybrids). Here the focus remains on what the metrics mean and how they are organised.

Figure: ADC error families by domain and time scale — four-quadrant diagram with axes DC versus AC and instantaneous versus long-term. Static transfer errors (offset, gain, INL, DNL) occupy the DC/instantaneous quadrant, dynamic distortion (THD, SFDR, SINAD) the AC/instantaneous quadrant, noise metrics (noise floor, SNR, ENOB) sit near the DC side, and drift and aging (tempco, aging, recalibration) fill the long-term quadrants.

Static transfer errors – Offset, gain, INL, DNL and missing codes

Static transfer errors describe how the actual code–voltage relationship of an ADC deviates from the ideal straight line under slow or DC conditions. Offset, gain, INL and DNL together determine how far any given output code is from the expected value before noise and frequency-dependent behaviour are considered.

Offset and gain error capture first-order shifts of the transfer curve. After removing these, integral nonlinearity (INL) and differential nonlinearity (DNL) quantify residual curvature and step-size variations. Monotonicity and missing-code guarantees are then derived from DNL limits and are critical for closed-loop control and precision measurement systems.

Different ADC architectures often produce characteristic INL and DNL shapes, but the detailed device-level mechanisms are treated on architecture pages. This section stays focused on how these static errors are defined, how they appear in datasheets and how they affect DC accuracy and calibration effort.

  • Understand ideal versus real transfer curves and the geometric meaning of offset and gain error.
  • Decode INL definitions such as endpoint and best-fit references and how they change the reported value.
  • Relate DNL to code width, monotonicity and missing-code risks in control and measurement systems.
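As a minimal sketch of how these definitions are applied in practice, the snippet below computes endpoint-referenced INL and DNL from a list of code-transition voltages, assuming the transitions were captured with a slow precision ramp. The function name and the 8-edge data set are illustrative, not from any real device or standard test suite.

```python
# Sketch: endpoint INL/DNL from measured code-transition voltages.
# Assumes transitions[k] = input voltage of the k-th code edge,
# captured from a slow ramp test; the data below is illustrative.

def inl_dnl_endpoint(transitions):
    """Endpoint-referenced INL and DNL in LSB."""
    n = len(transitions)
    # One ideal LSB from the straight line through the endpoints.
    lsb = (transitions[-1] - transitions[0]) / (n - 1)
    # DNL: each actual step width relative to one ideal LSB.
    dnl = [(transitions[k + 1] - transitions[k]) / lsb - 1.0
           for k in range(n - 1)]
    # INL: deviation of each edge from the endpoint line.
    inl = [(transitions[k] - (transitions[0] + k * lsb)) / lsb
           for k in range(n)]
    return inl, dnl

# A zero-width step (two equal transition voltages) shows up as
# DNL = -1 LSB, i.e. a missing code.
inl, dnl = inl_dnl_endpoint([0.0, 1.0, 2.0, 2.0, 4.0, 5.0, 6.0, 7.0])
print(min(dnl))   # -1.0 -> missing code at that transition
```

Swapping the endpoint line for a least-squares best-fit line in the INL expression reproduces the "best-fit INL" definition and typically reports a smaller number for the same device.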
Figure: Static ADC transfer errors — upper panel: ideal versus actual stair-step transfer curve (input voltage 0 to FS versus output code) with offset and gain error; lower panel: INL and DNL error curves versus code, marking missing codes and non-monotonic regions.

Dynamic distortion – THD, SFDR, SINAD and full-scale sine tests

Dynamic performance describes how clean the ADC output spectrum remains when measuring a sine wave at a given input frequency and level. Full-scale or near full-scale sine tests with FFT analysis reveal harmonic content, spurious tones and noise floor, which are summarised by THD, SFDR and SINAD.

Total harmonic distortion (THD) captures the combined power of all harmonics relative to the fundamental. Spurious-free dynamic range (SFDR) focuses on the single largest spur in the spectrum. Signal-to-noise-and-distortion ratio (SINAD) includes both random noise and distortion products and is typically used to derive AC effective number of bits (ENOB) using the common relationship ENOB = (SINAD − 1.76) / 6.02.

Practical results depend on more than the converter core. Clock jitter, input driver linearity, front-end filtering and reference noise can all degrade the measured spectrum. This page highlights how to read and interpret the dynamic metrics themselves, while detailed clocking, driver and layout techniques are covered in dedicated topics.
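The SINAD-to-ENOB relationship quoted above can be sketched in a few lines; the 72 dB input value is illustrative, not a spec for any particular device.

```python
# Minimal sketch of the common SINAD-to-ENOB conversion quoted
# above: ENOB = (SINAD - 1.76) / 6.02, for a full-scale sine test.

def enob_from_sinad(sinad_db):
    """Effective number of bits from SINAD in dB."""
    return (sinad_db - 1.76) / 6.02

print(round(enob_from_sinad(72.0), 2))   # ~11.67 bits
```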

  • Relate THD, SFDR and SINAD to visible features in a sine-wave FFT plot.
  • Understand how AC ENOB is extracted from SINAD and why it falls with input frequency.
  • Identify common causes when measured SFDR or ENOB are worse than datasheet values.
Figure: AC ENOB versus input frequency — curve showing effective number of bits rolling off gently from low input frequencies toward the Nyquist region.

Noise, SNR & ENOB – Noise-based effective resolution

Noise metrics translate ADC accuracy into an effective number of bits. Signal-to-noise ratio (SNR) excludes harmonic distortion and focuses on random noise, while SNDR or SINAD includes both noise and distortion. These values are typically extracted from an FFT of a sine-wave test and can be converted into ENOB so that different converters and operating points can be compared on a common scale.

In an ideal N-bit ADC dominated by quantisation noise, the theoretical SNR for a full-scale sine input is approximately 6.02·N + 1.76 dB. Real converters see extra analogue noise from amplifiers, reference and layout. The gap between the ideal SNR and the measured SNR or SINAD indicates how much performance is being lost to non-ideal noise and distortion sources.

ENOB has two practical flavours. DC or low-frequency ENOB is set by low-band noise and static errors and matters for metrology, weighing and slow sensor measurements. AC ENOB is derived from SINAD over a given input frequency and describes how many bits of resolution remain for wideband or high-speed signals. AC ENOB usually drops as input frequency increases due to jitter, front-end bandwidth limits and dynamic distortion.

Oversampling and averaging can improve SNR and ENOB when noise is random and uncorrelated. Doubling the number of independent samples improves SNR by roughly 3 dB, so about 4× oversampling gives roughly 1 extra effective bit and 16× gives roughly 2 bits. However, averaging does not correct static transfer errors (INL/DNL) or harmonic distortion (THD/SFDR), so it cannot turn a non-linear ADC into a linear one.
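The two rules of thumb in this section can be written out directly; assuming random, uncorrelated noise, the oversampling gain follows from 10·log10(OSR) at roughly 6 dB per effective bit.

```python
import math

# Sketch: ideal N-bit quantisation-noise SNR, and the ENOB gain
# from oversampling under the random-uncorrelated-noise assumption.

def ideal_snr_db(n_bits):
    """Theoretical SNR of an ideal N-bit ADC, full-scale sine."""
    return 6.02 * n_bits + 1.76

def enob_gain_from_osr(osr):
    """Extra effective bits from oversampling ratio `osr`."""
    snr_gain_db = 10.0 * math.log10(osr)   # ~3 dB per doubling
    return snr_gain_db / 6.02              # ~6 dB per bit

print(round(ideal_snr_db(12), 2))        # 74.0 dB
print(round(enob_gain_from_osr(4), 2))   # ~1 extra bit
print(round(enob_gain_from_osr(16), 2))  # ~2 extra bits
```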

  • Relate SNR and SINAD to noise and distortion seen in FFT plots.
  • Understand how ideal quantisation noise sets the 6.02·N + 1.76 dB SNR limit.
  • Distinguish DC ENOB from AC ENOB and know how oversampling changes noise-limited ENOB.
Figure: ADC noise sources and ENOB improvement with oversampling — upper panel: signal path in which quantisation noise and analogue noise combine into output noise and SNR/ENOB; lower panel: noise-limited ENOB versus oversampling ratio, showing fractional-bit gains with increased averaging.

Drift & aging metrics – How to read temperature and aging specs

Drift and aging metrics describe how ADC errors grow with temperature and operating time. Offset temperature coefficient is often expressed in µV/°C or LSB/°C, while gain drift appears in ppm/°C. Some datasheets also show INL changes versus temperature. These numbers turn a simple 25 °C accuracy figure into a full temperature-range error picture.

Long-term drift captures slow changes over months or years, for example specifications like 20 ppm/√kHr or 50 ppm/1000 h. These figures are statistical in nature, but they are still vital for metering and instrumentation that must stay within tolerance without constant calibration. They help estimate how much extra error appears over the intended service life.

In an error budget, temperature and aging terms are added on top of static transfer errors and noise. Designs with tight accuracy requirements use these coefficients to size margins and plan recalibration intervals. Thermal design, power distribution and PCB layout can reduce gradients and hot spots, but those implementation details are discussed on dedicated thermal and layout pages.
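The conversions implied here are simple arithmetic; the sketch below turns tempco and long-term drift figures into ppm-of-full-scale error terms. The 2 µV/°C offset tempco, 5 ppm/°C gain tempco, 20 ppm/√kHr aging rate and 4.096 V full scale are illustrative assumptions, not values from any datasheet.

```python
import math

# Sketch: drift specs -> error terms over temperature and lifetime.
# All numeric inputs below are illustrative assumptions.

def offset_drift_ppm(tempco_uv_per_c, delta_t_c, fs_v):
    """Offset tempco over a temperature excursion, in ppm of FS."""
    error_v = tempco_uv_per_c * delta_t_c * 1e-6
    return error_v / fs_v * 1e6

def gain_drift_ppm(tempco_ppm_per_c, delta_t_c):
    """Gain tempco over a temperature excursion, in ppm."""
    return tempco_ppm_per_c * delta_t_c

def aging_sqrt_ppm(rate_ppm_per_sqrt_khr, hours):
    """Square-root-of-time long-term drift model, in ppm."""
    return rate_ppm_per_sqrt_khr * math.sqrt(hours / 1000.0)

# 25 °C to 85 °C worst case (dT = 60 °C), 4.096 V full scale,
# ~5 years of continuous operation (43,800 h):
print(round(offset_drift_ppm(2.0, 60.0, 4.096), 1))   # ~29.3 ppm
print(gain_drift_ppm(5.0, 60.0))                      # 300.0 ppm
print(round(aging_sqrt_ppm(20.0, 5 * 8760), 1))       # ~132.4 ppm
```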

  • Convert offset and gain tempcos into full temperature-range error.
  • Interpret long-term drift numbers and their impact on system accuracy over time.
  • Use drift metrics to estimate calibration intervals for precision converters.
Figure: ADC offset and gain drift with temperature and aging — left panel: offset and gain error versus temperature (−40 °C to 85 °C) with different tempco slopes; right panel: total error slowly increasing with operating time due to drift and aging, approaching the specification limit.

Building an ADC error budget – Combining offset, linearity, noise and drift

An ADC error budget converts individual datasheet specifications into a single system-level accuracy figure. Static errors such as offset, gain and INL, noise-based terms such as SNR or ENOB, distortion metrics such as THD and SFDR, and slow effects such as temperature drift and aging are all mapped into a common unit such as percent of full scale or ppm. External contributors from the reference, driver, sensor and clock are treated as separate error channels rather than hidden inside the ADC line item.

For each error source, the budget records the specified value, the converted contribution in the chosen units, and whether it is treated as a worst-case bounded term or as a random term suitable for RSS (root-sum-square) combining. Deterministic errors such as INL, gain error and reference accuracy are often constrained directly, while independent noise and drift contributions are combined using RSS to obtain a realistic total. The result is a clear picture of which error sources dominate and where design effort or cost is best spent.

DC accuracy budgets focus on offset, gain, INL, noise and drift over temperature and lifetime, together with reference and sensor errors. AC accuracy budgets emphasise SNR, SINAD, SFDR and ENOB over the intended input frequency range, while still respecting reference and driver limitations. In both cases, a structured error table and a visual view of contributions help ensure that the ADC and its surrounding circuitry are correctly matched to the system specification.
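The worst-case-plus-RSS combination described above can be sketched as follows; the error channels and ppm values are illustrative placeholders, not a recommended allocation.

```python
import math

# Sketch: combining bounded (deterministic) and random error terms.
# Bounded terms add linearly; independent random terms combine as
# root-sum-square (RSS). Channel names and values are illustrative.

def combine_budget(bounded_ppm, random_ppm):
    worst_case = sum(bounded_ppm.values())
    rss = math.sqrt(sum(v * v for v in random_ppm.values()))
    return worst_case + rss

bounded = {"INL": 30.0, "gain (post-cal)": 20.0, "reference": 25.0}
random_ = {"noise": 15.0, "offset drift": 10.0, "aging": 12.0}
total = combine_budget(bounded, random_)
print(round(total, 1))   # total error in ppm of full scale
```

Sorting the channels by their converted contribution immediately shows which sources dominate and where design effort is best spent.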

  • Define a complete set of error channels: ADC internal, reference, driver, sensor and clock.
  • Convert all errors into a common unit (for example %FS or ppm) and combine them using worst case or RSS.
  • Separate DC-focused budgets from AC-focused budgets so that the right metrics are optimised.
Figure: ADC error budget structure and dominant contributions — upper panel: error budget channels (ADC internal INL/DNL, offset/gain, noise and drift; reference; driver/sensor; clock/jitter) feeding a total DC/AC budget block; lower panel: bar chart of relative error contributions highlighting the dominant sources.

IC selection logic – Choosing an ADC from linearity and error metrics

ADC selection from a linearity and error perspective starts with the application class. Precision DC and metrology projects focus on INL, offset, gain error, low-frequency noise and drift. High-speed AC and RF sampling projects prioritise SNR, SINAD, SFDR and ENOB at the required input frequency. Mixed-signal systems that must capture both slow and fast phenomena need balanced DC and AC accuracy, often combining a high-resolution converter with careful calibration or a dual-ADC architecture.

An error budget provides target windows for each metric. From the allowed total system error, a portion is allocated to the ADC and its reference. This allocation is then translated into limits on INL, noise, SNR, SFDR and drift. When comparing datasheets, it is essential to match conditions: resolution, sampling rate, input frequency, input level, temperature range and whether values are typical or maximum. Attractive headline numbers at a single operating point may not hold under the actual system conditions.

In many designs, there is a trade-off between using a high-linearity, low-drift converter with only light system calibration, and using a lower-cost converter with more aggressive multi-point calibration and digital correction. High-linearity parts reduce complexity and test time but increase BOM cost. Lower-cost parts can be viable when the system can tolerate extra calibration overhead and when long-term drift and noise remain within the budget after correction.
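The back-calculation from budget to metric limits can be sketched for the noise channel; the 0.01 % FS total error target, the 50 % allocation to ADC noise, and the full-scale-sine reference level are all illustrative assumptions.

```python
import math

# Sketch: from a system accuracy target to a required ADC SNR and
# ENOB, assuming the noise share of the budget is an RMS error
# relative to full scale and SNR is referenced to a full-scale sine.

def required_snr_db(total_error_fraction, adc_noise_share):
    noise_fraction = total_error_fraction * adc_noise_share
    # Full-scale sine RMS is FS / (2*sqrt(2)).
    signal_rms = 1.0 / (2.0 * math.sqrt(2.0))
    return 20.0 * math.log10(signal_rms / noise_fraction)

snr = required_snr_db(1e-4, 0.5)   # 0.01 % FS total, half to noise
enob = (snr - 1.76) / 6.02
print(round(snr, 1), round(enob, 1))   # ~77.0 dB, ~12.5 bits
```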

  • Classify the application as DC-focused, AC-focused or mixed to prioritise relevant metrics.
  • Use the error budget to back-calculate required INL, SNR, SFDR and drift limits for candidate ADCs.
  • Check datasheet test conditions and decide between higher-linearity devices and lower-cost parts with calibration.

Representative ADC part numbers by error profile

The following devices are examples of ADCs often used in different accuracy regimes. The focus is on resolution, linearity, noise and drift performance; interface and power details belong on dedicated selection pages and BOM tables.

  • Precision DC / metrology (high INL accuracy, low noise, low drift)
    Examples: AD7177-2 (32-bit ΣΔ), ADS1262 (32-bit ΣΔ), LTC2500-32 (32-bit ΣΔ with low noise and flexible filtering).
  • Industrial control / motor and power control (mid- to high-resolution SAR)
    Examples: AD7606B (16–18-bit simultaneous-sampling SAR), ADS8688 (16-bit SAR with integrated front-end), LTC2380-24 (24-bit SAR for precision control loops).
  • High-speed AC / RF and IF sampling (dynamic performance and SFDR)
    Examples: AD9680 (14-bit high-speed ADC), AD9234 (12-bit dual RF-sampling ADC), ADS54J60 (16-bit RF-sampling JESD ADC).
  • Mixed-signal systems (balanced DC and AC accuracy)
    Examples: AD7768 (24-bit wideband ΣΔ), LTC2387-16 (16-bit high-speed SAR), ADS8881 (18-bit SAR for precision and mid-speed tasks).
Figure: ADC selection decision tree — starting from application type and branching into DC-focused (INL, drift, noise, DC ENOB), AC-focused (SFDR, SINAD, AC ENOB) and mixed-signal paths, with each branch highlighting the most important error metrics for selection.

Engineering checklist – Linearity & error related tasks along the project flow

This checklist turns linearity and error concepts into concrete project actions. Each phase of the design flow includes specific items to review: defining DC and AC accuracy targets, checking datasheet conditions, reserving calibration hooks in the schematic, confirming that PCB implementation does not break error budget assumptions, and establishing production and field test practices for INL, DNL, SNR and drift.

The goal is to avoid treating linearity and error metrics as static datasheet numbers. Instead, requirements, selection, implementation and test are aligned around a single error budget. At each stage, a few targeted questions for suppliers, FAE support and internal teams can prevent later surprises in accuracy or stability.

Requirements phase – Define DC / AC accuracy and environment

Specify DC accuracy as a numeric budget over temperature and lifetime, for example total error in percent of full scale or ppm. For AC performance, define required SNR, SINAD, SFDR and ENOB at explicit input frequencies and sampling rates. Document operating temperature range, storage conditions and whether periodic recalibration is allowed in the field. These values form the top-level constraints for the later ADC error budget.

Typical supplier questions at this stage include requests for reference designs with similar accuracy, guidance on realistic margins for tempco and drift, and clarification of which metrics are most critical in comparable customer applications.

Selection phase – Verify linearity, noise and drift conditions

During ADC selection, every key metric is checked under its stated test conditions. INL and DNL definitions (endpoint versus best-fit, typical versus maximum, temperature range) are matched to the intended environment. SNR, SINAD, SFDR and ENOB are verified at the input frequencies and sampling rates that matter for the design. Noise data, including low-frequency noise or noise-free code information, is compared against DC accuracy needs.

Temperature coefficients and long-term drift numbers are read alongside reference and driver recommendations. Questions for FAE support often focus on ENOB versus frequency curves, SFDR behaviour across the band, and typical drift distributions beyond what is printed in the datasheet.

Schematic phase – Reserve calibration and observability hooks

The schematic stage ensures that the design includes ways to measure and correct errors. Paths or switches for zero and near full-scale inputs allow offset and gain calibration without redesign. Temperature sensors near the ADC and reference help characterise drift in system conditions. Test points at critical nodes such as ADC inputs, reference nodes and clock lines support debug and correlation to the error budget.

In addition, digital interfaces and configuration pins are checked to confirm that on-chip calibration registers, digital filters and test modes can be accessed. Suppliers can be asked for recommended calibration topologies and which internal features are most useful for offset and gain trimming.

PCB & layout review phase – Confirm error budget assumptions on the board

Layout review checks whether the PCB implementation still matches the assumptions used in the ADC error budget. Analogue and digital return paths, reference routing and decoupling, clock routing and return paths, and thermal coupling between the ADC, reference and heat sources are reviewed against the original accuracy targets. High impedance nodes and sensor connections are examined for leakage, contamination risk and unexpected loading.

Vendor evaluation board layouts and application notes are useful references for highlighting which regions are most sensitive to INL, noise and drift. Layout details themselves are treated as implementation topics on dedicated layout and clocking pages; this checklist focuses on verification of assumptions.

Lab validation phase – Correlate INL, noise and SNR to the budget

Laboratory validation confirms that the assembled system meets the planned error budget. DC tests such as ramp or multi-point measurements provide estimates of offset, gain and linearity performance. AC tests using single-tone FFT at key frequencies validate SNR, SINAD, SFDR and ENOB against datasheet curves and model assumptions. Low-frequency noise and noise-free code metrics are checked under realistic operating conditions.

Temperature sweeps or spot checks at several temperatures verify offset, gain and noise drift. For each discrepancy between measured and expected behaviour, the error budget can be updated and responsibility assigned to ADC, reference, driver, sensor or layout.

Production & field phase – Production test and long-term control

In production, a minimal but effective test set is derived from the full lab validation plan. Typical checks include offset and gain at one or two calibration points and a simplified FFT-based SNR or SINAD test on a sample basis. Golden units or golden boards are used to monitor test stability over time. Test limits are linked directly to the error budget so that margin to system requirements is controlled at the line.

Field procedures define recommended recalibration intervals and trigger conditions for deeper investigation. When deployed systems show unexpected drift or accuracy loss, recorded ADC readings, temperatures and operating hours support joint analysis with the supplier.

  • Tie requirements, selection, implementation and test to one coherent error budget.
  • Use each design phase to confirm assumptions about linearity, noise and drift.
  • Carry a small but focused set of INL, noise, SNR and drift checks into production and field support.
Figure: Engineering checklist flow for ADC linearity and error — flow from requirements (targets and budget) through selection (specs and tests), schematic (calibration hooks), layout review (layout versus budget), lab validation (FFT and linearity) to production (sampling tests); at each step, INL/DNL, noise, SNR and drift are verified against the planned error budget.


FAQs – Linearity, noise and error budgeting in ADC designs

Why do two vendors quote different INL for “similar” 16-bit ADCs?

INL values often differ because the underlying definition and test conditions are not the same. Some datasheets use endpoint INL, some use best-fit line, and others use monotonic curve or segmented approximations. These methods can produce noticeably different numeric results even for similar silicon quality.

INL may be quoted as typical or maximum, and it may be specified at 25 °C only or over the full temperature range. A device with a tight typical INL at room temperature can still have a looser maximum INL over temperature. Some vendors also exclude a small range near full scale or certain codes from the headline figure.

To compare devices fairly, align definitions and conditions: check endpoint versus best-fit, typical versus maximum, temperature range, code range and reference configuration. Then convert INL into the units used in the system error budget and evaluate whether each candidate meets the same DC accuracy target.
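The unit conversion mentioned here is a one-liner; expressing INL in ppm of full scale lets devices of different resolutions share one budget column. The example values are illustrative.

```python
# Sketch: INL quoted in LSB at a given resolution -> ppm of full
# scale, so devices of different resolutions compare in one budget.

def inl_lsb_to_ppm(inl_lsb, n_bits):
    return inl_lsb / (2 ** n_bits) * 1e6

print(round(inl_lsb_to_ppm(1.0, 16), 2))   # ~15.26 ppm FS
print(round(inl_lsb_to_ppm(2.0, 18), 2))   # ~7.63 ppm FS
```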

If two INL numbers still look very different after normalising conditions, the device with tighter maximum INL under realistic operating conditions usually provides a more predictable and calibration-friendly system.

How should DC and AC ENOB numbers from datasheets be compared?

DC ENOB or “noise-free resolution” is derived from low-frequency noise and refers to how many bits remain stable when the input is static or very slow. It is typically extracted from RMS noise or peak-to-peak code jitter with heavy filtering or low data rates.

AC ENOB is calculated from SINAD measured in an FFT test with a sine-wave input and a specific bandwidth. It reflects both noise and distortion at a particular input frequency and sampling rate. AC ENOB usually decreases as input frequency approaches the Nyquist region.

For precision DC applications, DC ENOB and noise-free bits are more relevant. For RF, IF and wideband acquisition, ENOB versus input frequency curves are the key reference. Mixed-signal systems need both: DC ENOB for slow quantities and AC ENOB for waveform fidelity.

When comparing different datasheets, always align bandwidth, input frequency, sample rate and filtering. ENOB numbers taken under different conditions should not be compared directly without understanding those differences.

Is 0.5 LSB INL enough for a 20-bit measurement with averaging?

Averaging and oversampling reduce random noise but do not reduce deterministic INL. A 0.5 LSB INL figure for a 16-bit ADC is excellent in its own domain, but “20-bit measurement” usually implies a much tighter effective linearity requirement once the larger code range is considered.
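A quick worked example makes the rescaling concrete: the same absolute INL re-expressed in the LSBs of a higher resolution. The figures are illustrative.

```python
# Sketch: an INL figure quoted in 16-bit LSBs, re-expressed in
# 20-bit LSBs and in ppm of full scale.

def rescale_inl(inl_lsb, from_bits, to_bits):
    """Express the same absolute INL in another resolution's LSBs."""
    return inl_lsb * 2 ** (to_bits - from_bits)

print(rescale_inl(0.5, 16, 20))        # 8.0 LSB at 20 bits
print(round(0.5 / 2 ** 16 * 1e6, 2))   # ~7.63 ppm of full scale
```

So a 0.5 LSB spec at 16 bits already corresponds to 8 LSB of non-linearity at the 20-bit level before any calibration is applied.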

For extended-resolution measurements, the key question is how much of the INL can be modelled and corrected. If the INL shape is stable versus time and temperature, multi-point calibration or table-based correction can translate a modest INL figure into a much more linear system response.

Noise reduction through averaging can provide additional bits of resolution, but any residual INL after calibration still limits absolute accuracy. The error budget should define how much of the total error can be attributed to static non-linearity and whether 0.5 LSB at 16 bits is compatible with the 20-bit target.

In many control and measurement systems, 0.5 LSB INL is sufficient when combined with careful calibration and noise reduction. High-end metrology with true 20-bit absolute accuracy may require converters with tighter INL specifications or dedicated linearity calibration schemes.

Can digital calibration completely remove INL and DNL errors?

Digital calibration is very effective for offset and gain error and can significantly reduce predictable INL. By measuring the transfer curve at selected points and applying linearisation or look-up tables, much of the static non-linearity can be compensated.

However, DNL and missing codes represent structural properties of the converter. Calibration can hide some effects at the system level but cannot create codes that do not exist or fully remove discontinuities in the analogue front-end. These artefacts often show up as residual distortion or localised non-linearity.

Time- and temperature-dependent changes in INL are also difficult to remove completely. Frequent recalibration or temperature-dependent correction tables can mitigate drift, but residual variation remains part of the error budget.

Digital calibration therefore reduces INL and DNL impact rather than eliminating them. System requirements and calibration capability should be balanced against the intrinsic linearity of the chosen ADC architecture.

Why is measured SFDR or SINAD much worse than the datasheet value?

Large gaps between measured and datasheet SFDR or SINAD are often caused by test conditions rather than the ADC itself. Signal source purity, driver amplifier distortion, reference noise, clock jitter, layout coupling and grounding can all degrade dynamic performance.

Datasheet values are normally taken under carefully controlled conditions: very low-distortion signal generators, optimised drivers, clean references, short analogue paths and well-designed evaluation boards. Measurement bandwidth, windowing, record length and averaging also influence the FFT result.

When SFDR or SINAD appears much lower than expected, first align input frequency, amplitude, sample rate, bandwidth and processing with datasheet conditions. Then check source distortion, driver headroom, reference routing and clock jitter. Only after these contributors are bounded should the device itself be suspected of falling short.

A modest reduction compared to datasheet values is normal in real systems. Large gaps usually indicate a measurement or implementation issue that can be corrected.

How does clock jitter show up in THD, SFDR and SNR measurements?

Random clock jitter introduces uncertainty in the sampling instant, which appears primarily as additional noise in the FFT. At higher input frequencies this extra noise reduces SNR and ENOB, causing the familiar roll-off of AC performance versus input frequency.

Deterministic jitter or phase modulation can generate discrete sidebands around the input tone. These sidebands behave like spurs and reduce SFDR and SINAD rather than only affecting broad-band noise. The effect is most visible when the ADC and analogue front-end are otherwise very linear.

The relative importance of jitter increases with input frequency and with the desired effective number of bits. High-frequency, high-resolution applications therefore require very low jitter and careful clock distribution to maintain both SNR and SFDR.

Detailed jitter budgeting and clock tree design are handled on dedicated clocking and jitter pages; this FAQ focuses on how jitter influences common FFT-based metrics.

How many averages, or what oversampling ratio, is needed for 1 extra bit of ENOB?

When noise is random and uncorrelated, oversampling and averaging improve SNR by reducing the noise bandwidth or by averaging out noise in the time domain. Doubling the number of independent samples ideally improves SNR by about 3 dB.

Since 1 bit of ENOB corresponds to approximately 6 dB of SNR, and each doubling of the oversampling ratio ideally adds about 3 dB, an oversampling ratio of about 4× yields roughly 1 extra bit and about 16× yields roughly 2 extra bits. The exact result depends on the noise spectrum, filtering and correlation between samples.

Oversampling and averaging only help with random noise. They do not improve INL, DNL or fixed distortion components, and they do not correct drift. These deterministic error sources still need adequate initial performance and appropriate calibration in the error budget.

For many designs, combining a moderate oversampling ratio with a low-noise reference and driver offers a practical way to gain additional effective bits without changing the ADC architecture.
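The relationship above (10·log10(OSR) dB of SNR gain, ~6.02 dB per bit) can be sketched as a quick calculator, assuming fully uncorrelated noise between samples:

```python
import math

def enob_gain_bits(osr: float) -> float:
    """Ideal ENOB improvement from averaging/oversampling by a factor
    of `osr`, assuming uncorrelated noise: SNR gain = 10*log10(osr) dB,
    and one ENOB bit corresponds to about 6.02 dB."""
    return 10.0 * math.log10(osr) / 6.02

# 2x -> ~0.5 bit, 4x -> ~1 bit, 16x -> ~2 bits
for osr in (2, 4, 16):
    print(f"OSR {osr:>2}x -> +{enob_gain_bits(osr):.2f} bits")
```

In practice the gain saturates once correlated noise, 1/f noise or deterministic errors dominate, so the ideal curve is an upper bound.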

What is the practical impact of long-term drift in utility meters or lab instruments?

Long-term drift describes slow changes in offset, gain and sometimes linearity over hundreds or thousands of operating hours. Even small ppm-per-kilohour numbers can accumulate into a noticeable percentage of full scale over the service life of a meter or instrument.

In utility meters and weighing systems, drift directly affects billing or measurement fairness. If drift is not controlled or recalibrated, the system may eventually exceed regulatory accuracy limits even if initial calibration was precise. In laboratory instruments, drift affects the reliability of reference readings and long-term experiments.

The error budget should therefore include a dedicated term for long-term drift. This term, combined with temperature drift, determines an appropriate recalibration interval and the required margin between initial accuracy and the worst acceptable accuracy at end of life.

Devices with lower specified long-term drift or better characterised drift behaviour simplify maintenance and reduce the need for frequent calibration in the field.
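One way to turn a datasheet long-term drift number into a recalibration interval is to accumulate the spec against the budget share allocated to drift. A minimal sketch with hypothetical numbers, assuming worst-case linear accumulation (many references actually drift closer to a square-root-of-time law, which makes this estimate conservative):

```python
def accumulated_drift_ppm(drift_ppm_per_khr: float, hours: float) -> float:
    """Worst-case accumulated drift, assuming linear growth in time."""
    return drift_ppm_per_khr * hours / 1000.0

def recal_interval_hours(budget_ppm: float, drift_ppm_per_khr: float) -> float:
    """Operating hours until the long-term-drift term alone consumes
    its allocated share of the error budget."""
    return 1000.0 * budget_ppm / drift_ppm_per_khr

# Hypothetical: 20 ppm / 1000 h drift, 100 ppm of the budget given to drift
print(recal_interval_hours(100.0, 20.0))  # -> 5000.0 hours
```

The same arithmetic, run over the full service life, shows whether factory calibration alone is sufficient or periodic field recalibration must be planned.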

Do monotonicity guarantees need to hold over the full temperature range?

Monotonicity ensures that the output code never decreases when the input increases. This property is important for control loops, protection functions and threshold-based decisions, where a non-monotonic step can cause oscillation or incorrect triggering.

For systems that operate over a wide temperature range, a monotonicity guarantee that holds only at room temperature may not be sufficient. Temperature-dependent code transitions can introduce local reversals at the extremes if DNL worsens with temperature. Designs that rely on a smooth, predictable response should therefore prefer parts with guaranteed monotonicity over the full operating range.

In less critical monitoring or logging applications, occasional minor non-monotonic behaviour may be tolerable, especially when data is averaged or filtered. In these cases, other metrics such as noise, drift and cost may dominate the selection decision.

The need for full-range monotonicity should be derived from the control and protection requirements rather than from ADC resolution alone, and then reflected in the selection criteria and error budget.
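Monotonicity over temperature can be verified directly on ramp captures taken at each temperature extreme. A minimal sketch, assuming the capture has already been median-filtered so random noise does not trigger false positives:

```python
def non_monotonic_steps(codes: list[int]) -> list[int]:
    """Return indices where the output code decreases even though the
    input ramp is rising; an empty list means the capture is monotonic."""
    return [i for i in range(1, len(codes)) if codes[i] < codes[i - 1]]

# Hypothetical filtered ramp capture: one reversal at sample index 5
print(non_monotonic_steps([0, 1, 2, 2, 3, 2, 4]))  # -> [5]
```

Running the same check on captures at the hot and cold extremes catches reversals that a room-temperature test would miss.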

When should linearity (INL/SFDR) be prioritised over noise (SNR/ENOB)?

Linearity is critical when harmonic content, spur levels or intermodulation behaviour directly affect system performance. Spectrum analysers, communication receivers and precision waveform capture are examples where INL, SFDR and SINAD heavily influence usable dynamic range and spur-free operation.

Noise is dominant in applications that average or integrate signals over time, such as energy metering, weighing and slow sensor measurements. In these systems, a few extra bits of noise-free resolution may be more valuable than marginal improvements in SFDR, as long as non-linearity stays within budget.

Because noise can often be reduced by oversampling and filtering, while non-linearity is harder to correct, high-precision and high-dynamic-range designs frequently prioritise inherently good linearity first and then improve noise through system techniques.

The error budget should capture this balance explicitly: identify whether spectral purity or low random noise is the primary bottleneck, then allocate more margin to the metric that most constrains system performance.

How should the error budget be allocated between ADC, reference and sensor?

A practical approach is to start from the total allowable system error and then split it between three main contributors: sensor and environment, ADC and reference, and everything else such as wiring, layout and auxiliary circuitry. The split depends on which elements are easiest to improve and which are hardest to control.

Sensor and environment errors are often difficult to reduce and may deserve a significant portion of the budget. ADC and reference performance can usually be improved through device selection and calibration, so their allocated share can be tighter. Layout and secondary effects receive a smaller but explicit margin to avoid surprises.

Within the ADC and reference block, the budget is further divided among INL, offset, gain, noise, temperature drift and long-term drift. This allocation then drives the choice of converter family, reference grade and calibration strategy.

There is no single universal ratio. The most effective split is one where the hardest-to-change contributors have sufficient margin and the more flexible contributors are pushed closer to their practical limits.
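For independent error terms, the division described above is usually combined by root-sum-square rather than straight addition. A minimal sketch with hypothetical ppm allocations for the ADC-plus-reference block (worst-case designs may instead sum the terms directly, which is more pessimistic):

```python
import math

def rss_ppm(terms: dict[str, float]) -> float:
    """Root-sum-square combination of independent error terms,
    each expressed in ppm of full scale."""
    return math.sqrt(sum(v * v for v in terms.values()))

# Hypothetical allocation within the ADC + reference block
budget = {
    "inl": 30.0,
    "offset_after_cal": 10.0,
    "gain_after_cal": 15.0,
    "noise": 20.0,
    "temp_drift": 25.0,
    "long_term_drift": 20.0,
}
print(f"combined: {rss_ppm(budget):.1f} ppm of FS")
```

Comparing the RSS result against the share allocated to this block shows immediately whether any single term needs tightening before the design is committed.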

Can a histogram test alone guarantee adequate ADC linearity for an application?

Histogram or code-density testing is a powerful method for assessing INL, DNL and missing codes across a range of input codes. It is very sensitive to static linearity errors and is widely used both in production and in laboratory evaluation.

However, a histogram test does not fully characterise dynamic behaviour. It does not directly reveal harmonic distortion or intermodulation performance under high-speed or wideband operation, and it is limited to the code range and conditions exercised during the test.

For precision DC applications, histogram results combined with a few additional DC checks and drift measurements are often sufficient to confirm linearity. For AC and mixed-signal applications, histogram testing should be complemented with FFT-based SNR, SINAD and SFDR measurements at representative input frequencies.

A histogram test is therefore an important part of the linearity verification toolbox but not a complete solution for all application types on its own.
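For the simplest case of a uniform ramp input, the code-density computation reduces to comparing each code's hit count against the mean. A minimal sketch (a sine input requires an additional correction for the sine probability density, omitted here; first and last codes are assumed already discarded because they accumulate out-of-range hits):

```python
def dnl_inl_from_ramp_histogram(counts: list[int]) -> tuple[list[float], list[float]]:
    """DNL and INL in LSB from a code-density test with a uniform ramp.
    DNL[k] = count[k] / mean_count - 1; INL is the running sum of DNL."""
    mean = sum(counts) / len(counts)
    dnl = [c / mean - 1.0 for c in counts]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)
    return dnl, inl

# Hypothetical counts for four mid-range codes
dnl, inl = dnl_inl_from_ramp_histogram([100, 110, 90, 100])
print([round(x, 2) for x in dnl])  # -> [0.0, 0.1, -0.1, 0.0]
```

A DNL entry of −1.0 would indicate a missing code; as noted above, none of this reveals dynamic distortion, which still requires FFT-based testing.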