Linearity & Errors in DACs: INL/DNL, Gain & Offset
This page shows how INL/DNL, gain/offset, and drift translate into real setpoint error, and how to measure, budget, and calibrate them without being misled by test artifacts. The goal is stable, monotonic, production-ready accuracy across temperature, time, and load—not just good-looking datasheet numbers.
What this page solves (linearity errors in real systems)
Static DAC accuracy fails in predictable patterns. This page maps common field symptoms (wrong setpoint, uneven steps, poor repeatability) to the specific error families that create them: gain/offset (absolute accuracy), DNL/monotonicity (code-to-code consistency), INL (curve shape), and drift (time/temperature stability).
The goal is not to memorize definitions. The goal is to assign ownership and pick the first measurements that can separate “DAC core” effects from reference/load/board effects, without jumping straight into calibration and accidentally masking the real problem.
Accuracy (gain/offset)
What shows up: the output is consistently high/low, or the full-scale span is wrong.
What it maps to: endpoint errors dominated by gain/offset, reference accuracy, and output loading.
First check: measure a few stable codes near 0%, 50%, 100% FS and compare against the same measurement chain each time.
Monotonicity (DNL)
What shows up: small trim steps feel uneven; occasional “no change” or backwards movement.
What it maps to: code-width variation (DNL) and margin to noise/settling that can hide or mimic DNL.
First check: step through a small code window with repeated measurements per code; look for non-repeatable vs repeatable patterns.
Drift (temperature/time)
What shows up: the same code gives different outputs after warm-up, airflow, or seasonal temperature.
What it maps to: offset drift, gain drift, and shape drift (INL moving with temperature gradients and aging).
First check: hold a midscale code and log output vs time and temperature; repeat after a controlled thermal change.
Define these before talking about calibration
- Allowed end error: use a system unit (mV, ppm, %FS) and name the decision point (pass/fail).
- Environment window: temperature range, warm-up time, and whether airflow or load changes are allowed.
- Test method: how outputs are sampled, filtered, and averaged (repeatability vs absolute accuracy).
Static error taxonomy: offset, gain, INL, DNL, monotonic
Clear language prevents bad decisions. The same DAC can look “great” or “bad” depending on definitions, reference lines, excluded codes, and measurement uncertainty. The taxonomy below is written in engineering terms: what the error means, how it is expressed, and the most common pitfall.
Offset error
Meaning: A near-constant shift of the transfer curve up or down.
Expression: LSB, mV, or %FS (often specified at or near zero-scale or midscale).
Common pitfall: Attributing a load-induced drop or reference error to “DAC offset”.
Gain error
Meaning: A slope error: the output span is too small or too large across the code range.
Expression: ppm of full-scale, %FS, or LSB at full-scale after offset removal.
Common pitfall: Using gain specs without stating the reference accuracy and measurement reference points.
INL (integral nonlinearity)
Meaning: The worst-case deviation of the transfer curve from an ideal reference line (shape error).
Expression: LSB or %FS, defined relative to endpoint or best-fit reference (must be stated).
Common pitfall: Treating INL like a constant offset; INL is code-dependent and can change with temperature gradients.
DNL (differential nonlinearity)
Meaning: The error in each code step width relative to 1 LSB (step uniformity).
Expression: LSB (often reported as min/max across codes; excluded codes must be stated).
Common pitfall: Confusing noise/settling jitter with true DNL; repeatability is required to call it DNL.
Monotonicity
Meaning: Increasing code never produces a decreasing output (in the stated measurement conditions).
Expression: A guarantee statement, often supported by a minimum DNL spec and a test method.
Common pitfall: Assuming monotonicity without margin to noise, temperature drift, and measurement uncertainty.
Common mistakes that break specs in the lab
- Calling an endpoint shift “INL” (INL is code-dependent; offset is not).
- Budgeting accuracy from best-fit INL without matching the system error definition.
- Claiming “bad DNL” when the measurement chain is noise/settling limited and not repeatable.
- Assuming monotonic behavior without stating temperature window and noise margin.
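To make the DNL definition above concrete, here is a minimal sketch that computes per-transition DNL from a measured code sweep using the endpoint-derived LSB. The sweep values are made up for illustration; this is not a vendor test method.

```python
# Sketch: DNL from a measured code sweep, using the endpoint-derived LSB.
# The sweep values below are made up for illustration.

def dnl_from_sweep(v):
    """Return per-transition DNL in LSB: DNL[k] = (v[k+1] - v[k]) / LSB - 1."""
    lsb = (v[-1] - v[0]) / (len(v) - 1)   # average step from the endpoints
    return [(v[k + 1] - v[k]) / lsb - 1.0 for k in range(len(v) - 1)]

# 3-bit example with one narrow step between codes 3 and 4:
v_meas = [0.000, 0.101, 0.199, 0.300, 0.352, 0.452, 0.551, 0.650]
dnl = dnl_from_sweep(v_meas)
worst = min(dnl)   # a value below -1 LSB would indicate a missing code
```

Repeatability still matters: a single noisy sweep can produce DNL values like these without the DAC being at fault, which is exactly the mistake the list above warns against.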
How datasheets specify INL/DNL (endpoint vs best-fit, codes excluded)
INL and DNL numbers are only meaningful when the reference line, code range, and measurement conditions match the intended system definition. The same DAC can report very different INL depending on whether it is referenced to endpoint, best-fit, or a segmented-fit line. “Good-looking” typical plots can also hide excluded codes, restricted code ranges, or temperature limits that matter in precision setpoint systems.
This section provides a practical reading checklist that prevents three common failures: budgeting errors (wrong accuracy roll-up), guarantee errors (assuming monotonicity or worst-case behavior without proof), and reproducibility errors (lab results that do not match the vendor’s test method).
Datasheet reading checklist (use before comparing parts)
- INL reference line: confirm endpoint vs best-fit vs segmented-fit. (Different reference lines can change the reported INL even when the silicon is identical.)
- Units and scaling: LSB vs %FS vs ppm, and whether LSB is at full-scale or midscale. (Convert to the system unit used for end accuracy.)
- Code range and excluded codes: check if endpoints, major-carry regions, or “missing codes” are excluded. (A guarantee cannot rely on a range that the system will actually use.)
- Temperature conditions: verify the stated TA/TJ, range, warm-up, and whether self-heating is implicit. (Shape error can move with thermal gradients.)
- Output configuration and load: buffer enabled/disabled, load current, and capacitive loading. (Load-dependent drops can look like gain/INL shifts.)
- Typical vs max: use max (or guaranteed limits) for monotonicity and worst-case accuracy. (Typical values describe “what might happen,” not “what must happen.”)
- Test method cues: step sequence, averaging, filtering, and settling rules. (Noise-limited measurement can mask true DNL or exaggerate it.)
Quick decision: which INL/DNL numbers matter
- Absolute setpoint accuracy: endpoint-aligned definitions and worst-case limits are the safer basis for error budgeting.
- Monotonicity and small-step trims: look for minimum DNL (and its test range) plus margin to noise and temperature drift.
- After calibration (fit/LUT): best-fit style numbers can approximate residual shape, but only when the calibration method is stated.
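The endpoint vs best-fit distinction can be shown on one data set. In this sketch the sweep values and the plain least-squares fit are illustrative assumptions; the point is only that the two reference lines report different INL for identical data.

```python
# Sketch: endpoint vs best-fit INL on the same measured sweep.
# Data and the plain least-squares fit are illustrative, not a vendor method.

def inl_lsb(v, line, lsb):
    """Deviation of each measured point from a reference line, in LSB."""
    return [(vi - li) / lsb for vi, li in zip(v, line)]

codes = list(range(8))
v = [0.000, 0.105, 0.205, 0.310, 0.400, 0.505, 0.600, 0.700]
lsb = (v[-1] - v[0]) / (len(v) - 1)

# Endpoint reference line: anchored at the first and last measured codes.
ep_line = [v[0] + lsb * c for c in codes]
worst_ep = max(abs(x) for x in inl_lsb(v, ep_line, lsb))

# Best-fit reference line: least-squares slope and intercept over all codes.
n = len(codes)
sx, sy = sum(codes), sum(v)
sxx = sum(c * c for c in codes)
sxy = sum(c * vi for c, vi in zip(codes, v))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
icept = (sy - slope * sx) / n
bf_line = [icept + slope * c for c in codes]
worst_bf = max(abs(x) for x in inl_lsb(v, bf_line, lsb))
# Same data, different reported INL: worst_bf comes out smaller here.
```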
Where nonlinearity comes from (architecture-agnostic root causes)
Nonlinearity is not a single mechanism. It is the visible output of physical mismatches, parasitics, coupling paths, and thermal gradients. Understanding root causes helps predict risk during selection and prevents wasted calibration effort on problems that move with load, timing, or temperature.
The most useful level of detail is: root cause → which error terms it tends to create → one fast observation that can separate it from other causes. The cards below follow that exact pattern.
Mismatch (resistors / current sources)
Maps to: INL (shape) and local DNL patterns that repeat at the same codes.
Typical signature: code-dependent error that is stable across repeats and relatively insensitive to modest load changes.
Fast check: re-measure the same code sweep with the same instrument chain; confirm repeatability before calling it DNL/INL.
Switch + output parasitics (Ron, charge, routing)
Maps to: DNL and localized INL shifts, often strongest near large transitions (major-carry regions).
Typical signature: error changes when output configuration, load current, or output buffer mode changes.
Fast check: sweep linearly at two load levels; if the curve shape moves, the output path is participating.
Reference + supply coupling (impedance and return paths)
Maps to: gain/offset shifts plus code-dependent bending that can mimic INL when reference impedance is stressed.
Typical signature: curve shape changes with update rate, output current, or decoupling placement.
Fast check: hold a code and change only update activity (or neighboring digital noise); observe whether the output baseline shifts.
Self-heating + thermal gradients
Maps to: drift and “shape drift” (INL changing with temperature distribution, not just a simple offset).
Typical signature: the same sweep changes after warm-up, airflow, or a nearby heat source turns on.
Fast check: repeat the same sweep after a controlled thermal step; compare the curve shape, not only the endpoints.
Monotonicity assurance: what it really means and how to guarantee it
Monotonicity is a guarantee about behavior, not a one-time observation. It means that increasing code never produces a decreasing output within a stated window: temperature range, warm-up state, load range, update conditions, and measurement uncertainty. Without these conditions, “monotonic” becomes an ambiguous claim that can be broken by noise, drift, or incomplete settling.
DNL is closely related because it describes code-to-code step width. A common rule of thumb is DNL > −1 LSB, but the practical guarantee still needs margin: measurement noise and temperature drift can hide a true negative step or create an apparent one. For multi-channel systems, “system monotonicity” must also be defined: monotonic per channel is not the same as monotonic for a derived function such as comparisons, thresholds, or channel-to-channel subtraction.
Four conditions for a monotonic guarantee
- DNL(min) with margin: margin must cover measurement uncertainty, output noise, and expected drift within the verification window.
- Settling and filtering defined: step-to-step evaluation must use a fixed settling time and a fixed averaging/filter rule.
- Environment window stated: temperature, warm-up state, supply limits, and load range must match the intended deployment.
- System monotonicity defined (multi-channel): specify whether monotonic applies per output or to a derived rule (thresholds, deltas, ratios).
Five common scenarios that break monotonic behavior in practice
- Noise or ripple dominates the step: the observed output can “move backward” even when the DAC is monotonic.
- Insufficient settling: steps are evaluated before the output and load have stabilized.
- Rapid temperature change or thermal gradient: drift over the step window destroys the intended ordering.
- Load changes with code: output drop or headroom variation can look like negative DNL.
- Multi-channel comparisons: channel-to-channel offset/gain spread breaks monotonic ordering for thresholds or differences.
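The “DNL(min) with margin” condition can be sketched as a check that refuses to call a step monotonic unless its rise clears a noise margin. The 3-sigma rule and the readings below are illustrative assumptions, not a qualification procedure.

```python
# Sketch: a monotonic check that demands margin over measurement noise.
# The 3-sigma rule and the readings below are illustrative assumptions.
import statistics

def monotonic_with_margin(readings_per_code, k_sigma=3.0):
    """Classify each step; a rise smaller than k_sigma * noise is not proof."""
    means = [statistics.mean(r) for r in readings_per_code]
    sigma = max(statistics.pstdev(r) for r in readings_per_code)
    verdicts = []
    for lo, hi in zip(means, means[1:]):
        step = hi - lo
        if step <= 0:
            verdicts.append("non-monotonic")
        elif step < k_sigma * sigma:
            verdicts.append("indeterminate")  # noise could hide a backward step
        else:
            verdicts.append("monotonic")
    return verdicts

# Two repeats per code; the middle step is too small to resolve:
data = [[1.000, 1.001], [1.100, 1.099], [1.1000, 1.1010], [1.200, 1.201]]
verdicts = monotonic_with_margin(data)
```

The “indeterminate” outcome is the useful one: it says the measurement chain, not the DAC, is the current limit on the guarantee.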
Temperature drift: separating offset drift, gain drift, and INL shape drift
Drift is not one number. It is a combination of three behaviors that look similar in a single-point check but behave very differently across the code range. Separating them early prevents the wrong calibration strategy and improves how recalibration intervals are planned.
Offset drift shifts the whole curve, gain drift changes the span, and shape drift changes the curvature of INL. Offset and gain are often corrected with one-time 1-point or 2-point calibration. Shape drift is harder because it moves the residual error across codes, often with thermal gradients and self-heating.
Offset drift (shift)
What changes: the curve moves up/down by nearly the same amount across codes.
What it breaks: absolute setpoint accuracy, especially near zero-scale and midscale.
First mitigation: 1-point trim or periodic zero/reference check (if the temperature correlation is stable).
Gain drift (tilt)
What changes: the slope changes; full-scale span expands or shrinks with temperature.
What it breaks: endpoint accuracy; errors grow with code distance from the trim point.
First mitigation: 2-point trim using stable references; verify at an intermediate code to catch residual shape.
INL shape drift (bend)
What changes: curvature changes; midrange and segment boundaries move relative to endpoints.
What it breaks: post-trim residual accuracy across codes; multi-point setpoints no longer align.
First mitigation: narrow the temperature window or use multi-point compensation only when coefficients stay stable over time.
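Separating the three behaviors from two sweeps reduces to fitting a line to the code-by-code delta: the intercept approximates offset drift, the slope approximates gain drift per code, and the residual flags shape drift. A minimal sketch with assumed data and an assumed shape tolerance:

```python
# Sketch: classifying drift between two temperature points by fitting a line
# to the code-by-code delta. Data and the shape tolerance are illustrative.

def classify_drift(v_t1, v_t2, shape_tol):
    """Fit delta = a + b*code; a ~ offset drift, b ~ gain drift per code,
    and a worst residual above shape_tol flags shape (INL) drift."""
    n = len(v_t1)
    delta = [hi - lo for lo, hi in zip(v_t1, v_t2)]
    sx = sum(range(n))
    sxx = sum(c * c for c in range(n))
    sy = sum(delta)
    sxy = sum(c * d for c, d in enumerate(delta))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    resid = max(abs(d - (a + b * c)) for c, d in enumerate(delta))
    return a, b, resid, resid > shape_tol

v_25c = [0.000, 0.100, 0.200, 0.300, 0.400]
v_85c = [0.002, 0.1025, 0.203, 0.3035, 0.404]   # pure shift plus tilt
offset_drift, gain_drift, resid, shape_flag = classify_drift(v_25c, v_85c, 0.0005)
# Here the residual stays tiny, so a 2-point recalibration would suffice.
```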
Calibration feasibility (typical difficulty)
- Offset drift: easy
- Gain drift: medium
- Shape drift: hard
Temperature sensing is only useful when it correlates with the drifting mechanism. A sensor that tracks board ambient but not the local gradient can produce weak correlation and unstable compensation.
Long-term drift & aging: what moves, how fast, and how to plan recalibration
Long-term stability is defined by which error terms move over time and whether that motion can be controlled by calibration or must be contained by design. Aging mechanisms usually appear as slow changes in offset and gain, while some environments can also change INL shape and invalidate a previously good multi-point correction.
The most reliable planning language is a recalibration window: define an allowed drift threshold in system units, estimate drift rate with margin, and choose a recalibration interval that balances maintenance cost and accuracy risk. Production planning also must separate batch spread (unit-to-unit variation at time zero) from time drift (change of a given unit over time).
Recalibration planning sheet (write these 6 fields during design)
1) Allowed drift threshold
Define the trigger in system units (±mV, ±ppm, ±%FS, or ±LSB) for the critical setpoints.
2) Target recalibration interval
Choose an interval (weeks/months) based on maintenance cost and the required probability of staying within the threshold.
3) Temperature window & warm-up state
State Tmin–Tmax and warm-up requirement so drift is not confused with short-term thermal settling.
4) Aging conditions
Declare the expected storage and operating stress (humidity, thermal cycling, and hours at temperature) to set margin.
5) Calibration and verification points
Specify codes near zero, midscale, and full-scale (plus one extra midrange check) to separate offset, gain, and shape motion.
6) Logging & trace strategy
Record timestamp, temperature, code, measured output, and instrument ID to separate real drift from test-chain drift.
Batch spread vs time drift
- Batch spread: unit-to-unit variation at the same time; primarily managed by production calibration and acceptance limits.
- Time drift: change of one unit over time; managed by recalibration interval and drift margin under stated conditions.
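The recalibration-window idea above can be reduced to a one-line sizing calculation. The threshold, drift rate, and 2x margin below are illustrative numbers, and the linear drift model is itself an assumption to be validated against logged data.

```python
# Sketch: sizing a recalibration interval from the planning-sheet fields.
# Threshold, drift rate, and the 2x margin are illustrative numbers.

def recal_interval_days(threshold_ppm, drift_ppm_per_khr, margin=2.0):
    """Days until the estimated drift (scaled by margin) uses up the threshold."""
    hours = threshold_ppm / (drift_ppm_per_khr * margin) * 1000.0
    return hours / 24.0

# Allowed drift of 50 ppm FS against an estimated 10 ppm per 1000 h:
interval = recal_interval_days(threshold_ppm=50.0, drift_ppm_per_khr=10.0)
```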
Error budgeting: rolling up INL/DNL + gain/offset into end accuracy
Budgeting turns isolated specifications into system performance. End accuracy is not determined by INL or gain alone; it is the combined effect of DAC static errors, reference accuracy and drift, output driver regulation, load interaction, and the measurement chain used for calibration and verification.
A practical budget should produce three outputs: absolute worst-case accuracy, short-term repeatability, and post-drift accuracy. Offset and gain are often reducible by calibration. INL shape and configuration-dependent errors usually remain and must be controlled by component choice and system conditions.
Error budget checklist (end-to-end)
- DAC static: offset, gain, INL, DNL (and the stated code range and temperature window).
- Reference: initial accuracy and drift (treated as a budget term, not a selection tutorial here).
- Output driver / front-end: offset and regulation vs load (budget term only).
- Load and wiring: drops, leakage, and boundary conditions that change with code or load state.
- Measurement chain: uncertainty and noise that limit how well calibration and verification can separate terms.
- Margin: reserve margin for temperature gradients, unit-to-unit spread, and unmodeled coupling paths.
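One way to roll the checklist terms into numbers is a worst-case sum alongside an RSS estimate, with a separate post-calibration sum for the terms a 2-point trim cannot remove. All values below are illustrative placeholders, not data from any part.

```python
# Sketch: rolling the checklist terms into worst-case and RSS end accuracy.
# All values are illustrative placeholders in mV at the output.
import math

budget_mv = {
    "dac_offset": 0.50,
    "dac_gain_at_fs": 1.20,
    "dac_inl": 0.80,
    "reference": 1.00,
    "driver_offset": 0.30,
    "load_and_wiring": 0.40,
    "measurement_chain": 0.25,
}

worst_case_mv = sum(budget_mv.values())                      # every term at its limit
rss_mv = math.sqrt(sum(x * x for x in budget_mv.values()))   # independent-term estimate

# After a 2-point trim, offset and gain largely drop out; INL shape,
# load, and measurement terms remain in the post-calibration budget:
residual_mv = sum(x for k, x in budget_mv.items()
                  if k not in ("dac_offset", "dac_gain_at_fs"))
```

RSS is only defensible when the terms are statistically independent; correlated terms (a reference error that also tilts the INL measurement, for example) belong in the worst-case sum.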
Typically calibratable
- Offset: 1-point trim works when the condition window is stable.
- Gain: 2-point trim works when the span is measured with sufficient uncertainty margin.
Not truly removable by simple calibration
- INL shape: often remains after 2-point trim; multi-point compensation only helps when coefficients stay stable.
- Configuration-dependent errors: load, output stage, and reference coupling can change with mode and invalidate coefficients.
Measurement & test: how to measure INL/DNL correctly (and not fool yourself)
Measuring INL/DNL is a system problem: the result depends on the stimulus, the settling and sampling rule, the reference line definition, the measurement chain uncertainty, and the thermal state during the sweep. A “good looking” curve can still be wrong if noise, timing, or baseline nonlinearity is being fitted into the result.
A reproducible approach starts with a stable test chain and ends with statistics: repeat readings at the same code to quantify noise, compare forward vs reverse sweeps to detect code-order coupling, and keep the reference line definition consistent (endpoint vs best-fit). When results depend on sweep style, the measurement chain is writing artifacts into the data.
Three common traps (mechanism → symptom → fast check)
1) Noise reshapes INL/DNL
Noise on each code reading becomes exaggerated after differencing. DNL can look spiky or artificially smooth. Fast check: repeat the same code and compare its standard deviation to 1 LSB-equivalent.
2) The baseline is not linear
A non-ideal reference line or measurement front-end can add its own curvature into the INL result. Fast check: change range/sweep method and see whether the INL “shape” follows the setup.
3) Timing creates code-dependent bias
Sampling before settling, or filtering windows that couple to sweep order, can generate fake code-dependent errors. Fast check: forward vs reverse sweeps and different step sizes should agree if the result is real.
Pre-test checklist (10 items to lock down repeatability)
- Warm-up complete: start only after thermal stabilization to avoid drift across the sweep.
- Output configuration fixed: range, buffer state, and common-mode must be recorded and held constant.
- Load fixed and documented: resistance/capacitance/fixture impedance must not change during the run.
- Supply and reference stable: ripple/noise must stay below the step scale being evaluated.
- Settling time defined: a single rule for update → wait → sample prevents hidden code-order bias.
- Measurement range appropriate: avoid edge-of-range conditions that can add nonlinearity and drift.
- Forward vs reverse sanity check: results should not depend on sweep direction when artifacts are controlled.
- Repeat-at-code check: quantify noise with repeated reads at the same code before trusting DNL.
- Reference-line definition stated: endpoint vs best-fit must match the intended datasheet comparison.
- Logging complete: time, temperature, mode, code, output, and instrument ID enable root-cause separation.
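Two items from the checklist, the repeat-at-code noise check and the forward/reverse comparison, can be sketched as code. The thresholds and readings are illustrative, not a reference procedure.

```python
# Sketch: the repeat-at-code noise check and the forward/reverse comparison.
# Thresholds and readings are illustrative, not a reference procedure.
import statistics

def noise_ok(repeats, lsb, limit=0.1):
    """True when per-code sigma is well below 1 LSB (here, under limit * LSB)."""
    return statistics.pstdev(repeats) < limit * lsb

def sweep_direction_delta(v_fwd, v_rev_in_sweep_order):
    """Worst per-code disagreement once the reverse sweep is re-ordered."""
    return max(abs(f - r) for f, r in zip(v_fwd, reversed(v_rev_in_sweep_order)))

lsb = 0.001
reads_at_code = [0.50002, 0.50005, 0.49999, 0.50003]
fwd = [0.100, 0.200, 0.300, 0.400]
rev = [0.4002, 0.3001, 0.2001, 0.1000]   # measured high-to-low, stored as taken

ok = noise_ok(reads_at_code, lsb)
delta = sweep_direction_delta(fwd, rev)  # nonzero delta hints at settling bias
```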
Calibration strategies: what calibration can and cannot fix
Calibration is not a magic eraser. It reduces error terms that are low-dimensional and repeatable under the stated condition window. Offset and gain usually calibrate well. INL shape and configuration-dependent errors often remain unless multi-point compensation stays stable across temperature, time, load, and operating modes.
Calibration also injects new limits: measurement uncertainty can be written into coefficients, coefficients can drift, and interpolation and storage quantization can create residual errors. When noise or drift dominates, more calibration points can simply fit the measurement chain rather than the DAC.
1-point / 2-point (offset / gain)
- Use when: error looks like shift and tilt within a stable condition window.
- Benefit: fast, low maintenance burden, production-friendly.
- Risk: shape error hides behind the trim points; mode/load changes can invalidate coefficients.
- Verify: always check an independent midrange code not used for fitting.
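A minimal sketch of deriving 2-point coefficients and checking an independent midrange code follows. The codes, measured voltages, and the 0-4.096 V ideal range of the hypothetical 12-bit DAC are assumed for illustration.

```python
# Sketch: 2-point trim coefficients plus an independent midrange check.
# Codes, voltages, and the 0-4.096 V ideal range are assumed for illustration.

def two_point_fit(c0, v0, c1, v1):
    """Fit the measured line v = a + b * code through the two trim points."""
    b = (v1 - v0) / (c1 - c0)
    return v0 - b * c0, b

def code_for(v_target, a, b):
    """Code that the fitted line predicts will land closest to v_target."""
    return round((v_target - a) / b)

# Measured near zero and full scale on a hypothetical 12-bit, 0-4.096 V DAC:
a, b = two_point_fit(100, 0.1050, 3900, 3.9010)
code_mid = code_for(2.048, a, b)      # request midscale via the trim
v_pred = a + b * code_mid             # what the fit expects at that code
# An independent measurement at code_mid is then compared against 2.048 V;
# any gap is residual INL shape that a 2-point trim cannot remove.
```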
Multi-point (segmented fit)
- Use when: the INL shape is repeatable and coefficients remain stable across the deployment window.
- Benefit: reduces midrange residual error that 2-point trim cannot remove.
- Risk: longer test time; measurement error can be fitted into the model; drift can reshape residuals.
- Verify: re-test after temperature and time changes to confirm shape stability.
LUT (table + interpolation)
- Use when: maximum absolute accuracy is required and storage/compute/test time are acceptable.
- Benefit: handles complex shape more directly than low-order fits.
- Risk: interpolation and quantization add error; coefficients are condition-sensitive; maintenance complexity is high.
- Verify: check off-grid points and confirm robustness across temperature and load changes.
When calibration is not worth it
- Noise-dominated: measurement uncertainty is near the target error; more points fit noise.
- Drift-dominated with weak correlation: temperature/time changes reshape errors unpredictably.
- Condition mismatch: field load/mode differs from calibration conditions; coefficients do not transfer.
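The noise-dominated case can be turned into a simple gate before adding calibration points. The 5x noise ratio below is an illustrative rule of thumb, not a standard.

```python
# Sketch: a gate before adding calibration points in the noise-dominated case.
# The 5x noise ratio is an illustrative rule of thumb, not a standard.
import statistics

def calibration_worth_it(repeat_reads, target_residual, ratio=5.0):
    """Require per-point measurement noise to sit several times below the
    target residual; otherwise extra points mostly fit the noise."""
    sigma = statistics.pstdev(repeat_reads)
    return sigma * ratio < target_residual, sigma

worth_it, sigma = calibration_worth_it(
    [1.0002, 1.0001, 0.9999, 1.0000],   # repeated reads at one code, volts
    target_residual=0.0002,
)
# Here sigma is near 0.11 mV against a 0.2 mV target, so multi-point
# calibration would be fitting the measurement chain, not the DAC.
```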
FAQs: INL/DNL, gain/offset, drift, measurement, and calibration
Short, actionable answers for static linearity work: definitions, risk judgement, drift planning, measurement repeatability, and calibration boundaries.
INL: best-fit or endpoint? What changes in system error?
Endpoint INL ties the curve to the end codes and directly reflects absolute span error, while best-fit INL removes a best-fit line and emphasizes midrange shape residuals. They answer different system questions, so comparing numbers without matching the method is misleading.
- Check: confirm datasheet method (endpoint vs best-fit) and any code range exclusions.
- Action: use endpoint for end-accuracy budgets; use best-fit to judge shape residual after gain/offset trim.
Typical DNL looks great, but worst-case is missing. How to judge monotonic risk?
Typical DNL does not guarantee monotonic behavior across units, temperature, and time. Monotonic assurance usually depends on the minimum DNL margin (often related to staying above −1 LSB), and the guarantee must specify conditions and the evaluated code range.
- Check: look for “monotonic” guarantee or a specified minimum DNL over temperature.
- Action: request min DNL / monotonic statement from the vendor, or test corners with repeated sweeps and temperature coverage.
Why does INL shape change with temperature, and why can’t a 2-point trim fix it?
Two-point trim corrects offset and gain (shift and tilt). If temperature reshapes the curve (shape drift), the error is no longer captured by a single line, so residual midrange error remains even after perfect 2-point correction.
- Check: compare INL curves at T1 and T2; if the shape differs, it is not just offset/gain drift.
- Action: shrink the condition window, improve thermal/return-path stability, or use compensation only if shape remains stable.
For INL/DNL tests, is a higher-resolution DMM always better?
Not always. Higher resolution can come with longer integration time and more sensitivity to drift during a sweep. The limiting factors are often noise, stability, range linearity, and the update→settle→sample rule, not the last digit on the display.
- Check: repeat the same code and confirm the standard deviation is well below the step you want to resolve.
- Action: prioritize stability and repeatability; choose integration time and sweep speed that keep drift below the target error.
Why does multi-point LUT calibration make production harder?
More calibration points increase test time and require a more accurate, stable measurement chain. LUTs also add storage, version control, interpolation error, and sensitivity to condition mismatch; coefficients can become invalid if temperature, load, or mode differs from calibration.
- Check: confirm the measurement uncertainty margin is far below the target residual error before adding points.
- Action: start with 1–2 point trim plus an independent midrange verification point; only use LUT if shape is stable and throughput allows.
A buffer was added, but linearity got worse. What are the two most common coupling paths?
The most common causes are (1) output-stage interaction with load and settling that creates code-dependent residuals, and (2) supply/ground coupling that turns output current changes into reference or ground modulation. Both can reshape INL rather than just shift offset/gain.
- Check: compare INL shape across two loads (R-only vs R‖C) and two analog rail impedance setups.
- Action: shorten output/return loops, isolate analog rails, and validate settling with the final buffer + load combination.
Long-term drift: which parts matter most, and how to define a recalibration interval?
Long-term stability is usually dominated by the reference path, the analog supply regulation, and precision networks and buffers near the DAC. Define recalibration using a drift threshold in system units and a drift-rate margin under the stated temperature and operating conditions.
- Check: separate unit-to-unit spread (time zero) from drift (change over time for the same unit).
- Action: set an allowed drift threshold and log time + temperature + code + measured output to trigger recalibration by evidence.
For sweep tests, how long should each code wait to be “truly settled”?
Settling is not a fixed time; it is a rule that ensures the output and measurement chain are within a defined stability threshold. Different step sizes and loads can settle differently, so the wait must be validated under the worst step and final load.
- Check: sample multiple times after an update and confirm the delta between samples falls below a chosen threshold.
- Action: define and lock an update → wait → sample rule; compare forward vs reverse sweeps to detect timing artifacts.
How to translate INL/DNL into “setpoint error” in engineering units?
Convert LSB to volts or amps using full-scale range and resolution, then map each term by meaning: offset and gain affect absolute setpoint error, INL adds midrange shape residual, and DNL describes step-to-step consistency and monotonic margin rather than absolute error alone.
- Check: state the output range and whether the spec is endpoint or best-fit before converting.
- Action: budget worst-case setpoint error using offset/gain/endpoint terms, then separately track step consistency using DNL/monotonic margin.
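The conversion described above can be written out as a short sketch. The 16-bit resolution, 5 V range, and spec values are illustrative.

```python
# Sketch: converting LSB-denominated specs into volts for a setpoint budget.
# The 16-bit, 5 V range and the spec values are illustrative.

def lsb_volts(v_fs, bits):
    """Ideal LSB size for a given full-scale range and resolution."""
    return v_fs / (2 ** bits)

lsb = lsb_volts(v_fs=5.0, bits=16)   # roughly 76 uV per LSB
inl_v = 4.0 * lsb                    # e.g. a 4 LSB endpoint INL spec
gain_v = 0.02 / 100.0 * 5.0          # a 0.02 %FS gain error
offset_v = 0.5e-3                    # a 0.5 mV offset spec

# Absolute setpoint budget sums offset, gain, and INL terms; DNL is
# tracked separately as step consistency and monotonic margin.
worst_setpoint_v = offset_v + gain_v + inl_v
```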
Does multi-channel mismatch break “system monotonicity” even if each channel is monotonic?
Yes, it can. Channel-to-channel offset and gain spread can cause one channel’s output to cross another channel’s output for the same commanded code, which breaks monotonic ordering in systems that compare channels or depend on relative setpoints.
- Check: define monotonicity at the system level (absolute per-channel vs relative between channels).
- Action: measure channel spread across temperature and include it in the system monotonic margin and calibration plan.
Datasheets mention “codes excluded.” Why does it matter for INL/DNL decisions?
Excluding codes can hide edge behavior (near zero or full-scale) where switching or output compliance effects are worst. A smaller evaluated code window can make INL/DNL numbers look better while leaving real end-of-range risks in the system.
- Check: identify the valid code range for INL/DNL and confirm it matches the system’s used range.
- Action: test the actual application code window, especially near endpoints and major-carry boundaries.
Forward and reverse sweeps disagree. What does that usually imply?
Disagreement usually indicates measurement artifacts: insufficient settling, time drift during the sweep, filtering windows that couple to code order, or thermal transients. True static INL/DNL should not depend on sweep direction when conditions are controlled.
- Check: repeat-at-code statistics and update→wait→sample timing under the worst step and final load.
- Action: slow down or re-time sampling, stabilize temperature, and confirm the result is invariant to sweep direction and step size.