Segmented DAC (Thermometer + Binary): Design & Glitch Control
Segmented DACs (thermometer + binary) hit the practical sweet spot for mid/high-speed, high-linearity outputs with low update glitch. This page shows how to choose the segmentation boundary and validate worst-case codes so glitch, settling, spurs, and temperature drift stay under control.
What this page solves
Segmented DACs (thermometer MSBs + binary LSBs) exist to hit a practical “sweet spot”: strong linearity, low update disturbance, and useful speed—without the extreme area/power of a pure thermometer design.
This page shows how segmentation reduces worst-case switching events, where the true risk concentrates (the segmentation boundary), and how to validate the design with the right tests so that glitch, settling, and linearity are managed together rather than optimized in isolation.
The sweet spot (when segmented DACs are the right tool)
- Mid/high update rates where large-step transitions must remain clean and repeatable.
- High linearity requirements (INL/DNL and monotonic margin matter at the system level, not just typical curves).
- Low disturbance sensitivity where output transients can upset bias points, control loops, or downstream analog stages.
Three recurring pain points this page fixes
1) Binary major-carry glitch is too large
Worst-case code transitions can flip many weighted switches at once. Small timing skews and charge injection do not average out—they add up into a measurable impulse. Segmentation moves the highest-weight decisions into a thermometer-coded region to reduce worst-case simultaneous switching.
2) Pure thermometer (string-like) area/power becomes impractical
Thermometer coding improves monotonic behavior, but the unit-element count and decoding/drive overhead scale aggressively with resolution. Segmentation keeps a small binary region for fine steps to preserve area efficiency and speed.
3) Glitch, settling, and linearity must be balanced together
A small glitch impulse does not guarantee fast settling, and great INL/DNL does not guarantee clean spectra. The output network (load, driver, reference, return paths) can dominate the observed behavior. This page treats the problem as a coupled system and ties each risk to a specific verification test.
Scope boundary (what is intentionally not expanded here)
This page stays focused on segmentation structure, boundary behavior, and validation. Interface timing (e.g., JESD alignment), full reconstruction filter design, and ultra-wideband RF DAC topics belong to dedicated sibling pages and should be treated as separate deep dives.
Practical navigation rule
If the main question is “which architecture,” stay here. If the main question is “high-speed link synchronization,” “anti-image filter design,” or “direct-RF synthesis,” jump to the corresponding sibling page.
Diagram focus: the left column is the design pressure, the center is the segmentation mechanism, and the right column shows where deeper sibling topics live to avoid scope overlap.
Definition & taxonomy
Definition
A segmented DAC splits its conversion elements into two coordinated regions: the MSBs use thermometer coding (many equal unit elements turned on in a contiguous way), while the LSBs use binary weighting (a small set of weighted elements for fine steps). Both regions sum at the output node to create the final analog level.
The architectural goal is not “maximum of one metric,” but a stable operating point where monotonic margin, worst-case switching disturbance, area/power, and update speed are simultaneously acceptable.
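The two-region structure can be sketched numerically. The following is a minimal ideal model, assuming an illustrative 3-bit thermometer / 5-bit binary split; the function name and bit counts are hypothetical, not taken from any specific device:

```python
def segmented_levels(code, t_bits=3, b_bits=5):
    """Split a digital input code into thermometer MSBs and binary LSBs,
    then sum their ideal contributions (in LSB units) at the output node."""
    assert 0 <= code < 2 ** (t_bits + b_bits)
    msb = code >> b_bits                   # thermometer-coded region
    lsb = code & ((1 << b_bits) - 1)       # binary-coded region
    unit_weight = 1 << b_bits              # each unit element = 2^b_bits LSBs
    # Thermometer decode: enable a contiguous run of equal unit elements.
    units_on = [1] * msb + [0] * ((1 << t_bits) - 1 - msb)
    out = sum(units_on) * unit_weight + lsb
    return out, units_on

level, units = segmented_levels(0b10110101)   # 181 -> 5 units on + 21 LSBs
```

In the ideal (mismatch-free) model the reconstructed level equals the input code exactly; all of the interesting behavior discussed below comes from how real elements deviate from this ideal sum.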
Two blocks, two jobs
Thermometer MSB region
- Prioritizes monotonic-friendly behavior and reduces DNL risk by avoiding large binary-weighted carry events.
- Localizes switching: transitions are spread across equal unit elements, making worst-case simultaneous flips less severe.
- Dominant risks shift toward unit-element mismatch and thermal gradients (layout and matching matter).
Binary LSB region
- Preserves area efficiency and supports fine resolution without exploding the unit count.
- Supports higher update rates with simpler decoding, but is more sensitive to weighted mismatch and boundary-related step events.
- Requires careful attention to the segmentation boundary where the two regions interact.
The segmentation boundary is the engineering “risk concentrator”
Most metrics look excellent in typical conditions, but worst-case behavior often clusters near the boundary: combined switching, mismatch mapping, and code patterns that repeatedly stress the same elements. Later sections treat boundary codes as a dedicated verification set rather than relying on averaged results.
Taxonomy (common variants, kept intentionally brief)
- Voltage-domain vs current-domain core: changes how switching errors appear at the output node and how the driver/load interact.
- Single-ended vs differential output: impacts even-order distortion rejection and common-mode control.
- Buffered vs unbuffered output: determines how much settling is dominated by external load and amplifier stability.
- Calibrated vs uncalibrated: determines whether mismatch becomes a trimmed coefficient or a residual static error.
Quick glossary (terms used throughout this page)
Unit element
A repeated, equal-weight element (resistor segment or current cell) used by the thermometer region.
Decoder
Logic that converts MSB code into a contiguous number of enabled unit elements.
Summing node
The node where MSB and LSB contributions add; often the most sensitive point for glitch and settling.
Boundary code
A code transition that crosses the segmentation split; often a worst-case set for verification.
Diagram focus: the thermometer region reduces worst-case switching, the binary region preserves efficiency, and the dashed “boundary” line highlights where verification must concentrate.
Next section preview
The next step is choosing the segmentation split (thermometer bits vs binary bits) and defining a boundary-code test set so that linearity, glitch impulse, and settling are verified together.
Why segmentation works
Core mechanism (what actually sets glitch energy)
Update disturbance is dominated by three coupled factors: how many switches move, how well they move together (timing skew), and how much charge is injected into the sensitive output node. Segmentation works by reshaping worst-case transitions so that large-weight decisions do not require many weighted switches to flip simultaneously.
Glitch impulse is not the same as “spike height”
A narrow spike can look small on a limited-bandwidth measurement while still delivering meaningful disturbance into a downstream amplifier, bias node, or control loop. That is why many datasheets and lab procedures focus on glitch impulse: an energy-like measure that captures how much transient charge is pushed into the output network during an update.
Practical reading rule
If two “glitch” numbers were taken with different bandwidth, load, or code transition patterns, they are not comparable. Worst-case codes matter more than averages.
Why binary-only DACs suffer worst-case “major carry” events
- Some code transitions force many weighted switches to change state at once (carry propagation across multiple bits).
- Even tiny timing differences turn “simultaneous switching” into temporary imbalance at the output node.
- Charge injection and feedthrough from multiple switches add up, producing a measurable impulse and ringing that can dominate settling.
The key point is not that every update is bad—only that a small set of worst-case transitions can dominate system behavior and must be verified explicitly.
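The switch-count argument can be made concrete with a toy flip counter. This sketch assumes a hypothetical 8-bit DAC split 3 thermometer + 5 binary; it counts switch events only, not their weights, so it understates the binary case (where the flipped switches carry the largest weights):

```python
def flips_binary(a, b, n_bits):
    """Weighted switches that change state on a binary-coded transition
    a -> b (one switch per bit)."""
    return bin((a ^ b) & ((1 << n_bits) - 1)).count("1")

def flips_segmented(a, b, t_bits, b_bits):
    """Switch activity for the same transition in a segmented coding:
    thermometer units toggle one per MSB step (contiguous run), binary
    LSB switches flip per differing bit."""
    unit_flips = abs((a >> b_bits) - (b >> b_bits))
    mask = (1 << b_bits) - 1
    lsb_flips = bin((a & mask) ^ (b & mask)).count("1")
    return unit_flips + lsb_flips

# Major-carry example: 0b01111111 -> 0b10000000 (mid-scale crossing)
a, b = 0b01111111, 0b10000000
binary_case = flips_binary(a, b, 8)          # all 8 weighted switches move
segmented_case = flips_segmented(a, b, 3, 5) # 1 unit toggle + 5 LSB flips
```

The key difference is not the raw count but where the weight sits: in the binary case the MSB and all lower weights flip together, while in the segmented case the high-weight activity collapses to a single unit-element toggle.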
How segmentation reshapes worst-case switching
Thermometer MSBs handle large-weight decisions
Major transitions are expressed as enabling/disabling a contiguous set of equal unit elements, reducing the need for many weighted bits to flip at once. This typically reduces the amplitude of worst-case imbalance at the output node.
Binary LSBs keep fine steps efficient
Fine resolution is produced by a compact weighted region, limiting unit count and decoder burden. This preserves speed/area efficiency while the MSB region tames the transitions that create the worst glitches.
Boundary behavior still needs dedicated management
Segmentation does not eliminate risk; it concentrates it. Transitions that cross the segmentation split can combine switching activity from both regions. Boundary codes should be treated as a mandatory worst-case verification set (glitch impulse, large-step settling, and static linearity), not as something that will be “averaged out” by sweeping many codes.
Diagram focus: the goal is not “every code is perfect,” but “the worst-case transition becomes controllable,” with boundary transitions treated as a special test set.
Segmentation planning
Decision thesis
Segmentation is not chosen by resolution alone. It is chosen by worst-case behavior under the target update rate and load: select a split that keeps boundary transitions controllable while staying inside area/power limits.
The split moves five coupled dimensions
1) Monotonic margin / DNL risk
Increasing the thermometer region typically improves worst-case step regularity, but boundary codes must still be checked for localized DNL stress.
2) INL visibility (systematic error)
As the unit array grows, gradients and systematic mismatch become more visible. Layout symmetry and thermal uniformity become first-order concerns.
3) Glitch impulse (worst-case codes)
More thermometer MSBs usually reduce the worst-case simultaneous switching energy. Boundary transitions remain the mandatory test set.
4) Area / power (unit count + drive)
Larger thermometer regions increase unit count and decoder/drive overhead. Power is spent not only in the core but also in switching and routing.
5) Update / settling (output disturbance)
Faster updates and heavier loads can shift dominance to output network settling. Low glitch is valuable, but the system still needs enough drive and clean return paths.
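The area/power dimension can be bounded with first-order element bookkeeping. This hypothetical helper ignores routing, drivers, and power, and only counts elements and decoder enables for a candidate split:

```python
def split_costs(total_bits, t_bits):
    """First-order element bookkeeping for a candidate segmentation split
    of a total_bits DAC with t_bits thermometer MSBs."""
    b_bits = total_bits - t_bits
    units = (1 << t_bits) - 1        # equal unit elements in thermometer region
    return {
        "thermometer_units": units,
        "binary_weighted": b_bits,   # one weighted element per binary bit
        "decoder_enables": units,    # one enable line per unit element
        "elements_total": units + b_bits,
    }

costs = split_costs(12, 6)   # e.g. a 6+6 split of a 12-bit DAC
```

Sweeping `t_bits` from 0 to `total_bits` makes the exponential growth of the unit array obvious: moving one more bit into the thermometer region roughly doubles the unit count, which is why the split is a budgeted decision rather than "more thermometer is always better."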
If thermometer MSBs increase
- Typically improves worst-case switching behavior and monotonic margin.
- Costs more unit elements, more decoding/drive, and higher sensitivity to gradients (layout/thermal).
- Requires a deliberate boundary test plan rather than relying on typical behavior.
If binary LSBs increase
- Typically improves area efficiency and speed potential with smaller unit arrays.
- Increases exposure to worst-case weighted switching patterns, making boundary and carry-like transitions more sensitive.
- Demands stronger worst-case verification coverage (boundary codes and large steps).
A practical four-step decision flow (structure first, algorithms later)
- Start with the system target: required INL/DNL behavior and allowable update disturbance. If output transients directly impact a sensitive analog node, bias point, or loop stability, the split should bias toward more thermometer MSBs.
- Add update rate and load: heavier loads and faster updates often shift the limiting factor to settling in the output network. The split must still keep boundary transitions controllable, but the system also needs adequate drive and clean return paths.
- Check area and power budget: large unit arrays and decoding/drive overhead can dominate. If budget is tight, a larger binary region may be necessary, but only with a stronger worst-case verification plan.
- Lock a boundary verification set: define boundary transitions as mandatory tests (static linearity + glitch impulse + large-step settling). Do not rely on a broad code sweep to “average out” worst-case behavior.
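The boundary verification set in the last step can be enumerated mechanically. A sketch, assuming the same hypothetical thermometer/binary split notation used above (`span` controls how far on each side of an edge the set reaches):

```python
def boundary_test_set(t_bits, b_bits, span=1):
    """Enumerate code transitions that cross a thermometer-unit boundary:
    for every edge at k * 2^b_bits, pair codes just below the edge with
    codes just above it, in both directions."""
    step = 1 << b_bits
    full = 1 << (t_bits + b_bits)
    pairs = []
    for edge in range(step, full, step):
        for d in range(1, span + 1):
            lo, hi = edge - d, edge + d - 1
            if lo >= 0 and hi < full:
                pairs.append((lo, hi))   # upward boundary crossing
                pairs.append((hi, lo))   # downward boundary crossing
    return pairs

tests = boundary_test_set(3, 5)   # 3 thermometer bits over 5 binary bits
```

For a 3+5 split this yields 14 transitions (7 edges, both directions); each one is a mandatory candidate for glitch impulse, large-step settling, and static-linearity measurement rather than a point in an averaged sweep.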
Scope note
Calibration algorithms can further reduce mismatch effects, but they are treated as a separate deep dive. This section focuses on choosing a robust structural split and a verification plan.
Diagram focus: the quadrant shows how the split shifts trade-offs, while the flow ensures boundary codes are treated as an explicit test set rather than averaged away.
Static linearity deep dive (INL, DNL, monotonic margin)
What the metrics really mean in a segmented DAC
- DNL describes local step regularity. It is the metric that directly threatens missing codes and monotonic behavior.
- INL describes the global transfer-shape error. It becomes a setpoint accuracy and waveform amplitude error at the system level.
- Monotonic margin is a safety margin, not a boolean. It asks whether the smallest step still remains positive under temperature drift, noise, and load disturbance.
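These definitions translate directly into a measurement routine. A minimal endpoint-fit implementation, assuming one measured output level per code in code order (the function name is illustrative):

```python
import numpy as np

def inl_dnl(levels):
    """Compute DNL and endpoint-fit INL (both in LSB) from measured output
    levels, one level per code."""
    levels = np.asarray(levels, dtype=float)
    n = len(levels)
    lsb = (levels[-1] - levels[0]) / (n - 1)   # average step (endpoint fit)
    dnl = np.diff(levels) / lsb - 1.0          # local step regularity error
    ideal = levels[0] + lsb * np.arange(n)
    inl = (levels - ideal) / lsb               # global transfer-shape error
    return dnl, inl
```

Note that this uses the endpoint fit; a best-fit line gives different INL numbers for the same data, which is one more reason linearity figures are only comparable when their definitions match.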
Mismatch maps differently in the thermometer region vs the binary region
Thermometer unit mismatch behaves like local randomness
- Each step is built from equal unit elements, so errors tend to appear as local step-to-step variation.
- DNL risk often concentrates where the active unit set changes rapidly (near boundary) or where physical gradients bias a local cluster.
- The dominant question becomes: “Is the smallest expected step still safely above disturbance?” (monotonic margin).
Binary weighted mismatch behaves like coefficient / weight error
- Weighted elements amplify mismatch, so errors look more like structured weight distortion than local randomness.
- High-weight errors can imprint a repeatable INL “shape,” and boundary interactions can make sensitivity appear in specific code bands.
- The dominant question becomes: “Which weights dominate the transfer-shape error that the system sees as setpoint inaccuracy?”
Gradients turn “random mismatch” into systematic INL shape
Large unit arrays are vulnerable to thermal, IR-drop, and process gradients. Unlike purely random mismatch, gradients create repeatable, code-dependent structure: the same physical region contributes to a wide span of codes, making INL appear as a stable “shape” that can drift with temperature and supply conditions.
That is why linearity numbers are meaningful only when their measurement conditions (reference, supply, temperature, load) are matched to the intended application.
Switch non-idealities can masquerade as static linearity errors
- Code-dependent Ron / impedance changes the effective gain seen at the output node and can imprint a shaped INL error.
- Reference interaction (finite reference impedance and distribution) can convert switching-dependent current draw into code-dependent droop.
- Load interaction can make “linearity” vary with output swing and settling window, even if the core array is well matched.
A practical takeaway is to treat linearity errors as a system mapping problem, not only a device matching problem.
Boundary codes are where non-local error mapping shows up
Near the segmentation boundary, the thermometer and binary regions change contribution together. This can re-map mismatch and switch non-idealities into code bands that look “unexpectedly sensitive.” A small physical bias (gradient, reference droop, or switch impedance) can affect more than one adjacent code step, creating non-local behavior.
Because of this, boundary-adjacent codes should be treated as a mandatory static-linearity check set, not as a detail that will average out in a wide sweep.
Monotonicity: theoretical vs effective (under disturbance)
Theoretical monotonic
Adjacent codes do not reverse in ideal conditions. This is a structural property of the array and coding.
Effective monotonic margin
The smallest step must remain positive after temperature drift, noise, reference variation, and load disturbance. A “guarantee” is meaningful only with a margin model.
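The margin-model idea reduces to a one-line check once the disturbance budget is expressed in the same units as the measured levels. A hypothetical helper:

```python
import numpy as np

def effective_monotonic_margin(levels, disturbance):
    """Smallest measured step minus a disturbance budget (temperature drift,
    noise, reference variation), in the same units as levels. A positive
    result means the DAC stays effectively monotonic under that budget."""
    steps = np.diff(np.asarray(levels, dtype=float))
    return float(steps.min() - disturbance)
```

The point of the formulation is that "monotonic" becomes a signed margin that can be tracked over temperature corners, not a yes/no property read from a typical curve.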
Diagram focus: static linearity is a mapping from physical sources into metrics, with boundary-adjacent codes often behaving as a dedicated sensitivity band.
Glitch & major-carry (impulse and worst-case codes)
Glitch impulse is an energy-like disturbance metric
Glitch impulse measures how much transient disturbance is injected during an update. It is not the same as settling time: a design can have a small impulse but slow settling (weak drive / heavy load), or a fast settling window but large impulse (strong simultaneous switching).
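The energy-like nature of the metric is clearest as an integral: area of the deviation from the settled value over a defined window, not a peak height. A minimal sketch, assuming uniform-enough sampling of a captured waveform (names are illustrative):

```python
import numpy as np

def glitch_impulse(t, v, v_final, window):
    """Trapezoid-rule area of (v - v_final) over a defined window around the
    update edge; an energy-like measure in V*s, not a spike height."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    m = (t >= window[0]) & (t <= window[1])
    dev = v[m] - v_final
    dt = np.diff(t[m])
    return float(np.sum(0.5 * (dev[:-1] + dev[1:]) * dt))
```

Because the result depends on the integration window, the measurement bandwidth, and the load, the number is only meaningful together with those conditions — the same point the reading rule below makes.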
Typical sources (grouped by what they control)
How much gets injected
- Timing skew between switching elements.
- Charge injection / feedthrough from switch gates and parasitics.
How large it appears
- Coupling paths from digital edges into sensitive nodes.
- Output node impedance and reference distribution impedance.
Define a structural worst-case code set (do not average it away)
Boundary-adjacent codes
Transitions crossing the split can combine activity from both regions. This is the primary “segmented-specific” hotspot.
Large MSB step transitions
Big steps maximize switching stress and expose skew, injection, and reference droop behavior that small steps can hide.
Extremes (near 0 and full-scale)
Extremes often activate the most coupling paths and supply/return stress, revealing problems that mid-scale patterns can miss.
Major-carry: how the risk shifts in segmented DACs
In binary-only architectures, carry-related transitions can flip many weighted bits and create the largest impulses. Segmentation reduces this by assigning large-weight decisions to a thermometer-coded region. The resulting worst-case set often shifts toward boundary crossings and large-step patterns, which must be verified explicitly.
Control strategy (segmentation-related only)
- Use the thermometer region to reduce the number of high-weight simultaneous flips and lower typical worst-case impulse energy.
- Treat boundary codes as a dedicated verification subset (impulse + large-step settling + static linearity), not as a detail that will average out.
- When selecting the split, use worst-case code behavior as an input—not only typical specs or resolution targets.
Diagram focus: worst-case behavior clusters in structural hot zones; boundary and large steps must be verified as an explicit set rather than averaged across broad sweeps.
Settling vs glitch (do not mix the metrics)
Three different time-domain phenomena
Glitch (impulse)
The immediate disturbance injected at the update edge. It is an energy-like metric and is strongly code-pattern dependent.
Settling (to an error band)
The time to enter an error band (0.5 LSB / 1 LSB / ppm) and remain inside it for the defined window.
Overshoot / ringing
A dynamic response dominated by loop stability and parasitics. It can violate the error band long after the initial spike is gone.
Why glitch can look small while settling is still poor
A segmented DAC can reduce worst-case simultaneous switching and therefore reduce the injected impulse, yet the output may still settle slowly if the external network dominates: Rout + Cload + driver loop set the dominant pole(s). In that case, the spike can be modest while the tail remains long, delaying entry into a tight error band.
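The "enter-and-stay" criterion is worth encoding explicitly, because a naive first-crossing detector counts ringing waveforms as settled. A sketch over a captured record (hypothetical function name; assumes the record is long enough to judge "stay"):

```python
import numpy as np

def settling_time(t, v, v_final, band):
    """First time after which v stays inside v_final +/- band for the rest
    of the record ('enter-and-stay'); None if it never settles in-record."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    inside = np.abs(v - v_final) <= band
    if not inside[-1]:
        return None                      # still outside the band at record end
    outside_idx = np.nonzero(~inside)[0]
    # The last out-of-band sample decides the settling instant.
    return t[0] if outside_idx.size == 0 else t[outside_idx[-1] + 1]
```

With this definition, a waveform that enters the band, rings back out, and re-enters reports the later re-entry time — which is the behavior the "hold window" language in datasheet comparisons is trying to capture.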
Reading rule
If the settling number changes strongly with load, bandwidth, or output configuration, the external chain is the dominant lever—not the coding method.
Overshoot and ringing are usually not “segmentation problems”
- Driver stability under capacitive load can create underdamped behavior and long ring-down.
- Parasitic L and C in the output path can form resonances that dominate time-domain response.
- Return-path mistakes can inject ground bounce into the output reference point, turning a clean step into oscillation.
Compensation details belong in the driver design deep dive; here the focus stays on separating metrics and interpreting conditions correctly.
How to compare settling specs without being misled
Error band
0.5 LSB, 1 LSB, and ppm-level bands are not equivalent. A tighter band almost always extends the reported settling time.
Step amplitude / code pattern
Small steps can hide slow tails and ringing that appear on large steps or boundary-adjacent worst-case patterns.
Output configuration
Buffer choice, output mode, and reference distribution can change the dominant pole(s). Compare only when configurations match.
Bandwidth / hold window
Bandwidth limits can hide spikes, while “enter-and-stay” windows decide whether ringing is counted as settled.
Minimal selection habit
For segmented DACs, evaluate settling and ringing on the structural worst-case set (boundary + large MSB steps + extremes), not only on typical mid-scale steps.
Diagram focus: glitch impulse describes the update-edge disturbance, while settling is defined by an error band and conditions; ringing can violate the band long after the spike.
Output/load interaction (external chain dominates what is observed)
System reality: the output chain defines the delivered behavior
A segmented core can reduce worst-case switching behavior, but the system sees the combined chain: DAC core → buffer/driver → simple RC/LPF → load plus the reference and return paths. Many “DAC-looking” artifacts are dominated by this external network rather than by the coding method.
Why differential outputs tend to be more robust
- Even-order suppression improves distortion behavior when the external chain remains symmetric.
- Controlled common-mode reduces sensitivity to external coupling and reference noise injection.
- Return-path discipline is easier to enforce when the signal is carried as a pair with a defined reference structure.
Cload creates a common measurement illusion
Increasing load capacitance can make the glitch spike look smaller because the high-frequency content is filtered by the RC behavior. However, the same capacitance often worsens settling by lowering the dominant pole and increasing the demand on the driver loop.
Comparison rule
Compare glitch and settling only under matched load, bandwidth, and output configuration. A “smaller glitch” can be a filtering artifact, not an inherently better core.
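The illusion can be reproduced with a toy first-order model: drive the same glitch current into an RC output node with two capacitor values. This is a forward-Euler sketch with arbitrary illustrative values, not a driver model:

```python
import numpy as np

def rc_response(i_glitch, dt, r, c):
    """Forward-Euler simulation of a first-order RC output node driven by a
    glitch current pulse: C dv/dt = i - v/R."""
    v, out = 0.0, []
    for i in i_glitch:
        v += dt * (i - v / r) / c
        out.append(v)
    return np.array(out)

dt = 1e-11
pulse = np.array([1e-3] * 100 + [0.0] * 2000)   # 1 ns, 1 mA glitch current
v_light = rc_response(pulse, dt, 50.0, 1e-12)   # tau = 50 ps
v_heavy = rc_response(pulse, dt, 50.0, 1e-10)   # tau = 5 ns
```

The heavier load shows a visibly smaller peak (the RC filters the fast content) but a much longer tail: at the end of the record the heavy-load node is still farther from its final value. A "smaller glitch" under extra capacitance is a filtering artifact, not a better core.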
Minimum output-chain guidance (keep it short and repeatable)
- Short current loops: minimize loop area from driver output to load and back to the reference return.
- Reference return is sacred: keep reference and output return paths clean and predictable.
- No digital return through the output reference point: prevent fast digital currents from crossing sensitive analog reference nodes.
Deeper driver compensation and filter design belong in dedicated topics; this section keeps only the levers that most strongly affect boundary and large-step behavior.
Diagram focus: the same core can look “better” or “worse” depending on return paths, reference distribution, and load; keep sensitive nodes clean and loops short.
Dynamic performance & code-dependent spurs (SFDR/THD)
Separate the metrics before blaming the core
- THD is dominated by harmonic generation (nonlinearity mapping).
- SFDR is dominated by the single largest spur (harmonic or non-harmonic).
- Code-dependent spurs are often created when a repeated code sequence modulates small static errors into discrete tones.
Key reading rule
A worse SFDR does not automatically mean “more nonlinearity.” It may be a sequence-bound spur that a different code pattern would move or remove.
Common spur attribution paths (and what they imply)
Repeated code pattern
Periodic sequences act like a modulator that makes element mismatch and gradients visible as discrete tones. If the spur follows the sequence, the pattern is the lever.
Boundary mapping hotspot
Segmentation boundaries can re-map static errors into specific code bands. Repeated visits to boundary-adjacent codes can produce “structural” spurs.
Output-chain nonlinearity
Driver and load nonlinearities can amplify small code-dependent errors into harmonics or IMD. If distortion changes strongly with load or swing, the chain is the lever.
Minimal isolation experiment set (change one variable at a time)
- Change code pattern (same amplitude): keep the same output swing while changing the sequence order and periodicity. Watch whether spurs move or vanish.
- Change update rate / sampling rate: check whether the spur follows the update process (location scales with fs or its artifacts).
- Change clock cleanliness: adjust jitter/PLL/clock source and observe whether the floor or specific tones respond.
- Change load / driver configuration: observe whether harmonics or IMD change disproportionately with load or swing.
A reliable conclusion requires a controlled comparison. If code pattern and clock are changed together, the attribution becomes invalid.
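The pattern-as-modulator mechanism can be demonstrated with a static mismatch model: feed a coherent sine code sequence through a hypothetical segmented DAC whose thermometer units carry fixed errors, and look at the error-only spectrum. All names and values below are illustrative:

```python
import numpy as np

def dac_out(codes, unit_err, b_bits=5):
    """Static segmented-DAC model: thermometer units carry fixed mismatch,
    so revisiting the same codes re-exercises the same physical errors."""
    out = []
    for c in codes:
        msb = c >> b_bits
        lsb = c & ((1 << b_bits) - 1)
        out.append((1.0 + unit_err[:msb]).sum() * (1 << b_bits) + lsb)
    return np.array(out)

rng = np.random.default_rng(0)
err = rng.normal(0.0, 0.01, 7)          # 1% mismatch across 7 unit elements
n, k = 1024, 37                         # coherent sine on a prime FFT bin
codes = np.round(127.5 + 100 * np.sin(2 * np.pi * k * np.arange(n) / n)).astype(int)
# Error-only spectrum: discrete tones created purely by code-dependent errors.
spurs = np.abs(np.fft.rfft(dac_out(codes, err) - dac_out(codes, np.zeros(7))))
```

Because the error sequence is a deterministic function of the code sequence, changing the pattern (amplitude held constant) moves or removes these tones, while a genuinely clock-bound or chain-bound spur would not follow the pattern — which is exactly the attribution test above.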
Quick decision rules (pattern-bound vs clock-bound vs chain-bound)
Pattern-bound spur
The spur changes when the sequence periodicity or ordering changes at the same amplitude. Boundary-adjacent activity can be a dominant contributor.
Clock-bound issue
The floor or tones respond strongly to clock quality or PLL configuration. If jitter dominates, the floor typically shifts with clock changes.
Chain-bound distortion
Harmonics/IMD change strongly with load, swing, or driver selection. The same core can look dramatically different under different output chains.
Note on update shaping (kept intentionally brief)
Some DAC families offer update shaping options that alter spectral distribution. System-level RTZ/NRZ trade-offs belong to the current-steering / RF DAC deep dive and are not expanded here.
Diagram focus: isolate spurs by changing one lever at a time—sequence, then clock, then load/driver—until the spur clearly binds to a root cause.
Engineering checklist (bring-up, verification, production test)
Golden rule: if conditions are not comparable, results are meaningless
Record and lock: output configuration, load, bandwidth, window/threshold definitions, code pattern, and temperature state. Only then can data be compared across builds or devices.
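One lightweight way to enforce the "record and lock" rule is to make the condition set a first-class object and refuse comparisons when any field differs. A sketch using Python dataclasses; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class MeasCondition:
    """Locked measurement conditions. Two results are comparable only when
    their condition records match field-for-field."""
    output_config: str     # e.g. "differential, buffered"
    load: str              # e.g. "10 pF || 1 kOhm"
    bandwidth_hz: float
    window: str            # settling/glitch window definition
    code_pattern: str      # e.g. "boundary set, span=1"
    temperature_c: float

def comparable(a: MeasCondition, b: MeasCondition) -> bool:
    """Gate any cross-build/device comparison on identical conditions."""
    return asdict(a) == asdict(b)
```

Storing one such record alongside every glitch, settling, or linearity number makes the "invalid comparison" failure mode a hard error in tooling instead of a silent mistake in a spreadsheet.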
Bring-up (establish a clean baseline first)
- Reference stability: confirm reference noise and return integrity before measuring linearity or FFT.
- Supply noise: verify analog/digital partitioning and decoupling effectiveness under update activity.
- Return paths: ensure fast digital return currents do not cross output/reference nodes.
INL/DNL (sweep strategy and repeatability)
- Sweep coverage: include full sweeps plus mandatory boundary-adjacent code coverage.
- Integration time: keep integration consistent across codes to avoid mixing noise with linearity.
- Thermal state: separate warm-up drift from static shape by controlling temperature and time.
- Repeatability: run multiple passes and confirm whether the INL “shape” is stable and reproducible.
Glitch impulse (make the measurement comparable)
Lock the condition fields
- Trigger definition
- Bandwidth limit
- Probe ground method
- Integration window
- Code-pattern set (boundary + MSB steps + extremes)
Interpretation habit
Report both typical and worst-case values, and always identify which code zone produced the worst case. Do not rely on averages to represent boundary behavior.
Large-step verification (overshoot and settling criteria)
- Key step set: boundary crossings, MSB-step patterns, and extremes transitions.
- Overshoot criterion: check whether ringing crosses the defined error band and for how long.
- Settling criterion: verify enter-and-stay behavior to 0.5 LSB / 1 LSB / ppm under the documented condition set.
Dynamic FFT (tone choice and spur identification)
- Tone selection: avoid accidental periodicity that can create misleading sequence-bound spurs.
- Windowing consistency: keep window and record length consistent for comparisons.
- Spur isolation: apply the one-lever-at-a-time isolation flow from the dynamic-performance section (pattern → clock → load/driver) before concluding root cause.
Production strategy (risk-based: must-test vs sample-test)
Must-test
Baseline bring-up health, boundary-adjacent worst-case steps, and any metric that historically fails in hot zones or under load sensitivity.
Sample-test
Stable shape metrics that show strong repeatability under controlled conditions, and secondary FFT checks once the root cause levers are verified.
Diagram focus: a repeatable test bench is defined by locked conditions and a structural worst-case subset (boundary, large steps, extremes), not by one-off “good-looking” plots.
Applications & IC selection logic (segmented DAC)
Scope boundary
This section explains why segmented (thermometer + binary) DACs fit certain workloads and how to select parts by mapping fields to risks. It intentionally avoids scenario encyclopedias and keeps sync/interface implementation details as jump-outs to dedicated topics.
Applications (why segmentation fits, without expanding the universe)
Mid/high-speed, high-accuracy control waveforms (calibration, stimulus, closed-loop)
Why it fits
These workloads often need clean, repeatable updates rather than ultra-wideband RF synthesis. Segmentation reduces worst-case simultaneous switching in high-weight regions, lowering the probability of large update-edge disturbances on structurally hard transitions.
What to check
- Large-step settling definition (threshold + window + step size + load)
- Boundary-adjacent worst-case behavior (not only typical mid-scale)
- Driver/load sensitivity (does performance collapse with Cload or swing?)
What can ruin it
Output-chain poles and stability can dominate. A modest glitch spike can coexist with poor settling if the driver loop and return paths are not controlled.
Low-glitch update setpoints (sensitive AFE biasing, disturbance-sensitive nodes)
Why it fits
Setpoint updates often care more about the update-edge disturbance than about wideband spectral purity. By moving large-weight transitions into thermometer-coded behavior, segmentation can reduce structurally large glitches on major transitions.
What to check
- Glitch impulse with full test conditions (BW, window, load, update pattern)
- Worst-case code set: boundary + large MSB steps + extremes
- Step response: overshoot and “enter-and-stay” settling under the same conditions
What can ruin it
Load capacitance can “hide” the spike while worsening settling. A smaller visible glitch is not automatically a better system outcome.
Multi-channel phase-aligned control (value is relative consistency; sync details are jump-outs)
Why it fits
Many multi-channel controllers need repeatable relative behavior (channel-to-channel step shape, boundary behavior, and drift coherence). Segmentation helps by making worst-case code zones definable and testable, improving the practicality of relative consistency screening.
What to check
- Inter-channel gain/offset and drift coherence over temperature
- Simultaneous update capability and relative step/settling match
- Boundary-zone consistency across channels
What can ruin it
Clock/trigger distribution skew and board-level return-path differences can dominate. Relative verification requires locked conditions across channels.
Jump-outs (details elsewhere)
Sync/trigger implementation, interface timing, and protocol details should be handled in dedicated synchronization/interface topics.
Diagram focus: pick an application bucket, then verify the dominant requirement dimension and the external chain sensitivity; use jump-outs for sync/interface details.
IC selection logic (fields → risks → inquiry questions)
Procurement-first habit
Ask for definitions and conditions before comparing numbers. If test conditions differ (bandwidth, window, load, pattern), the comparison is invalid.
1) Segmentation scheme / thermometer bits / monotonic guarantee
Risk mapping
Boundary-adjacent anomalies, DNL degradation, structural spurs concentrated in specific code zones.
Inquiry questions
- Which bits are thermometer-coded vs binary-coded, and where is the segmentation boundary?
- Is monotonicity guaranteed over temperature and supply, or only typical at 25°C?
- Are boundary-adjacent codes treated as a specified worst-case subset in characterization?
2) INL/DNL over temperature
Risk mapping
Error amplification under thermal gradients, unstable INL shape, calibration coefficient drift, inconsistent multi-unit behavior.
Inquiry questions
- Provide INL/DNL limits over the full temperature range and the test conditions.
- Is the INL “shape” repeatable across temperature cycling and supply variation?
3) Glitch impulse + test conditions
Risk mapping
False comparisons caused by different bandwidth/window/load definitions; missed worst-case boundary and large-step transitions.
Inquiry questions
- State glitch impulse with bandwidth limit, integration window, load, output mode, and update pattern.
- Which transitions produce the worst-case glitch (boundary, large MSB steps, extremes)?
4) Settling spec definition
Risk mapping
System fails to meet “enter-and-stay” settling; closed-loop or sampled systems show repeatable but unexplained errors on large or boundary steps.
Inquiry questions
- Define settling threshold (0.5 LSB / 1 LSB / ppm) and the “hold window” requirement.
- Specify the step size, load, output mode, and measurement bandwidth used for the settling number.
- Is the settling spec guaranteed for boundary-crossing large steps?
5) Output drive / compliance / differential options
Risk mapping
Output-chain dominates distortion or ringing; performance collapses under real load; “core looks good” but system does not.
Inquiry questions
- Provide output compliance, recommended load network, and conditions for specified SFDR/THD.
- Which output mode (single-ended/differential) is required to meet dynamic targets?
- How sensitive are THD/SFDR and settling to Cload and swing under the recommended driver chain?
Representative part numbers (examples to anchor discussions)
These examples help anchor “segmented” discussions in procurement/FAE conversations. They are not a comprehensive list.
- DAC1001D125 — segmented scheme described as 7-bit thermometer + 3-bit binary (dual DAC, 10-bit class).
- DAC5652 — segmented architecture highlighted for reduced glitch energy and improved DNL/SFDR (dual 10-bit class).
- DAC904 / DAC900 family — advanced segmentation architecture used to improve SFDR in high-speed reconstruction (14-bit class).
- AD9775 — TxDAC family referenced as a segmented DAC example in high-speed DAC materials.
Diagram focus: the selection conversation stays productive when fields map to risks and every number includes comparable conditions.
FAQ: Segmented DAC (thermometer + binary)
These FAQs stay within this page’s scope: segmentation ratio, boundary codes, glitch vs settling, code-dependent spurs, temperature drift attribution, and condition comparability. No interface protocol, RF direct-sampling system design, or reconstruction filter deep dives are expanded here.