AWG / Function Generator: DAC, Filters, Calibration & Output Buffer

An AWG/function generator is judged by the waveform delivered at the DUT port—amplitude accuracy/flatness, timing/phase repeatability, and spectral purity—not by DAC bits or sample rate alone. A solid design combines a clean high-speed DAC and clocking with reconstruction filtering, robust output buffering, and calibration/self-test loops that keep performance repeatable and traceable from lab to field.

H2-1 · What an AWG / Function Generator actually guarantees

The real promise is not “what the DAC generates,” but what arrives at the DUT port. Sampling rate and bit depth are ingredients; the delivered waveform is a port-level system result shaped by timing, analog reconstruction, output buffering, and calibration.

Port-level delivery has three dimensions
  • Amplitude: accuracy, flatness vs frequency, offset, range consistency under the intended load (50 Ω / Hi-Z).
  • Time: trigger-to-output repeatability, phase coherency on re-trigger/burst, channel-to-channel skew (multi-channel).
  • Spectrum: noise floor, spurs, and distortion (SFDR / THD / IMD) that determine “cleanliness” at the port.
Where Function Generator ends and AWG begins
  • Function generator (DDS/NCO): best for continuous waveforms with strong phase continuity and fast frequency changes.
  • AWG (ARB / memory + sequencer): best for custom transients, long sequences, segmented patterns, and deterministic bursts.
  • Hybrid: combines coherent carrier generation with programmable envelope/segments, but relies heavily on calibration and timing alignment.
Common pitfall to catch early

“14-bit, 1 GS/s” can still deliver poor results when reconstruction filtering, output buffering, load behavior, and calibration are not designed as a single port-level chain. If performance changes sharply with load (50 Ω vs Hi-Z), range, or temperature, the limitation is often port delivery rather than the DAC headline spec.

One-line acceptance (Release Gate)

Under the specified frequency band and load, the amplitude error / phase repeatability / SFDR meet targets and remain repeatable across ranges and temperature corners used by the application.

F1 — Port-level delivery triangle for AWG/function generator outputs. Diagram showing three delivery dimensions—Amplitude, Time, Spectrum—converging to a single waveform at the DUT port, with key measurable tags such as flatness, offset, jitter, skew, SFDR, IMD, and noise floor.

H2-2 · Architecture map: DDS, ARB, and Hybrid (NCO + interpolation + memory)

Different architectures trade waveform freedom, phase coherency, and spur behavior. The purpose of this map is to locate every later discussion (DAC, reconstruction filter, output buffer, calibration) on a single signal path.

Three common generator styles (practical view)
  • DDS / Function: strongest phase continuity for CW tones; typically limited by algorithm/LUT choices and output chain.
  • ARB (memory + sequencer): best for transients and long/segmented patterns; must manage segment boundaries, switching delay, and marker alignment.
  • Hybrid: coherent NCO carrier plus programmable segments/envelope; powerful but depends on tight timing and calibration tables.
Key modules (module → symptom if weak)
  • Waveform memory → long patterns drift or repeat with boundary artifacts if formatting/quantization is not controlled.
  • Sequencer → segment transitions can create short spikes/steps; marker timing may slip during jumps.
  • Interpolation / NCO → mode changes can reshuffle deterministic spurs and images; phase continuity needs explicit handling.
  • DAC interface → data/clock relationships can turn into repeatable spurs and “mysterious” frequency-dependent artifacts.
  • Trigger / marker path → burst start phase and latency repeatability degrade when timing is not coherently referenced.
  • Calibration injection → without frequency/range/temperature tables, port flatness and phase alignment usually drift.
Decision questions (fast selection)
  • Need long sequences and segment programming? Favor ARB / Hybrid.
  • Need strict phase continuity on re-trigger/burst? Favor DDS / Hybrid with coherent timing.
  • Need multi-channel coherency? Prioritize shared reference + calibrated skew/phase alignment at the port.
What to measure (minimum proof)
  • Phase continuity: repeated trigger/burst overlays should start with stable phase (small distribution, not random drift).
  • Segment boundary glitch: zoom the A→B boundary; check for spikes/steps that exceed the allowed transient budget.
  • Sequence latency: measure marker-to-waveform alignment across sequence paths; confirm repeatable timing.
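The phase-continuity check above can be reduced to a single number. Below is a minimal sketch on synthetic data (the 0.01 rad trigger-scatter model and every constant are illustrative, not from this document): each triggered capture is correlated against a complex exponential to estimate its start phase, and the spread of those phases is the repeatability figure.

```python
import cmath
import math
import random
import statistics

def start_phase(capture, f_norm):
    """Estimate the start phase of a tone in one triggered capture by
    correlating against a complex exponential at f_norm (cycles/sample)."""
    acc = sum(x * cmath.exp(-2j * math.pi * f_norm * n)
              for n, x in enumerate(capture))
    return cmath.phase(acc)

# Synthetic experiment: 20 "re-triggers" of a 0.05 cycles/sample tone,
# each with a small random start-phase error standing in for trigger scatter.
random.seed(0)
f_norm = 0.05
phases = []
for _ in range(20):
    err = random.gauss(0.0, 0.01)  # rad RMS, hypothetical scatter
    cap = [math.sin(2 * math.pi * f_norm * n + err) for n in range(400)]
    phases.append(start_phase(cap, f_norm))

spread = statistics.pstdev(phases)  # tight spread = good phase repeatability
print(f"start-phase spread: {spread * 1e3:.2f} mrad")
```

In practice the captures would come from a scope or digitizer; the same reduction then directly answers "small distribution, not random drift."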
F2 — AWG signal-generation pipeline (memory → sequencer → interpolation → DAC → filter → buffer → port). Block diagram of an AWG pipeline with side injections for trigger/marker, low-jitter clock, and calibration LUT, ending at a 50 Ω output port.

H2-3 · High-speed DAC: root causes that turn into SFDR spurs

In an AWG, many “mysterious” spurs are not random—they are deterministic fingerprints of DAC mechanisms. The goal is to map each visible spur pattern to a small set of root causes that directly affect port-level SFDR.

Four spur families (mechanism → typical symptom)
  • Static linearity (INL/DNL, segmented mismatch): repeatable spurs that strongly depend on frequency ratio and code periodicity.
  • Dynamic glitch (switch transients, code-edge charge injection): spurs that worsen with large transitions, high output level, or mode changes.
  • Clock coupling (clock feedthrough / timing coupling): spurs whose positions move with sample-rate, interpolation mode, or clock routing changes.
  • Reference / supply modulation: symmetric sidebands and “breathing” spurs that track system activity, temperature, or power conditions.
Update mode: NRZ vs RZ/RTZ (practical spectral impact)
  • NRZ: holds the value for the whole sample; image distribution and high-frequency energy depend strongly on interpolation and reconstruction filtering.
  • RZ / RTZ: returns toward zero within each sample; the out-of-band energy shape changes, often shifting filter pressure and spur visibility.

The key point is not “better vs worse,” but that update mode can reshuffle the spur pattern by changing how energy spreads near Nyquist and beyond.

Why some frequencies look worse (code periodicity → deterministic spurs)

Periodic waveforms create repeating code sequences. Any repeating DAC error (mismatch, timing skew, edge transient) can add coherently in frequency, producing discrete spurs. When the output tone forms a short repeating pattern relative to the sample clock, spur energy can concentrate more strongly, making specific frequency regions look disproportionately bad.

Verification: build a “spur map” fingerprint
  • Single-tone sweep: step frequency across the target band; record SFDR and top spurs (offset + level).
  • Amplitude sweep: repeat at multiple output levels; strong level dependence suggests nonlinearity/glitch or output stress.
  • Temperature sweep: repeat at hot/cold corners used in operation; drift suggests reference/supply or mismatch sensitivity.
  • Mode sweep: change interpolation/update mode or sample rate; spur movement is a strong hint of clock-coupled mechanisms.
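One point of the spur map can be sketched as a toy DFT-based SFDR measurement on synthetic data (the tone/spur bins and levels are invented for illustration; a real sweep would loop this over frequency, amplitude, temperature, and mode and log each result):

```python
import cmath
import math

def dft_mag(x):
    """Magnitude spectrum (first half) of a real sequence, amplitude-scaled."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N // 2)]

def sfdr_dbc(x, tone_bin):
    """Carrier-to-worst-spur ratio, excluding DC and the carrier bin."""
    mags = dft_mag(x)
    carrier = mags[tone_bin]
    worst_spur = max(m for k, m in enumerate(mags) if k not in (0, tone_bin))
    return 20 * math.log10(carrier / worst_spur)

# Synthetic "spur map" point: tone at bin 13 plus a -60 dBc spur at bin 31.
N = 256
x = [math.sin(2 * math.pi * 13 * n / N) + 1e-3 * math.sin(2 * math.pi * 31 * n / N)
     for n in range(N)]
print(f"SFDR ≈ {sfdr_dbc(x, 13):.1f} dBc")
```

Recording the worst spur's offset as well as its level is what turns the sweep into a "fingerprint" that can be matched against the four root-cause families.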
F3 — DAC spur root-cause tree and a simplified spectrum sketch. Diagram with a central DAC core connected to four spur families: mismatch, glitch, clock feedthrough, and reference/supply modulation. A small spectrum sketch shows a main tone with a few spurs and symmetric sidebands.

H2-4 · Clock & Trigger: how jitter becomes amplitude error

Timing noise is not an abstract number. For high-frequency outputs, sampling jitter converts time error into amplitude noise, raising the noise floor and reducing usable dynamic range. Trigger quality determines whether waveform start phase and latency are repeatable.

Engineer’s formula (usable acceptance tool)
SNR_jitter ≈ −20·log10( 2π · f_out · σt )
  • Higher f_out makes the same σt much more damaging (high-frequency tones “pay” for jitter).
  • Use the equation to back-solve σt from a target SNR/SFDR budget at the highest required f_out.
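The formula and its back-solve are easy to mechanize; the 100 MHz / 70 dB numbers below are arbitrary worked-example values, not requirements from this document:

```python
import math

def snr_jitter_db(f_out_hz, sigma_t_s):
    """Jitter-limited SNR for a sine: SNR ≈ -20*log10(2*pi*f_out*sigma_t)."""
    return -20 * math.log10(2 * math.pi * f_out_hz * sigma_t_s)

def max_jitter_for_snr(f_out_hz, snr_target_db):
    """Back-solve the RMS jitter budget sigma_t from a target SNR at f_out."""
    return 10 ** (-snr_target_db / 20) / (2 * math.pi * f_out_hz)

# 1 ps RMS jitter on a 100 MHz tone -> about 64 dB jitter-limited SNR
print(f"{snr_jitter_db(100e6, 1e-12):.1f} dB")
# Budget for 70 dB at 100 MHz -> about 0.5 ps RMS
print(f"{max_jitter_for_snr(100e6, 70.0) * 1e15:.0f} fs")
```

Running the back-solve at the highest required f_out gives the clock-tree σt budget before any hardware is chosen.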
Trigger & phase repeatability (what “good” looks like)
  • Start phase: repeated triggers should launch the waveform with stable start phase (not random rotation).
  • Retrigger: repeated runs should keep phase and timing within a tight distribution at the port.
  • Burst: envelope edges and marker alignment should remain consistent across bursts and sequence paths.
Multi-channel coherency (system view, no PLL deep-dive)
  • Skew: channel-to-channel time offset must be measurable and correctable to maintain alignment.
  • Phase: coherent channels require a shared reference and calibrated phase relationship at the output ports.
  • Shared reference vs independent clocks: shared references simplify coherency; independent clocks tend to widen phase distributions over time.
Verification: repeat-trigger distributions
  • Repeat trigger N times: measure arrival-time distribution (latency spread) and phase distribution (start phase spread).
  • Sweep f_out: check whether distributions worsen at higher frequency, consistent with jitter sensitivity.
  • Multi-channel run: measure skew distribution and phase difference distribution at the ports.
F4 — Reference clock and trigger alignment path for phase-coherent outputs. Diagram showing a reference input into a clock tree feeding DAC clocks for multiple channels with skew blocks, and a trigger input path through time alignment to marker output and waveform start, labeled with σt and "phase coherent."

H2-5 · Reconstruction filter: why “flatness” is a system property

A reconstruction filter is not just a component after the DAC. It is a port-delivery control element that sets the trade space between passband flatness, image suppression, and phase/group-delay behavior. As a result, "flatness" is rarely guaranteed by the DAC alone—it emerges from the entire chain (update mode + interpolation + filter + buffer + load).

Reconstruction goals (what must be controlled)
  • Suppress Fs ± Fin images and DAC high-frequency energy so the DUT only sees intended content.
  • Stabilize passband flatness so amplitude calibration does not collapse away from a single “sweet spot” frequency.
  • Manage phase / group delay so time-domain waveforms (edges, bursts, shaped pulses) stay predictable.
Key trade-offs (selection guidance)
  • Flatness vs image suppression: steeper cutoff often improves images, but can increase ripple or complicate calibration.
  • Phase linearity vs steepness: “gentler” filters often keep group delay smoother; very sharp edges can distort pulse shapes.
  • Waveform type matters: a clean single-tone may tolerate more phase ripple than a burst/pulse that must keep edges and timing.
Time-domain impact (intuitive cause → effect)

Pulses and steps contain wideband energy. The reconstruction filter shapes that energy: sharper frequency cutoffs tend to create more visible overshoot and ringing. Passband ripple and non-smooth group delay can further reshape edges and burst envelopes. For time-critical patterns, group delay smoothness is often as important as amplitude flatness.

Verification (minimum curves to capture)
  • Magnitude sweep: passband amplitude vs frequency (flatness map).
  • Phase / group delay: smoothness across the band (time-domain risk indicator).
  • Image suppression: compare image energy before/after filtering (no need for absolute units to see the trend).
F5 — Reconstruction filter impact: images suppressed, flatness and group delay shaped. Top panel compares the spectrum before vs after filtering, showing reduced image peaks. Bottom panel sketches passband ripple and group-delay behavior without numeric units.

H2-6 · Output buffer & 50Ω drive: the port is the product

Users do not buy a “DAC pin.” They buy a specified waveform at the output port. The output buffer, range switching, offset injection, protection, and thermal behavior determine whether amplitude accuracy, distortion, and repeatability hold under real loads and cables.

50Ω vs Hi-Z: why calibration is different
  • 50Ω load demands higher current and stresses output swing; amplitude and distortion are typically harder at high frequency.
  • Hi-Z load can look “cleaner,” but can mislead if the real application expects 50Ω termination or long cable behavior.
  • Specifications must be read as load-conditional: “meets flatness and SFDR at the port under the intended termination.”
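The load-conditional reading follows from the usual source/load divider. The sketch below is an idealized model (real output stages add frequency-dependent behavior), but it shows why the same setting delivers roughly half the voltage into 50 Ω:

```python
def delivered_vpp(v_open_pp, r_source=50.0, r_load=50.0):
    """Delivered peak-to-peak voltage for an ideal source with output
    impedance r_source driving r_load (model Hi-Z as a huge resistance)."""
    return v_open_pp * r_load / (r_source + r_load)

v_open = 2.0  # hypothetical open-circuit swing, Vpp
into_50 = delivered_vpp(v_open, r_load=50.0)   # 50 Ω termination -> half
into_hiz = delivered_vpp(v_open, r_load=1e6)   # Hi-Z -> nearly full swing
print(into_50, into_hiz)
```

This is also why an amplitude spec quoted "into 50 Ω" roughly doubles when the same output is read on a Hi-Z scope input.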
Why high-frequency, large amplitude is hardest
  • Bandwidth & drive limits: output stages compress or add distortion as frequency and swing increase.
  • Load sensitivity: heavier loads (50Ω, long cables, imperfect terminations) amplify gain error and nonlinear behavior.
  • Thermal rise: higher output power raises temperature, shifting gain and distortion unless the system compensates.
Range switching, offset, protection (system effects)
  • Range switching changes gain paths; consistency across ranges is a key part of “port accuracy.”
  • Offset injection is a port feature; it must not destabilize noise floor or distortion beyond the stated limits.
  • Protection (short/over-current/over-temp) should fail predictably and recover cleanly without corrupting normal performance.
Verification: load sweep (port-level proof)
  • Loads: compare 50Ω vs Hi-Z (and a representative cable condition).
  • Amplitude & frequency: test low/mid/high frequency with small/mid/near-full-scale swing.
  • Metrics: track amplitude error, distortion/SFDR, and changes from cold start to thermal steady state.
F6 — Port delivery chain: DAC → filter → buffer → range switch → 50 Ω output. Block diagram of the output chain, with side blocks for offset injection, protection, and thermal sensing, emphasizing that the port is the product.

H2-7 · Amplitude/phase calibration: flattening, compensation & predistortion

Calibration is what turns “good hardware” into repeatable port delivery. The same DAC and output chain can behave like a higher-grade AWG when amplitude flatness, phase vs frequency, and channel-to-channel coherency are corrected with the right coefficient structure and application points.

Calibration layers (what each layer delivers)
  • DC layer (offset / gain): establishes a reliable baseline so higher-frequency corrections stay meaningful.
  • Amplitude vs frequency (flatness): reduces frequency-dependent gain error across the specified band at the port.
  • Phase vs frequency: stabilizes phase slope/shape so bursts and shaped waveforms remain predictable.
  • Multi-channel consistency: aligns channel-to-channel amplitude/phase/skew to enable coherent outputs.
Coefficient/LUT organization (selection-grade logic)

Coefficients must be indexed in the same way the output chain changes. A single “one-size table” often fails because the signal path is not constant.

  • Range-indexed: each gain/attenuation path has distinct errors and must be corrected independently.
  • Temperature-indexed: use buckets or corner points so drift does not leak into “flatness” and phase performance.
  • Band/mode-indexed: correction density can change by band; mode changes (interpolation/filter setting) may warrant separate sets.
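One minimal way to organize such a table, keys and coefficient values below being entirely hypothetical, is a dictionary indexed exactly the way the output chain changes, with no silent fallback to a wrong set:

```python
# Range/temperature/band-indexed calibration coefficients.
# Keys and values are hypothetical; a real table comes from factory cal.
cal_lut = {
    # (range_id, temp_bucket, band): (gain_corr, phase_corr_deg)
    ("1Vpp",  "25C", "low"):  (1.002, -0.3),
    ("1Vpp",  "25C", "high"): (0.985, -2.1),
    ("100mV", "25C", "low"):  (1.010, -0.4),
}

def lookup(range_id, temp_bucket, band):
    """Fail loudly on a missing index instead of silently reusing
    coefficients from another range/temperature/band."""
    try:
        return cal_lut[(range_id, temp_bucket, band)]
    except KeyError:
        raise KeyError(f"no cal set for {(range_id, temp_bucket, band)}; "
                       "refusing to fall back to a different path")

gain, phase = lookup("1Vpp", "25C", "high")
print(gain, phase)
```

The fail-loud lookup is the point: a "one-size table" bug usually surfaces as flatness that quietly degrades on one range or temperature corner.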
Predistortion (when it helps vs when it backfires)
  • Works best when output-chain nonlinearity is stable, modelable, and within the intended bandwidth and load condition.
  • Degrades when behavior changes sharply with temperature, range switching, protection limiting, or heavy load/cable sensitivity.
  • Practical framing: predistortion is a targeted “port polish” for specific waveforms and bands, not a universal cure.
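A toy memoryless example illustrates the "stable and modelable" condition (the cubic model and the a3 coefficient are invented for illustration, not a real output-stage model):

```python
# If the output chain is approximately y = x - a3*x^3 (a3 fitted from
# measurement), pre-warping the samples makes the cascade more linear.
a3 = 0.05

def chain(x):
    """Stand-in for the mildly nonlinear output stage."""
    return x - a3 * x ** 3

def predistort(x):
    """First-order inverse of the cubic model: x' = x + a3*x^3."""
    return x + a3 * x ** 3

x = 0.9  # near-full-scale sample, where the cubic term matters most
raw = chain(x)
corrected = chain(predistort(x))
print(abs(raw - x), abs(corrected - x))  # residual error shrinks
```

If a3 drifts with temperature, range, or load, the pre-warp over- or under-corrects, which is exactly the "backfires" case above.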
Verification (before vs after, the only proof that matters)
  • Flatness curve: amplitude vs frequency improvement across the band.
  • Phase curve: phase vs frequency (or equivalent delay shape) improvement.
  • Distortion improvement: IMD / ACPR trend reduction under the same band, range, and load condition.
F7 — Calibration closed loop: measure → update → apply. Block diagram showing a closed loop: port sampling into a generic power/phase detector, a calibration engine updating a coefficient table, digital correction applied before the DAC, and the output chain to the port. Small tags indicate indexing by range, temperature, and band.

H2-8 · Spur management: traceable causes and systematic suppression

Spurs become manageable when treated as fingerprints, not mysteries. The most useful approach is to classify spur shapes, then run a small multi-dimensional test matrix to see how peaks move with frequency, amplitude, and mode. This isolates the dominant coupling paths that are actually controllable inside the AWG.

Three spur families (shape → what it suggests)
  • Deterministic (code/clock related): stable line spurs that may move with sample-rate or mode settings.
  • Intermodulation (nonlinearity): grows strongly with output level and worsens in “high-frequency + large swing + heavy load.”
  • Modulation (coupling): symmetric sidebands or a raised noise skirt that tracks system activity or ripple.
Suppression priority (stay inside AWG-controllable knobs)
  1. Clock & synchronization: stabilize repeatability first, otherwise spur readings drift and diagnosis fails.
  2. Output-chain linearity: reduce IMD-like behavior before relying on “cosmetic” digital fixes.
  3. Filtering: reduce out-of-band energy that makes certain spur components visible at the port.
  4. Digital compensation: apply targeted corrections after the dominant path is controlled.
Fingerprint rules (fast diagnosis)
  • Strong amplitude dependence → suspect nonlinearity / compression / range path behavior.
  • Moves with sample-rate or interpolation mode → suspect clock- or digital-path coupling.
  • Symmetric sidebands / noise skirt → suspect modulation-style coupling (ripple / activity).
  • Jumps with range switching → suspect range switch coupling and path discontinuities.
Verification: spur fingerprint matrix
  • Matrix: sweep fout × amplitude × mode (mode = interpolation/filter setting that can be toggled).
  • Record: top spurs (frequency offset + relative height) and whether a noise skirt appears.
  • Goal: link each spur family to a dominant coupling arrow (clock / digital / PSU ripple / range switch).
F8 — Spur coupling map: four arrows into the output chain plus a spectrum fingerprint. Diagram with a simplified output chain in the center and four coupling arrows (clock, digital bus, PSU ripple, range switch) pointing into it. A small spectrum sketch at right shows deterministic spur lines and a noise skirt.

H2-9 · Reading specs: SFDR/ENOB/IMD/ACPR vs real waveforms

Datasheet metrics become useful only when mapped to waveform use-cases at the DUT port. A clean single-tone number does not guarantee clean multi-tone, modulation, or arbitrary-waveform delivery. The practical approach is to match each waveform type to the metric that best predicts “will this be easy to use?”

Single-tone (one sine)
  • SFDR: distance from the main tone to the worst spur (the “largest unwanted line”).
  • THD: total harmonic distortion (how strongly harmonics reshape a pure tone).
  • Noise floor: background noise level (small-signal cleanliness and wideband noise behavior).
Two-tone (IMD stress)

Two-tone tests expose IMD3 products that often land in sensitive bands and scale aggressively with output level. This is a strong predictor of “real interference” when more than one spectral component exists.

  • IMD3 rising quickly with amplitude often points to output-chain nonlinearity limits.
  • IMD sensitivity to range/load hints that the port path dominates the outcome.
Modulated / wideband (energy leakage)
  • ACPR (adjacent leakage) reflects how much energy spreads outside the intended band (skirts/leakage, not just “lines”).
  • Improving ACPR typically requires stable behavior across the intended band and load condition.
Arbitrary waveform (ARB-only realities)
  • Crest factor: higher peaks at the same RMS can trigger compression/limits earlier and reshape the waveform.
  • Segment switching transient: sequencer jumps can create brief discontinuities (glitch/step/phase jump).
  • Burst/trigger repeatability: start phase and time-of-arrival stability define whether results are repeatable.
Accuracy vs flatness (must not be confused)
  • Amplitude accuracy: how correct a calibrated point is (absolute correctness at a point).
  • Flatness: how consistent amplitude remains across a band (correctness across a range).
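The distinction is easy to quantify from one band sweep; the sweep readings below are hypothetical values chosen only to show the two computations side by side:

```python
import math

# Hypothetical band sweep (frequency Hz -> measured amplitude, volts) with
# the instrument calibrated at 1 MHz against a 1.000 V target.
sweep = {1e6: 1.000, 10e6: 0.998, 50e6: 0.993, 100e6: 0.981}
target_v = 1.000
cal_point_hz = 1e6

# Accuracy: error at the single calibrated point.
accuracy_pct = 100 * abs(sweep[cal_point_hz] - target_v) / target_v

# Flatness: spread across the whole band, often quoted in dB.
flatness_db = 20 * math.log10(max(sweep.values()) / min(sweep.values()))

print(f"accuracy at cal point: {accuracy_pct:.2f} %")
print(f"band flatness: {flatness_db:.2f} dB")
```

Here the generator is perfectly accurate at its calibration point yet shows a measurable band spread, which is exactly the trap the text warns about.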
F9 — Minimum test set: map waveform types to the right metrics. Four compact test blocks—single tone, two tone, pulse/step, and triggered burst—each with a small waveform/spectrum icon and a metric tag.

H2-10 · Validation & production checklist: what proves it’s done

“Done” means the AWG delivers repeatable waveforms at the port across expected ranges, modes, temperature, and load. The most robust proof is a closed acceptance flow from bring-up to calibration, verification sweeps, stress coverage, fast production screening, and traceable output records.

R&D validation (three layers)
  1. Functional correctness: waveform modes, trigger/burst behavior, segment switching, markers — with clear pass/fail outputs.
  2. Performance scanning: flatness and distortion trends across frequency/amplitude/mode — stored as curves/maps.
  3. Boundary coverage: temperature points, load conditions, warm-up drift, long-run behavior — captured as drift/consistency records.
Production test (fast, high-signal screening)
  • Golden waveform: a small set of representative waveforms that quickly reveal gain/path issues.
  • Limit line: simple pass/fail boundaries for key outcomes (trend-based, no need for full sweeps on every unit).
  • Short cycle time: focus on catching gross deviations early while preserving traceability.
Traceability (what must be recorded)
  • Calibration version ID: which coefficient set was applied.
  • Temperature points: which buckets/corners were used for correction validity.
  • Range table: which output ranges have independent corrections.
  • Self-test log: time-stamped pass/fail and key summaries for quick field triage.
Acceptance table template (method → condition → pass/fail)
  • Metric (flatness / IMD3 / burst repeatability) + method (sweep / two-tone / triggered repeat).
  • Conditions (range, mode, load, temperature bucket) kept explicit and consistent.
  • Decision expressed as “within limit line” and linked to a stored curve/log ID.
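A limit-line record of this shape keeps metric, condition, decision, and curve ID together; the field names, limits, and IDs below are hypothetical placeholders for whatever the test system actually stores:

```python
# Acceptance record sketch: metric -> limit-line decision -> stored curve ID.
def check(metric, value, limit, higher_is_better, curve_id):
    ok = value >= limit if higher_is_better else value <= limit
    return {"metric": metric, "value": value, "limit": limit,
            "pass": ok, "curve_id": curve_id}

results = [
    check("flatness_db",      0.4, 0.5,  False, "CRV-0012"),
    check("imd3_dbc",        68.0, 65.0, True,  "CRV-0013"),
    check("burst_phase_mrad", 7.0, 5.0,  False, "CRV-0014"),
]
for r in results:
    print(r["metric"], "PASS" if r["pass"] else "FAIL", "->", r["curve_id"])
print(all(r["pass"] for r in results))
```

Linking every decision to a stored curve/log ID is what makes a later field failure diagnosable instead of just "it passed once."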
F10 — Acceptance flow: bring-up → cal → verify → stress → production → ship. Flowchart showing the complete acceptance pipeline with compact icons and short output tags under each step, including calibration version, temperature bucket, range table, self-test log, and curve/map IDs.

H2-11 · Self-test & field evidence: BIST hooks and traceability

Passing in the lab is not enough. A practical AWG needs field-proof evidence: a short, repeatable self-test that confirms the waveform delivered at the DUT port has not drifted, and a log trail that explains what happened when it did. The goal is not a full sweep in the field, but high-signal checks that correlate strongly with real user failures.

A) BIST loop (reference → inject → sense → score → log)
  • Reference stimulus: generate a known tone/burst (or a short “golden” waveform) that is stable and easy to verify.
  • Injection point: support at least one controlled injection path so the test result maps to a known section of the signal path.
  • Independent sensing: measure at a point that correlates with port delivery (power/amplitude, phase, and frequency/count).
  • Health score: convert raw checks to a 3-state outcome (OK / Monitor / Service) plus a numeric score for trend tracking.
  • Evidence log: store pass/fail, counters, timestamps, and context (range/mode/temp bucket) for fast field triage.
Recommended self-test modes
  • Power-on quick check (seconds): confirm basic path, timebase status, and range switching sanity.
  • On-demand health report (30–60 s): run spot-check points for flatness, a phase check point, and frequency/count verification.
  • Background monitoring: low-duty spot checks and event counting without disrupting normal operation.
B) Drift monitoring (catch it early, avoid false alarms)
  • Flatness spot-check: pick a small set of representative points (low/mid/high band). Track delta vs baseline/limit line.
  • Temperature-correlated offset: record a temp bucket and compare against the expected bucket baseline to separate warm-up effects from true drift.
  • Range switching consistency: test the same target under adjacent ranges and compare amplitude/phase deltas to reveal path-dependent errors.
False-alarm reduction (field-safe rules)
  • Run health checks after warm-up or record “thermal state” explicitly (cold / warming / stable).
  • Record load assumption (50Ω / Hi-Z). If load detection is not available, store the configured mode.
  • Allow one controlled retry for transient conditions; log both attempts to preserve evidence.
C) Evidence logs (what must be recorded)
Self-test
  • Self-test fail count (lifetime + last 30 days)
  • Last fail reason (amp / phase / freq / temp / range / trigger)
  • Last pass timestamp, last fail timestamp
  • Calibration expiry warning count + last shown timestamp
Protection & thermal
  • Overtemp events (count + peak temperature + duration bucket)
  • Overload/short events (count + range + output state)
  • Any throttle/limit flags that can change delivered amplitude
Trigger & timing anomalies
  • Trigger anomaly count (miss / duplicate / marker error)
  • Burst repeatability flags (phase / time-of-arrival out-of-limit)
D) One-click health report (user-facing fields)
  • Identity: model, serial, firmware build ID
  • Calibration: Cal Version ID, last cal date, cal due date
  • Context: output range, mode, load assumption, temp bucket + measured PCB temp
  • Key checks: amp spot-check (f1/f2/f3), phase check (one mid-band point), frequency/count status, range consistency delta
  • Events summary: self-test fails, overtemp/overload, trigger anomalies (lifetime + last 30 days)
  • Conclusion: health score (0–100) + state (OK/Monitor/Service) + recommended action
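The score/state reduction can be sketched in a few lines. The thresholds and the margin convention below (value/limit, so 1.0 means exactly at the limit) are illustrative choices, not a specified algorithm:

```python
def health(margins):
    """Fold spot-check margins (value/limit; 1.0 = at the limit) into a
    0-100 score and a 3-state outcome. Thresholds are hypothetical."""
    worst = max(min(max(m, 0.0), 2.0) for m in margins.values())
    score = max(0, round(100 - 50 * worst))
    if worst < 0.8:
        state = "OK"
    elif worst <= 1.0:
        state = "Monitor"
    else:
        state = "Service"
    return score, state

# Example spot-check margins: amplitude at two bands, one mid-band phase
# point, and a frequency/count check, all comfortably inside their limits.
checks = {"amp_f1": 0.3, "amp_f2": 0.5, "phase_mid": 0.7, "freq_count": 0.2}
print(health(checks))
```

Keying the state off the worst margin (rather than an average) keeps a single drifting check from being masked by healthy ones.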
E) Example BOM hooks (not exhaustive, practical parts)
Amplitude / power detection
  • ADI ADL5902 (RMS detector), ADI AD8318 (log detector), ADI AD8361 (RMS/power detector class)
Phase / gain ratio detection
  • ADI AD8302 (phase & gain detector)
ADC for detector readback
  • ADI AD7982 (18-bit SAR), TI ADS8866 (16-bit SAR), TI ADS127L01 (ΣΔ, low-speed high-resolution trending)
Timing / frequency evidence (optional enhancement)
  • TI TDC7200 (time interval / TOA measurement class, for burst timing scatter evidence)
Temperature bucket sensing
  • TI TMP117, ADI ADT7420, Microchip MCP9808
NVM for traceability logs
  • Infineon/Cypress FM24CL64B (I²C FRAM for frequent log writes)
  • Winbond W25Q series (SPI NOR for larger records), Microchip 24LC256 (I²C EEPROM for IDs)
F11 — BIST loop and field evidence: reference injection, sensing, health score, and logs. Diagram of a self-test (BIST) loop—reference generator → injection path → output chain → sense/detect → health score → log—with temperature bucket and range state feeding the score, and a compact one-click health report block showing the essential fields.


H2-12 · FAQs (AWG / Function Generator)

These FAQs translate common spec-sheet questions into practical “port delivery” decisions: amplitude, timing/phase repeatability, and spectral purity.

1) Higher sample rate vs higher bits: how should an AWG be chosen?
Choose by the waveform at the DUT port, not the headline numbers. Higher sample rate mainly moves images farther away and relaxes reconstruction filtering, helping high-frequency flatness. Higher effective resolution (and linearity) mainly improves small-signal distortion and multi-tone IMD. Verify with a single-tone sweep (flatness/SFDR) plus a two-tone IMD test at the intended amplitude and load.
2) Same “1 GHz bandwidth” but worse spurs—where is the root cause?
Bandwidth does not guarantee spectral purity. Spur differences usually come from DAC mismatch/glitch behavior, clock feedthrough or clock-related coupling, reference/PSU modulation, range switching artifacts, and how well the output chain is calibrated. A fast way to locate deterministic spurs is a spur map across frequency × amplitude × mode (interpolation/filter/range), looking for repeatable “fingerprints.”
3) How can jitter impact on high-frequency sine amplitude noise be estimated?
A practical estimate is: SNR_jitter ≈ −20·log10(2π·f_out·σt), where σt is the RMS time jitter in seconds. The penalty rises rapidly with frequency, often showing up as a noise skirt or amplitude instability rather than a fixed spur. Use the jitter figure for the clock as actually seen at the DAC, then verify by repeating triggered captures and comparing phase/time-of-arrival scatter under the same conditions.
4) When is a higher-order reconstruction filter required, and what is the trade-off?
A higher-order filter is needed when images (k·Fs ± Fin) or DAC out-of-band energy can interfere with the DUT or violate spectral masks. The trade-off is usually more group-delay variation, less phase linearity, and potentially worse pulse/step ringing. Confirm by comparing (a) image suppression and (b) group delay/phase behavior across the same band, then re-check pulse settling at the DUT load.
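A quick way to size the problem is the asymptotic Butterworth rolloff, roughly 20·N·log10(f/fc) dB well above cutoff; solving for N gives a lower bound on the order needed to hit a given image suppression. This is a sketch using that approximation only (real filters need margin near the transition band):

```python
import math

def butterworth_order_for_image(f_c: float, f_img: float, atten_db: float) -> int:
    """Smallest Butterworth order whose asymptotic rolloff
    (~20*N*log10(f/fc) dB) reaches atten_db at the image frequency."""
    return math.ceil(atten_db / (20.0 * math.log10(f_img / f_c)))

# Cutoff 400 MHz, first image at 900 MHz, 60 dB suppression target
n = butterworth_order_for_image(400e6, 900e6, 60.0)  # -> order 9
```

An order-9 requirement at this spacing is exactly the situation where raising Fs (moving the image further out) buys back filter order, phase linearity, and pulse fidelity.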
5) Why is the amplitude “rated” differently for 50Ω vs high-impedance load?
Many AWGs define amplitude at the port under a 50Ω system assumption, where source impedance and load form a known divider and the output stage current is controlled. Under high-impedance, the delivered voltage can approach twice the 50Ω reading because the divider condition changes, and some ranges are calibrated specifically for 50Ω. Verify by switching between 50Ω termination and Hi-Z while recording amplitude error and distortion versus frequency.
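The divider behind this is just source impedance against load impedance; a minimal sketch with an assumed 50Ω source shows why the Hi-Z reading is nominally double the matched 50Ω reading:

```python
def delivered_v(v_open: float, r_source: float, r_load: float) -> float:
    """Voltage at the port via the source/load divider."""
    return v_open * r_load / (r_source + r_load)

v_open = 2.0                              # open-circuit amplitude, 50-ohm source
v_50 = delivered_v(v_open, 50.0, 50.0)    # 1.0 V into a matched 50-ohm load
v_hiz = delivered_v(v_open, 50.0, 1e6)    # ~2.0 V into 1 MOhm (Hi-Z)
```

This is why an instrument set to "1 Vpp into 50Ω" reads near 2 Vpp on a Hi-Z scope input, and why the display's load setting must match the actual termination.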
6) Pulse/step ringing: is it the filter or the output buffer?
Separate causes by controlled variation. If ringing changes strongly with reconstruction filter settings (and is consistent across loads), the filter/group-delay behavior is the likely driver. If ringing changes strongly with load, cable length, or range switching, the output buffer stability or port matching is the likely driver. A minimal test is: fix amplitude, toggle filter modes, then sweep load conditions (50Ω/Hi-Z) and compare settling.
7) Flatness vs accuracy: what is the difference and why does it matter?
Accuracy is a “point” specification: how close the amplitude is at a defined calibration condition. Flatness is a “band” specification: how consistent amplitude remains across a frequency range. A generator can be accurate at one frequency but still vary noticeably across the band. For sweeps, wideband stimuli, and modulation, flatness often dominates usability. Verify by a band sweep and compare the curve shape to the single calibration point.
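The two specifications reduce to different statistics over the same sweep data: a point error at the calibration frequency versus the peak-to-peak spread across the band. A minimal sketch with a hypothetical sweep:

```python
import math

def accuracy_db(measured: float, expected: float) -> float:
    """Point error at one frequency, in dB."""
    return 20.0 * math.log10(measured / expected)

def flatness_db(sweep: list[float]) -> float:
    """Peak-to-peak amplitude variation across the band, in dB."""
    return 20.0 * math.log10(max(sweep) / min(sweep))

sweep = [1.000, 0.995, 0.970, 0.930]  # hypothetical band sweep (V), low to high f
acc = accuracy_db(sweep[0], 1.000)    # 0 dB: perfect at the cal frequency
flat = flatness_db(sweep)             # ~0.63 dB p-p across the band
```

The example is exactly the failure mode described: zero error at the calibration point while the band edge sags noticeably, which is what a swept or wideband stimulus actually experiences.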
8) How should a spur map be run to quickly locate deterministic spurs?
Use a small but structured matrix: frequency × amplitude × mode (interpolation/filter/range). Keep the timebase/reference consistent and repeat each point to confirm repeatability. Deterministic spurs tend to appear at repeatable offsets or patterns when mode/frequency changes, while nonlinearity shows stronger amplitude dependence. Record “worst spur level + location” for each cell; the fingerprint usually points to clock-related coupling or code-dependent behavior.
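The matrix itself is a simple cross-product keyed on (frequency, amplitude, mode); the measurement callable is the instrument-specific part. A minimal sketch where `measure_worst_spur` is a caller-supplied placeholder, not a real driver API:

```python
import itertools

def spur_map(freqs, amps, modes, measure_worst_spur):
    """Build a frequency x amplitude x mode grid of worst-spur results.
    measure_worst_spur(f, a, m) -> (level_dbc, offset_hz) is supplied by
    the instrument driver; here it is any caller-provided callable."""
    return {
        (f, a, m): measure_worst_spur(f, a, m)
        for f, a, m in itertools.product(freqs, amps, modes)
    }

# Stub measurement, for illustration only
def fake_measure(f, a, m):
    return (-70.0, f * 0.1)

grid = spur_map([10e6, 100e6], [0.1, 1.0], ["filt_a", "filt_b"], fake_measure)
# 2 x 2 x 2 = 8 cells, each holding (worst spur level dBc, offset Hz)
```

Storing "level + offset" per cell is what makes fingerprints visible: clock-related spurs track frequency settings at fixed offsets, while nonlinearity-driven spurs grow with the amplitude axis.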
9) When does pre-distortion help, and when can it make things worse?
Pre-distortion works when the output path distortion is predictable and stable for a given range, bandwidth, and temperature. It can backfire when the path changes with range switching, load, thermal drift, or when the waveform’s crest factor pushes the buffer into compression. Treat pre-distortion as “range + band + temp bucket” dependent. Verify by comparing IMD/ACPR or THD before/after across temperature and ranges, not only at one condition.
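The "predictable and stable" condition can be illustrated with a toy third-order path model (coefficients are invented for the sketch): subtracting the estimated cubic term before the path cancels most of the distortion, but only while the model coefficient still matches the actual hardware:

```python
def channel(x: float, a3: float = 0.01) -> float:
    """Hypothetical output path with mild 3rd-order compression."""
    return x + a3 * x**3

def predistort(x: float, a3: float = 0.01) -> float:
    """First-order inverse: subtract the estimated cubic term."""
    return x - a3 * x**3

raw_err = abs(channel(1.0) - 1.0)             # 1e-2 without correction
pd_err = abs(channel(predistort(1.0)) - 1.0)  # ~3e-4 with correction
```

If range switching, load, or temperature shifts the real `a3` away from the table value, the same subtraction adds distortion instead of removing it, which is why coefficients should be bucketed by range, band, and temperature.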
10) Multi-channel coherent output: common causes of phase drift or de-sync?
The most common causes are: channels not sharing the same reference/timebase, trigger/marker distribution mismatch, uncorrected channel-to-channel skew, and retrigger/burst start conditions that are not phase-deterministic. Coherence usually requires shared reference, deterministic arming, and per-channel delay/phase correction tables. Verify with repeated triggered bursts and compute the distribution of phase difference and time-of-arrival difference across many runs under identical settings.
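When scoring those repeated bursts, a naive standard deviation of phase differences breaks near the ±π wrap point; a circular statistic avoids that. A minimal wrap-safe sketch:

```python
import math

def wrapped_phase_std(phases_rad: list[float]) -> float:
    """Circular (wrap-safe) spread of phase readings via the mean
    resultant vector length R: std = sqrt(-2*ln(R))."""
    n = len(phases_rad)
    c = sum(math.cos(p) for p in phases_rad) / n
    s = sum(math.sin(p) for p in phases_rad) / n
    r = math.hypot(c, s)
    return math.sqrt(-2.0 * math.log(r))

# Burst-to-burst phase differences clustered near +/-pi: a naive std
# reports ~3.1 rad of "drift"; the circular spread stays small
diffs = [3.13, 3.14, -3.14, -3.13]
spread = wrapped_phase_std(diffs)  # a few milliradians
```

Reporting this spread (plus the time-of-arrival scatter) over many identical runs is what distinguishes real de-sync from a wrap artifact in the analysis.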
11) Minimal production test: how can flatness/SFDR be released credibly?
Start from R&D baselines, then select the smallest set of “most sensitive” checks: representative band points (low/mid/high), the most demanding range(s), and one or two stress modes (interpolation/filter) that historically reveal issues. Use a short golden waveform plus limit lines for pass/fail, and record test context (range, mode, temp bucket). Periodically correlate this reduced set against full sweeps to confirm coverage remains valid.
12) Which self-test fields best prove “field output is still trustworthy”?
The best fields combine “still meets limits” with “why it failed.” Minimum set: serial/model, firmware build ID, calibration version ID and due date, test context (range/mode/load assumption/temp bucket), amplitude spot-check results at a few band points, one phase/timing repeatability check, frequency/count status, and range consistency delta. Add event summaries with timestamps: self-test failures, overtemp/overload, and trigger anomalies.
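The minimum set above maps naturally onto a small structured record; a sketch with illustrative field names (not any vendor's actual schema):

```python
from dataclasses import dataclass, field, asdict

@dataclass
class HealthReport:
    """Minimal field self-test record; field names are illustrative."""
    serial: str
    fw_build: str
    cal_id: str
    cal_due: str
    range_mode: str
    load_assumption: str      # "50R" or "HiZ"
    temp_bucket: str          # e.g. "20-30C"
    amp_checks_db: dict       # band point -> amplitude error (dB)
    phase_repeat_rad: float   # timing/phase repeatability spot check
    freq_count_ok: bool
    events: list = field(default_factory=list)  # OT/overload/trigger logs

rpt = HealthReport("SN123", "fw-2.4.1", "CAL-0042", "2026-01-01",
                   "1Vpp/filtA", "50R", "20-30C",
                   {"10MHz": 0.02, "100MHz": -0.05}, 0.004, True)
record = asdict(rpt)  # serializable dict for the trend log
```

Keeping the context fields (range, mode, load assumption, temp bucket) alongside every check result is what makes the log trendable: a drifting amplitude error only means something when compared within the same bucket.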