Programmable-Gain Amplifier (PGA) for Multi-Range DAQ
A Programmable-Gain Amplifier (PGA) lets one DAQ chain measure from millivolts to volts across many channels by switching gain automatically—without sacrificing signal integrity.
The real success criteria are system-level: each range must settle fast enough after switching, stay linear within headroom limits, and keep crosstalk and drift below LSB/ppm targets under real source impedance and ADC loading.
What this page solves (PGA in multi-range DAQ)
A Programmable-Gain Amplifier (PGA) turns one measurement chain into multiple usable ranges—millivolts to volts—without manual rewiring. By switching gain (and often channels) under digital control, the ADC can stay near its optimal full-scale usage across ranges, improving effective resolution and enabling repeatable auto-ranging in multi-channel DAQ and production fixtures.
Where a PGA is the right tool
- Multi-range DAQ: one input must cover multiple ranges (e.g., ±10 V / ±1 V / ±100 mV) without wasting ADC dynamic range at low level or saturating at high level.
- Multi-channel sensing: channels differ in amplitude and source impedance, yet the system needs consistent logging, thresholds, and calibration behavior.
- Production/ATE fixtures: fast range switching with predictable settling windows improves throughput and reduces operator-dependent variation.
The three failure modes this page helps avoid
- Dynamic-range waste or saturation: low range is noise/quantization-limited while high range clips; the same ADC looks “good” in one range and “bad” in another because full-scale usage is inconsistent.
- Range-switching artifacts: a gain-code change injects a step/glitch and needs settling time; sampling too early creates “random” readings that are actually deterministic settling error.
- Multi-channel integrity issues: crosstalk, channel mismatch, and digital edge coupling distort per-channel statistics and break comparability across ranges.
Practical “done” criteria (measurable)
- Per range: no clipping + noise target met + linearity target met under real source impedance and bandwidth.
- After switching: a defined valid window begins after Tsettle (specified in LSB/ppm), and sampling never occurs outside that window.
- Across channels: crosstalk and channel-to-channel gain/offset mismatch remain within limits over frequency and temperature.
- In production: measurements are traceable by range code, temperature point, calibration version, and test limits.
Diagram focus: the PGA enables auto-ranging, but range changes create a settling window and potential coupling paths that must be verified and controlled.
PGA fundamentals: architectures & where gain error comes from
A PGA is not a “mystery amplifier.” It is a gain network that can be switched, an amplifier core that closes the loop, and control logic that applies gain codes (often with channel selection). Treating the PGA as these three blocks makes its behavior predictable: every gain code has its own error, drift, noise, bandwidth, and switching transient—so per-range verification and (often) per-range calibration are expected rather than optional.
Common gain implementations (system-level differences)
- Resistor ladder switching (R-ladder): gain set by selecting resistor ratios; system impact is dominated by resistor matching/tempco and switch on-resistance interacting with bandwidth and linearity.
- Charge-domain / switched-cap: gain set by capacitor ratios and switching phases; system impact often shows up as switching-related artifacts and different settling behavior across ranges.
- Transconductance / current-mode: gain expressed via currents and feedback; system impact often appears in bandwidth/THD trade-offs and output drive limits under different codes.
Gain error sources (cause → symptom → system risk → verification hook)
- Network mismatch & drift: gain error and gain drift differ by code; range stitching can show steps across ranges. Verify: sweep input levels per range, add temperature bins, store per-range coefficients.
- Switch on-resistance and parasitics: code-dependent bandwidth, settling time, and nonlinearity; higher gain codes can become slower or more distorted. Verify: step response and sine THD across source impedance conditions.
- Finite loop gain and input bias: offset and gain can shift with common-mode and output swing; high source impedance makes bias/leakage visible. Verify: common-mode sweep, output swing sweep, and high-Z input emulation.
- Reference / common-mode interactions (when present): effective range boundaries move; channel comparability degrades. Verify: small reference perturbations, PSRR/CMRR checks under real supply noise.
Gain steps & coding: what the system actually cares about
- Switching transient size: some codes create larger glitches, so Tsettle and dummy samples may need to be code-dependent.
- Monotonicity and stitching risk: non-ideal steps can create “range boundary surprises” where adjacent codes overlap or leave gaps.
- Per-range ownership: do not assume one global gain/offset fits all codes; treat each range as a separate measurement mode with its own limits.
Diagram focus: model the PGA as switch + gain network + amplifier core; then verify mismatch, Ron/parasitics, bias/finite loop gain per range code.
Key specs that actually matter (spec → system risk mapping)
A PGA datasheet is useful only when each number is tied to a failure mode and a test condition. The same “good” spec can become irrelevant (or misleading) if the source impedance, bandwidth, output swing, temperature range, or switching behavior in the real system differs from the datasheet setup. The goal is to translate a few core PGA specs into system-visible risks, then define minimal verification hooks that expose those risks early.
Spec groups that drive real DAQ outcomes
- Gain accuracy: gain error, gain drift, and gain INL (linearity vs code and level) set how well ranges stitch together and how stable thresholds remain over time and temperature.
- Noise: input-referred noise vs gain, low-frequency noise (0.1–10 Hz for precision ranges), and bandwidth-defined integrated noise determine effective resolution at low-level signals.
- Dynamic behavior: bandwidth and phase (and group delay when scanning) determine amplitude/phase consistency across ranges and guard against code-dependent frequency-response surprises.
- Switching behavior: gain-change glitch, settling time, and overload recovery decide whether auto-ranging can be trusted without inserting dummy samples and guarded valid windows.
- Multi-channel integrity: crosstalk, channel-to-channel match, and mux feedthrough decide whether one channel’s large signal corrupts other channels and whether a shared calibration model is valid.
Spec → symptom → system risk → minimal verification hook
- Gain error / drift / INL: range boundaries shift and “stitching steps” appear when the same true input maps to different codes. Hook: per-range 3-point sweep (low/mid/high), add at least cold/room/hot bins, store per-range coefficients and limits.
- Input-referred noise (vs gain): ENOB collapses in low-level ranges because the noise floor dominates the LSB. Hook: define measurement bandwidth/window first, then measure RMS noise per range and compare to an LSB-equivalent target.
- 0.1–10 Hz noise (precision ranges): slow wandering breaks stable thresholds and long averaging. Hook: low-frequency noise test per range using a fixed observation window; check stability across temperature.
- BW / phase / group delay: frequency response differs by range code; scanning systems show amplitude/phase mismatches that look like sensor drift. Hook: spot-check gain/phase at key frequencies per range; verify the worst range code, not only the typical one.
- Gain-change glitch / settling: samples taken too early are deterministic settling error, not random noise. Hook: code-step response with real source impedance; define a valid window after Tsettle in LSB/ppm.
- Overload recovery: saturating or over-driving a range can “poison” subsequent samples for a long time, confusing auto-range logic. Hook: intentional overload then time-to-valid measurement; record worst-case recovery and its distribution.
- Crosstalk / mux feedthrough: a large signal on one channel creates spurs or bias shifts on another channel. Hook: a crosstalk matrix test—drive one channel, sweep level/frequency, observe the other channels, and log spur levels per range code.
- Channel match: per-channel calibration diverges; shared thresholds become invalid. Hook: same-input histogram across channels per range; track gain/offset spread vs temperature.
A minimal verification set (fast, PGA-focused)
- Per-range accuracy: 3-point sweep per range code + temperature bins; record limits and per-range coefficients.
- Per-range noise: define bandwidth/window; measure RMS noise per range and compare to an LSB-equivalent goal.
- Switching validity: code-step settling to a defined “valid window after Tsettle” + overload recovery worst-case.
- Multi-channel integrity: crosstalk matrix + channel match histogram across key range codes.
Diagram focus: translate specs into system risks, then attach a minimal test hook to each risk under real bandwidth, source impedance, and switching conditions.
Noise & dynamic range planning (how to budget for auto-ranging)
Auto-ranging works only when each gain range is engineered as a complete measurement mode: an input span, a headroom margin, a defined noise bandwidth, and a switching validity rule. The planning goal is simple: use as much ADC full-scale as possible in every range without overflow, while keeping the input-referred noise low enough that low-level signals remain measurable. Because PGA behavior changes by range code, noise and effective resolution must be budgeted per range rather than assumed constant.
Design targets (per range)
- Use ADC full-scale: map the expected input span to near full-scale to preserve effective resolution.
- No overflow margin: leave headroom for peaks, transients, and sensor tolerance so the range boundary is not fragile.
- Bandwidth-defined noise: always define the measurement bandwidth or averaging window before comparing noise numbers.
- Switching validity: samples are valid only after the settling window; use dummy samples when needed.
A minimal budgeting method (no formula overload)
- Set the input span per range: define the maximum expected input for each gain code, including sensor tolerance and peak behavior.
- Pick a headroom policy: reserve a fixed margin so “near full-scale” stays safe under worst-case peaks.
- Define the noise bandwidth/window: specify either an analog bandwidth or a digital averaging window; without it, noise comparisons are meaningless.
- Compare noise in one domain: convert everything to input-referred noise per range (sensor + PGA + ADC-equivalent), then check which term dominates.
- Check against LSB/ENOB intent: require that the per-range RMS noise is below an LSB-equivalent or ENOB-derived goal for the range’s purpose.
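The budgeting steps above reduce to two small calculations: root-sum-square the independent input-referred noise terms, and compare the result against one ADC LSB referred to the PGA input. A minimal sketch, with hypothetical numbers (gain 50 into a 5 V full-scale 16-bit ADC, and made-up sensor/PGA/ADC noise terms):

```python
import math

def input_referred_rms_uV(terms_uV):
    """Root-sum-square of independent noise sources, all referred to the PGA input (uV RMS)."""
    return math.sqrt(sum(t * t for t in terms_uV))

def lsb_uV(full_scale_V, bits, gain):
    """One ADC LSB referred back to the PGA input (uV)."""
    return full_scale_V / (2 ** bits) / gain * 1e6

# Hypothetical +/-100 mV range: gain 50 into a 5 V full-scale, 16-bit ADC
noise = input_referred_rms_uV([0.4, 0.3, 0.2])   # sensor, PGA, ADC-equivalent terms
lsb = lsb_uV(full_scale_V=5.0, bits=16, gain=50)
meets_budget = noise < lsb                        # goal: RMS noise below one LSB
```

Doing the comparison in one domain (input-referred microvolts) also shows immediately which term dominates the range, which is the question the budgeting method is really asking.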
Two practical auto-range strategies
- Coarse-then-fine: take a quick low-gain reading to estimate magnitude, then switch to the best range for a precise measurement. This is stable because switching occurs between measurement phases, not inside the final window.
- Measure-while-switching: switch ranges during ongoing sampling to maximize throughput. This requires strict hysteresis and a protected “valid window after Tsettle” to prevent switching artifacts from entering the dataset.
Range stitching rules that prevent oscillation
- Overlap: adjacent ranges should overlap so boundary decisions are not brittle.
- Hysteresis: use different up/down thresholds so noise does not trigger range flapping.
- Dwell time: enforce a minimum time or sample count before another range change is allowed.
- Validity gating: block samples until Tsettle has elapsed (or discard N dummy samples) after any switch.
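The overlap/hysteresis/dwell rules above can be sketched as a small state machine. This is an illustrative Python sketch, not a reference implementation; the 90 %/8 % thresholds and the dwell count are assumed placeholders that a real design would derive from its range overlap and settling data:

```python
class AutoRanger:
    """Auto-range decision with hysteresis and a minimum dwell (assumed thresholds)."""
    def __init__(self, up_frac=0.90, down_frac=0.08, dwell_samples=4, n_ranges=3):
        self.up_frac = up_frac        # range up (less gain) above 90% of full scale
        self.down_frac = down_frac    # range down (more gain) below 8% of full scale
        self.dwell = dwell_samples    # minimum samples between range changes
        self.n_ranges = n_ranges
        self.range_idx = 0            # 0 = least sensitive range
        self._since_change = dwell_samples

    def update(self, frac_of_fs):
        """frac_of_fs: |reading| as a fraction of the current range's full scale."""
        self._since_change += 1
        if self._since_change < self.dwell:
            return self.range_idx                 # dwell: no change allowed yet
        if frac_of_fs > self.up_frac and self.range_idx > 0:
            self.range_idx -= 1                   # less sensitive range
            self._since_change = 0
        elif frac_of_fs < self.down_frac and self.range_idx < self.n_ranges - 1:
            self.range_idx += 1                   # more sensitive range
            self._since_change = 0
        return self.range_idx
```

Because the up and down thresholds differ and a dwell count is enforced, a noisy reading sitting near a range boundary cannot flap between adjacent codes.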
Diagram focus: each range has its own dominant noise term and its own rules (headroom, bandwidth-defined noise, and validity after switching).
Gain switching & settling: the #1 hidden failure mode
Most “random” samples right after a gain or channel change are deterministic settling error. A PGA/MUX switch changes charge and impedance states, so the front-end must re-equilibrate before readings are valid. If sampling starts too early, the dataset looks noisy or unstable even when the analog chain is fundamentally fine—because the system is repeatedly measuring a transient rather than the intended steady-state value.
Why switching creates transients (what changes physically)
- Switch charge injection: internal switches inject charge at the moment of a code change, creating a short glitch or step at the input/output nodes.
- RC redistribution: the selected gain network and parasitic capacitors re-charge to a new operating point, producing an exponential tail.
- Source impedance coupling: higher source impedance slows settling and increases residual error because node charge must flow through a weaker source.
- Memory and overload history: saturation or a large prior range can extend recovery time and bias the first samples in the next range.
What “settled” means (a sign-off definition)
- Define the error limit: settled means the residual error is within a stated bound (e.g., ±0.5 LSB or a ppm target) for the intended measurement bandwidth.
- Separate glitch vs tail: the initial glitch can be brief, while the settling tail can persist and dominate accuracy if sampling starts early.
- Use real conditions: settling time depends strongly on source impedance, step size (range jump), load, and temperature—so it must be verified per range under the real chain.
A safe sampling sequence (MCU/FPGA-friendly)
- Change range/channel: apply gain code and channel select in a controlled phase.
- Wait: delay by Tsettle (fixed, or code-dependent from a measured table).
- Dummy conversions: discard N samples to flush switch/RC memory.
- Valid window: start the measurement window only after settling criteria are met.
- Acquire and average: collect M samples and apply averaging/decimation to the specified bandwidth.
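The sequence above can be sketched end to end. This is a hedged illustration, not firmware: `FakeAdc` is a made-up stand-in whose first reads mimic a settling tail, the Tsettle wait is elided, and in a real system `set_gain_code`/`read` would be register writes with a per-range measured table for the wait and dummy count:

```python
from collections import deque

class FakeAdc:
    """Hypothetical stand-in for the real converter: early reads show a settling tail."""
    def __init__(self, transient, settled):
        self._q = deque(transient)
        self._settled = settled
    def read(self):
        return self._q.popleft() if self._q else self._settled

def measure(adc, set_gain_code, gain_code, n_dummy, n_avg):
    """Switch -> wait Tsettle (elided here) -> discard N dummies -> average M samples."""
    set_gain_code(gain_code)                      # 1. apply range in a controlled phase
    for _ in range(n_dummy):                      # 3. flush switch/RC memory
        adc.read()
    samples = [adc.read() for _ in range(n_avg)]  # 4-5. valid window + averaging
    return sum(samples) / n_avg

adc = FakeAdc(transient=[900, 600, 520], settled=500)
value = measure(adc, set_gain_code=lambda c: None, gain_code=3, n_dummy=3, n_avg=4)
```

With the three transient samples discarded, only settled readings enter the average; running the same capture with `n_dummy=0` pulls the settling tail into the result, which is exactly the "deterministic settling error" failure mode described above.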
Practical mitigations (what helps, and why)
- Dummy samples: the most universal fix; trades throughput for correctness and is often code-dependent.
- Precharge / bias-to-known: reduce redistribution time by starting from a known node voltage; adds control complexity.
- Isolation resistor / small RC: tame glitch energy and ringing; trades bandwidth and noise against stability.
- Segmented switching: large jump followed by a smaller correction can reduce worst-case transient size; costs extra steps.
- Dwell policy: hysteresis and minimum dwell reduce frequent switching so settling tails do not dominate the dataset.
Diagram focus: define Tsettle in LSB/ppm terms, discard dummy samples, and gate sampling so only the valid window enters the dataset.
Multiplexing & crosstalk control (channel-to-channel integrity)
In multi-channel PGA/MUX systems, crosstalk is often correlated and repeatable rather than random noise. A strong signal on one channel can inject a spur, a small step, or a bias shift into another channel through capacitive coupling, shared reference/ground impedance, or digital control edges. The goal is to identify dominant coupling paths, interpret crosstalk numbers under the right conditions, and apply structural controls in layout, power/ground, and timing.
Where crosstalk comes from (dominant paths)
- Trace coupling: adjacent input traces couple through parasitic capacitance/inductance, especially at higher frequencies and long parallel runs.
- Switch parasitics: internal switch capacitances and feedthrough couple activity from one channel into the selected node.
- Shared ground/reference impedance: ground bounce or reference movement becomes a shared error seen by multiple channels.
- Digital edge injection: control bus edges and code changes inject noise through supplies, substrate, or return paths near sensitive analog nodes.
How to read crosstalk numbers (what changes them)
- Frequency: capacitive coupling worsens with frequency; always interpret crosstalk(dB) vs frequency, not a single number.
- Gain code: switch states and impedances change by code, so crosstalk can vary significantly across ranges.
- Source impedance: higher source impedance is easier to perturb, making injected charge/voltage more visible.
- Timing: switching and digital activity near sampling can convert coupling into measurable spurs in the captured data.
Structural controls (layout, power/ground, timing)
- Layout isolation: increase channel spacing, avoid long parallel runs, use guard traces/rings for high-impedance nodes, and keep sensitive nodes short.
- Power/ground partition: strong local decoupling, continuous return paths, and separation of digital return currents from analog reference nodes reduce shared-impedance coupling.
- Digital quiet window: prevent gain/channel updates and minimize control edges during the sampling window; schedule switching in a dedicated phase.
A practical test hook (make crosstalk measurable)
- Crosstalk matrix: drive one channel with a swept tone/level, hold others at known conditions, and log spur levels across gain codes and bandwidth windows.
- Worst-case timing: repeat with switching edges near sampling to expose timing-sensitive coupling and confirm quiet-window effectiveness.
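The matrix test above is mostly bookkeeping: for every gain code and frequency, drive each channel in turn and log every victim in dB relative to the aggressor. A minimal Python sketch, where `measure_rms` is an assumed user-supplied bench callback (the real measurement would come from the DAQ itself or a spectrum analyzer):

```python
import math

def crosstalk_db(victim_rms, aggressor_rms):
    """Crosstalk expressed in dB relative to the aggressor amplitude."""
    return 20 * math.log10(victim_rms / aggressor_rms)

def crosstalk_matrix(measure_rms, channels, freqs, gain_codes, aggressor_rms=1.0):
    """Drive each channel in turn and observe every other channel.
    measure_rms(aggressor, victim, freq, code) is a user-supplied callback."""
    results = {}
    for code in gain_codes:
        for f in freqs:
            for aggr in channels:
                for victim in channels:
                    if victim == aggr:
                        continue
                    v = measure_rms(aggr, victim, f, code)
                    results[(code, f, aggr, victim)] = crosstalk_db(v, aggressor_rms)
    return results

# Hypothetical bench stub: constant 1 mV of coupling per 1 V of aggressor (-60 dB)
m = crosstalk_matrix(lambda a, v, f, c: 0.001,
                     channels=[0, 1], freqs=[1e3, 1e5], gain_codes=[0, 1])
worst = max(m.values())   # worst-case (code, freq, pair) coupling in dB
```

Keeping the full keyed dictionary (rather than only the worst number) is what lets the worst-case pair, frequency, and gain code be identified when the quiet-window timing test is repeated.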
Diagram focus: identify the dominant coupling path, then apply layout isolation, power/ground control, and a quiet sampling window to keep channels independent.
Input/Output constraints that break designs (CM range, headroom, loading)
Many PGA failures are not noise-limited. They are range- and loading-limited: the input common-mode is outside the usable window, the output runs out of headroom at certain gain codes, or the load (especially an ADC sampling network) turns into ringing and slow settling. These issues often look like “random instability” or “mysterious drift,” but they are deterministic violations of the front-end’s valid operating region.
Input-side traps (common-mode, clamps, recovery)
- Common-mode range: when the input CM approaches or exceeds the usable window, gain and linearity can change abruptly. The result is range-stitching steps and threshold drift that do not average out.
- Clamp/overvoltage events: even brief excursions can trigger internal limiting behavior and leave a recovery tail. Treat this as a validity problem: define a post-event invalid window rather than mixing recovery samples into statistics.
- Verification hook: sweep CM in a few defined points (low/mid/high) while holding signal level constant; for recovery, apply a controlled over-range pulse and record time-to-valid in LSB/ppm terms.
Output headroom (linear window per supply and gain code)
- Output swing is not constant: usable swing depends on supply, gain code, load, and signal polarity. Near the rails, distortion rises and gain compression can appear.
- System symptom: clipping or “soft” compression breaks auto-range decisions and produces cross-range discontinuities that look like calibration failure.
- Verification hook: per range, sweep input until the output approaches headroom limits and mark the linear window; use this to set safe thresholds and margin.
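The headroom sweep above can be automated by walking the measured gain up the input levels and stopping at compression onset. A sketch under assumed numbers (a gain-10 range with a made-up 0.5 % gain-error limit); levels are nonzero and ascending:

```python
def linear_window_max(levels, readings, ideal_gain, max_err_frac=0.005):
    """Largest input level where the measured gain stays within max_err_frac of
    ideal_gain. levels must be nonzero and sorted ascending (compression sketch)."""
    limit = 0.0
    for x, y in zip(levels, readings):
        if abs(y / x - ideal_gain) / ideal_gain <= max_err_frac:
            limit = x
        else:
            break                     # compression onset: stop at first violation
    return limit

# Hypothetical gain-10 range that compresses near the top of its swing
limit = linear_window_max([0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0, 3.8], ideal_gain=10.0)
```

The returned level marks the top of the linear window for that range code; safe auto-range thresholds then sit below it with margin for peaks and tolerance.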
Loading and stability (ADC input networks and capacitive loads)
- ADC sampling load: the input network behaves like a dynamic capacitive load with charge bursts. This can create ringing and longer settling if the driver is not isolated.
- RC and cable capacitance: external RC filters or long interconnects can turn into a challenging capacitive load, increasing overshoot and settling time.
- Engineering fixes: add a small isolation resistor, use a light snubber RC when needed, and validate with the real ADC network to define a valid sampling window per range code.
Design sign-off checklist (fast risk closure)
- Common-mode: worst-case CM points verified per range under real source impedance.
- Headroom: linear output window measured per range; thresholds include margin for peaks and tolerance.
- Loading: settling verified with the real ADC input network and any RC/cable load; valid sampling window defined.
- Recovery: post-event invalid window defined for clamp/over-range behavior; time-to-valid characterized.
Diagram focus: define the real usable overlap of input span, PGA linear window, and ADC full-scale. Most failures happen outside that overlap.
Calibration & matching strategy (gain/offset per range without overfitting)
A practical calibration strategy corrects structured, repeatable error—per range code—without turning calibration into a fragile curve fit. The baseline approach is range-specific offset and gain, then additional complexity is justified only when coefficients remain stable over time and temperature and the measurement uncertainty is clearly below the error being corrected. The goal is reliable range stitching, predictable thresholds, and traceable production records.
Baseline: offset + gain per range (why it must be per-range)
- Do not share coefficients across gain codes: each range has its own network state and error signature, so shared coefficients can create stitching steps.
- Two-point method: for each range, measure a low and high reference point inside the linear window to fit offset and gain.
- Best practice: choose points away from headroom limits and away from clamp/recovery regions to avoid fitting nonlinearity.
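The two-point method is a one-line fit. A minimal sketch with hypothetical reference points (0.1 V and 0.9 V producing raw codes 205 and 1805); the `raw = gain * input + offset` convention is an assumption and real firmware may store the inverse coefficients instead:

```python
def two_point_cal(x_lo, y_lo, x_hi, y_hi):
    """Per-range fit from two reference points inside the linear window:
    raw = gain * input + offset. Returns (gain, offset)."""
    gain = (y_hi - y_lo) / (x_hi - x_lo)
    offset = y_lo - gain * x_lo
    return gain, offset

def correct(raw, gain, offset):
    """Map a raw reading back to an input-referred value."""
    return (raw - offset) / gain

# Hypothetical range: references at 0.1 V and 0.9 V give raw codes 205 and 1805
g, o = two_point_cal(0.1, 205.0, 0.9, 1805.0)
x = correct(1005.0, g, o)   # a mid-scale raw code mapped back to volts
```

Because the fit is per range code, each gain code gets its own `(gain, offset)` pair, which is exactly what prevents the shared-coefficient stitching steps described above.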
Temperature handling: bins vs continuous models
- Temperature bins: robust when measurement uncertainty is comparable to the drift being corrected. Store separate coefficients per temp bin to avoid unstable curve fits.
- Continuous models: useful only when drift is smooth, temperature sensing is reliable, and the coefficient estimation is clearly more accurate than the residual error target.
- Stability first: if coefficients move with time, stress, or lot, prioritize monitoring and re-cal policy over adding more polynomial terms.
Avoid overfitting: add complexity only with proof
- Separate fit vs verify: reserve verification points (or a verification sweep) that are never used for fitting; require improvement on the verification set.
- Stepwise escalation: two-point → multi-point → LUT only when the added correction survives temperature and time without coefficient drift dominating.
- Drift monitoring: add a small set of health-check points during production or field operation to detect coefficient invalidation early.
Production record fields (minimal, traceable)
- Identity: device ID, lot, channel ID, gain code, temperature bin.
- Stimulus and readings: applied reference levels, measured codes, bandwidth/window definition.
- Coefficients: per-range offset/gain (or LUT version), validity flags, limits pass/fail.
- Versioning: calibration version, date/time, and firmware/NVM schema identifiers.
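The record fields above map naturally onto one flat row per (channel, gain code, temp bin). A sketch of such a record in Python; every field name here is an assumption chosen to mirror the list above, not a schema from any particular production system:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CalRecord:
    """Minimal traceable per-range calibration record (field names are assumptions)."""
    device_id: str
    lot: str
    channel: int
    gain_code: int
    temp_bin: str          # e.g. "cold" / "room" / "hot"
    ref_levels: tuple      # applied reference levels
    raw_codes: tuple       # measured ADC codes at those levels
    bandwidth_hz: float    # bandwidth/window definition for the measurement
    offset: float
    gain: float
    passed: bool           # limits pass/fail
    cal_version: str
    timestamp: str

rec = CalRecord("SN0042", "LOT7", 0, 3, "room", (0.1, 0.9), (205, 1805),
                1000.0, 5.0, 2000.0, True, "cal-1.2", "2024-01-01T00:00:00Z")
row = asdict(rec)   # one flat, loggable row per (channel, gain code, temp bin)
```

Freezing the record and versioning it per calibration run is what makes a later field failure traceable back to the exact stimulus, bandwidth definition, and coefficients that produced the shipped limits.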
Diagram focus: fit per-range coefficients, store with temp bins and versioning, and enforce a verification gate so added complexity improves real performance.
Applications & reference designs (within PGA scope)
A PGA is most valuable when one hardware chain must measure a wide range of amplitudes with high throughput and predictable stitching across ranges. The application focus here stays inside PGA scope: where to place the PGA in the chain, which failure modes dominate, and which minimal verification hooks make the design reliable without turning this section into a sensor encyclopedia.
Multi-range DAQ (±10 V / ±1 V / ±100 mV …)
- Goal: keep the ADC near full-scale across ranges and remove manual range switching.
- Dominant risks: post-switch settling tail contaminating early samples, headroom limits near thresholds, and stitching steps from range-specific errors.
- Quick hook: sweep across range thresholds with real source impedance; log Tsettle, dummy count, and cross-range consistency at the same true input.
Bridge / strain / pressure (range switching + drift management)
- Goal: cover low-level and higher-load conditions with stable thresholds over time and temperature.
- Dominant risks: low-frequency drift creating threshold wander, and switching artifacts that look like slow offset movement.
- Quick hook: hold fixed stimulus points for long windows across temperature bins per range; verify per-range offset/gain stability and stitching.
High-Z measurement (photo/electrochem behavior under auto-range)
- Goal: handle large amplitude variation while protecting high-impedance nodes from injection and coupling.
- Dominant risks: switching injection is more visible with high source impedance, and channel coupling/digital edges can corrupt readings.
- Quick hook: repeat settling and crosstalk tests with worst-case source impedance (or an equivalent R source) across key gain codes.
Production test fixtures (throughput by fast range switching)
- Goal: one fixture covers many tests by switching ranges quickly while keeping pass/fail decisions stable.
- Dominant risks: invalid windows after switching, and cable/RC loading that increases ringing and settling time.
- Quick hook: characterize “time-to-valid” per range and define a fixed discard policy; validate under the real harness and load.
Diagram focus: across DAQ, bridge, high-Z, and production fixtures, the core PGA placement is similar—dominant risks shift between settling, drift, coupling, and throughput.
IC selection logic (parameters → risk mapping → vendor questions)
A good PGA selection process is not “reading a datasheet harder.” It is converting parameters into system risks, then forcing comparable answers from vendors under explicit test conditions. The structure below turns requirements into hard gates, maps each parameter group to failure modes, and provides a copy-ready question set that prevents ambiguous comparisons.
Start with requirement gates (avoid wrong part classes)
- Dynamic range: smallest and largest signals, required margin to rails, and the maximum safe range-switch rate.
- Channels and topology: number of channels, MUX strategy, and whether synchronized updates are required.
- Bandwidth/throughput: required measurement bandwidth and the allowed “invalid time” per range change (dummy + wait).
Parameter groups → system risks (what breaks if unclear)
- Gain behavior: steps/coverage, gain error/drift, gain INL, and range-change behavior → stitching steps, threshold drift, and invalid samples after switching.
- Noise behavior: input-referred noise density, 0.1–10 Hz noise (when applicable), noise vs gain code → low-level ENOB collapse and unstable thresholds.
- Dynamic limits: bandwidth, settling to LSB/ppm, THD when AC matters → wrong valid window and spur-limited measurements.
- Multi-channel integrity: crosstalk vs frequency and vs code, channel match, sync update → channel pollution and hard-to-debug correlated errors.
- Supply/package: single/dual supply, I/O range, thermal traits, load guidance → CM/headroom failures and load-induced ringing/slow settling.
Risk validation hooks (minimum tests to confirm claims)
- Settling vs code: apply gain-code steps with the real source impedance and ADC input network; record time-to-±LSB/ppm.
- Crosstalk matrix: drive one channel while observing others across frequency and gain codes; repeat with worst-case timing.
- Range/window mapping: sweep CM and amplitude to mark the linear window per range and set safe headroom thresholds.
Vendor question template (copy-ready, forces test conditions)
- Provide the gain table (steps and code mapping) and the gain error/drift per code, with temperature points and measurement bandwidth.
- Provide gain INL or gain linearity per code, and state the input CM and output load used for the test.
- Provide range-change glitch and settling time to a stated error (LSB/ppm), including the stimulus step size and source impedance.
- Provide overload recovery time after a defined over-range condition (amplitude and duration), and the criterion for “recovered.”
- Provide noise density and, when relevant, 0.1–10 Hz noise, with the gain code and filter bandwidth used.
- Provide noise vs gain code data (or curves) so low-range and high-range behavior can be compared fairly.
- Provide crosstalk(dB) vs frequency and state whether it is measured vs adjacent channel, vs any channel, and under what gain codes.
- Provide channel match specs (gain/offset match) and state whether channels share the same die/package and the temperature gradient assumptions.
- Confirm synchronized update options (if any), update timing/skew, and required control sequencing for safe switching.
- State input CM range and output swing/headroom limits per supply and load, including linearity expectations near the rails.
- State load guidance (maximum capacitive load, need for isolation resistor), and provide a reference circuit for driving an ADC input network.
- Provide thermal data (RθJA) and any drift vs power/self-heating characterization relevant to multi-channel use.
- Provide recommended calibration flow (per-range coefficients, temp bins), and the minimum fields needed for traceability and versioning.
- Provide production test coverage recommendations for verifying settling and crosstalk across gain codes.
Diagram focus: apply hard gates first (I/O range, gain steps, settling), then verify risk-sensitive metrics (noise and crosstalk) with explicit test hooks to reach a defensible shortlist.
Engineering checklist (design review + validation tests + layout hooks)
This checklist closes the loop from design intent to measurable sign-off. It is structured for fast reviews and repeatable validation: layout hooks that prevent coupling, test cases that expose range-switching and multi-channel failure modes, and pass/fail criteria expressed in LSB, ppm, dB, and throughput time.
Design review checklist (layout + schematic + timing)
Layout review (highest priority first)
- Analog input symmetry: route sensitive inputs short and symmetric across channels; avoid long parallel runs between adjacent channels.
- Guarding high-Z nodes: add guard/keep-out around high-impedance inputs; keep switching nodes and digital edges away from these areas.
- Digital separation: keep gain/MUX control lines away from input traces; add small series resistors where needed to slow edges.
- Return continuity: keep analog return paths continuous; avoid plane splits under PGA/ADC inputs; do not route digital return through analog regions.
- Decoupling loops: place decouplers at the pins and minimize loop area; follow reference-bypass guidance and keep reference routing short and quiet.
- Channel isolation features: use ground guards or spacing to reduce inter-channel capacitive coupling in the input region.
- Testability: provide safe test points or configuration options to isolate coupling paths during debug and validation.
Schematic review (scope stays inside PGA use)
- Range alignment: confirm input CM and amplitude ranges stay inside the PGA linear window and the ADC full-scale window with margin.
- ADC load realism: validate the PGA output network against the real ADC input sampling network (dynamic loading) and any RC/cable capacitance.
- Switching safety: ensure the design supports a defined invalid window after gain/channel switching and after over-range events.
- Per-range calibration support: ensure firmware/NVM storage can hold per-range coefficients and versioning identifiers.
Timing review (MCU/FPGA coordination)
- Define a fixed sequence: switch → wait → dummy N → valid window → average (only after valid begins).
- Make it configurable: store per-range (and per source-Z class) values for wait time and dummy count.
- Reduce thrashing: use hysteresis and rate limits for auto-range decisions to prevent frequent switching.
- Digital quiet windows: schedule control edges outside sampling windows to reduce coupling into sensitive ranges.
Validation tests (must-cover cases for PGA systems)
Per-range accuracy and drift
- What to run: for each gain code, measure offset and gain error using at least two reference levels inside the linear window.
- What to sweep: gain codes × stimulus levels × temperature bins (at minimum low/mid/high).
- What to record: offset (LSB/µV), gain error (ppm of FS), gain drift (ppm/°C), and noise with stated bandwidth/window.
- Pass criteria: stitching step between adjacent ranges stays below a defined LSB/ppm threshold at the same true input.
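The stitching pass criterion above is a simple comparison once both readings are calibrated back to input units. A sketch with made-up numbers (the same 0.5 V true input read in two adjacent ranges, a 0.1 mV LSB for the finer range, and an example half-LSB limit):

```python
def stitching_step_lsb(reading_a, reading_b, lsb):
    """Disagreement between two adjacent ranges at the same true input,
    expressed in LSBs of the finer range (readings already input-referred)."""
    return abs(reading_a - reading_b) / lsb

# Hypothetical: the same 0.5 V true input read in two adjacent range codes
step = stitching_step_lsb(0.50003, 0.49999, lsb=0.0001)
passed = step <= 0.5    # example limit: stitching step under half an LSB
```

Expressing the step in LSBs of the finer range keeps the limit in the same unit language as the accuracy and noise criteria elsewhere in this checklist.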
Switching transient and settling (source impedance matters)
- What to run: gain-code steps and channel switches while driving the real ADC input network.
- What to sweep: small-step vs large-step changes × gain codes × channel changes × source impedance classes.
- What to record: glitch amplitude, Tsettle to ±LSB/ppm, required dummy count, and invalid time per switch.
- Pass criteria: throughput budget is met with a defined valid window; early samples are excluded by rule, not hope.
Crosstalk matrix (frequency scan + code dependence)
- What to run: drive one channel (aggressor) and observe all others (victims) while sweeping frequency.
- What to sweep: aggressor×victim pairs × frequency points × gain codes, with and without digital quiet windows.
- What to record: victim spur level in dB (or LSB) with stated aggressor amplitude, and the worst-case pair.
- Pass criteria: worst-case coupling stays below the application limit across the required bandwidth.
Overload and saturation recovery
- What to run: apply a defined over-range event (amplitude and duration), then observe recovery to valid criteria.
- What to sweep: gain codes × source impedance × temperature bins.
- What to record: time-to-valid to ±LSB/ppm and any residual offset after recovery.
- Pass criteria: a defined invalid window is sufficient to protect data integrity and throughput targets.
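Time-to-valid after an overload can be extracted from a post-event capture by finding the first sample after which every reading stays inside the limit. A minimal sketch, assuming a hypothetical 1 kS/s capture settling back to code 500 with a ±1 LSB validity bound:

```python
def time_to_valid_s(readings, true_value, tol_lsb, lsb, dt_s):
    """After an over-range event, the time until readings stay within
    +/- tol_lsb of the true value for the rest of the record (recovery sketch)."""
    for i in range(len(readings)):
        if all(abs(r - true_value) <= tol_lsb * lsb for r in readings[i:]):
            return i * dt_s
    return None   # never recovered within this record

# Hypothetical 1 kS/s post-overload capture settling back to code 500
t = time_to_valid_s([520, 505, 501, 500, 500],
                    true_value=500, tol_lsb=1, lsb=1, dt_s=0.001)
```

Requiring every subsequent sample to stay inside the bound (rather than the first sample that happens to land inside it) is what guards against recovery tails that briefly cross the limit and drift back out.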
Unified pass/fail metrics (one language across teams)
- Accuracy: LSB or ppm of full-scale (state the range code and FS definition).
- Drift: ppm/°C and/or ppm over time window (state temperature points and duration).
- Noise: RMS (and optionally peak-to-peak) with stated bandwidth/window.
- Switching: Tsettle to ±LSB/ppm, dummy N, and invalid time per switch (throughput impact).
- Crosstalk: dB vs frequency (or LSB spur), with stated aggressor amplitude and gain code.
Diagram focus: plan validation coverage as a matrix across gain codes, channels, temperature bins, and stimulus levels, then sign off using consistent metrics (LSB/ppm/dB/time).
FAQs (PGA auto-range, MUX integrity, crosstalk, settling)
These FAQs close long-tail questions without expanding scope beyond PGA behavior in multi-range, multi-channel systems. Each answer stays short and actionable, with structured hooks for fast debugging and clear pass/fail criteria.