Self-test and production test for INAs are about proving that the measurement chain is correctly built and stays stable, using fast, repeatable checks, not about re-measuring every datasheet spec on the line.
Use controllable loopback/injection hooks, proxy recovery tests, and correlation + logging to keep results consistent across stations, temperature, and lots.
Why Self-Test Exists in INA Products (Scope & Success Criteria)
Self-test in an instrumentation-amplifier (INA) front-end is not a datasheet re-qualification flow. It is a production-oriented proof that
demonstrates chain integrity, calibratability, and survivability within a bounded time and uncertainty.
The goal is to catch wrong wiring, hard faults, and drift-to-unusable conditions before shipment, using tests that remain
repeatable across stations, fixtures, and lots.
A) Scope definition: “prove usable,” not “prove everything”
Out-of-scope here: noise spectrum sweeps, full AC CMRR vs frequency, extreme distortion corners, EMI root-cause characterization.
Self-test passes should remain valid under defined limits: input common-mode (CM) range, stimulus level, settling window, and fixture uncertainty.
“Specification compliance” is not claimed unless measurement uncertainty is demonstrably below the guardbanded requirement.
B) Success triangle: Coverage × Throughput × Correlation
Coverage (fault-mode coverage)
Coverage is defined by failure modes (open/short/leakage/saturation/mux/config), not by “how many datasheet rows were tested.”
Each production check must map to at least one fault class with a clear signature.
Throughput (cycle time)
Cycle time includes not only measurement time but also settling, auto-zero/chopper timing, and any temperature soak.
Apply “fast-fail-first” ordering: catch wiring/fault signatures before precision steps.
Correlation (station/fixture/lot repeatability)
A passing limit must be stable across fixtures, stations, and time. Use golden units and correlation checks to ensure
that shifts are detected as process drift, not mis-labeled as product failures.
C) Define the test scope by failure modes (not by specs)
A production-ready self-test plan starts from “what can go wrong” on real boards and fixtures, then chooses the smallest set of checks that
uniquely exposes those faults. Typical fault classes and why they matter:
Open / short (wiring, connector, probe contact): causes saturation, no response to injection, or polarity reversal signatures.
Leakage (flux/moisture/ESD clamp leakage): manifests as drift under “short input,” touch sensitivity, or guard on/off deltas.
Saturation / recovery abnormal (clamps, headroom, loading): creates slow tails, memory, or failure to re-enter linear region.
Mux/config fault (PGA/digital control): wrong gain code, stale configuration, channel mix-up.
D) Minimal traceable data (the “must log” set)
Logging should be just enough to reproduce failures and separate product issues from station drift. A practical minimal schema:
Conditions
temp_point, supply_mode, gain_code/mode, stimulus_id (0/mid/nearFS), wait/settle window
Key results
measured_offset, measured_gain, residual, Trecover (proxy), pass/fail + bin
Rule: log the conditions that explain the number. Without conditions, the number is not comparable across lines or lots.
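The must-log set above can be captured as one flat record per test step, so conditions and results always travel together. This is a minimal sketch; the record name, field names, and example values are illustrative, not a mandated format.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class SelfTestRecord:
    # Conditions that explain the number (must-log set)
    temp_point: float        # test temperature, degC
    supply_mode: str         # e.g. "nominal"
    gain_code: int           # PGA/digital gain code under test
    stimulus_id: str         # "zero" / "mid" / "nearFS"
    settle_window_ms: float  # declared settle window
    # Key results
    measured_offset: float   # V, after the settle window
    measured_gain: float     # V/V estimate
    residual: float          # V, vs the 2-point model at nearFS
    t_recover_ms: float      # recovery proxy
    passed: bool
    bin_code: str

rec = SelfTestRecord(25.0, "nominal", 2, "mid", 10.0,
                     1.2e-4, 100.02, 3.0e-4, 0.8, True, "PASS")
row = asdict(rec)  # flat dict, ready for CSV/database logging
```

Because every row carries its conditions, the same number is comparable across stations, lines, and lots.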
Test coverage should be expressed as fault-mode coverage mapped to a small set of repeatable production checks (loopback, injection, recovery proxy, temperature spot-check).
Test Taxonomy: What You Can Prove on a Line (and What You Cannot)
A production line can prove bounded correctness (within defined conditions and uncertainty). It cannot efficiently prove
full-frequency performance or corner-case analog limits without sacrificing cost and throughput.
The practical strategy is: prove the essentials on the line, and use proxy tests (fast, repeatable signatures) to catch
coupling/layout/protection disasters that would otherwise require long sweeps to detect.
A) Line-prove vs Lab-characterize (decision boundary)
Line-prove (production)
DC sanity: offset/gain under defined CM, stimulus, and settling
Survivability proxies: overload recovery and CM-step recovery time
Control path: configuration readback and gain-code correctness (PGA/digital)
Lab-characterize (engineering)
Wideband noise density and integrated noise vs bandwidth
AC CMRR/PSRR vs frequency and wiring-dependent parasitic imbalance
Distortion/IMD corners and sweep-based linearity validation
EMI root-cause isolation and coupling path attribution
A “line-prove” claim is valid only if: (1) test conditions are fixed and logged, (2) measurement uncertainty is bounded,
and (3) a guardbanded limit is applied to prevent false passes.
B) Why “cannot” is real: constraints that defeat line tests
Uncertainty dominates
If station/fixture/probe uncertainty is comparable to the effect being validated, production results become uncorrelatable.
The right action is to switch to a proxy signature or move the metric to characterization.
Cycle time collapses
Sweeps (frequency, amplitude, temperature) multiply time by settling and repetition. A line must prioritize quick signatures
and spot checks over long, high-resolution scans.
Equipment and maintenance burden
Low-noise sources, precision loads, and calibrated analyzers increase downtime and recalibration needs. When maintenance dominates,
correlation drifts and yield decisions become unreliable.
Observability is limited
Probe loading, parasitics, and ground references can mask the true limitation. Production tests should avoid measurement setups
that inject more error than the device under test.
C) Proxy tests: fast signatures that protect yield and reliability
Proxy tests convert “hard-to-prove” performance risks into short, repeatable signatures. Each proxy should specify:
objective, stimulus, observable, and pass criterion placeholders.
1) CM-step recovery proxy
Objective: flag coupling/layout/protection failures that cause long tails or re-centering errors. Stimulus: controlled common-mode step within the linear CM range. Observe: Trecover and residual error after recovery window. Pass: Trecover < T_limit; residual < E_limit (limits derived from system settling and error budget).
2) Overload recovery proxy
Objective: catch clamp/network issues and output-stuck behaviors under near-rail events. Stimulus: near-full-scale differential step (bounded by safe input current). Observe: time back to linear region and post-event offset shift. Pass: no latch; return-to-linear < T_limit; shift < Vshift_limit.
3) Leakage sensitivity proxy
Objective: detect contamination/moisture/guard failures that produce time-varying offsets. Stimulus: short input, then toggle guard/bias return or apply a tiny bias step. Observe: drift slope over a fixed window and guard on/off delta. Pass: slope < S_limit; delta < D_limit (limits set by allowable drift during measurement session).
D) How to write acceptance limits without guessing numbers
E_limit is derived from the post-calibration system error budget (including ADC quantization and reference drift).
T_limit is derived from the allowed settling time inside the sampling window (after gain change, CM step, or overload).
S_limit is derived from allowable drift during the measurement session and the intended averaging strategy.
Guardband rule: set the production limit tighter than the requirement by an amount that covers station uncertainty and drift
(limit = requirement − guardband).
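The guardband rule can be made concrete with a small helper. This sketch assumes a k-sigma model of station uncertainty plus a drift allowance; the function name and the example numbers are illustrative, and the actual terms must come from your own error budget.

```python
def guardbanded_limit(requirement, station_sigma, drift_allowance, k=3.0):
    """Production limit = requirement - guardband, where the guardband
    covers k-sigma station uncertainty plus the allowed baseline drift."""
    guardband = k * station_sigma + drift_allowance
    limit = requirement - guardband
    if limit <= 0:
        # Uncertainty consumes the requirement: the metric is not
        # line-provable; switch to a proxy or move it to characterization.
        raise ValueError("guardband exceeds requirement")
    return limit

# Example: 1 mV requirement, 50 uV station sigma, 100 uV drift allowance
lim = guardbanded_limit(1.0e-3, 50e-6, 100e-6)  # 0.75 mV production limit
```

The exception branch encodes the "uncertainty dominates" rule from the taxonomy above: when the guardband eats the requirement, the line test should not exist.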
Use production tests to prove bounded correctness and fault signatures; reserve sweeps and corner characterization for lab validation.
Deploy proxy tests (recovery, leakage sensitivity) to catch high-impact integration failures without collapsing cycle time.
Loopback Architectures: Electrical Loopback vs Stimulus Loopback vs Control Loopback
Loopback is an observability design: it creates repeatable conditions that expose wiring faults, gain/offset sanity, and control-path correctness
without turning production test into full characterization.
A production-ready loopback plan must specify: what path is closed, what fault signatures it proves, what it cannot prove,
and how to avoid false fails caused by clamps, switch leakage, or saturation behavior.
A) Three loopback classes (what they close and what they prove)
Each class is summarized by what it closes, what it proves, its main risks, and its best use. (The stimulus- and control-loopback entries are reconstructed from the sequence in section C below.)
Electrical loopback
Closes: output → (Rlarge + switch) → input.
Proves: open/short, wrong polarity, gross gain errors, output-drive anomalies.
Main risks: clamp conduction when back-driving the input; switch leakage; saturation masking gain/offset.
Best use: fast chain-integrity check before precision steps.
Stimulus loopback
Closes: a known stimulus source → input path → output.
Proves: calibrated gain/offset via a 1–2 point fit with residual-based robustness.
Main risks: stimulus uncertainty and settling entering the estimate.
Best use: calibrated gain/offset checks after chain integrity is proven.
Control loopback
Closes: configuration write → readback image.
Proves: configuration correctness and channel identity (PGA/digital).
Main risks: readback ≠ analog correctness; channel mapping mistakes can hide faults.
Best use: always first (fail-fast), especially for PGA/digital INAs.
Guardrail: loopback must be designed to avoid test-induced faults. Keep clamp current bounded
(Iclamp < I_limit), keep paths symmetrical, and separate
saturation recovery checks from gain/offset checks.
B) Practical risks and how to prevent false fails
Clamp conduction during electrical loopback
Back-driving the input through the output can forward-bias input protection and create a fake offset or slow tail.
Use large, symmetric loopback resistors and a bounded stimulus that keeps the INA in its linear region.
Switch leakage and charge injection
Leakage can dominate microvolt-level checks, especially at high source impedance. Treat switch leakage as a budgeted error term,
and verify with short-input baselines before and after toggling.
Saturation creates “misleading pass/fail”
If the loopback condition drives rails, gain/offset becomes undefined until recovery completes. Separate the tests:
first verify recovery (Trecover), then run gain/offset checks within the settled window.
C) Minimal production sequence (loopback-centered)
Control loopback: write configuration, read back image, verify channel identity (PGA/digital).
Short-input baseline: establish offset baseline and drift slope under a fixed window.
Electrical loopback: confirm chain integrity and gross gain sanity without entering clamp conduction.
Stimulus loopback: run 1–2 point stimulus for gain/offset, then evaluate residual for robustness to stimulus uncertainty.
Recovery proxy (optional): apply a bounded overload/CM step and verify Trecover < T_limit.
Log + bin: store conditions + key results for correlation (station/fixture/lot/time).
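The fail-fast ordering of this sequence can be sketched as a small runner that stops at the first failing step and records its bin. The function, step names, and bin codes are hypothetical; real steps would wrap station measurements.

```python
def run_selftest(steps):
    """Run ordered production steps fail-fast: any failure skips
    downstream precision steps and returns the failure bin immediately.
    Each step is (name, callable) where the callable returns
    (ok, bin_on_fail)."""
    log = []
    for name, step in steps:
        ok, fail_bin = step()
        log.append((name, ok))
        if not ok:
            return fail_bin, log
    return "PASS", log

# Hypothetical step results standing in for real station measurements:
steps = [
    ("control_loopback",     lambda: (True,  "BIN_CONFIG")),
    ("short_input_baseline", lambda: (True,  "BIN_OFFSET")),
    ("electrical_loopback",  lambda: (False, "BIN_CHAIN")),  # fails here
    ("stimulus_loopback",    lambda: (True,  "BIN_GAIN")),
]
result, log = run_selftest(steps)
```

With the third step failing, the stimulus loopback is never run, which is exactly the cheap-signatures-first behavior the sequence prescribes.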
Electrical loopback supports fast chain-integrity checks; stimulus loopback supports calibrated gain/offset with residual-based robustness;
control loopback validates configuration and channel identity before analog measurements.
Calibration Injection Paths: How to Inject a Known Differential Signal Safely
Injection enables production calibration and traceable checks by applying a known differential stimulus to the INA input path.
The injection plan must preserve system realism (coverage) while keeping common-mode headroom and clamp current bounded.
A production-worthy injection design specifies: objective, source type, connection point, settling window, and
risk flags (leakage sensitivity, CM/headroom, clamp interaction).
A) Start from objectives (objective → stimulus → observable → pass)
Offset check
Use a near-zero differential condition (short-input baseline or symmetric micro-injection). Observe mean output and drift slope in a fixed window.
Pass: |Voff| < Voff_limit and slope < S_limit.
Gain check
Apply one or two known differential points inside linear CM/headroom. Fit gain/offset and evaluate residual to reduce sensitivity to stimulus uncertainty.
Pass: |Gain_err| < Ge_limit and residual < Res_limit.
Linearity proxy (not a sweep)
Use a bounded near-full-scale point or step to expose clamp/recovery problems. Observe Trecover and post-event shift.
Pass: Trecover < T_limit and shift < Vshift_limit.
Limits must be budget-derived: E_limit from error budget,
T_limit from settling window, S_limit from allowable drift.
B) Where to inject: coverage vs risk tradeoffs
At input terminals (most realistic)
Maximizes coverage of connectors, protection networks, leakage paths, and wiring errors. Requires stricter control of CM/headroom and clamp interaction.
Before protection (good production default)
Preserves most realism while enabling controlled injection. Must ensure injected current does not get absorbed by TVS/clamps.
After protection (coverage-limited)
Easier to measure but bypasses the most common field-failure contributors (protection leakage, contamination, wiring). Use only when explicitly labeled as “device-only” proof.
C) Injection sources: ratiometric vs DAC vs resistor network
Ratiometric injection
Best for bridges/resistive sensors: stimulus and measurement share the same reference/excitation so drift cancels.
Add a monitor point for Vexc/Vref to avoid “common drift masking.”
DAC injection
Flexible but the DAC becomes part of the error chain (linearity, drift, settling, output impedance).
Prefer residual-based criteria and log DAC code, settle window, temperature point, and reference.
Resistor-network injection
High traceability when precision/low-tempco parts are used. Must be symmetric and tolerance-aware to avoid biasing the differential path.
Log network identity and tolerance class for correlation.
D) Hard constraints: CM/headroom and source-impedance interaction
CM/headroom: injection must keep the INA inside its linear CM and output swing region; otherwise gain/offset results become recovery-dependent.
Clamp interaction: injection networks must bound input current so protection does not dominate the measurement (Iclamp < I_limit).
Source impedance: injection impedance + series protection + sensor source impedance can create unintended division and pseudo-offset; keep networks symmetric or explicitly modeled.
Choose injection by objective first, then select a stimulus source and connection point that preserve coverage while keeping CM/headroom and clamp current bounded.
Use risk flags (CM, clamp, leakage, settling) to define guardbanded pass criteria and required logging.
Gain/Offset Production Checks: Minimal Measurements with Clear Pass Criteria
A production check should prove useful correctness with a minimal point set, not characterize every datasheet curve.
The goal is repeatable pass/fail decisions and stable correlation across stations, fixtures, and lots.
The recommended minimal set is 3 points: 0-point (short-input baseline), small differential (2-point fit),
and near-full-scale (consistency / clamp / headroom proxy). Pass limits are placeholders filled by the system budget.
A) Minimal 3-point set (what each point proves)
0-point (short input)
Establishes a baseline for offset and drift slope under a fixed measurement window.
Also catches obvious wiring/bias-return issues when the output cannot settle reproducibly.
Small differential point
Provides the second anchor for a 2-point gain/offset fit while staying well inside linear CM/headroom.
Use a known and bounded stimulus so clamp and recovery behavior do not contaminate the estimate.
Near-full-scale point
Used as a consistency check and a proxy for clamp/headroom problems. It is not a full linearity sweep.
Evaluate a residual against the 2-point model to detect hidden saturation, path errors, or protection interaction.
B) Measurement sequence (repeatability first)
Control integrity: confirm gain code / channel identity (PGA/digital) before analog checks.
0-point: short input; wait a defined settle window; sample N times; store robust statistics.
Small point: apply Vdiff_small; keep CM inside the linear region; wait settle; sample N times.
Log conditions: station/fixture IDs, temperature point, gain code, VCM target, stimulus ID, settle time, sample count.
Keep stimulus bounded: Iclamp < I_limit and Vout stays inside swing headroom during fit points.
C) Computation (2-point fit + 3rd-point residual)
2-point fit
Estimate gain and offset using 0-point and the small differential point (or ±small points when available).
This keeps recovery and clamp effects out of the model.
Residual consistency
Compute residual at nearFS relative to the 2-point model. Residual is a robust proxy for hidden saturation, clamp interaction,
wiring mistakes, and stimulus-path inconsistencies.
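The 2-point fit and the nearFS residual reduce to a few lines. The function names and the G = 100 example values are illustrative; the large negative residual in the example mimics nearFS compression from saturation or clamp interaction.

```python
def two_point_fit(v_in0, v_out0, v_in1, v_out1):
    """Estimate gain/offset from the 0-point and the small differential
    point; model: v_out = gain * v_in + offset (linear region only)."""
    gain = (v_out1 - v_out0) / (v_in1 - v_in0)
    offset = v_out0 - gain * v_in0
    return gain, offset

def near_fs_residual(gain, offset, v_in_fs, v_out_fs):
    """Consistency proxy: deviation of the nearFS point from the 2-point
    model. A large residual flags saturation, clamp interaction, or a
    stimulus-path inconsistency without running a linearity sweep."""
    return v_out_fs - (gain * v_in_fs + offset)

# Hypothetical G = 100 chain with 1 mV output offset:
g, off = two_point_fit(0.0, 1.0e-3, 10e-3, 1.001)
r = near_fs_residual(g, off, 100e-3, 9.95)  # compressed nearFS point
```

Because the fit uses only the 0-point and the small point, the nearFS anomaly appears entirely in the residual rather than biasing the gain estimate.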
Use 0 and small points for a 2-point fit; use nearFS as a consistency proxy. Fill limits from the system budget and apply guardbands for station and fixture uncertainty.
Detecting Wiring & Leakage Faults: Open/Short, Guard, and Bias Return
The highest-yield production failures are wiring mistakes, unintended shorts, contamination-driven leakage,
and missing bias-return paths. These faults should be detected by signature-based proxy tests rather than by attempting full specs.
Use simple stimuli with bounded currents and fixed windows. Convert each signature into a fast binning action for rework and root-cause correlation.
A) Open vs short vs leakage (field signatures that production can trust)
Open
An unconstrained input node drifts toward a rail or becomes environment-sensitive. Output may saturate or exhibit non-repeatable settling.
Production should treat “cannot settle into a window” as a primary signature.
Short
Differential stimulus is crushed and produces little or no response. Gain checks collapse and injected steps become ineffective.
Verify that a bounded small differential produces at least a minimum response.
Leakage / contamination
Behavior changes with humidity, touch, and time. Even with shorted inputs, drift slope can be elevated.
Guard on/off comparison or slope-under-short is a strong proxy signature.
B) Quick proxy tests (bounded, repeatable, production-friendly)
Open detection: weak bias + return-time window
Apply a very weak bias path (via a large resistor or controlled micro-current) and verify the output returns into a predictable window
within T_return. Pass: T_return < T_limit_open and Vout ∈ V_window.
Short detection: small differential step response
Inject a bounded small differential step and verify a minimum output response after settling.
Pass: |ΔVout| > Vresp_min under the declared gain code and measurement window.
Leakage detection: guard on/off comparison and slope under short
Compare offset/drift with guard enabled vs disabled, or measure drift slope while inputs are shorted.
Pass: |ΔVoff_guard| < Vguard_limit and slope_short < S_limit_leak.
Guardrail: keep test currents bounded (Iclamp < I_limit) and keep CM/headroom in-range during proxy stimuli.
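Each signature above can be expressed as a boolean check against budget-filled limits, following the pass criteria given for each proxy. The function names and the limit values in the example are placeholders.

```python
def open_fault(t_return, v_out, t_limit_open, v_window):
    """Open signature: weak bias applied, but the output does not return
    into a predictable window within T_limit_open."""
    lo, hi = v_window
    return t_return >= t_limit_open or not (lo <= v_out <= hi)

def short_fault(delta_v_out, v_resp_min):
    """Short signature: a bounded small differential step produces less
    than the minimum expected response."""
    return abs(delta_v_out) < v_resp_min

def leakage_fault(guard_delta, slope_short, v_guard_limit, s_limit_leak):
    """Leakage signature: guard on/off offset delta, or drift slope with
    inputs shorted, exceeds its budgeted limit."""
    return abs(guard_delta) >= v_guard_limit or abs(slope_short) >= s_limit_leak

# Healthy unit under hypothetical limits (fill from the system budget):
healthy = not open_fault(0.02, 0.01, t_limit_open=0.05, v_window=(-0.1, 0.1))
```

Each predicate maps one fault class to one bin, which is what enables the fast rework decisions described above.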
Use signature-based proxy tests: open faults fail return-time windows, short faults fail step response minima, and leakage faults fail guard/slope comparisons.
Keep currents bounded and log conditions to enable fast correlation and rework decisions.
Overload & Common-Mode Recovery as a Production Proxy
Recovery behavior is a high-yield, line-friendly proxy for catching “something is wrong” at the board level.
A single time-domain step can expose protection interaction, coupling, missing returns, and headroom problems without frequency sweeps.
The proxy is defined by T_recover (time to return to a declared linear window) and Residual (post-recovery offset from baseline).
Limits are budget-filled placeholders with guardbands for station and fixture uncertainty.
A) What this proxy can prove on a line
Fast detection of “board-level wrong”
A common-mode or overload step highlights clamp interaction, missing return paths, coupling into inputs, and headroom violations.
It is not a full CMRR vs frequency characterization.
Repeatable pass/fail with fixed windows
Define a linear output window and a timing window. Measure return time and residual error after recovery.
This avoids lab-only instrumentation and enables stable binning.
B) How to run the test (bounded, production-safe)
Declare the linear window: Vout ∈ V_linear_window (budget-filled placeholder).
Apply a bounded CM step or overload stimulus; keep clamp current under control.
Start timing at the stimulus edge; ignore the first t_settle region; evaluate recovery thereafter.
Measure T_recover to re-enter the linear window; then measure Residual relative to a pre-step baseline.
Repeat N times for basic repeatability screening if the use case is sensitive to intermittents.
Guardrails: I_clamp < I_limit, CM/headroom in-range, and keep recovery tests separate from gain/offset fitting windows.
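The T_recover/Residual measurement can be sketched as a scan over samples taken after the stimulus edge. The function and the example waveform are illustrative; resetting the timer when the output re-exits the window is one reasonable reading of "re-enter and stay in the linear window".

```python
def measure_recovery(samples, dt, t_settle, window, baseline):
    """T_recover = first time (after the ignored t_settle region) that the
    output enters the declared linear window and stays there; Residual =
    final value minus the pre-step baseline.
    samples: output values taken every dt seconds after the stimulus edge."""
    lo, hi = window
    t_recover = None
    for i, v in enumerate(samples):
        t = i * dt
        if t < t_settle:
            continue  # ignore the declared settle region
        if lo <= v <= hi:
            if t_recover is None:
                t_recover = t
        else:
            t_recover = None  # re-exit resets the clock
    residual = samples[-1] - baseline
    return t_recover, residual

# Hypothetical decaying tail after an overload step:
t_rec, resid = measure_recovery(
    samples=[1.0, 0.5, 0.2, 0.05, 0.02, 0.01],
    dt=1e-3, t_settle=2e-3, window=(-0.1, 0.1), baseline=0.0)
```

A unit that never re-enters the window returns `None` for T_recover, which maps naturally to a latch-suspect bin.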
C) Pass criteria and bins (budget-filled placeholders)
Pass: T_recover < T_limit and |Residual| < Res_limit, both guardbanded for station and fixture uncertainty.
Bin failures by signature: no return to the linear window (latch suspect), slow return (T_recover ≥ T_limit), and excess post-event shift (|Residual| ≥ Res_limit).
Define a linear output window, then measure return time and post-recovery residual. This time-domain proxy is sensitive to clamp interaction, coupling, missing returns, and headroom mistakes.
Temperature Strategy: Cross-Temp Spot Checks, Soak Rules, and Drift Separation
Production lines rarely run full temperature sweeps. A practical strategy is 25 °C full coverage with hot/cold spot checks
that are stable across fixtures and airflow differences.
The key is repeatability: use stability thresholds instead of fixed soak times, log window definitions,
and separate drift-like trends from low-frequency noise using short vs long windows.
A) Cross-temp spot-check plan (line-friendly coverage)
25 °C baseline for every unit
Use the same gain/offset checks and recovery proxies at 25 °C as the primary correlation anchor.
This becomes the reference point for cross-temp deltas and lot comparison.
Hot/cold spot checks for risk and lot consistency
Spot checks validate cross-temp behavior without full sweeps. Focus on new lots, new fixtures, new cleaning processes,
and any configuration with elevated field risk.
B) Soak rules (stability thresholds, not fixed time)
Declare stability by thresholds over windows. This reduces false fails caused by different airflow, fixture thermal resistance,
and chamber dynamics.
Time-to-stable is a reported metric: Time_to_stable < T_soak_limit (optional binning field).
Must log window definitions: W_short_len, W_long_len, sample_rate, and the exact stability thresholds (Δ_stable, S_stable) used by the station.
C) Drift separation (short vs long windows)
Short window: mean for decision
Use mean(W_short) to reduce random noise impact on pass/fail checks. This is the value compared to cross-temp and cross-lot limits.
Long window: slope for drift proxy
Use slope(W_long) to detect trend-like drift. A large slope indicates instability or thermal-gradient sensitivity
even when short-window averages look acceptable.
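Mean-for-decision and slope-for-drift separate cleanly in code. This sketch uses an ordinary least-squares slope over the long window as the trend estimate; the function names are illustrative.

```python
def window_mean(samples):
    """mean(W_short): decision value, reduces random-noise impact."""
    return sum(samples) / len(samples)

def window_slope(samples, dt):
    """slope(W_long): least-squares trend over the long window.
    Drift shows up as a stable directional slope; 0.1-10 Hz noise
    wanders without one."""
    n = len(samples)
    ts = [i * dt for i in range(n)]
    t_mean = sum(ts) / n
    v_mean = sum(samples) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in zip(ts, samples))
    den = sum((t - t_mean) ** 2 for t in ts)
    return num / den

# A pure ramp has slope 1 unit/s regardless of its mean level:
s = window_slope([0.0, 1.0, 2.0, 3.0], dt=1.0)
m = window_mean([0.0, 1.0, 2.0, 3.0])
```

Comparing the short-window mean against limits while alarming on the long-window slope implements the drift separation described above.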
Use stability thresholds (Δ_stable, S_stable) instead of fixed soak time. Evaluate mean over W_short for decisions and slope over W_long as a drift proxy, then compare cross-temp deltas to budget-filled limits.
Lot-to-Lot & Fixture-to-Fixture Consistency: Golden Unit and Correlation Plan
Consistency is a production system problem: station drift, fixture wear, cable changes, and instrument recalibration can look like “product drift”.
A Golden Unit and a correlation loop keep stations comparable and prevent silent baseline shifts across lots.
Treat station/fixture bias as a modeled system term. Use differential comparisons to detect out-of-family behavior and trigger recalibration or maintenance.
A) Golden Unit policy (definition, cadence, retirement)
Definition
A Golden Unit is a traceable reference assembly used to validate station/fixture health, not to characterize full datasheet behavior.
Select units that pass baseline checks and sit near the center of the normal population (not at edges).
Cadence
Prefer trigger-based checks (fixture change, probe swap, cable replacement, instrument calibration, abnormal bin spikes).
Add a periodic check as a safety net (schedule defined by factory policy).
Retirement
Golden Units age and wear. If multiple stations report consistent shifts, treat the Golden Unit as suspect and replace it.
Keep a backup Golden Unit to avoid single-point contamination of the baseline.
B) Correlation method (same unit, different stations, differential deltas)
Run the same correlation program with the same Golden Unit across Station A/B/C.
Compare deltas of key metrics (offset, gain, recovery time, residual error, leakage proxies) to estimate station/fixture bias.
ΔMetric(A,B) = Metric_A − Metric_B
Treat station bias as a system term: Bias_station → baseline tracking and drift alarms
Correlation criteria (placeholders)
|ΔMetric(station_i, station_j)| < Corr_limit
Exceeding Corr_limit triggers re-calibration; repeated exceedances trigger maintenance or station isolation (policy-defined).
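The pairwise ΔMetric comparison can be sketched directly. The function and station values are hypothetical, and Corr_limit here is a single scalar applied to one metric; a real program would keep one limit per metric.

```python
def correlation_deltas(metrics_by_station, corr_limit):
    """Pairwise DeltaMetric(i, j) for the same Golden Unit across stations;
    any |Delta| >= corr_limit flags the pair for recalibration review."""
    stations = sorted(metrics_by_station)
    flagged = []
    for i, a in enumerate(stations):
        for b in stations[i + 1:]:
            delta = metrics_by_station[a] - metrics_by_station[b]
            if abs(delta) >= corr_limit:
                flagged.append((a, b, delta))
    return flagged

# Hypothetical Golden Unit offsets per station; C is out-of-family:
offsets = {"A": 1.00e-4, "B": 1.02e-4, "C": 1.60e-4}
bad = correlation_deltas(offsets, corr_limit=3e-5)
```

Every flagged pair involves station C, so the action targets that station rather than binning product, which is the point of treating station bias as a system term.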
C) Baseline versioning and minimum logging schema
Baselines must be versioned. Any fixture rebuild, probe swap, cable replacement, or instrument recalibration should create a new baseline version
after Golden Unit correlation passes.
Must-log fields: golden_id, golden_rev, station_id, fixture_id, instrument_id, program_rev, baseline_version,
metrics (Offset_est, Gain_est, T_recover, Residual, leak_proxy), deltas (Δ vs baseline / Δ vs other stations), action_flag.
Use a Golden Unit to compare stations and fixtures by differential deltas. When |Δ| exceeds Corr_limit, trigger re-calibration or maintenance and record a new baseline version.
Throughput Engineering: Settling Time, Auto-Zero Timing, and Test Order Optimization
Throughput is determined by waiting, not by math. Optimize test order, avoid unnecessary settle time,
and prevent false fails caused by sampling the wrong auto-zero/chopper phase.
The strategy is simple: run quick-fail first, then do precision checks. Replace fixed delays with threshold-based settling rules,
and pipeline digital operations in parallel with analog settling.
A) Test order (quick-fail first, precision later)
Quick-fail: open/short/leakage signatures and recovery proxy checks (stop early on failure).
Mid-cost: 3-point gain/offset checks after basic health is proven.
Slow steps: long-window trend checks or temperature spot-checks only when required by the production plan.
Early-exit rule: any quick-fail failure skips downstream precision steps and records the failure bin immediately.
B) Settling engineering (threshold-based, not fixed delay)
After gain switching or stimulus changes, replace fixed wait with a settle monitor. Declare stable when short-window deltas fall below a threshold
for M consecutive checks, with a maximum timeout for safety.
ΔV = |mean(W_short,k) − mean(W_short,k−1)|
Stable when ΔV < Δ_settle for M consecutive checks
Timeout: time_to_settle < T_settle_max (otherwise bin as settle failure)
Practical rule: Δ_settle is budget-filled (noise + ADC variation + guardband) and may differ per gain_code.
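The threshold-based settling rule above maps directly onto a small monitor over successive short-window means. The function and example sequence are illustrative; Δ_settle and T_settle_max remain budget-filled placeholders.

```python
def settle_monitor(window_means, delta_settle, m_consecutive, t_check, t_max):
    """Declare stable when |mean_k - mean_(k-1)| < delta_settle for
    m_consecutive checks; time out at t_max (settle-failure bin).
    window_means: successive mean(W_short) values, one per check interval."""
    streak = 0
    for k in range(1, len(window_means)):
        t = k * t_check
        if t > t_max:
            break
        dv = abs(window_means[k] - window_means[k - 1])
        streak = streak + 1 if dv < delta_settle else 0
        if streak >= m_consecutive:
            return t  # time_to_settle
    return None  # never stabilized: bin as settle failure

# Hypothetical settling tail after a gain switch:
t_stable = settle_monitor([1.0, 0.5, 0.3, 0.295, 0.293],
                          delta_settle=0.01, m_consecutive=2,
                          t_check=1e-3, t_max=1.0)
</n```

Returning `None` on timeout keeps the settle failure as an explicit bin instead of letting a fixed delay silently pass an unsettled unit.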
C) Zero-drift / chopper timing (avoid wrong sampling phase)
Auto-zero and chopping introduce internal timing states. Sampling at a single unlucky phase can create false offset or false drift decisions.
Production should either align sampling to a stable phase or average across phases.
Phase-aligned: sample using a declared ready/state indicator when available.
Phase-averaged: take N samples spanning multiple internal phases; use median/trimmed mean for robustness.
Must-log fields: chopper_mode_flag, sampling_window_len, N_samples, statistic (median/mean), and program_rev for cross-station comparability.
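Phase-averaged sampling with a robust statistic can be sketched as follows. The function name and the 20% trim fraction are illustrative choices, not a vendor recommendation; the outlier in the example stands in for a sample caught at an unlucky chopper phase.

```python
def phase_robust_offset(samples, trim_fraction=0.2):
    """Reduce N samples spanning multiple internal chopper/auto-zero
    phases with a trimmed mean, so a single unlucky phase cannot
    dominate the offset decision."""
    s = sorted(samples)
    k = int(len(s) * trim_fraction)
    core = s[k:len(s) - k] if k else s
    return sum(core) / len(core)

# Ripple-contaminated samples with one phase-aliased outlier (25.0):
vals = [10.1, 9.9, 10.0, 10.2, 9.8, 25.0, 10.0, 9.9, 10.1, 10.0]
est = phase_robust_offset(vals)  # near 10, outlier trimmed away
```

Logging `N_samples`, the trim statistic, and the window length alongside the result keeps the estimate comparable across stations, as the must-log fields above require.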
D) Parallelization and pipeline opportunities
Digital config/readback and logging can run during analog settling windows.
Temperature reads can be scheduled while the next stimulus is armed.
Multi-channel stations can pipeline channels when shared stimulus resources do not couple between channels.
Put quick-fail tests first, replace fixed delays with threshold-based settling, and overlap digital work (config/readback/temp/log) with analog settling to reduce cycle time.
Engineering Checklist for Production Readiness (Board Hooks + Test Hooks)
Production readiness depends on “hooks”: predictable injection points, stable probe contact, controllable short/guard states, and versioned baselines.
The list below is written as Action → Purpose → Pass criteria so it can be copied into a design review checklist.
Numeric limits are placeholders (Voff_limit / Corr_limit / Δ_settle / T_limit). Fill them from system noise, drift budgets, and yield guardbands.
A) Injection points & measurement points (minimum closed loop)
Differential injection pads (VIN+ / VIN−)
Action: Place dedicated SMT probe pads near the input zone; keep a short, symmetric path to the INA pins.
Purpose: Enable 2-point/3-point gain-offset checks and controlled wiring/leakage diagnostics.
Pass: Stimulus response is monotonic and repeatable; residual after 2-point fit < Resid_limit.
Reference / excitation test points (VREF / VEXC / Sense)
Action: Add a VREF test pad and (if applicable) excitation + remote-sense pads.
Purpose: Prevent reference drift from being misread as INA drift; support ratiometric checks.
Pass: |ΔVREF| (vs baseline) < Vref_limit during the test window.
These examples exist to speed up datasheet lookup for production hooks. Verify voltage, leakage, temperature grade, and footprint for the target platform.
Test points / fixture interface
SMT test point: Keystone 5015 (PC TEST POINT, miniature)
Precision DAC for controlled stimulus (example): TI DAC60501 (verify required resolution and drift)
A production-ready INA front-end exposes controllable hooks: differential injection, VIN short, guard on/off, Vref observation, output isolation (Riso), and robust test pads for repeatable fixtures.
INA Selection Notes for Self-Test (What to Ask Vendors / What Features Matter)
For self-test and production test, a “great datasheet number” is not enough. Priority goes to behaviors that production can prove:
recovery timing, leakage stability, repeatable gain states, and auditable configuration/readback.
Vendor questions should map to Feature → Why it matters → How it is tested so acceptance criteria are enforceable on the line.
Vendor question checklist (self-test focused)
Recovery behavior (startup / overload / CM step)
Ask: Trecover vs load, gain, and temperature; residual after recovery; recommended measurement setup.
These examples illustrate categories that commonly support production-oriented verification (recovery behavior, low drift, auditable gain/control). Confirm the exact grade and test strategy.
Classic INA (single resistor gain)
TI INA826 (precision instrumentation amplifier, wide supply)
Zero-drift / chopper INA (cross-temp stability)
TI INA333 (low-power, zero-drift instrumentation amplifier)
High-speed / low-latency INA (recovery + settling)
ADI AD8421 (high-speed instrumentation amplifier; strong CMRR)
Programmable gain + test capability (production features)
TI PGA280 (digitally-controllable gain; includes signal-integrity / test-oriented features)
Keep selection questions “testable”: every vendor claim should map to a production proxy, a logging field, and a pass criterion placeholder.
These FAQs address common long-tail production issues without expanding the scope of the main sections.
Every answer follows a fixed 4-line structure: Likely cause / Quick check / Fix / Pass criteria.
Numeric limits are placeholders (Voff_limit, Corr_limit, T_limit, etc.). Fill them using the system error budget and yield guardbands.
Why does a perfect bench gain/offset fail on the production line?
Likely cause: Fixture loading, probe contact resistance, or leakage paths change the effective test condition (not the INA core).
Quick check: Re-test the same unit with (A) fresh probe insertion and (B) a different station; compare ΔVoff and ΔGain across A/B.
Fix: Add output isolation (e.g., Vishay CRCW060349R9FKEA as Riso placeholder), use stable test pads (Keystone 5015) and replace worn pogo pins (e.g., Mill-Max 0908-4-15-20-75-14-11-0).
Pass criteria: Repeatability spread (N re-insertions) < Repeat_limit AND station correlation |ΔMetric(A,B)| < Corr_limit.
How many calibration points are “enough” for production (1, 2, or 3)?
Likely cause: Too few points fail to separate offset/gain errors from saturation/clamp behavior and fixture-induced non-idealities.
Quick check: Fit offset+gain using 2 points, then evaluate the residual at a 3rd point (near full-scale) to detect clamp/settling artifacts.
Fix: Use 2-point as default; add a 3rd “consistency” point using a controlled stimulus path (e.g., DAC injection TI DAC60501 + low-leakage switch ADI ADG1201 or a precision resistor network).
Pass criteria: |Residual(3rd)| < Resid_limit AND |Gain_err| < Ge_limit AND |Voff| < Voff_limit.
Why does touching the cable change the “offset” result in a fixture?
Likely cause: Humidity/contamination leakage, triboelectric cable effects, or an incomplete bias-return path makes the input condition unstable.
Quick check: Short VIN+/VIN− on the PCB and repeat the “touch test”; then toggle guard on/off and compare ΔVoff(guard_on, guard_off).
Fix: Implement a driven guard with a removable link (0Ω Panasonic ERJ-2GE0R00X), and use low-leakage clamp elements where required (e.g., Nexperia BAV199 as a clamp-helper starting point).
Pass criteria: |ΔVoff(touch/no-touch)| < Touch_delta_limit AND |ΔVoff(guard_on/off)| < Δ_guard_limit.
How do I tell leakage drift from 0.1–10 Hz noise in a quick test?
Likely cause: Leakage creates a directional trend (slope) while 0.1–10 Hz noise creates non-directional wander within a bounded distribution.
Quick check: With input shorted, compute (A) slope over a long window and (B) RMS over a short window; compare guard on/off to amplify leakage signatures.
Fix: Add/enable guard and a defined bias-return path; if switching is needed for A/B tests, use a low-leakage switch (e.g., ADI ADG1201) instead of generic CMOS muxes.
Pass criteria: |Slope| < S_limit AND RMS(short) < Noise_limit AND |ΔMetric(guard_on/off)| < Δ_guard_limit.
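The slope-vs-RMS discrimination can be sketched directly; the sample sets below are synthetic (a pure drift ramp and a zero-mean wander) purely to show the two signatures separating:

```python
def slope_per_s(samples, fs):
    """Least-squares slope (V/s) over a long window: a directional trend
    is the leakage-drift signature."""
    n = len(samples)
    t = [i / fs for i in range(n)]
    tm = sum(t) / n
    ym = sum(samples) / n
    num = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, samples))
    den = sum((ti - tm) ** 2 for ti in t)
    return num / den

def rms(samples):
    """RMS about the mean over a short window: non-directional wander
    is the 0.1-10 Hz noise signature."""
    m = sum(samples) / len(samples)
    return (sum((s - m) ** 2 for s in samples) / len(samples)) ** 0.5

# Synthetic signatures: a 10 uV/s ramp vs a bounded zero-mean wander.
drift = slope_per_s([1e-6 * i for i in range(100)], 10.0)
noise = rms([2e-6, -2e-6, 2e-6, -2e-6])
```

Run the same pair of computations with guard on and guard off; leakage moves `drift` far more than `noise`.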
Chopper INA: why do readings depend on when I sample?
Likely cause: Chopper/auto-zero ripple and internal phase timing alias into the measurement when sampling is not phase-consistent.
Quick check: Repeat the same measurement at different sample delays after a trigger; plot the mean vs delay to reveal phase sensitivity.
Fix: Use phase-aligned sampling when supported, or average over an integer number of ripple periods; for zero-drift INAs (e.g., TI INA333), enforce a fixed timing window per gain/state.
Pass criteria: |Mean(delay_i) − Mean(delay_j)| < Phase_limit across the delay sweep AND re-test spread < Repeat_limit.
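The "average over an integer number of ripple periods" fix can be sketched as follows. The chopper frequency and sample rate here are arbitrary assumptions (50 kHz ripple, 1 MS/s); use the actual device and ADC values:

```python
import math

F_CHOP = 50_000.0    # assumed chopper ripple frequency (device-specific)
FS = 1_000_000.0     # assumed ADC sample rate

def samples_per_k_periods(fs, f_ripple, k=4):
    """Sample count covering exactly k ripple periods; averaging over an
    integer period count cancels the ripple component."""
    return round(k * fs / f_ripple)

def ripple_rejecting_mean(samples, fs, f_ripple, k=4):
    n = samples_per_k_periods(fs, f_ripple, k)
    window = samples[:n]
    return sum(window) / len(window)

# Synthetic check: DC plus chopper ripple; the windowed mean recovers DC
# regardless of where the window starts relative to the ripple phase.
dc = 1.0e-3
sig = [dc + 1e-4 * math.sin(2 * math.pi * F_CHOP * i / FS) for i in range(200)]
mean_est = ripple_rejecting_mean(sig, FS, F_CHOP, k=4)
```

Averaging a non-integer period count leaves a phase-dependent residual, which is exactly the "reading depends on when I sample" symptom.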
What’s the fastest way to detect input open/short without precision gear?
Likely cause: Open inputs float into bias/leakage-driven rails; shorts clamp the differential so injected stimulus produces no response.
Quick check: Apply a weak bias injection and observe the output response/time constant; then apply a small differential step and confirm a non-zero gain response.
Fix: Implement a bias-injection path using a high-value precision resistor (e.g., Vishay TNPW080510M0BEEA 10 MΩ placeholder) and a low-leakage switch (ADI ADG1201) for on/off diagnostics.
Pass criteria: Open signature: saturates within T_open_limit; Short signature: |ΔOUT(step)| < Resp_min; Good unit: |ΔOUT(step)| ≥ Resp_min.
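The open/short/good decision uses only two cheap observations, so it reduces to a tiny classifier. The limit values are placeholders for Resp_min and T_open_limit; the function name and inputs are this sketch's own:

```python
RESP_MIN_V = 0.010      # stands in for Resp_min
T_OPEN_LIMIT_S = 0.050  # stands in for T_open_limit

def classify_input(step_response_v, sat_time_s, rail_hit):
    """Classify from (a) weak bias injection: did the output run to a rail,
    and how fast; (b) small differential step: is there a gain response?"""
    if rail_hit and sat_time_s < T_OPEN_LIMIT_S:
        return "open"    # floating input driven to a rail by bias injection
    if abs(step_response_v) < RESP_MIN_V:
        return "short"   # differential clamped; stimulus produces no response
    return "good"
```

Ordering matters: test the open signature first, since a railed output also shows no useful step response.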
How to set guardband without killing yield?
Likely cause: Guardbands are set from typical numbers instead of measurement uncertainty + fixture variation + worst-case drift.
Quick check: Use a golden unit across stations and days to estimate σ_station and σ_time; compare to the current guardband margin.
Fix: Split limits into (device limit) + (station allowance); reduce station σ via probe maintenance (e.g., replace Mill-Max 0908-… pins) and defined settling rules (ΔV < Δ_settle for M checks).
Pass criteria: False reject rate < FR_limit AND yield ≥ Y_target with a guardband policy documented (k·σ model).
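The k·σ guardband policy splits the limit exactly as the Fix describes. The sigmas below are invented golden-unit estimates; in practice they come from the cross-station, multi-day runs in the Quick check:

```python
def guardband(device_limit, sigma_station, sigma_time, k=3.0):
    """Production test limit = device limit minus a k-sigma station
    allowance, combining independent station and time variation."""
    sigma_meas = (sigma_station ** 2 + sigma_time ** 2) ** 0.5
    return device_limit - k * sigma_meas

# Hypothetical numbers: 100 uV device limit, 5 uV station sigma, 3 uV time sigma.
test_limit = guardband(100e-6, 5e-6, 3e-6, k=3.0)
```

Shrinking `sigma_station` (probe maintenance, settling rules) widens `test_limit` back toward the device limit, which is how yield is recovered without loosening the device spec.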
Why does station-to-station correlation drift over days?
Likely cause: Fixture aging (probe wear/contamination), baseline reference drift, or unlogged program/fixture revisions change the measurement system.
Quick check: Run the golden unit on Station A/B/C daily; log ΔMetric vs baseline_version and fixture_id to separate drift from device variation.
Fix: Schedule probe replacement (e.g., Mill-Max 0908-…), add stable test pads (Keystone 5015), and lock a stimulus reference (e.g., ADI ADR4525 as a low-drift reference option for the fixture).
Pass criteria: Golden-unit |ΔMetric| vs baseline < Corr_limit on every station AND day-to-day drift < Drift_limit.
How do I define soak completion without wasting time?
Likely cause: Fixed soak time ignores fixture airflow and thermal mass; measurements begin before the system reaches a stable thermal state.
Quick check: Track a rolling average of the key metric and declare “stable” when |ΔMetric| stays below a threshold for a continuous duration.
Fix: Use a stability-based rule and log the local temperature near the analog area (e.g., TI TMP117 as an accurate temperature sensor option for correlation).
Pass criteria: Soak complete when |ΔMetric| < Δ_stable for t ≥ t_stable AND |ΔTemp| < ΔT_stable.
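The stability-based soak rule is a rolling check on consecutive deltas. The log values and thresholds below are illustrative only (stand-ins for Δ_stable, ΔT_stable, and t_stable expressed as a sample count):

```python
def soak_complete(metric_log, temp_log, d_stable, dt_stable, n_stable):
    """Declare soak done when the key metric AND local temperature have
    both moved less than their thresholds for n_stable consecutive samples."""
    if len(metric_log) < n_stable + 1:
        return False
    for i in range(-n_stable, 0):
        if abs(metric_log[i] - metric_log[i - 1]) >= d_stable:
            return False
        if abs(temp_log[i] - temp_log[i - 1]) >= dt_stable:
            return False
    return True

# Hypothetical logs converging toward a stable thermal state.
metric_log = [10.0, 9.0, 8.5, 8.3, 8.29, 8.285, 8.284]
temp_log = [25.0, 27.0, 28.0, 28.4, 28.45, 28.47, 28.48]
done = soak_complete(metric_log, temp_log, d_stable=0.02, dt_stable=0.1, n_stable=3)
```

Unlike a fixed soak time, this rule automatically adapts to fixture airflow and thermal mass, and the logged temperature (e.g., from the TMP117 near the analog area) provides the correlation evidence.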
How to validate recovery/overload behavior as a proxy test?
Likely cause: Large common-mode steps and overloads expose board-level issues (loading, clamp conduction, stability) that simple DC checks miss.
Quick check: Apply a repeatable CM step and measure Trecover plus residual offset after recovery at a fixed sampling window.
Fix: Ensure output isolation (e.g., CRCW060349R9FKEA as Riso placeholder) and a stable observation point; if bandwidth demands require, use a high-speed INA class (e.g., ADI AD8421 as a reference device).
Pass criteria: Trecover < T_limit AND |Residual_after| < E_limit AND re-test spread < Repeat_limit.
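Extracting Trecover and the residual from a captured step record can be sketched as below. The record, sample rate, and error band are invented for illustration; the definition of "recovered" used here (last sample outside the band) is one common convention, not the only one:

```python
def t_recover(samples, fs, step_index, final_value, err_band):
    """Time from the CM step until the output stays inside err_band of its
    settled value for the remainder of the record."""
    last_out = step_index - 1
    for i in range(step_index, len(samples)):
        if abs(samples[i] - final_value) > err_band:
            last_out = i
    return (last_out + 1 - step_index) / fs

# Hypothetical capture at 1 kS/s: step applied at index 0, settling to 0 V.
rec = [1.0, 0.5, 0.2, 0.05, 0.008, 0.005, 0.004, 0.004]
tr = t_recover(rec, 1000.0, 0, 0.0, 0.01)
residual_after = rec[-1] - 0.0   # residual offset after recovery
```

Compare `tr` to T_limit, `abs(residual_after)` to E_limit, and the spread of `tr` over re-tests to Repeat_limit, using a fixed sampling window each time.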
What minimal data must be logged per unit for traceability?
Likely cause: Missing station/fixture/program context makes correlation failures look like device drift.
Quick check: Audit test records for completeness and confirm the ability to reproduce a failure using the same baseline_version and fixture_id.
Fix: Require the minimum schema: serial, lot/date code, station_id, fixture_id, program_rev, baseline_version, temperature, gain/state code, VREF, Voff, Gain_err, residual, Trecover (if used), and final bin.
Pass criteria: 100% records include all required fields AND missing fields are blocked from “PASS” disposition.
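The "missing fields block PASS" rule can be enforced mechanically at disposition time. Field names below follow the minimum schema listed in the Fix; the function itself is this sketch's own:

```python
REQUIRED_FIELDS = [
    "serial", "lot_code", "station_id", "fixture_id", "program_rev",
    "baseline_version", "temperature", "gain_state", "vref",
    "voff", "gain_err", "residual", "final_bin",
]

def disposition(record):
    """A record missing any required field can never be dispositioned PASS."""
    missing = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    if missing:
        return "BLOCKED", missing
    return record["final_bin"], []

# Hypothetical records: one complete, one with fixture_id unlogged.
good_record = {
    "serial": "SN001", "lot_code": "L2401", "station_id": "A",
    "fixture_id": "FX3", "program_rev": "r7", "baseline_version": "b12",
    "temperature": 25.1, "gain_state": "G10", "vref": 2.5,
    "voff": 1.2e-5, "gain_err": 3e-4, "residual": 2e-4, "final_bin": "PASS",
}
bad_record = {**good_record, "fixture_id": None}
```

Blocking at disposition (rather than auditing later) is what guarantees the "100% records complete" pass criterion by construction.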
When should I stop relying on board-level calibration and fix hardware?
Likely cause: Calibration cannot compensate unstable leakage, clamp conduction, or fixture-dependent settling that changes with time/humidity/load.
Quick check: Compare metrics under controlled A/B toggles: guard on/off, load light/heavy, and re-insertion; instability indicates a hardware/fixture root cause.
Fix: Redesign the leakage/clamp paths and test hooks: add guard link (0Ω ERJ-2GE0R00X), use low-leakage switching (ADG1201) and clamp helpers (BAV199), and stabilize output loading with Riso (CRCW060349R9FKEA placeholder).
Pass criteria: After hardware/fixture changes, proxy metrics remain stable: |ΔMetric(A/B)| < Δ_limit AND correlation |Δ| < Corr_limit across days.
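The calibrate-vs-fix-hardware decision from the A/B toggles can be summarized in one function. The toggle names and per-toggle limits are placeholders (stand-ins for Δ_limit per toggle), not recommended values:

```python
TOGGLE_LIMITS = {          # placeholder per-toggle Δ_limit values
    "guard": 10e-6,        # guard on/off
    "load": 20e-6,         # load light/heavy
    "reinsertion": 15e-6,  # fresh probe insertion
}

def root_cause(deltas):
    """deltas: |ΔVoff| observed for each controlled A/B toggle.
    Any toggle exceeding its limit points to a hardware/fixture cause
    that board-level calibration cannot stably compensate."""
    unstable = [k for k, d in deltas.items() if d >= TOGGLE_LIMITS[k]]
    return ("fix_hardware" if unstable else "calibration_ok"), unstable

# Hypothetical result: guard toggle moves Voff far beyond its limit,
# so the leakage/guard path needs a hardware fix, not a wider cal table.
verdict, culprits = root_cause({"guard": 25e-6, "load": 5e-6, "reinsertion": 3e-6})
```

The returned toggle list also tells you which path to redesign first (guard link, clamp elements, or Riso/loading).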