
Self-Test & Production Test for Instrumentation Amplifiers (INA)


Self-test and production test for INAs are about proving “the measurement chain is correctly built and stays stable” with fast, repeatable checks, not about re-measuring every datasheet spec on the line.

Use controllable loopback/injection hooks, proxy recovery tests, and correlation + logging to keep results consistent across stations, temperature, and lots.

Why Self-Test Exists in INA Products (Scope & Success Criteria)

Self-test in an instrumentation-amplifier (INA) front-end is not a datasheet re-qualification flow. It is a production-proof method that demonstrates chain integrity, calibratability, and survivability within a bounded time and uncertainty.

The goal is to catch wrong wiring, hard faults, and drift-to-unusable conditions before shipment, using tests that remain repeatable across stations, fixtures, and lots.

A) Scope definition: “prove usable,” not “prove everything”

In scope: offset/gain sanity, open/short signature, gross leakage proxy, overload/CM recovery proxy, configuration readback (if PGA/digital).
Out of scope here: noise spectrum sweeps, full AC CMRR vs frequency, extreme distortion corners, EMI root-cause characterization.
  • Self-test passes should remain valid under defined limits: input common-mode (CM) range, stimulus level, settling window, and fixture uncertainty.
  • “Specification compliance” is not claimed unless measurement uncertainty is demonstrably below the guardbanded requirement.

B) Success triangle: Coverage × Throughput × Correlation

Coverage (fault-mode coverage)
Coverage is defined by failure modes (open/short/leakage/saturation/mux/config), not by “how many datasheet rows were tested.” Each production check must map to at least one fault class with a clear signature.
Throughput (cycle time)
Cycle time includes not only measurement time but also settling, auto-zero/chopper timing, and any temperature soak. Apply “fast-fail-first” ordering: catch wiring/fault signatures before precision steps.
Correlation (station/fixture/lot repeatability)
A passing limit must be stable across fixtures, stations, and time. Use golden units and correlation checks to ensure that shifts are detected as process drift, not mis-labeled as product failures.
Pass-criteria writing style: use placeholders tied to budgets (not arbitrary numbers). Example: |Voff| < Voff_limit, |Gain_err| < Ge_limit, Trecover < T_limit.
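The placeholder style above can be expressed directly as code. A minimal Python sketch, where `voff_limit`, `ge_limit`, and `t_limit` stand for the budget-derived placeholders (all values here are illustrative, not recommendations):

```python
# Sketch of placeholder-style pass criteria: |Voff| < Voff_limit,
# |Gain_err| < Ge_limit, Trecover < T_limit. Limits are assumed to be
# filled from the system error budget, not chosen arbitrarily.

def check_pass(measured_offset, gain_err, t_recover,
               voff_limit, ge_limit, t_limit):
    """Return (passed, failures) for the three placeholder criteria."""
    failures = []
    if abs(measured_offset) >= voff_limit:
        failures.append("offset")
    if abs(gain_err) >= ge_limit:
        failures.append("gain")
    if t_recover >= t_limit:
        failures.append("recovery")
    return (not failures, failures)
```

Keeping the limits as parameters (rather than constants) makes the same check reusable across gain codes and temperature points, with each station logging the limit set it applied.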

C) Define the test scope by failure modes (not by specs)

A production-ready self-test plan starts from “what can go wrong” on real boards and fixtures, then chooses the smallest set of checks that uniquely exposes those faults. Typical fault classes and why they matter:

  • Open / short (wiring, connector, probe contact): causes saturation, no response to injection, or polarity reversal signatures.
  • Leakage (flux/moisture/ESD clamp leakage): manifests as drift under “short input,” touch sensitivity, or guard on/off deltas.
  • Saturation / recovery abnormal (clamps, headroom, loading): creates slow tails, memory, or failure to re-enter linear region.
  • Mux/config fault (PGA/digital control): wrong gain code, stale configuration, channel mix-up.

D) Minimal traceable data (the “must log” set)

Logging should be just enough to reproduce failures and separate product issues from station drift. A practical minimal schema:

  • Traceability: serial, lot/date code, firmware/cal version, timestamp
  • Station identity: station_id, fixture_id, operator/shift (optional), golden-unit delta (if used)
  • Test condition: temp_point, supply mode, gain code/mode, stimulus_id (0/mid/nearFS), wait/settle window
  • Key results: measured_offset, measured_gain, residual, Trecover (proxy), pass/fail + bin
Rule: log the conditions that explain the number. Without conditions, the number is not comparable across lines or lots.
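One way to keep conditions and results glued together is to emit each test as a single record. A minimal sketch of such a record builder, with field names mirroring the schema above (the JSON-lines serialization is an assumption, not a requirement of the schema):

```python
# Sketch of a minimal log record for the "must log" set. Emitting one
# self-describing record per test keeps the conditions that explain the
# number attached to the number itself.
import json
import time

def make_log_record(serial, station_id, fixture_id, temp_point, gain_code,
                    stimulus_id, measured_offset, measured_gain, residual,
                    t_recover, passed, bin_code):
    return json.dumps({
        "serial": serial, "timestamp": time.time(),
        "station_id": station_id, "fixture_id": fixture_id,
        "temp_point": temp_point, "gain_code": gain_code,
        "stimulus_id": stimulus_id,
        "measured_offset": measured_offset, "measured_gain": measured_gain,
        "residual": residual, "t_recover": t_recover,
        "pass": passed, "bin": bin_code,
    })
```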
[Figure: Test coverage map — a matrix linking common failure modes (open, short, leakage, saturation/slow recovery, mux/routing fault) to production checks (short-input sanity, electrical loopback, calibration injection, CM-step recovery proxy, temp spot-check/readback). Coverage is defined by fault modes, not by datasheet rows; each check targets a fault signature, and detailed characterization belongs to lab pages.]
Test coverage should be expressed as fault-mode coverage mapped to a small set of repeatable production checks (loopback, injection, recovery proxy, temperature spot-check).

Test Taxonomy: What You Can Prove on a Line (and What You Cannot)

A production line can prove bounded correctness (within defined conditions and uncertainty). It cannot efficiently prove full-frequency performance or corner-case analog limits without sacrificing cost and throughput.

The practical strategy is: prove the essentials on the line, and use proxy tests (fast, repeatable signatures) to catch coupling/layout/protection disasters that would otherwise require long sweeps to detect.

A) Line-prove vs Lab-characterize (decision boundary)

Line-prove (production)
  • DC sanity: offset/gain under defined CM, stimulus, and settling
  • Wiring integrity: open/short/polarity/mux routing signatures
  • Survivability proxies: overload recovery and CM-step recovery time
  • Control path: configuration readback and gain-code correctness (PGA/digital)
Lab-characterize (engineering)
  • Wideband noise density and integrated noise vs bandwidth
  • AC CMRR/PSRR vs frequency and wiring-dependent parasitic imbalance
  • Distortion/IMD corners and sweep-based linearity validation
  • EMI root-cause isolation and coupling path attribution

A “line-prove” claim is valid only if: (1) test conditions are fixed and logged, (2) measurement uncertainty is bounded, and (3) a guardbanded limit is applied to prevent false passes.

B) Why “cannot” is real: constraints that defeat line tests

Uncertainty dominates
If station/fixture/probe uncertainty is comparable to the effect being validated, production results become uncorrelatable. The right action is to switch to a proxy signature or move the metric to characterization.
Cycle time collapses
Sweeps (frequency, amplitude, temperature) multiply time by settling and repetition. A line must prioritize quick signatures and spot checks over long, high-resolution scans.
Equipment and maintenance burden
Low-noise sources, precision loads, and calibrated analyzers increase downtime and recalibration needs. When maintenance dominates, correlation drifts and yield decisions become unreliable.
Observability is limited
Probe loading, parasitics, and ground references can mask the true limitation. Production tests should avoid measurement setups that inject more error than the device under test.

C) Proxy tests: fast signatures that protect yield and reliability

Proxy tests convert “hard-to-prove” performance risks into short, repeatable signatures. Each proxy should specify: objective, stimulus, observable, and pass criterion placeholders.

1) CM-step recovery proxy
Objective: flag coupling/layout/protection failures that cause long tails or re-centering errors.
Stimulus: controlled common-mode step within the linear CM range.
Observe: Trecover and residual error after recovery window.
Pass: Trecover < T_limit; residual < E_limit (limits derived from system settling and error budget).
2) Overload recovery proxy
Objective: catch clamp/network issues and output-stuck behaviors under near-rail events.
Stimulus: near-full-scale differential step (bounded by safe input current).
Observe: time back to linear region and post-event offset shift.
Pass: no latch; return-to-linear < T_limit; shift < Vshift_limit.
3) Leakage sensitivity proxy
Objective: detect contamination/moisture/guard failures that produce time-varying offsets.
Stimulus: short input, then toggle guard/bias return or apply a tiny bias step.
Observe: drift slope over a fixed window and guard on/off delta.
Pass: slope < S_limit; delta < D_limit (limits set by allowable drift during measurement session).
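The leakage-sensitivity proxy above reduces to two numbers: a drift slope over a fixed window and a guard on/off delta. A minimal sketch, assuming equally spaced samples and hypothetical limit values:

```python
# Sketch of the leakage-sensitivity proxy evaluation: least-squares drift
# slope under shorted inputs, plus guard on/off mean delta. s_limit and
# d_limit are assumed budget-filled placeholders.

def drift_slope(samples, dt):
    """Least-squares slope of equally spaced samples (volts per second)."""
    n = len(samples)
    t = [i * dt for i in range(n)]
    t_mean = sum(t) / n
    s_mean = sum(samples) / n
    num = sum((ti - t_mean) * (si - s_mean) for ti, si in zip(t, samples))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

def leakage_proxy_pass(samples_guard_on, samples_guard_off, dt,
                       s_limit, d_limit):
    slope = drift_slope(samples_guard_on, dt)
    delta = abs(sum(samples_guard_on) / len(samples_guard_on)
                - sum(samples_guard_off) / len(samples_guard_off))
    return abs(slope) < s_limit and delta < d_limit
```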

D) How to write acceptance limits without guessing numbers

  • E_limit is derived from the post-calibration system error budget (including ADC quantization and reference drift).
  • T_limit is derived from the allowed settling time inside the sampling window (after gain change, CM step, or overload).
  • S_limit is derived from allowable drift during the measurement session and the intended averaging strategy.
Guardband rule: set the production limit tighter than the requirement by an amount that covers station uncertainty and drift (limit = requirement − guardband).
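The guardband rule can be sketched in a few lines. Combining station uncertainty and drift by root-sum-square is an assumption here; some factories use linear addition, which is more conservative:

```python
# Sketch of guardbanded limit derivation: limit = requirement - guardband,
# where the guardband covers station uncertainty and drift. The RSS
# combination is an assumption, not the only valid policy.
import math

def production_limit(requirement, station_uncertainty, drift_allowance):
    guardband = math.sqrt(station_uncertainty**2 + drift_allowance**2)
    if guardband >= requirement:
        # Uncertainty swamps the effect: switch to a proxy signature or
        # move the metric to lab characterization.
        raise ValueError("guardband exceeds requirement")
    return requirement - guardband
```

The `ValueError` branch encodes the earlier rule: if the guardband consumes the whole requirement, the metric cannot be line-proved at all.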
[Figure: Can/can’t checklist — production proves bounded correctness (offset/gain sanity, open/short/routing, recovery proxies, leakage sensitivity); the lab characterizes full sweeps (noise spectrum, AC CMRR vs frequency, distortion corners, EMI root-cause isolation). Proxy tests bridge the gap: fast signatures with guardbanded limits.]
Use production tests to prove bounded correctness and fault signatures; reserve sweeps and corner characterization for lab validation. Deploy proxy tests (recovery, leakage sensitivity) to catch high-impact integration failures without collapsing cycle time.

Loopback Architectures: Electrical Loopback vs Stimulus Loopback vs Control Loopback

Loopback is an observability design: it creates repeatable conditions that expose wiring faults, gain/offset sanity, and control-path correctness without turning production test into full characterization.

A production-ready loopback plan must specify: what path is closed, what fault signatures it proves, what it cannot prove, and how to avoid false fails caused by clamps, switch leakage, or saturation behavior.

A) Three loopback classes (what they close and what they prove)

Electrical loopback
  • Closes: output → (Rlarge + switch) → input
  • Proves: open/short, wrong polarity, gross gain errors, output-drive anomalies
  • Main risks: clamp conduction (back-drive), switch leakage/charge injection, saturation lock-in
  • Best use: fast sanity + fail-fast ordering
Stimulus loopback
  • Closes: known stimulus → injection network → input
  • Proves: gain/offset calibration validity, residual checks, linearity proxy points
  • Main risks: DAC/reference uncertainty, settling dependence, CM/headroom violations
  • Best use: calibration + traceable production metrics
Control loopback
  • Closes: configure → readback → status/identity verification
  • Proves: register integrity, gain-code correctness, channel identity, interface reliability
  • Main risks: readback ≠ analog correctness; channel-mapping mistakes can hide faults
  • Best use: always first (fail-fast), especially for PGA/digital INAs
Guardrail: loopback must be designed to avoid test-induced faults. Keep clamp current bounded (Iclamp < I_limit), keep paths symmetrical, and separate saturation recovery checks from gain/offset checks.

B) Practical risks and how to prevent false fails

Clamp conduction during electrical loopback

Back-driving the input through the output can forward-bias input protection and create a fake offset or slow tail. Use large, symmetric loopback resistors and a bounded stimulus that keeps the INA in its linear region.

Switch leakage and charge injection

Leakage can dominate microvolt-level checks, especially at high source impedance. Treat switch leakage as a budgeted error term, and verify with short-input baselines before and after toggling.

Saturation creates “misleading pass/fail”

If the loopback condition drives rails, gain/offset becomes undefined until recovery completes. Separate the tests: first verify recovery (Trecover), then run gain/offset checks within the settled window.

C) Minimal production sequence (loopback-centered)

  1. Control loopback: write configuration, read back image, verify channel identity (PGA/digital).
  2. Short-input baseline: establish offset baseline and drift slope under a fixed window.
  3. Electrical loopback: confirm chain integrity and gross gain sanity without entering clamp conduction.
  4. Stimulus loopback: run 1–2 point stimulus for gain/offset, then evaluate residual for robustness to stimulus uncertainty.
  5. Recovery proxy (optional): apply a bounded overload/CM step and verify Trecover < T_limit.
  6. Log + bin: store conditions + key results for correlation (station/fixture/lot/time).
Must-log fields (minimum): station_id, fixture_id, temp_point, gain_code, stimulus_id, measured_offset, measured_gain, residual, Trecover, pass/fail/bin.
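The six-step sequence above can be sketched as a fail-fast runner: each step is a callable returning (passed, bin_code), and the first failure stops the sequence and records which step failed. The step implementations here are hypothetical station hooks:

```python
# Sketch of the loopback-centered sequence with fail-fast ordering.
# Steps are (name, callable) pairs; each callable returns (passed, bin).

def run_sequence(steps):
    """Run steps in order; stop at the first failure and report its bin."""
    for name, step in steps:
        passed, bin_code = step()
        if not passed:
            return {"pass": False, "failed_step": name, "bin": bin_code}
    return {"pass": True, "failed_step": None, "bin": "PASS"}
```

For example, `run_sequence([("control_loopback", f1), ("short_input", f2), ...])` never spends time on stimulus loopback if the configuration readback already failed.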
[Figure: Three loopback topologies for INA production tests — electrical loopback (output → switch + Rlarge → input), stimulus loopback (DAC/reference through injection resistors Rinj into the INA/ADC chain), and control loopback (MCU configuration readback of ID/code/status over SPI/I²C). Pass limits are defined via budgets and guardbands.]
Electrical loopback supports fast chain-integrity checks; stimulus loopback supports calibrated gain/offset with residual-based robustness; control loopback validates configuration and channel identity before analog measurements.

Calibration Injection Paths: How to Inject a Known Differential Signal Safely

Injection enables production calibration and traceable checks by applying a known differential stimulus to the INA input path. The injection plan must preserve system realism (coverage) while keeping common-mode headroom and clamp current bounded.

A production-worthy injection design specifies: objective, source type, connection point, settling window, and risk flags (leakage sensitivity, CM/headroom, clamp interaction).

A) Start from objectives (objective → stimulus → observable → pass)

Offset check

Use a near-zero differential condition (short-input baseline or symmetric micro-injection). Observe mean output and drift slope in a fixed window. Pass: |Voff| < Voff_limit and slope < S_limit.

Gain check

Apply one or two known differential points inside linear CM/headroom. Fit gain/offset and evaluate residual to reduce sensitivity to stimulus uncertainty. Pass: |Gain_err| < Ge_limit and residual < Res_limit.

Linearity proxy (not a sweep)

Use a bounded near-full-scale point or step to expose clamp/recovery problems. Observe Trecover and post-event shift. Pass: Trecover < T_limit and shift < Vshift_limit.

Limits must be budget-derived: E_limit from error budget, T_limit from settling window, S_limit from allowable drift.

B) Where to inject: coverage vs risk tradeoffs

At input terminals (most realistic)

Maximizes coverage of connectors, protection networks, leakage paths, and wiring errors. Requires stricter control of CM/headroom and clamp interaction.

Before protection (good production default)

Preserves most realism while enabling controlled injection. Must ensure injected current does not get absorbed by TVS/clamps.

After protection (coverage-limited)

Easier to measure but bypasses the most common field-failure contributors (protection leakage, contamination, wiring). Use only when explicitly labeled as “device-only” proof.

C) Injection sources: ratiometric vs DAC vs resistor network

Ratiometric injection

Best for bridges/resistive sensors: stimulus and measurement share the same reference/excitation so drift cancels. Add a monitor point for Vexc/Vref to avoid “common drift masking.”

DAC injection

Flexible but the DAC becomes part of the error chain (linearity, drift, settling, output impedance). Prefer residual-based criteria and log DAC code, settle window, temperature point, and reference.

Resistor-network injection

High traceability when precision/low-tempco parts are used. Must be symmetric and tolerance-aware to avoid biasing the differential path. Log network identity and tolerance class for correlation.

D) Hard constraints: CM/headroom and source-impedance interaction

  • CM/headroom: injection must keep the INA inside its linear CM and output swing region; otherwise gain/offset results become recovery-dependent.
  • Clamp interaction: injection networks must bound input current so protection does not dominate the measurement (Iclamp < I_limit).
  • Source impedance: injection impedance + series protection + sensor source impedance can create unintended division and pseudo-offset; keep networks symmetric or explicitly modeled.
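The unintended-division effect in the last bullet can be estimated with a simple series model. This lumped two-resistor model is an assumption; real networks need the full symmetric/asymmetric analysis:

```python
# Sketch of injection-impedance division: with a lumped series model, the
# differential stimulus seen at the INA input is attenuated by the
# injection impedance working against the input-network impedance.

def seen_stimulus(v_diff, r_inject, r_input):
    """Voltage actually applied across the INA input after division."""
    return v_diff * r_input / (r_inject + r_input)

def pseudo_gain_error(r_inject, r_input):
    """Fractional gain error caused by the divider alone."""
    return -r_inject / (r_inject + r_input)
```

Even a 1% division shows up as a 1% apparent gain error, which is why asymmetric or unmodeled injection networks corrupt the gain estimate long before they corrupt anything visible on a scope.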
[Figure: Injection decision tree — objective (offset, gain, linearity proxy) → source (ratiometric, DAC, resistor network) → connection point (at input, before protection, after protection), with risk flags (CM, clamp, leakage, settle). Recommended default: ratiometric injection before protection, for high coverage and stable correlation.]
Choose injection by objective first, then select a stimulus source and connection point that preserve coverage while keeping CM/headroom and clamp current bounded. Use risk flags (CM, clamp, leakage, settling) to define guardbanded pass criteria and required logging.

Gain/Offset Production Checks: Minimal Measurements with Clear Pass Criteria

A production check should prove useful correctness with a minimal point set, not characterize every datasheet curve. The goal is repeatable pass/fail decisions and stable correlation across stations, fixtures, and lots.

The recommended minimal set is 3 points: 0-point (short-input baseline), small differential (2-point fit), and near-full-scale (consistency / clamp / headroom proxy). Pass limits are placeholders filled by the system budget.

A) Minimal 3-point set (what each point proves)

0-point (short input)

Establishes a baseline for offset and drift slope under a fixed measurement window. Also catches obvious wiring/bias-return issues when the output cannot settle reproducibly.

Small differential point

Provides the second anchor for a 2-point gain/offset fit while staying well inside linear CM/headroom. Use a known and bounded stimulus so clamp and recovery behavior do not contaminate the estimate.

Near-full-scale point

Used as a consistency check and a proxy for clamp/headroom problems. It is not a full linearity sweep. Evaluate a residual against the 2-point model to detect hidden saturation, path errors, or protection interaction.

B) Measurement sequence (repeatability first)

  1. Control integrity: confirm gain code / channel identity (PGA/digital) before analog checks.
  2. 0-point: short input; wait a defined settle window; sample N times; store robust statistics.
  3. Small point: apply Vdiff_small; keep CM inside the linear region; wait settle; sample N times.
  4. NearFS point: apply Vdiff_nearFS; ensure clamp current remains bounded; evaluate residual consistency.
  5. Log conditions: station/fixture IDs, temperature point, gain code, VCM target, stimulus ID, settle time, sample count.
Keep stimulus bounded: Iclamp < I_limit and Vout stays inside swing headroom during fit points.

C) Computation (2-point fit + 3rd-point residual)

2-point fit

Estimate gain and offset using 0-point and the small differential point (or ±small points when available). This keeps recovery and clamp effects out of the model.

Residual consistency

Compute residual at nearFS relative to the 2-point model. Residual is a robust proxy for hidden saturation, clamp interaction, wiring mistakes, and stimulus-path inconsistencies.

Pass criteria (budget-filled placeholders)
  • |Voff| < Voff_limit
  • |Gain_err| < Ge_limit
  • Residual after 2-point fit < Resid_limit
Must-log fields (minimum): station_id, fixture_id, temp_point, gain_code, VCM_target, stimulus_id, Vout_0, Vout_small, Vout_nearFS, Offset_est, Gain_est, Residual, pass/fail/bin.
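The 2-point fit and third-point residual above reduce to a few lines. A minimal sketch; the stimulus values and limits are placeholders to be filled from the system budget:

```python
# Sketch of the 3-point production computation: 2-point gain/offset fit
# from the 0-point and small point, then a nearFS residual against that
# model as the consistency proxy.

def two_point_fit(v_stim_0, v_out_0, v_stim_small, v_out_small):
    """Estimate gain and offset from the 0-point and small point."""
    gain = (v_out_small - v_out_0) / (v_stim_small - v_stim_0)
    offset = v_out_0 - gain * v_stim_0
    return gain, offset

def nearfs_residual(gain, offset, v_stim_nearfs, v_out_nearfs):
    """Residual of the nearFS point against the 2-point model."""
    return v_out_nearfs - (gain * v_stim_nearfs + offset)
```

A large residual with a plausible-looking gain is exactly the hidden-saturation or clamp-interaction signature the nearFS point is there to catch.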
[Figure: Three-point production check chart — error at the 0, small, and nearFS stimulus points (offset, fit, residual) plotted against horizontal limit lines for Voff_limit and Resid_limit.]
Use 0 and small points for a 2-point fit; use nearFS as a consistency proxy. Fill limits from the system budget and apply guardbands for station and fixture uncertainty.

Detecting Wiring & Leakage Faults: Open/Short, Guard, and Bias Return

The highest-yield production failures are wiring mistakes, unintended shorts, contamination-driven leakage, and missing bias-return paths. These faults should be detected by signature-based proxy tests rather than by attempting full specs.

Use simple stimuli with bounded currents and fixed windows. Convert each signature into a fast binning action for rework and root-cause correlation.

A) Open vs short vs leakage (field signatures that production can trust)

Open

An unconstrained input node drifts toward a rail or becomes environment-sensitive. Output may saturate or exhibit non-repeatable settling. Production should treat “cannot settle into a window” as a primary signature.

Short

Differential stimulus is crushed and produces little or no response. Gain checks collapse and injected steps become ineffective. Verify that a bounded small differential produces at least a minimum response.

Leakage / contamination

Behavior changes with humidity, touch, and time. Even with shorted inputs, drift slope can be elevated. Guard on/off comparison or slope-under-short is a strong proxy signature.

B) Quick proxy tests (bounded, repeatable, production-friendly)

Open detection: weak bias + return-time window

Apply a very weak bias path (via a large resistor or controlled micro-current) and verify the output returns into a predictable window within T_return. Pass: T_return < T_limit_open and Vout ∈ V_window.

Short detection: small differential step response

Inject a bounded small differential step and verify a minimum output response after settling. Pass: |ΔVout| > Vresp_min under the declared gain code and measurement window.

Leakage detection: guard on/off + slope-under-short

Compare offset/drift with guard enabled vs disabled, or measure drift slope while inputs are shorted. Pass: |ΔVoff_guard| < Vguard_limit and slope_short < S_limit_leak.

Guardrail: keep test currents bounded (Iclamp < I_limit) and keep CM/headroom in-range during proxy stimuli.

C) Binning and actionable correlation fields

  • BIN-OPEN: contact/connector/fixture pin issues; channel identity mismatch; return-time window failure.
  • BIN-SHORT: solder bridge, crushed cable, protection device short, differential pair short; response-min failure.
  • BIN-LEAK: contamination/humidity sensitivity; guard effectiveness failure; slope-under-short elevated.
Must-log fields: station_id, fixture_id, humidity(optional), guard_state, T_return, V_window_result, ΔVout_step, slope_short, bin_code.
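The three signatures map onto bins with simple threshold logic. A minimal sketch; all thresholds (`t_limit_open`, `v_resp_min`, `v_guard_limit`, `s_limit_leak`) are hypothetical budget-filled placeholders:

```python
# Sketch of signature-to-bin classification for wiring/leakage faults,
# checked in fail-fast order: open first, then short, then leakage.

def classify_wiring(t_return, in_window, dv_step, dvoff_guard, slope_short,
                    t_limit_open, v_resp_min, v_guard_limit, s_limit_leak):
    # Open: output fails to return into the window in time.
    if t_return >= t_limit_open or not in_window:
        return "BIN-OPEN"
    # Short: bounded small step produces below-minimum response.
    if abs(dv_step) <= v_resp_min:
        return "BIN-SHORT"
    # Leakage: guard delta or slope-under-short is elevated.
    if abs(dvoff_guard) >= v_guard_limit or slope_short >= s_limit_leak:
        return "BIN-LEAK"
    return "PASS"
```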
[Figure: Fault signature table (Open / Short / Leakage) — Open: weak bias stimulus, rail-drift symptom, T_return quick test; Short: small step stimulus, no-response symptom, ΔVout_min quick test; Leakage: guard on/off stimulus, touch/humidity sensitivity, slope_short quick test. Convert signatures into bins and log the declared conditions for stable correlation.]
Use signature-based proxy tests: open faults fail return-time windows, short faults fail step response minima, and leakage faults fail guard/slope comparisons. Keep currents bounded and log conditions to enable fast correlation and rework decisions.

Overload & Common-Mode Recovery as a Production Proxy

Recovery behavior is a high-yield, line-friendly proxy for catching “something is wrong” at the board level. A single time-domain step can expose protection interaction, coupling, missing returns, and headroom problems without frequency sweeps.

The proxy is defined by T_recover (time to return to a declared linear window) and Residual (post-recovery offset from baseline). Limits are budget-filled placeholders with guardbands for station and fixture uncertainty.

A) What this proxy can prove on a line

Fast detection of “board-level wrong”

A common-mode or overload step highlights clamp interaction, missing return paths, coupling into inputs, and headroom violations. It is not a full CMRR vs frequency characterization.

Repeatable pass/fail with fixed windows

Define a linear output window and a timing window. Measure return time and residual error after recovery. This avoids lab-only instrumentation and enables stable binning.

B) How to run the test (bounded, production-safe)

  1. Declare the linear window: Vout ∈ V_linear_window (budget-filled placeholder).
  2. Apply a bounded CM step or overload stimulus; keep clamp current under control.
  3. Start timing at the stimulus edge; ignore the first t_settle region; evaluate recovery thereafter.
  4. Measure T_recover to re-enter the linear window; then measure Residual relative to a pre-step baseline.
  5. Repeat N times for basic repeatability screening if the use case is sensitive to intermittents.
Guardrails: I_clamp < I_limit, CM/headroom in-range, and keep recovery tests separate from gain/offset fitting windows.
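The procedure above can be sketched as a post-processing pass over a sampled output record. This assumes the stimulus edge is at sample index 0 and that recovery means entering and staying inside the declared window:

```python
# Sketch of the recovery-proxy evaluation: ignore the first n_settle
# samples, then T_recover is the time at which the output re-enters the
# declared linear window and stays there; Residual is the settled mean
# minus the pre-step baseline.

def recover_metrics(samples, dt, v_low, v_high, n_settle, baseline):
    """Return (t_recover, residual), or (None, None) if never recovered."""
    for k in range(n_settle, len(samples)):
        if all(v_low <= s <= v_high for s in samples[k:]):
            tail = samples[k:]
            residual = sum(tail) / len(tail) - baseline
            return k * dt, residual
    return None, None
```

The "stays there" condition is what separates genuine recovery from a waveform that clips back through the window on its way to the other rail.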

C) Pass criteria and bins (budget-filled placeholders)

Pass criteria
  • T_recover < T_limit
  • Residual step error < E_limit
Suggested bins
  • BIN-RECOV-SLOW: T_recover out-of-limit (suspect protection interaction, coupling, missing returns, headroom).
  • BIN-RECOV-OFFSET: Residual out-of-limit (suspect leakage paths, clamp-induced bias shifts, unintended injection).
  • BIN-RECOV-NR: poor repeatability across N repeats (suspect intermittents: fixture contact, contamination sensitivity).
Must-log fields: stimulus_type, step_level(code), VCM_target, gain_code, t_settle, V_linear_window, T_recover, Residual, repeat_index, station_id, fixture_id, pass/fail/bin.
[Figure: Recovery waveform proxy — a CM step drives Vout into saturation, then the output recovers back into the declared linear window V_linear_window; labels define T_recover and the post-recovery Residual.]
Define a linear output window, then measure return time and post-recovery residual. This time-domain proxy is sensitive to clamp interaction, coupling, missing returns, and headroom mistakes.

Temperature Strategy: Cross-Temp Spot Checks, Soak Rules, and Drift Separation

Production lines rarely run full temperature sweeps. A practical strategy is 25 °C full coverage with hot/cold spot checks that are stable across fixtures and airflow differences.

The key is repeatability: use stability thresholds instead of fixed soak times, log window definitions, and separate drift-like trends from low-frequency noise using short vs long windows.

A) Cross-temp spot-check plan (line-friendly coverage)

25 °C baseline for every unit

Use the same gain/offset checks and recovery proxies at 25 °C as the primary correlation anchor. This becomes the reference point for cross-temp deltas and lot comparison.

Hot/cold spot checks for risk and lot consistency

Spot checks validate cross-temp behavior without full sweeps. Focus on new lots, new fixtures, new cleaning processes, and any configuration with elevated field risk.

B) Soak rules (stability thresholds, not fixed time)

Declare stability by thresholds over windows. This reduces false fails caused by different airflow, fixture thermal resistance, and chamber dynamics.

  • Stability delta: |mean(W_short,k) − mean(W_short,k−1)| < Δ_stable
  • Trend limit: |slope(W_long)| < S_stable
  • Time-to-stable is a reported metric: Time_to_stable < T_soak_limit (optional binning field).
Must log window definitions: W_short_len, W_long_len, sample_rate, and the exact stability thresholds (Δ_stable, S_stable) used by the station.
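The stability-by-threshold rule can be sketched as a polling loop: declare stable when consecutive short-window means differ by less than Δ_stable, with a timeout as the safety net. `read_window()` is a hypothetical station hook returning one short window of samples:

```python
# Sketch of the soak rule: stability is declared by threshold, not by a
# fixed wait. Returns the number of windows elapsed (a time-to-stable
# proxy), or None on timeout.

def wait_for_stable(read_window, delta_stable, max_windows):
    prev = None
    for k in range(max_windows):
        w = read_window()
        mean_k = sum(w) / len(w)
        if prev is not None and abs(mean_k - prev) < delta_stable:
            return k  # report Time_to_stable for optional binning
        prev = mean_k
    return None  # timed out: escalate per factory policy
```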

C) Drift separation (short vs long windows)

Short window: mean for decision

Use mean(W_short) to reduce random noise impact on pass/fail checks. This is the value compared to cross-temp and cross-lot limits.

Long window: slope for drift proxy

Use slope(W_long) to detect trend-like drift. A large slope indicates instability or thermal-gradient sensitivity even when short-window averages look acceptable.

Cross-temp pass criteria (placeholders)
  • |ΔOffset(T)| < ΔOff_limit
  • |ΔGain(T)| < ΔGe_limit
  • |slope(W_long)| < S_limit
Must-log fields: temp_point(setpoint + measured), soak_state, time_to_stable, mean(W_short), slope(W_long), Offset_est, Gain_est, station_id, fixture_id, lot, pass/fail/bin.
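The drift-separation computation above is just two statistics over the same record: mean over W_short for the decision, least-squares slope over W_long as the drift proxy. A minimal sketch, assuming equally spaced samples:

```python
# Sketch of drift separation: mean(W_short) feeds the pass/fail decision,
# slope(W_long) is the drift proxy. W_short is taken as the most recent
# n_short samples of the long window (an assumption about windowing).

def window_stats(samples, dt, n_short):
    short = samples[-n_short:]
    mean_short = sum(short) / len(short)
    n = len(samples)
    t = [i * dt for i in range(n)]
    t_mean = sum(t) / n
    s_mean = sum(samples) / n
    num = sum((ti - t_mean) * (si - s_mean) for ti, si in zip(t, samples))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return mean_short, num / den  # (decision value, drift slope)
```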
[Figure: Soak and windowing timeline — chamber entry, approach to setpoint, Δ_stable threshold check against the stable band, then measurement windows W_short (mean) and W_long (slope) for the drift proxy.]
Use stability thresholds (Δ_stable, S_stable) instead of fixed soak time. Evaluate mean over W_short for decisions and slope over W_long as a drift proxy, then compare cross-temp deltas to budget-filled limits.

Lot-to-Lot & Fixture-to-Fixture Consistency: Golden Unit and Correlation Plan

Consistency is a production system problem: station drift, fixture wear, cable changes, and instrument recalibration can look like “product drift”. A Golden Unit and a correlation loop keep stations comparable and prevent silent baseline shifts across lots.

Treat station/fixture bias as a modeled system term. Use differential comparisons to detect out-of-family behavior and trigger recalibration or maintenance.

A) Golden Unit policy (definition, cadence, retirement)

Definition

A Golden Unit is a traceable reference assembly used to validate station/fixture health, not to characterize full datasheet behavior. Select units that pass baseline checks and sit near the center of the normal population (not at edges).

Cadence

Prefer trigger-based checks (fixture change, probe swap, cable replacement, instrument calibration, abnormal bin spikes). Add a periodic check as a safety net (schedule defined by factory policy).

Retirement

Golden Units age and wear. If multiple stations report consistent shifts, treat the Golden Unit as suspect and replace it. Keep a backup Golden Unit to avoid single-point contamination of the baseline.

B) Correlation method (same unit, different stations, differential deltas)

Run the same correlation program with the same Golden Unit across Station A/B/C. Compare deltas of key metrics (offset, gain, recovery time, residual error, leakage proxies) to estimate station/fixture bias.

  • ΔMetric(A,B) = Metric_A − Metric_B
  • Treat station bias as a system term: Bias_station → baseline tracking and drift alarms
Correlation criteria (placeholders)
  • |ΔMetric(station_i, station_j)| < Corr_limit
  • Exceeding Corr_limit triggers re-calibration; repeated exceedances trigger maintenance or station isolation (policy-defined).
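The pairwise-delta comparison can be sketched for one Golden Unit run across stations. Metric dictionaries per station and a `corr_limits` map are assumptions about how the station software stores results:

```python
# Sketch of Golden Unit correlation: compare every station pair on every
# tracked metric and flag deltas that exceed the Corr_limit placeholder.
from itertools import combinations

def correlation_flags(station_metrics, corr_limits):
    """Return (metric, station_i, station_j, delta) tuples over limit."""
    flags = []
    for (si, mi), (sj, mj) in combinations(station_metrics.items(), 2):
        for name, limit in corr_limits.items():
            delta = mi[name] - mj[name]
            if abs(delta) >= limit:
                flags.append((name, si, sj, delta))
    return flags
```

An empty flag list means the stations are in-family for this Golden Unit run; any flag triggers the re-calibration / maintenance policy described above.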

C) Baseline versioning and minimum logging schema

Baselines must be versioned. Any fixture rebuild, probe swap, cable replacement, or instrument recalibration should create a new baseline version after Golden Unit correlation passes.

Must-log fields: golden_id, golden_rev, station_id, fixture_id, instrument_id, program_rev, baseline_version, metrics (Offset_est, Gain_est, T_recover, Residual, leak_proxy), deltas (Δ vs baseline / Δ vs other stations), action_flag.

[Figure: Correlation loop — Golden Unit → Station A/B/C → delta compare → adjust/re-cal → versioned baseline (v#); |Δ| exceedance triggers action.]
Use a Golden Unit to compare stations and fixtures by differential deltas. When |Δ| exceeds Corr_limit, trigger re-calibration or maintenance and record a new baseline version.
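The must-log schema in section C can be enforced with a minimal gate before a record is accepted. The field names follow the list above; the flat-dict record format is an assumption about your logging pipeline.

```python
# Sketch: minimum logging-schema gate for correlation records.
# Field names follow the must-log list; the record format is an assumption.
REQUIRED = ("golden_id", "golden_rev", "station_id", "fixture_id",
            "instrument_id", "program_rev", "baseline_version",
            "metrics", "deltas", "action_flag")

def validate_record(record):
    """Return the list of missing must-log fields (empty list = loggable)."""
    return [f for f in REQUIRED if f not in record or record[f] is None]
```

Records with a non-empty missing-field list should be blocked from contributing to a baseline version, mirroring the "missing fields are blocked" rule used elsewhere in this document.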

Throughput Engineering: Settling Time, Auto-Zero Timing, and Test Order Optimization

Throughput is determined by waiting, not by math. Optimize test order, avoid unnecessary settle time, and prevent false fails caused by sampling the wrong auto-zero/chopper phase.

The strategy is simple: run quick-fail first, then do precision checks. Replace fixed delays with threshold-based settling rules, and pipeline digital operations in parallel with analog settling.

A) Test order (quick-fail first, precision later)

  1. Quick-fail: open/short/leakage signatures and recovery proxy checks (stop early on failure).
  2. Mid-cost: 3-point gain/offset checks after basic health is proven.
  3. Slow steps: long-window trend checks or temperature spot-checks only when required by the production plan.
Early-exit rule: any quick-fail failure skips downstream precision steps and records the failure bin immediately.
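The ordering and early-exit rule can be expressed as a short driver. Step functions and bin names here are illustrative; real steps would wrap instrument calls.

```python
# Sketch: fast-fail-first test ordering with early exit.
# Step functions and the FAIL_<name> bin convention are illustrative.
def run_flow(steps):
    """Run (name, fn) steps in order; stop and bin on the first failure."""
    results = []
    for name, fn in steps:
        ok = fn()
        results.append((name, ok))
        if not ok:
            # Early exit: skip downstream precision steps, bin immediately.
            return results, f"FAIL_{name}"
    return results, "PASS"
```

Ordering steps from cheapest to most expensive means a wiring fault costs milliseconds, not the full precision-measurement budget.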

B) Settling engineering (threshold-based, not fixed delay)

After gain switching or stimulus changes, replace fixed wait with a settle monitor. Declare stable when short-window deltas fall below a threshold for M consecutive checks, with a maximum timeout for safety.

  • ΔV = |mean(W_short,k) − mean(W_short,k−1)|
  • Stable when ΔV < Δ_settle for M consecutive checks
  • Timeout: time_to_settle < T_settle_max (otherwise bin as settle failure)
Practical rule: derive Δ_settle from the error budget (noise + ADC variation + guardband); it may differ per gain_code.
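The settle rule above maps directly to a polling loop. This is a sketch under the assumption that `read_mean()` is a hook returning the mean of a short acquisition window; thresholds and timing are placeholders.

```python
# Sketch: threshold-based settle monitor (replaces a fixed delay).
# read_mean() is an assumed hook returning the mean of a short window;
# delta_settle / m_consecutive / t_max are budget-derived placeholders.
import time

def wait_settled(read_mean, delta_settle, m_consecutive, t_max, dt=0.001):
    """Declare stable after M consecutive window deltas below delta_settle."""
    prev = read_mean()
    hits = 0
    t0 = time.monotonic()
    while time.monotonic() - t0 < t_max:
        cur = read_mean()
        hits = hits + 1 if abs(cur - prev) < delta_settle else 0
        if hits >= m_consecutive:
            return True, time.monotonic() - t0   # settled, time_to_settle
        prev = cur
        time.sleep(dt)
    return False, t_max                          # timeout -> settle-failure bin
```

Logging `time_to_settle` per unit also gives a free drift indicator: a slowly growing settle time often flags fixture or device degradation before a hard limit trips.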

C) Zero-drift / chopper timing (avoid wrong sampling phase)

Auto-zero and chopping introduce internal timing states. Sampling at a single unlucky phase can create false offset or false drift decisions. Production should either align sampling to a stable phase or average across phases.

  • Phase-aligned: sample using a declared ready/state indicator when available.
  • Phase-averaged: take N samples spanning multiple internal phases; use median/trimmed mean for robustness.
Must-log fields: chopper_mode_flag, sampling_window_len, N_samples, statistic (median/mean), and program_rev for cross-station comparability.
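The phase-averaged option can be sketched as a trimmed mean over N samples. `sample()` is an assumed single-read hook; `n_samples` and `trim_frac` are placeholders to fill from the chopper timing and noise budget.

```python
# Sketch: phase-averaged sampling with a trimmed mean for robustness.
# sample() is an assumed single-read hook; n_samples / trim_frac are
# placeholders derived from chopper period and noise budget.
def phase_averaged(sample, n_samples, trim_frac=0.2):
    """Take N samples spanning multiple internal phases; trimmed mean."""
    xs = sorted(sample() for _ in range(n_samples))
    k = int(n_samples * trim_frac)            # samples trimmed from each end
    core = xs[k:n_samples - k] if n_samples - 2 * k > 0 else xs
    return sum(core) / len(core)
```

The trimmed mean discards phase-aliased outliers that would bias a plain mean, at the cost of slightly more samples for the same confidence; log `N_samples` and the statistic used, as the must-log fields above require.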

D) Parallelization and pipeline opportunities

  • Digital config/readback and logging can run during analog settling windows.
  • Temperature reads can be scheduled while the next stimulus is armed.
  • Multi-channel stations can pipeline channels when shared stimulus resources do not couple between channels.
Throughput log: per-step start/end timestamps, time_to_settle, settle_count, early_exit_flag, and parallel_ops flags.
[Figure: Optimized test flow timeline — analog lane (quick-fail → recovery proxy → settle wait ΔV<Δ_settle → 3-point) with an early-exit path; digital lane (config, readback, temp read, log) overlapped in parallel.]
Put quick-fail tests first, replace fixed delays with threshold-based settling, and overlap digital work (config/readback/temp/log) with analog settling to reduce cycle time.
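The overlap in item D can be sketched with a worker thread: digital work runs while the main thread blocks on the analog settle. `analog_settle()` and `do_digital_ops()` are assumed hooks standing in for the real settle monitor and config/readback/log routines.

```python
# Sketch: overlap digital config/readback/logging with the analog settle wait.
# analog_settle() and do_digital_ops() are assumed hooks for the real routines.
from concurrent.futures import ThreadPoolExecutor

def settle_with_parallel_digital(analog_settle, do_digital_ops):
    """Run digital work in a worker thread while the analog path settles."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        digital = pool.submit(do_digital_ops)   # config / readback / temp / log
        settled = analog_settle()               # blocking settle monitor
        return settled, digital.result()        # join before the next step
```

This pattern is safe only when the digital traffic cannot couple into the analog measurement (supply glitches, bus activity near the input); when it can, pipeline across channels instead.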

Engineering Checklist for Production Readiness (Board Hooks + Test Hooks)

Production readiness depends on “hooks”: predictable injection points, stable probe contact, controllable short/guard states, and versioned baselines. The list below is written as Action → Purpose → Pass criteria so it can be copied into a design review checklist.

Numeric limits are placeholders (Voff_limit / Corr_limit / Δ_settle / T_limit). Fill them from system noise, drift budgets, and yield guardbands.

A) Injection points & measurement points (minimum closed loop)

Differential injection pads (VIN+ / VIN−)
Action: Place dedicated SMT probe pads near the input zone; keep a short, symmetric path to the INA pins.
Purpose: Enable 2-point/3-point gain-offset checks and controlled wiring/leakage diagnostics.
Pass: Stimulus response is monotonic and repeatable; residual after 2-point fit < Resid_limit.
Reference / excitation test points (VREF / VEXC / Sense)
Action: Add a VREF test pad and (if applicable) excitation + remote-sense pads.
Purpose: Prevent reference drift from being misread as INA drift; support ratiometric checks.
Pass: |ΔVREF| (vs baseline) < Vref_limit during the test window.
Output / ADC input observation pads (INA_OUT / ADC_IN)
Action: Provide an output test pad with a local return pad to support time-domain recovery probing.
Purpose: Make overload / common-mode recovery a measurable production proxy (Trecover, Residual).
Pass: Trecover < T_limit and residual step error < E_limit.
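The 2-point fit and third-point residual referenced in the pass criteria above reduce to a few lines. Stimulus values and limits are placeholders; the fit assumes the response is linear between the two calibration points.

```python
# Sketch: 2-point gain/offset fit plus a 3rd-point consistency residual.
# Stimulus values are placeholders; assumes linearity between the 2 points.
def fit_gain_offset(x1, y1, x2, y2):
    """2-point fit: returns (gain, offset) for y = gain * x + offset."""
    gain = (y2 - y1) / (x2 - x1)
    return gain, y1 - gain * x1

def third_point_residual(gain, offset, x3, y3):
    """Residual at a 3rd (near full-scale) point; large values flag
    clamp conduction, settling tails, or fixture non-idealities."""
    return y3 - (gain * x3 + offset)
```

Comparing `third_point_residual` against `Resid_limit` is what separates a genuinely linear chain from one whose 2-point fit merely interpolated past a saturation or clamp region.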

B) Short / guard / bypass networks (make failure signatures controllable)

Input short link (VIN+ ↔ VIN−)
Action: Add a 0Ω link footprint (or solder jumper) that shorts the differential input at the PCB (not in the fixture).
Purpose: Create a clean “0-point” for offset checks and a stable condition for leakage/drift proxies.
Pass: |Voff| < Voff_limit; drift slope after short < S_limit.
Guard enable / disable link
Action: Place a removable 0Ω link between the guard driver and the guard ring (support on/off comparison).
Purpose: Turn humidity/contamination sensitivity into a reproducible A/B test.
Pass: |ΔMetric(guard_on, guard_off)| < Δ_guard_limit (or Δ triggers a leakage bin).
Protection / RC footprint flexibility (debug hook, not a routine step)
Action: Provide replaceable footprints for key series resistors and RFI capacitors at the connector/input edge.
Purpose: Rapidly isolate “protection conduction / leakage dominance” during NPI and failure analysis.
Pass: Engineering DOE identifies a stable population with acceptable yield (policy-defined).

C) Output isolation & load hooks (avoid fixture-capacitance traps)

Output isolation resistor footprint (Riso)
Action: Place an Riso footprint between INA output and ADC/connector; allow values to be tuned without re-spin.
Purpose: Stabilize against capacitive fixture loading and reduce ringing that breaks recovery proxies.
Pass: Recovery waveform remains stable across fixtures; Trecover spread < ΔT_limit.
Configurable output load (if needed)
Action: Provide a switchable load option in the fixture (preferred) or on-board (optional).
Purpose: Detect load-dependent recovery/settling failures before field deployment.
Pass: |ΔMetric(light_load, heavy_load)| < Δ_load_limit.

D) Manufacturability & fixture safety (repeatability first)

Probe contact robustness
Action: Size test pads to fixture probe spec; add a nearby return pad for sensitive analog measurements.
Purpose: Prevent intermittent contact from creating non-reproducible fails and false drift.
Pass: Re-insert N times; metric spread < Repeat_limit.
ESD-safe test access
Action: Define ESD-safe handling, add appropriate input protection (with leakage awareness), and label test pads clearly.
Purpose: Avoid test-induced latent damage that shows up as lot-to-lot drift or early-life failures.
Pass: ESD event (policy-defined) does not cause permanent baseline shift beyond ESD_shift_limit.
Test mode entry definition (strap / mode pin)
Action: Define how test mode is entered (strap pin, jumper, firmware command) and make it auditable via readback.
Purpose: Prevent configuration mistakes from being misdiagnosed as analog failures.
Pass: Config write → readback matches; checksum/CRC passes when supported.

E) Versioning hooks (baseline traceability)

Board / fixture / program revision trace
Action: Expose board_rev and fixture_id (silkscreen + log field). Record program_rev and baseline_version in every test record.
Purpose: Keep correlation valid across stations and prevent silent baseline drift after maintenance.
Pass: Every unit record includes (station_id, fixture_id, program_rev, baseline_version); missing fields are blocked.

Reference BOM examples (part numbers; starting points)

These examples exist to speed up datasheet lookup for production hooks. Verify voltage, leakage, temperature grade, and footprint for the target platform.

Test points / fixture interface
  • SMT test point: Keystone 5015 (PC TEST POINT, miniature)
  • Spring-loaded pin (fixture contact): Mill-Max 0908-4-15-20-75-14-11-0 (example pogo contact)
Jumpers / links / isolation
  • 0Ω link (0402 example): Panasonic ERJ-2GE0R00X (use as VIN short / guard enable link)
  • Output isolation resistor (49.9Ω example, 0603): Vishay CRCW060349R9FKEA (Riso placeholder)
Low-leakage clamp / switching for test paths
  • Low-leakage diode pair (rail clamp helper): Nexperia BAV199 (SOT-23, series dual diode)
  • Low-leakage analog switch for injection/loopback: ADI ADG1201 (iCMOS SPST, low leakage)
Stimulus injection building blocks
  • Precision high-value resistor (open/leak test bias, 10 MΩ example): Vishay TNPW080510M0BEEA
  • C0G capacitor (RFI/RC filter, 1 nF example): Murata GRM1555C1H102JA01
  • Precision DAC for controlled stimulus (example): TI DAC60501 (verify required resolution and drift)
[Figure: Concept PCB map of test hooks — input/protection/guard zone on the left, INA core in the middle, ADC/MCU on the right; callouts for test pads, Kelvin/sense, guard enable jumper, VIN short link, Riso footprint, and Vref test point.]
A production-ready INA front-end exposes controllable hooks: differential injection, VIN short, guard on/off, Vref observation, output isolation (Riso), and robust test pads for repeatable fixtures.

INA Selection Notes for Self-Test (What to Ask Vendors / What Features Matter)

For self-test and production test, a “great datasheet number” is not enough. Priority goes to behaviors that production can prove: recovery timing, leakage stability, repeatable gain states, and auditable configuration/readback.

Vendor questions should map to Feature → Why it matters → How it is tested so acceptance criteria are enforceable on the line.

Vendor question checklist (self-test focused)

Recovery behavior (startup / overload / CM step)
Ask: Trecover vs load, gain, and temperature; residual after recovery; recommended measurement setup.
Why: Recovery dominates throughput and false fails; slow tails contaminate gain/offset fits.
Test: CM-step proxy and overload recovery (Trecover < T_limit, Residual < E_limit).
Input protection structure & leakage vs temperature
Ask: Protection topology, leakage distribution (typ/max) vs temp, and recommended source resistance limits.
Why: Protection/leakage drives drift-like failures, especially across humidity and temperature spot checks.
Test: Input short drift proxy; guard on/off delta; bias-injection open/leak signature checks.
Chopper / auto-zero timing (zero-drift parts)
Ask: Ripple characteristics, internal phase timing, and any “ready/state” indicator for aligned sampling.
Why: Wrong sampling phase creates false offset/drift decisions and breaks cross-station comparability.
Test: Phase-aligned sampling (when supported) or phase-averaged sampling (N samples, median/trimmed mean).
Gain state repeatability & settling (PGA / digitally-controlled)
Ask: Gain code repeatability, settling vs gain transitions, and any built-in diagnostics for gain path integrity.
Why: Gain transition settling often dominates cycle time; mis-set gain looks like analog drift.
Test: Threshold-based settling rule (ΔV < Δ_settle for M checks) per gain code.
Interface reliability & auditable configuration
Ask: Readback support, checksum/CRC, error flags, and recommended production read-verify flow.
Why: Control/config faults must be separated from analog faults to prevent incorrect bins.
Test: Control loopback (write → readback → status verify) logged with program_rev and baseline_version.
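The recovery proxy that recurs through this checklist (Trecover, residual after recovery) can be extracted from a sampled waveform as follows. This is a sketch: the band definition, sample format `(t, v)`, and limits are assumptions to adapt to the actual observation point.

```python
# Sketch: extract Trecover and post-recovery residual from sampled data.
# samples is a list of (t_seconds, v_volts); band is the +/- settle band.
def recovery_metrics(samples, v_final, band):
    """Trecover = first time the output enters and STAYS within +/-band of
    v_final; residual = mean error over the tail after that time."""
    t_rec = None
    for i, (t, v) in enumerate(samples):
        if abs(v - v_final) <= band:
            # Require the output to remain in-band for the rest of the window,
            # so a slow tail or re-entry is not miscounted as recovery.
            if all(abs(vv - v_final) <= band for _, vv in samples[i:]):
                t_rec = t
                break
    if t_rec is None:
        return None, None                # never recovered inside the window
    tail = [v - v_final for t, v in samples if t >= t_rec]
    return t_rec, sum(tail) / len(tail)
```

Comparing `t_rec` against `T_limit` and the residual against `E_limit` turns a vendor's recovery claim into an enforceable line check.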

Reference device examples (part numbers; self-test friendly starting points)

These examples illustrate categories that commonly support production-oriented verification (recovery behavior, low drift, auditable gain/control). Confirm the exact grade and test strategy.

Classic INA (single resistor gain)
  • TI INA826 (precision instrumentation amplifier, wide supply)
Zero-drift / chopper INA (cross-temp stability)
  • TI INA333 (low-power, zero-drift instrumentation amplifier)
High-speed / low-latency INA (recovery + settling)
  • ADI AD8421 (high-speed instrumentation amplifier; strong CMRR)
Programmable gain + test capability (production features)
  • TI PGA280 (digitally-controllable gain; includes signal-integrity / test-oriented features)
[Figure: Vendor question template — columns Feature → Why it matters → How to test; rows for recovery (throughput + false fails / Trecover, Residual), leakage vs temp (drift-like failures / guard A/B delta), chopper timing (phase-sensitive errors / align or average), gain repeatability (stable bins / ΔV<Δ_settle).]
Keep selection questions “testable”: every vendor claim should map to a production proxy, a logging field, and a pass criterion placeholder.

FAQs — Self-Test & Production Test (INA)

These FAQs close common long-tail production issues without expanding the main content boundary. Every answer follows a fixed 4-line structure: Likely cause / Quick check / Fix / Pass criteria.

Numeric limits are placeholders (Voff_limit, Corr_limit, T_limit, etc.). Fill them using the system error budget and yield guardbands.

Why does a perfect bench gain/offset fail on the production line?
Likely cause: Fixture loading, probe contact resistance, or leakage paths change the effective test condition (not the INA core).
Quick check: Re-test the same unit with (A) fresh probe insertion and (B) a different station; compare ΔVoff and ΔGain across A/B.
Fix: Add output isolation (e.g., Vishay CRCW060349R9FKEA as Riso placeholder), use stable test pads (Keystone 5015), and replace worn pogo pins (e.g., Mill-Max 0908-4-15-20-75-14-11-0).
Pass criteria: Repeatability spread (N re-insertions) < Repeat_limit AND station correlation |ΔMetric(A,B)| < Corr_limit.
How many calibration points are “enough” for production (1, 2, or 3)?
Likely cause: Too few points fail to separate offset/gain errors from saturation/clamp behavior and fixture-induced non-idealities.
Quick check: Fit offset+gain using 2 points, then evaluate the residual at a 3rd point (near full-scale) to detect clamp/settling artifacts.
Fix: Use 2-point as default; add a 3rd “consistency” point using a controlled stimulus path (e.g., DAC injection TI DAC60501 + low-leakage switch ADI ADG1201 or a precision resistor network).
Pass criteria: |Residual(3rd)| < Resid_limit AND |Gain_err| < Ge_limit AND |Voff| < Voff_limit.
Why does touching the cable change the “offset” result in a fixture?
Likely cause: Humidity/contamination leakage, triboelectric cable effects, or an incomplete bias-return path makes the input condition unstable.
Quick check: Short VIN+/VIN− on the PCB and repeat the “touch test”; then toggle guard on/off and compare ΔVoff(guard_on, guard_off).
Fix: Implement a driven guard with a removable link (0Ω Panasonic ERJ-2GE0R00X), and use low-leakage clamp elements where required (e.g., Nexperia BAV199 as a clamp-helper starting point).
Pass criteria: |ΔVoff(touch/no-touch)| < Touch_delta_limit AND |ΔVoff(guard_on/off)| < Δ_guard_limit.
How do I tell leakage drift from 0.1–10 Hz noise in a quick test?
Likely cause: Leakage creates a directional trend (slope) while 0.1–10 Hz noise creates non-directional wander within a bounded distribution.
Quick check: With input shorted, compute (A) slope over a long window and (B) RMS over a short window; compare guard on/off to amplify leakage signatures.
Fix: Add/enable guard and a defined bias-return path; if switching is needed for A/B tests, use a low-leakage switch (e.g., ADI ADG1201) instead of generic CMOS muxes.
Pass criteria: |Slope| < S_limit AND RMS(short) < Noise_limit AND |ΔMetric(guard_on/off)| < Δ_guard_limit.
Chopper INA: why do readings depend on when I sample?
Likely cause: Chopper/auto-zero ripple and internal phase timing alias into the measurement when sampling is not phase-consistent.
Quick check: Repeat the same measurement at different sample delays after a trigger; plot the mean vs delay to reveal phase sensitivity.
Fix: Use phase-aligned sampling when supported, or average over an integer number of ripple periods; for zero-drift INAs (e.g., TI INA333), enforce a fixed timing window per gain/state.
Pass criteria: |ΔMean(delay sweep)| < Phase_sens_limit AND within-window StdDev < Std_limit.
What’s the fastest way to detect input open/short without precision gear?
Likely cause: Open inputs float into bias/leakage-driven rails; shorts clamp the differential so injected stimulus produces no response.
Quick check: Apply a weak bias injection and observe the output response/time constant; then apply a small differential step and confirm a non-zero gain response.
Fix: Implement a bias-injection path using a high-value precision resistor (e.g., Vishay TNPW080510M0BEEA 10 MΩ placeholder) and a low-leakage switch (ADI ADG1201) for on/off diagnostics.
Pass criteria: Open signature: saturates within T_open_limit; Short signature: |ΔOUT(step)| < Resp_min; Good unit: |ΔOUT(step)| ≥ Resp_min.
How to set guardband without killing yield?
Likely cause: Guardbands are set from typical numbers instead of measurement uncertainty + fixture variation + worst-case drift.
Quick check: Use a golden unit across stations and days to estimate σ_station and σ_time; compare to the current guardband margin.
Fix: Split limits into (device limit) + (station allowance); reduce station σ via probe maintenance (e.g., replace Mill-Max 0908-… pins) and defined settling rules (ΔV < Δ_settle for M checks).
Pass criteria: False reject rate < FR_limit AND yield ≥ Y_target with a guardband policy documented (k·σ model).
Why does station-to-station correlation drift over days?
Likely cause: Fixture aging (probe wear/contamination), baseline reference drift, or unlogged program/fixture revisions change the measurement system.
Quick check: Run the golden unit on Station A/B/C daily; log ΔMetric vs baseline_version and fixture_id to separate drift from device variation.
Fix: Schedule probe replacement (e.g., Mill-Max 0908-…), add stable test pads (Keystone 5015), and lock a stimulus reference (e.g., ADI ADR4525 as a low-drift reference option for the fixture).
Pass criteria: |ΔMetric(station_i, station_j)| < Corr_limit AND daily drift |ΔBaseline/day| < Drift_limit.
How do I define soak completion without wasting time?
Likely cause: Fixed soak time ignores fixture airflow and thermal mass; measurements begin before the system reaches a stable thermal state.
Quick check: Track a rolling average of the key metric and declare “stable” when |ΔMetric| stays below a threshold for a continuous duration.
Fix: Use a stability-based rule and log the local temperature near the analog area (e.g., TI TMP117 as an accurate temperature sensor option for correlation).
Pass criteria: Soak complete when |ΔMetric| < Δ_stable for t ≥ t_stable AND |ΔTemp| < ΔT_stable.
How to validate recovery/overload behavior as a proxy test?
Likely cause: Large common-mode steps and overloads expose board-level issues (loading, clamp conduction, stability) that simple DC checks miss.
Quick check: Apply a repeatable CM step and measure Trecover plus residual offset after recovery at a fixed sampling window.
Fix: Ensure output isolation (e.g., CRCW060349R9FKEA as Riso placeholder) and a stable observation point; if bandwidth demands require, use a high-speed INA class (e.g., ADI AD8421 as a reference device).
Pass criteria: Trecover < T_limit AND |Residual_after| < E_limit AND re-test spread < Repeat_limit.
What minimal data must be logged per unit for traceability?
Likely cause: Missing station/fixture/program context makes correlation failures look like device drift.
Quick check: Audit test records for completeness and confirm the ability to reproduce a failure using the same baseline_version and fixture_id.
Fix: Require the minimum schema: serial, lot/date code, station_id, fixture_id, program_rev, baseline_version, temperature, gain/state code, VREF, Voff, Gain_err, residual, Trecover (if used), and final bin.
Pass criteria: 100% records include all required fields AND missing fields are blocked from “PASS” disposition.
When should I stop relying on board-level calibration and fix hardware?
Likely cause: Calibration cannot compensate unstable leakage, clamp conduction, or fixture-dependent settling that changes with time/humidity/load.
Quick check: Compare metrics under controlled A/B toggles: guard on/off, load light/heavy, and re-insertion; instability indicates a hardware/fixture root cause.
Fix: Redesign the leakage/clamp paths and test hooks: add guard link (0Ω ERJ-2GE0R00X), use low-leakage switching (ADG1201) and clamp helpers (BAV199), and stabilize output loading with Riso (CRCW060349R9FKEA placeholder).
Pass criteria: After hardware/fixture changes, proxy metrics remain stable: |ΔMetric(A/B)| < Δ_limit AND correlation |Δ| < Corr_limit across days.