
Production Test & Matching for Op Amps


This page turns op-amp matching into a single production workflow: pair channels, sweep temperature, and monitor long-term drift with traceable records. The goal is not a perfect lab curve—it is consistent shipment bins and fast root-cause isolation when yield shifts.

What this page solves (production matching in one workflow)

Production matching is not about making a single op amp look perfect on a lab bench. It is about controlling channel-to-channel consistency and keeping every decision traceable across lots, fixtures, and temperature.

Matching targets relative error (Δ between channels) rather than absolute perfection: pairs are selected to minimize spread in offset class, drift slope class, and recovery behavior class under controlled conditions. Temperature sweeps turn “one number” into a curve, so binning and pairing can be based on slope and shape instead of a single point.

Long-term drift monitoring closes the gap between factory pass and field stability: re-test cadence and alarm rules convert “it drifted” into a managed event with documented thresholds, lot context, and corrective actions.

What this page delivers (usable outputs)
  • One end-to-end workflow: incoming → pairing → room test → temperature sweep → drift monitor → feedback.
  • Actionable “pass/fail windows” and bin codes that include guardbands and traceability fields.
  • Minimum data record fields (SN/lot/fixture/temp/version/results) that make drift and mismatch debuggable.
[Figure: end-to-end production matching workflow with traceable records. Blocks: incoming (SN/lot), pairing (Pair ID), room test (windows), temperature sweep (slope), drift retest, and feedback (actions), all logged to a traceable data record.]

When matching matters (use-cases that justify the cost)

Matching adds fixture time, temperature chamber occupancy, and data management overhead. It is justified when field cost (recalibration, downtime, false alarms, performance spread) grows faster than factory cost. The decision is not binary: projects typically choose a depth level from room-only pairing to temperature-swept pairing and finally to drift monitoring with alarms.

Differential / bridge zero alignment
  • Without matching: channel-to-channel zero spread shows up as persistent offset between sensor legs and complicates system calibration.
  • Matching focus: Δoffset class at 25°C plus drift slope class across temperature points.
  • Minimum depth: room test for windows; upgrade to a short temp sweep when slope split drives field errors.
Multi-axis / multi-phase channel symmetry
  • Without matching: asymmetry forces per-channel trims and increases sensitivity to replacements and board-to-board spread.
  • Matching focus: pair within the same lot and preserve thermal symmetry on the board for stable relative behavior.
  • Minimum depth: room-only pairing; add temperature points when operating range is wide and symmetry is a spec.
Service replacement consistency
  • Without matching: replacement parts shift relative offsets and trigger re-calibration cost, downtime, or re-qualification.
  • Matching focus: bin codes and pair IDs that allow “like-for-like” swaps within defined windows.
  • Minimum depth: binning + room test; reserve long-term drift rules for high availability systems.
Long-running drift management
  • Without matching: early-life changes and slow drift become intermittent faults, false alarms, or unexplained spec escapes.
  • Matching focus: re-test cadence, drift slope tracking, and alarm thresholds tied to lot and fixture context.
  • Minimum depth: temperature sweep + drift monitoring for systems where maintenance cycle and uptime are contractual.
[Figure: matching decision matrix. A 2×2 plot of matching cost versus field risk with four zones (Skip: basic QA; Sample: audit lots; Pair: room windows; Full: temp + drift) and markers showing where the bridge, multi-axis, service, and drift use cases typically land.]

Define what “matching” means (metrics & acceptance windows)

In production, “matching” must be defined as a measurable metric bundle with repeatable conditions and an acceptance window. This avoids ambiguous “good parts” language and enables pairing, binning, and service replacement based on traceable rules.

Production-ready metric bundle (what to measure and how to report)
  • Offset match @ 25°C (ΔVos): report Vos per channel and ΔVos under the same operating point and settling rules.
  • Gain match (ΔGain, if measurable): report gain per channel and ΔGain only when stimulus and meter uncertainty stay below the target window.
  • Drift match vs temperature (Δ(dVos/dT)): report temperature points, soak outcome, fitted slope, and slope difference between channels.
  • Recovery behavior class: classify overload recovery as fast/slow using a fixed stimulus and a fixed return-to-threshold rule.

Acceptance windows should be treated as production windows, not as a copy of datasheet limits. A practical window includes a guardband that covers measurement uncertainty, fixture drift, temperature gradients, and repeatability so that “pass/fail” remains stable across stations and time.

Window rules that prevent false mismatch
  • Define conditions first: rails, operating point, load class, filter/averaging, and settle criteria must be identical for both channels.
  • Report uncertainty: if meter + stimulus uncertainty is near the window width, use coarse binning rather than tight pairing.
  • Guardband explicitly: use a conservative “production window” that still meets the system requirement after uncertainty.
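The window rules above can be sketched as a small check. This is a minimal illustration, not a real test program: the function names, the 25 µV system window, and the k = 2 guardband factor are all hypothetical.

```python
# Hypothetical guardbanded pass/fail check for a pair delta (e.g. ΔVos in µV).
# All names and numbers are illustrative, not from a real test program.

def guardbanded_window(system_limit: float, u95: float, k: float = 2.0) -> float:
    """Shrink the system limit by k times the 95% measurement uncertainty."""
    return system_limit - k * u95

def pair_passes(delta: float, system_limit: float, u95: float, k: float = 2.0) -> bool:
    """Pass only when |delta| fits inside the guardbanded production window."""
    prod_limit = guardbanded_window(system_limit, u95, k)
    if prod_limit <= 0:
        # Uncertainty eats the whole window: tight pairing is not meaningful here,
        # which matches the "use coarse binning" rule above.
        raise ValueError("window narrower than guardband; use coarse binning")
    return abs(delta) <= prod_limit

# Example: 25 µV system window, 4 µV U95, k=2 -> 17 µV production window.
print(pair_passes(12.0, 25.0, 4.0))   # True
print(pair_passes(20.0, 25.0, 4.0))   # False
```

Note how a part at 20 µV fails the production window even though it meets the 25 µV system requirement: that margin is what keeps pass/fail stable across stations and time.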
[Figure: matching metric bundle. Four metrics (ΔVos @ 25°C, ΔGain, Δ(dVos/dT), recovery class fast/slow), each with an acceptance window bar and guardbands; windows drive pairing and binning, guardbands absorb uncertainty and repeatability.]

Test architecture (fixture, routing, and measurement chain ownership)

Production tests fail most often when the measurement chain is not owned. A stable process must separate DUT behavior from fixture, switching, cabling, temperature gradients, and instrument limits. Tight pairing and narrow bins are only valid when the dominant error sources are identified and controlled.

Standard production chain (structure before theory)
  • Stimulus source: controlled level and source impedance; verified with a reference check.
  • Switch matrix (MUX/relay): routing and contact variability; track fixture ID and relay cycle count.
  • DUT socket: repeatable contact and thermal path; avoid position-dependent bias.
  • Meter / capture: fixed range and averaging rules; record settings that impact uncertainty.
  • Temperature control: setpoint plus measured temperature; soak outcome must be recorded.
  • Logger: store context (SN/lot/station/fixture/temp/version) together with results.
Ownership rule of thumb
  • Solid-border blocks: calibratable or modelable segments suitable for tight bins.
  • Dashed-border blocks: uncontrolled or strongly time-varying segments; use wider windows or add checks.
  • Always record: station, fixture ID, relay cycles, temperature slot, and test timestamp.
[Figure: production test architecture with ownership separation. Blocks: stimulus source, switch matrix (MUX/relay), DUT socket, meter/capture, temperature chamber, and logger; solid borders mark calibratable segments, dashed borders mark uncontrolled disturbances. Record station, fixture, relay cycles, temp slot, and settings to keep results traceable.]

Pairing strategy (how to choose pairs that stay paired)

“Staying paired” means the relative error between two channels remains inside the chosen windows after re-tests, temperature exposure, and station-to-station variation. Pairing should therefore be treated as an engineering process with eligibility filters, ordering rules, conflict handling, and a traceable Pair ID.

Three pairing strategies (low cost → high robustness)
  • Same lot + same package: lowest cost; reduces lot-to-lot spread but does not guarantee tight matching.
  • Same board-position symmetry: improves thermal consistency on the PCB; helps pairs remain stable under real heat gradients.
  • Data-driven pairing: uses measured differences (ΔVos @25°C + Δ(dVos/dT) + recovery class) to minimize mismatch within windows.
Executable pairing flow (no heavy math)
  1. Filter: keep only parts with complete records and identical test conditions; split by recovery class (fast/slow) first.
  2. Sort: group by drift-slope class, then order by ΔVos (or a simple combined score) inside each group.
  3. Pair: pair neighbors inside the same group to minimize deltas; generate a Pair ID for each confirmed pair.
  4. Handle conflicts: orphans go to a wider class once (single downgrade); missing/outliers go to re-test; unresolved parts are not used for tight pairing.
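The four steps above can be sketched in a few lines. This is a minimal illustration of the filter/sort/pair logic only; the record keys (`sn`, `vos_uv`, `slope_class`, `recovery_class`, `complete`) are hypothetical, and conflict handling is reduced to collecting orphans.

```python
# Sketch of the filter -> sort -> pair flow. Record keys (sn, vos_uv,
# slope_class, recovery_class, complete) are illustrative, not a fixed schema.

def pair_candidates(records):
    """Group by recovery and slope class, sort by Vos, pair neighbors."""
    # 1. Filter: keep complete records only (condition checks omitted here).
    eligible = [r for r in records if r["complete"]]
    # Split by recovery class first, then by drift-slope class.
    groups = {}
    for r in eligible:
        groups.setdefault((r["recovery_class"], r["slope_class"]), []).append(r)
    pairs, orphans = [], []
    for key, members in groups.items():
        # 2. Sort inside each group so neighbors have the smallest ΔVos.
        members.sort(key=lambda r: r["vos_uv"])
        # 3. Pair adjacent devices; a real flow would also mint a Pair ID.
        for a, b in zip(members[0::2], members[1::2]):
            pairs.append({
                "sn_a": a["sn"], "sn_b": b["sn"],
                "delta_vos_uv": abs(b["vos_uv"] - a["vos_uv"]),
                "class": key,
            })
        if len(members) % 2:
            # 4. Conflict: the leftover part is a candidate for one downgrade.
            orphans.append(members[-1]["sn"])
    return pairs, orphans

parts = [
    {"sn": "A1", "vos_uv": 10.0, "slope_class": "S1", "recovery_class": "fast", "complete": True},
    {"sn": "A2", "vos_uv": 12.0, "slope_class": "S1", "recovery_class": "fast", "complete": True},
    {"sn": "A3", "vos_uv": 30.0, "slope_class": "S1", "recovery_class": "fast", "complete": True},
    {"sn": "A4", "vos_uv": 11.0, "slope_class": "S1", "recovery_class": "fast", "complete": False},
]
pairs, orphans = pair_candidates(parts)
```

Here A4 is filtered out for an incomplete record, A1/A2 pair with ΔVos = 2 µV, and A3 becomes an orphan eligible for a single class downgrade.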
Required outputs for traceability
  • Pair list: SN_A, SN_B, Pair ID, class/bin code, ΔVos, Δslope, recovery class, station/fixture.
  • Replacement rule: allowed swaps by class/bin code; re-test requirement after replacement.
  • Re-test hook: room recheck (spot) and optional temperature-point spot-check for high-risk builds.
[Figure: pairing flow. Eligible pool → filter (class) → sort (order) → pair (min Δ) → lock Pair ID → traceable pair list, with a side branch routing orphans, missing data, and outliers to re-test or a single downgrade. Pairing stays traceable and repeatable; conflicts become explicit actions.]

Temperature sweep method (points, soak time, sequence, repeatability)

A temperature sweep is not “running temperatures”; it is controlling variables so that drift slope and curve shape can be used for pairing and binning. The method must define point selection, soak criteria, sequence (up/down) for hysteresis tagging, and repeatability rules that keep results comparable across stations.

Point selection (minimum vs robust)
  • Minimal: 3 points (low / mid / high) for a basic slope estimate and coarse drift-class binning.
  • Robust: 5–7 points to detect nonlinearity and reduce fit ambiguity across a wide operating range.
  • Placement rule: include endpoints of the real operating range and at least one point near the most common field temperature.
Soak definition (state-based, not minutes)
  • Soak passes when the reading and measured temperature remain inside a stability threshold for a defined observation window.
  • Fixed minutes are avoided because chamber load, fixture thermal paths, and airflow can change settling behavior.
  • Always record soak outcome, soak duration, measured temperature, and sweep direction.
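The state-based soak rule can be expressed as a simple check over a rolling window of measured temperatures. The window length and stability band below are illustrative placeholders; a real SOP would version them (soak_rule_version).

```python
# State-based soak check: soak passes when all readings inside the observation
# window stay within a stability band. Thresholds here are illustrative.

def soak_passed(temps_c, window_n=5, band_c=0.2):
    """True when the last window_n temperature readings span <= band_c."""
    if len(temps_c) < window_n:
        return False  # not enough readings to judge stability yet
    window = temps_c[-window_n:]
    return (max(window) - min(window)) <= band_c

readings = [24.1, 24.6, 24.9, 25.02, 25.05, 25.04, 25.06, 25.03]
print(soak_passed(readings))  # last 5 readings span 0.04 °C -> True
```

Because the criterion is a state (stability over a window) rather than a timer, the same rule adapts automatically to chamber load and fixture thermal paths.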
Sequence, repeats, and outliers (keep it comparable)
  • Up vs down sweep: use the same temperature points in both directions when hysteresis tagging is required; report direction explicitly.
  • Per-point repeats: measure multiple times at each point using the same averaging and range settings; store all repeats (not only averages).
  • Outlier rule: re-test once; if correlated to station/fixture, flag chain ownership; if correlated to DUT only, downgrade or reject for tight bins.
Temperature sweep SOP (copy-ready checklist)
  • For each temperature point: step → wait until soak passes → measure repeats → compute slope-fit inputs → log context + results.
  • Pass condition: complete records (station/fixture/temp/direction/settings) and repeatability check at each point.
  • Outputs: per-point data, fitted slope dVos/dT, curve-shape tag, hysteresis tag (if down sweep is used).
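A minimal sketch of the slope-fit step in the SOP: an ordinary least-squares line over measured temperature points. The three-point sweep values are made up; a real fit would use measured temperatures (temp_measured_c), not setpoints.

```python
# Minimal slope fit for dVos/dT over measured temperature points.
# Ordinary least-squares line; the point values are illustrative.

def fit_drift_slope(temps_c, vos_uv):
    """Return (slope_uv_per_c, intercept_uv) from a least-squares line fit."""
    n = len(temps_c)
    mean_t = sum(temps_c) / n
    mean_v = sum(vos_uv) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, vos_uv))
    den = sum((t - mean_t) ** 2 for t in temps_c)
    slope = num / den
    return slope, mean_v - slope * mean_t

# Three-point minimal sweep: low / mid / high.
temps = [-20.0, 25.0, 70.0]
vos = [8.0, 12.5, 17.0]  # µV; perfectly linear here at 0.1 µV/°C
slope, intercept = fit_drift_slope(temps, vos)
```

With 5 to 7 points, the residuals of the same fit double as the curve-shape tag input: large residuals indicate nonlinearity that a 3-point sweep cannot see.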
[Figure: temperature sweep sequence on a temperature-vs-time line. Step → soak (stability pass) → measure (repeats) → log → step, with an up/down sweep tag. Soak is a stability state; repeats and logging make slope and hysteresis comparable across stations.]

Long-term drift monitoring (retest cadence, burn-in, and alarms)

Long-term drift is best managed as an operational system: a retest cadence plus alarm thresholds with clear dispositions. This turns “nice lab curves” into traceable production behavior that supports pairing stability, replacements, and lot-level investigations.

Retest cadence tiers (dense → steady → event-driven)
  • Early dense retest: short-interval checks to screen early instability and to validate station repeatability.
  • Steady cadence: periodic retests (by product grade) to maintain long-term baselines and detect distribution shifts.
  • Event-driven retest: triggered by process changes, fixture maintenance, field returns, or SOP updates.
Alarm definitions (what to flag and why)
  • Absolute drift: the device moves beyond its baseline band (single-unit abnormality risk).
  • Slope change: drift rate changes materially over time (trend risk, not just a one-time shift).
  • Lot shift: a lot’s distribution center or spread moves relative to historical baselines (systemic risk).
Disposition by severity (keep actions explicit)
  • Green: continue; record trend; follow the planned cadence.
  • Amber: downgrade bin or require calibration; schedule a confirmatory re-test.
  • Red: isolate lot or station; block tight pairing; trigger investigation and corrective actions.
Minimum record fields (make drift traceable)
  • Identity: SN, lot/date code, package, product grade, Pair ID (if used), bin code.
  • Context: timestamp, station, fixture ID, relay cycles, temp slot, measured temperature, soak outcome.
  • Results: key metrics (Vos, slope, class), baseline version, alarm flags, disposition code.
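The three alarm types above can be checked independently per monitoring point, as in this sketch. The baselines, bands, and tolerances are illustrative; in production they would come from the versioned baseline (baseline_version), not hard-coded values.

```python
# Sketch of the three alarm checks. Baselines and thresholds are illustrative;
# real bands come from the versioned monitoring baseline (baseline_version).

def drift_alarms(value, baseline, band, slope_now, slope_ref, slope_tol,
                 lot_mean, lot_baseline_mean, lot_shift_tol):
    """Return the set of alarm flags raised for one monitoring point."""
    alarms = set()
    if abs(value - baseline) > band:
        alarms.add("absolute_drift")   # single-unit abnormality risk
    if abs(slope_now - slope_ref) > slope_tol:
        alarms.add("slope_change")     # trend risk, not a one-time shift
    if abs(lot_mean - lot_baseline_mean) > lot_shift_tol:
        alarms.add("lot_shift")        # systemic risk
    return alarms

flags = drift_alarms(value=18.0, baseline=10.0, band=5.0,
                     slope_now=0.12, slope_ref=0.10, slope_tol=0.05,
                     lot_mean=11.0, lot_baseline_mean=10.5, lot_shift_tol=2.0)
# Only the absolute-drift band is exceeded here -> single-unit investigation.
```

Each raised flag then maps to a disposition (green/amber/red) by severity rules, which keeps "it drifted" an explicit, recorded event.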
[Figure: long-term drift monitoring timeline. A drift trace over time with a baseline band, alarm thresholds, and retest markers for dense, steady, and event-driven cadences; points leaving the band raise alarms that drive bin changes and actions.]

Guardbands & binning (turn lab specs into production bins)

Datasheet “typical” and “maximum” numbers are not production limits. Production bins must include a guardband that accounts for measurement uncertainty, temperature coverage, monitoring policy, and station variation. This creates stable, auditable pass/fail decisions and clear SKU routing.

Where guardbands come from (engineering, not guesswork)
  • Measurement uncertainty: meter, stimulus, switching, and repeatability limits.
  • Temperature coverage: limited points, gradients, and soak definition variance.
  • Aging policy: long-term monitoring cadence and alarm rules.
  • Station variation: fixture differences and calibration drift across stations.
Production bins (clear destinations)
  • Bin0 (Golden): tight windows and complete records; suitable for high-grade SKUs and baseline references.
  • Bin1 (Normal): meets primary windows; used for mainstream shipping with standard verification.
  • Bin2 (Use with calibration): wider matching; routed to designs that include calibration or compensation hooks.
  • Reject: outside windows, incomplete records that cannot be recovered, or unstable monitoring behavior.
Bin definitions must be traceable
  • Link to uncertainty: bin limits reference the station/fixture calibration and uncertainty assumptions.
  • Link to temperature method: points/soak/sequence define what “drift match” means in practice.
  • Link to monitoring: cadence and alarms define how conservative initial bins must be.
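A guardbanded bin table can be reduced to one routing function, as in this sketch. The window values and k factor are invented for illustration; real limits would be versioned (bin_version) and tied to the station's uncertainty profile.

```python
# Illustrative guardbanded bin table: limit_prod = limit_target - k * U95,
# then route by the tightest window the measured |ΔVos| fits (numbers made up).

def assign_bin(delta_vos_uv, u95_uv, k=2.0):
    """Map |ΔVos| to bin0/bin1/bin2/reject using guardbanded limits."""
    targets = {"bin0": 10.0, "bin1": 25.0, "bin2": 60.0}  # system windows, µV
    for bin_code, target in targets.items():  # tightest bin checked first
        prod_limit = target - k * u95_uv
        if prod_limit > 0 and abs(delta_vos_uv) <= prod_limit:
            return bin_code
    return "reject"

print(assign_bin(4.0, 2.0))   # 4 <= 10 - 4 = 6      -> bin0
print(assign_bin(8.0, 2.0))   # 8 > 6, 8 <= 21       -> bin1
print(assign_bin(40.0, 2.0))  # 40 <= 56             -> bin2
print(assign_bin(60.0, 2.0))  # outside all windows  -> reject
```

Notice that a higher U95 silently tightens every bin: improving measurement uncertainty is often cheaper than accepting the yield loss the guardband imposes.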
[Figure: guardbanded binning with explicit destinations. Bin0 (golden), Bin1 (normal), Bin2 (calibrate), and Reject (rework) route to ship, recalibrate, or reject. Bins must map to uncertainty, temperature coverage, and monitoring policy.]

Data schema (minimum fields that make drift traceable)

A drift program fails when results cannot be reproduced or attributed. The minimum schema below is designed to make every record traceable from device identity through test conditions to bins and actions. Keep the naming stable (snake_case) and treat hashes/versions as first-class fields.

Identity (required keys)
  • device_sn (string, required): unique serial number.
  • lot_id (string, required): production lot identifier.
  • wafer_id (string, optional): wafer identifier (if available).
  • assembly_id (string, optional): assembly/batch identifier.
  • package_code (string, required): package/footprint code.
  • revision (string, optional): silicon/assembly revision.
  • pair_id (string, optional): pairing identifier (post pairing).
Test context (ownership evidence)
  • test_station_id (string, required): station identifier.
  • fixture_id (string, required): fixture identifier.
  • socket_id (string, optional): socket identifier (multi-socket fixtures).
  • slot_id (string, optional): chamber slot/position (if used).
  • relay_cycle_count (int, required): relay/switching lifetime counter.
  • operator_id (string, optional): operator or shift identifier.
  • timestamp_utc (datetime, required): test time in UTC.
  • test_program_id (string, required): test program version label.
  • test_program_hash (string, required): immutable content hash for the program/config.
Environment (what “temperature” really means)
  • temp_setpoint_c (float, required): chamber setpoint.
  • temp_measured_c (float, required): measured temperature at the DUT/fixture point.
  • humidity_rh (float, optional): humidity (if monitored).
  • soak_time_s (int, required): soak duration in seconds.
  • soak_pass (bool, required): state-based soak pass/fail.
  • sweep_direction (enum, optional): up / down / na.
  • cycle_index (int, optional): sweep cycle index.
Stimulus (make results comparable)
  • input_mode (enum, required): stimulus mode code.
  • input_amplitude_v (float, required): applied amplitude.
  • source_impedance_ohm (float, required): source impedance.
  • supply_vpos_v (float, required): positive supply rail.
  • supply_vneg_v (float, optional): negative supply rail (or null for single-supply).
  • load_ohm (float, optional): output load (if relevant).
  • bandwidth_hz (float, optional): measurement bandwidth definition (if applicable).
Results + bins (what production uses)
  • vos_uv (float, required): input offset voltage (µV).
  • delta_vos_pair_uv (float, optional): pair delta at the pairing condition.
  • drift_slope_uv_per_c (float, optional): drift slope from sweep or monitoring.
  • recovery_class (enum, optional): fast / slow / na.
  • pass_fail (bool, required): overall pass/fail at the applied bin definition.
  • fail_code (enum, optional): reason code (coded, not free text).
  • bin_code (enum, required): bin0 / bin1 / bin2 / reject.
  • bin_version (string, required): version of the bin thresholds.
Calibration linkage + attachments (audit hooks)
  • cal_version (string, optional): calibration package version.
  • coeff_hash (string, optional): coefficient/LUT hash.
  • firmware_build_id (string, optional): firmware build identifier (if used).
  • cal_applied (bool, optional): whether calibration was applied at test time.
  • raw_capture_hash (string, optional): raw waveform/data hash.
  • raw_capture_summary (string, optional): short numeric summary.
  • note_code (enum, optional): coded note/exception tag.
Hashes and versions (protect comparability)
  • test_condition_hash: derived from environment + stimulus + program hash; compare only within the same hash.
  • baseline_version: identifies which monitoring baseline/band was applied when raising alarms.
  • uncertainty_profile_id (optional): links to station/fixture uncertainty assumptions and calibration state.
[Figure: minimum production database schema for traceable drift. A production_test_records table grouped into identity, test_context, environment, stimulus, results + bins, and cal + attachments, with a trace chain from SN through lot, station, and temperature to result, bin, and audit.]

Analysis & correlation (find root cause without blaming the DUT)

The fastest production root-cause work starts with ownership: if a shift follows a station, fixture, or environment, the DUT is not the first suspect. Use the checks below to isolate measurement-chain effects before tightening bins or rejecting lots.

Fast correlation checks (production-practical)
  • Station-to-station offset: compare means under the same test_condition_hash; confirm with a golden device rotation.
  • Fixture clustering: check whether results form clusters by fixture_id or socket_id within the same station.
  • Slot / chamber position: correlate temp_measured_c and results with slot_id; use slot rotation to confirm gradients.
  • Relay aging: correlate relay_cycle_count with shifts; define replacement limits and uncertainty profiles.
  • Lot distribution shift: compare drift_slope distributions lot-to-lot; quarantine lots that move as a group.
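The fixture-clustering check above amounts to comparing per-fixture means against the overall mean under one test_condition_hash. A minimal sketch, with invented fixture IDs and values:

```python
# Sketch of a fixture-clustering check: compare per-fixture means against the
# overall mean for one test_condition_hash. Values are illustrative.
from statistics import mean

def fixture_offsets(records):
    """Return {fixture_id: fixture_mean - overall_mean} for one condition hash."""
    overall = mean(r["vos_uv"] for r in records)
    by_fixture = {}
    for r in records:
        by_fixture.setdefault(r["fixture_id"], []).append(r["vos_uv"])
    return {fid: mean(vals) - overall for fid, vals in by_fixture.items()}

records = [
    {"fixture_id": "F1", "vos_uv": 10.0},
    {"fixture_id": "F1", "vos_uv": 12.0},
    {"fixture_id": "F2", "vos_uv": 18.0},
    {"fixture_id": "F2", "vos_uv": 20.0},
]
offsets = fixture_offsets(records)
# F1 sits ~4 µV below and F2 ~4 µV above the overall mean:
# the shift follows the fixture, so the DUTs are not the first suspect.
```

The same grouping applied to test_station_id, slot_id, or relay_cycle_count bands covers the other checks in the list; a swap-and-retest then confirms the suspect.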
Action-oriented outcomes (what to do next)
  • Shift follows station: isolate station, re-calibrate instruments, and re-run golden rotation.
  • Shift follows fixture/socket: swap fixtures, inspect leakage/contact, and retest the same devices.
  • Shift follows slot: update soak rules or restrict tight bins to validated slots.
  • Shift follows relay cycles: schedule relay replacement and treat the aging state as an uncertainty change.
  • Shift follows lot: quarantine the lot, expand monitoring cadence, and adjust guardbands if justified.
Priority triage rule

First eliminate measurement-chain and environment correlations (station → fixture → slot → relays → lot). Only when shifts do not follow these factors should the DUT be treated as the primary suspect.

[Figure: production triage tree. From symptom (shift/yield) through checks in order: station (golden rotation), fixture (swap/retest), slot (rotate), relays (cycle trend), lot (shift), and only then DUT suspect. Action codes: isolate station, swap fixture, rotate slot, replace relays, quarantine lot; suspect the DUT last.]

Closing the loop (how production data improves design & sourcing)

Production matching only “sticks” when data turns into actions. The goal is not prettier lab plots, but a repeatable system that drives design changes, supplier controls, and calibration policy with auditable versioning.

Action 1 — Design change (turn drift evidence into layout/test hooks)
  • Improve measurement ownership: add clear test points and traceable stimulus nodes for Vos/drift checks.
  • Enforce symmetry for paired channels: mirror placement and routing for dual/quad op amps to reduce thermal gradients.
  • Stabilize ratio-critical networks: prefer matched resistor arrays (tracking) where gain/ratio drives pairing yield.
  • Make temperature real: add a local temperature sensor footprint close to the DUT hot spot used in drift correlation.
Action 2 — Supplier control (lock what actually controls yield)
  • Lot & process windows: record lot/wafer/assembly IDs and quarantine any lot that shifts drift slope as a group.
  • Incoming sampling: define an AQL plan tied to drift-sensitive metrics (Vos window + slope window), not just pass/fail.
  • Fixture wear controls: treat relay/switch matrix aging as a supplier-like input; lock maintenance intervals by cycle count.
  • Approved alternates: qualify second sources only after cross-station, cross-fixture correlation stays inside the same bins.
Action 3 — Calibration policy (decide what may ship, and how versions freeze)
  • Define “calibration-eligible” bins: allow calibration shipment only for bins whose drift behavior stays stable under monitoring.
  • Freeze coefficient identity: store coefficient hashes and firmware/build IDs alongside every shipped unit.
  • Prevent silent changes: bin thresholds, baselines, and calibration packages must carry immutable versions and change logs.
  • Regress before rollout: any threshold or calibration update requires a defined regression set across temperature points.
Change management minimum set (must be auditable)
  • version_id: schema version, bin_version, baseline_version, cal_version (no “silent” edits).
  • threshold change record: what moved, by how much, and why (linked to lot/station evidence).
  • regression requirements: required temp points, soak rule, repeats, and acceptance criteria.
  • rollout control: effective date, affected stations/fixtures, and rollback trigger.
Concrete material part numbers (examples used in production workflows)

The list below is not a catalog. It shows common, concrete material identifiers that make a “closing-the-loop” system easier to implement and audit (golden devices, local temperature capture, coefficient storage, and fixture ownership).

Golden / reference devices (for station rotation)
  • TI: OPA2188 (dual zero-drift op amp)
  • TI: OPA188AIDGKR (example orderable code; single zero-drift op amp)
  • ADI: ADA4522-2 (dual zero-drift op amp)
  • ADI: ADA4528-2ARMZ (example MPN; low-noise dual zero-drift op amp)
Design hooks that improve matching repeatability
  • Local temperature capture: TI TMP117 (digital temperature sensor, on-board hot-spot tracking)
  • Ratio tracking networks: Vishay ACAS 0606 AT / ACAS 0612 AT (precision resistor array family)
Calibration & fixture ownership building blocks
  • Coefficient storage: Microchip 24AA64 (I²C EEPROM family, audit-friendly coefficient retention)
  • Switch matrix wear tracking: Omron G6K series signal relays (example: G6K(U)-2F variants)
[Figure: closing the loop. A central production database (schema, hashes, bins, versions) feeds three actions: design change (symmetry, test points), supplier control (lot, incoming sampling), and calibration policy (bin-eligible shipment, frozen versions with change logs).]


FAQs (Production test & matching)

These FAQs close long-tail questions strictly within production pairing, temperature sweep, drift monitoring, guardbands, binning, and traceability. Answers are short and action-first, using repeatable thresholds and required record fields.

Threshold conventions used below
  • U95: 95% measurement uncertainty for the station under the same test_condition_hash.
  • σ_station: station repeatability standard deviation (from repeats or golden rotation).
  • Rule of thumb: treat a shift as “real” when |Δ| ≥ 3×U95 (or ≥ 4×σ_station for cross-station comparisons).
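The rule of thumb above can be expressed as a small helper. The 3× and 4× factors come from the conventions just stated; the example numbers are illustrative.

```python
# The page's rule of thumb as a tiny helper. The 3x and 4x factors come from
# the threshold conventions above; the example values are illustrative.

def shift_is_real(delta, u95=None, sigma_station=None, cross_station=False):
    """|Δ| >= 3*U95 same-station, or >= 4*sigma_station cross-station."""
    if cross_station:
        return abs(delta) >= 4.0 * sigma_station
    return abs(delta) >= 3.0 * u95

print(shift_is_real(10.0, u95=3.0))                                # True  (10 >= 9)
print(shift_is_real(8.0, u95=3.0))                                 # False (8 < 9)
print(shift_is_real(10.0, sigma_station=3.0, cross_station=True))  # False (10 < 12)
```

The FAQ answers below apply this same test before treating any mismatch, slope difference, or station disagreement as actionable.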
Should pair matching be done at wafer/package level, or at board level?
Decision

Match at the level that controls the dominant variation: silicon-only pairing is insufficient when board thermal gradients and routing asymmetry dominate drift mismatch.

Threshold
  • If board-level Δ(drift_slope) adds ≥ 3×U95 vs package-only results, board-level pairing is required.
  • If cross-board position changes cause ≥ 4×σ_station shifts, wafer/package pairing will not hold in the field.
Actions
  1. Run a small split: package-level pairs vs board-level pairs under the same test_condition_hash.
  2. Quantify added mismatch from board placement symmetry (same slot, same soak rules).
  3. Lock pairing level in SOP and version it (pairing_policy_version).
Record fields

device_sn, lot_id, package_code, pair_id, test_condition_hash, slot_id, temp_measured_c, drift_slope_uv_per_c, pairing_policy_version

After pairing within the same lot, why are temperature-sweep slopes still inconsistent? What are the top 3 checks?
Decision

A “slope mismatch” is actionable only when it exceeds uncertainty and stays consistent across repeats under the same sweep method.

Threshold
  • Flag slope mismatch when |Δ(drift_slope)| ≥ 3×U95 and repeats agree (no sign flips across repeats).
  • If mismatch disappears after slot rotation, treat it as environment, not DUT.
Top 3 checks (in order)
  1. Soak validity: verify soak_pass and temp_measured_c stability at each point (do not trust the setpoint alone).
  2. Slot gradient: rotate slot_id between paired devices; re-run the same points and sequence.
  3. Stimulus drift: confirm input_mode, source_impedance_ohm, and supply rails were identical; check test_program_hash changes.
Record fields

pair_id, cycle_index, sweep_direction, soak_pass, soak_time_s, temp_setpoint_c, temp_measured_c, slot_id, test_condition_hash, test_program_hash

How should “soak time is enough” be decided: fixed time or a stability threshold?
Decision

Use a stability threshold whenever drift monitoring or slope extraction matters; fixed minutes alone do not guarantee comparability.

Threshold
  • Declare soak_pass only after temp_measured_c stays inside a defined stability band for a defined observation window.
  • Declare reading stability when successive readings change by ≤ 1×U95 over that same window.
Actions
  1. Define the soak rule in SOP: stability window, measurement cadence, and pass criteria.
  2. Log soak_pass and the final temp_measured_c; reject points where soak_pass=false from slope fitting.
  3. Version the soak rule (soak_rule_version) and include it in test_condition_hash.
Record fields

soak_time_s, soak_pass, temp_measured_c, timestamp_utc, test_station_id, soak_rule_version, test_condition_hash

Should a temperature sweep go up or down? Does the sequence affect results?
Decision

Sequence affects comparability when soak behavior differs between heating and cooling or when slot gradients drift with airflow.

Threshold
  • Treat sequence dependence as real when up vs down slope differs by ≥ 3×U95 under identical points and soak rules.
  • If only one direction fails soak_pass frequently, fix the soak rule or chamber control before changing bins.
Actions
  1. Pick one direction as the production standard and lock sweep_direction in SOP.
  2. Run a periodic bidirectional audit on a golden pair; update alarms if sequence bias appears.
  3. Record sweep_direction and cycle_index for every point used in slope fitting.
Record fields

sweep_direction, cycle_index, temp_setpoint_c, temp_measured_c, soak_pass, drift_slope_uv_per_c, test_condition_hash

How should production guardbands be set to avoid excessive rejects?
Decision

Guardbands must cover measurement uncertainty and station variation; using datasheet limits directly causes unstable yields across stations and time.

Threshold
  • Set production limit as: limit_prod = limit_target − k×U95 (k typically 2–3, based on risk).
  • If station-to-station mean spread is ≥ 4×σ_station, fix station/fixture before tightening limits.
Actions
  1. Measure U95 using repeats and golden rotation under the same test_condition_hash.
  2. Define bins (bin0/bin1/bin2/reject) with a versioned threshold table (bin_version).
  3. Re-run a regression set after any station/fixture maintenance or program changes.
Record fields

bin_code, bin_version, test_station_id, fixture_id, test_condition_hash, uncertainty_profile_id, pass_fail
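The guardband formula and the resulting bin assignment can be sketched directly. The limit values and bin names below are illustrative; in production they come from the versioned threshold table (bin_version), not from code constants.

```python
def guardbanded_limit(limit_target, u95, k=2.0):
    """limit_prod = limit_target − k×U95 (for an upper spec limit).
    k is typically 2–3 based on risk; 2.0 here is only a default."""
    return limit_target - k * u95

def assign_bin(vos_uv, bins):
    """bins: list of (bin_code, upper_limit_uv), tightest bin first.
    Returns the first bin whose guardbanded limit covers the reading."""
    for code, limit in bins:
        if abs(vos_uv) <= limit:
            return code
    return "reject"
```

Example: with U95 = 5 µV and targets of 50/150/500 µV, the production limits become 40/140/490 µV, so a 100 µV reading lands in bin1 rather than bin0.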

Station A passes but Station B fails. How can fixture issues be quickly distinguished from DUT issues?
Decision

If the shift follows station/fixture, treat it as measurement-chain ownership until proven otherwise.

Threshold
  • Cross-station disagreement is actionable when mean_A − mean_B ≥ 4×σ_station under the same test_condition_hash.
  • If a golden device fails only on one station, the station is the suspect.
Actions (fast triage)
  1. Run the same golden device on Station A and B (same input_mode, same program hash).
  2. Swap fixtures between stations; re-run the same DUT and golden.
  3. Rotate slot_id if temperature is involved; confirm temp_measured alignment.
  4. Only after the above: flag DUT for deeper investigation.
Record fields

test_station_id, fixture_id, socket_id, test_program_hash, test_condition_hash, temp_measured_c, pass_fail, fail_code
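The 4×σ_station disagreement check is a one-line comparison of station means; the sketch below assumes both result sets were taken under the same test_condition_hash.

```python
from statistics import mean

def station_disagreement(results_a, results_b, sigma_station):
    """results_*: readings of the same metric on Stations A and B under
    the same test_condition_hash. Returns (gap, actionable) where
    actionable means the mean gap meets the 4×σ_station threshold and
    the measurement chain, not the DUT, is the first suspect."""
    gap = abs(mean(results_a) - mean(results_b))
    return gap, gap >= 4 * sigma_station
```

An actionable gap triggers the golden/fixture-swap triage above before any DUT is flagged.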

How does relay/MUX aging create “fake drift”, and how should it be monitored?
Decision

When a metric correlates with relay cycle count rather than device identity or lot, the drift is likely in the switch path.

Threshold
  • Flag “path aging” when correlation persists and the shift exceeds 3×U95 over a relay cycle band.
  • Stop using tight bins on a path when new-vs-old relay states differ by ≥ 4×σ_station.
Actions
  1. Log relay_cycle_count per test path; segment results by cycle bands.
  2. Run a golden device weekly across all paths; compare to baseline_version.
  3. Define a replacement limit and encode it in maintenance SOP; bump uncertainty_profile_id after maintenance.
Record fields

relay_cycle_count, test_station_id, fixture_id, socket_id, test_condition_hash, baseline_version, uncertainty_profile_id
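Segmenting by relay cycle band, as in action 1, can be sketched as follows; the band size is an assumption, and comparing only the oldest and newest bands is a simplification of the full correlation check.

```python
from statistics import mean

def path_aging_flag(records, band_size, u95):
    """records: (relay_cycle_count, metric) pairs for one test path.

    Groups results into relay-cycle bands and flags 'path aging' when
    the newest band's mean has shifted >= 3×U95 from the oldest band's
    mean. band_size (cycles per band) is an illustrative choice.
    """
    bands = {}
    for cycles, value in records:
        bands.setdefault(cycles // band_size, []).append(value)
    ordered = sorted(bands)
    if len(ordered) < 2:
        return False  # not enough cycle history to compare bands
    shift = abs(mean(bands[ordered[-1]]) - mean(bands[ordered[0]]))
    return shift >= 3 * u95
```

A flagged path would be confirmed with the weekly golden run before scheduling relay replacement and bumping uncertainty_profile_id.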

How long should long-term drift retesting run to be meaningful, and how should alarms be set?
Decision

Meaningful monitoring requires at least two phases: early dense checks to catch instabilities, then steady cadence to detect distribution shifts.

Threshold
  • Absolute alarm: |Δmetric| ≥ 3×U95 from baseline under the same conditions.
  • Slope alarm: drift rate changes by ≥ 3×U95 between windows.
  • Lot alarm: lot mean shift ≥ 4×σ_station vs historical baseline.
Actions
  1. Define early dense cadence (days) and steady cadence (weeks/months) by product grade.
  2. Store baseline_version and alarm_policy_version; alarms must be replayable.
  3. When alarms trigger: downgrade bins, increase cadence, and isolate correlated stations/fixtures first.
Record fields

timestamp_utc, baseline_version, alarm_policy_version, test_condition_hash, vos_uv, drift_slope_uv_per_c, bin_code, pass_fail
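The three alarm rules are independent threshold checks, which makes them easy to replay against stored baselines. A sketch, assuming all inputs were taken under the same conditions as the baseline (the slope comparison mirrors the document's 3×U95 wording):

```python
def drift_alarms(metric, baseline_metric,
                 slope, baseline_slope,
                 lot_mean, baseline_lot_mean,
                 u95, sigma_station):
    """Replayable evaluation of the three alarm rules above.
    Returns the set of alarms that fired for this retest point."""
    fired = set()
    if abs(metric - baseline_metric) >= 3 * u95:
        fired.add("absolute")          # |Δmetric| >= 3×U95 vs baseline
    if abs(slope - baseline_slope) >= 3 * u95:
        fired.add("slope")             # drift-rate change between windows
    if abs(lot_mean - baseline_lot_mean) >= 4 * sigma_station:
        fired.add("lot")               # lot mean shift vs historical baseline
    return fired
```

Because the inputs are just the stored baseline fields plus the new reading, re-running this under a recorded alarm_policy_version reproduces the original alarm decision.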

How should bin codes be designed to support traceability later?
Decision

A bin is useful only when it encodes both outcome and policy version, enabling replay and rollback.

Threshold
  • Do not allow shipping bins without bin_version and test_program_hash.
  • Reject any record missing test_condition_hash when bins depend on temperature or stimulus.
Actions
  1. Use a small, stable bin set: bin0 (golden), bin1 (normal), bin2 (cal-eligible), reject.
  2. Make bin definitions a versioned table; change requires regression and a change log.
  3. Store fail_code as a controlled enum; free-text notes belong in note_code or attachments.
Record fields

bin_code, bin_version, pass_fail, fail_code, test_condition_hash, test_program_hash, baseline_version
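A bin decision that refuses to ship without its traceability fields can be sketched as below. The table contents, fail codes, and limits are hypothetical; the real table is versioned under change control.

```python
# Illustrative versioned bin table; real values live in a controlled table.
BIN_TABLE = {
    "v3": [("bin0", 40.0), ("bin1", 140.0), ("bin2", 490.0)],
}

def record_bin(vos_uv, bin_version, test_condition_hash):
    """Return (bin_code, bin_version, fail_code).

    Refuses to assign a shipping bin when test_condition_hash is
    missing, so every stored bin decision is replayable later.
    fail_code values here ("OK", "VOS_OOR", "MISSING_HASH") are an
    illustrative controlled enum.
    """
    if not test_condition_hash:
        return ("reject", bin_version, "MISSING_HASH")
    for code, limit in BIN_TABLE[bin_version]:
        if abs(vos_uv) <= limit:
            return (code, bin_version, "OK")
    return ("reject", bin_version, "VOS_OOR")
```

Storing the (bin_code, bin_version) pair together is what allows rollback: re-binning old data against a prior table version is a pure table lookup.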

How should a “golden pair” be maintained and requalified?
Decision

A golden pair is a station/fixture reference artifact. It must be requalified whenever programs, bins, fixtures, or uncertainty profiles change.

Threshold
  • Requalify when golden results shift by ≥ 3×U95 from the stored baseline under the same conditions.
  • Invalidate if golden behavior becomes sensitive to slot_id or fixture swaps (cluster evidence).
Actions
  1. Store a golden baseline dataset with baseline_version and test_condition_hash.
  2. Run golden rotation on a fixed schedule and after maintenance; compare to baseline with the same program hash.
  3. Track golden artifact custody: storage, handling, and usage count.
Record fields

pair_id, baseline_version, test_station_id, fixture_id, test_condition_hash, test_program_hash, note_code

How can chamber slot differences be reduced? Is slot correction mandatory?
Decision

Slot correction is not mandatory when slot effects stay below measurement uncertainty and are controlled by rotation, with tight bins restricted to validated slots.

Threshold
  • Allow tight pairing only on slots where slot-to-slot shift is < 2×U95.
  • If slot-to-slot mean shift is ≥ 4×σ_station, restrict or correct; do not tighten bins.
Actions
  1. Use slot rotation for pairs (swap slot_id between devices) to reveal gradients.
  2. Record temp_measured_c per slot; treat setpoint-only data as insufficient for drift decisions.
  3. Whitelist validated slots for bin0/bin1; route others to looser bins or extra soak.
Record fields

slot_id, temp_setpoint_c, temp_measured_c, soak_pass, test_condition_hash, bin_code, bin_version
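The whitelist rule reduces to two threshold checks per slot. A sketch, assuming each slot's mean shift has already been measured against a chamber reference using temp_measured_c (not setpoint):

```python
def classify_slots(slot_shifts, u95, sigma_station):
    """slot_shifts: {slot_id: mean shift vs chamber reference}.

    Returns (tight, restrict): slots whitelisted for bin0/bin1
    (shift < 2×U95) and slots that must be restricted or corrected
    (shift >= 4×σ_station). Slots in neither set go to looser bins
    or get extra soak.
    """
    tight = {s for s, d in slot_shifts.items() if abs(d) < 2 * u95}
    restrict = {s for s, d in slot_shifts.items()
                if abs(d) >= 4 * sigma_station}
    return tight, restrict
```

Rotating pairs between slots, per action 1, is what produces the per-slot shifts this check consumes.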

If retest scatter increases, should the line be stopped first or should shipments be downgraded first?
Decision

Increased scatter is treated as a measurement-chain event until proven otherwise. Use a severity gate to decide “stop” vs “downgrade.”

Threshold (severity gate)
  • Green: scatter increase < 1×σ_station (historical) → continue, monitor.
  • Amber: scatter increase between 1× and 2×σ_station → downgrade bins, run golden rotation.
  • Red: scatter increase ≥ 2×σ_station or station split appears → pause tight-bin shipments and isolate station/fixture.
Actions
  1. Segment by test_station_id and fixture_id; look for clustering before suspecting the DUT.
  2. Run golden device across suspected stations; compare to baseline_version.
  3. If Amber: ship only from looser bins (bin1/bin2) until repeatability recovers; record the policy change.
  4. If Red: pause tight bins, fix ownership chain, then re-run regression set.
Record fields

test_station_id, fixture_id, uncertainty_profile_id, baseline_version, bin_version, pass_fail, fail_code, timestamp_utc
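The severity gate is a pair of σ_station comparisons; the 1× and 2× boundaries below are illustrative band choices, not fixed policy.

```python
def severity_gate(scatter_increase, sigma_station):
    """Map a scatter increase onto the Green/Amber/Red gate.
    The 1× and 2× σ_station boundaries are illustrative assumptions."""
    if scatter_increase >= 2 * sigma_station:
        return "red"    # pause tight-bin shipments; isolate station/fixture
    if scatter_increase >= 1 * sigma_station:
        return "amber"  # downgrade bins; run golden rotation
    return "green"      # continue, monitor
```

Whatever boundaries are chosen, they belong in the versioned alarm policy so that any Amber/Red decision can be replayed later.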