Tempco & Calibration Strategy for Instrumentation Amplifiers (INA)
Stable accuracy across temperature is achieved by owning the drift terms (offset and gain versus temperature and aging), then applying the simplest calibration that still passes independent validation points.
Use 2-point as the default, move to multi-point/LUT only when residual curvature is stable and measurement uncertainty is well below the error budget, and enforce warm-up/soak gates plus long-term tracking to keep coefficients valid.
Scope & Error Ownership Map (Tempco + Calibration only)
Stable accuracy is not a single “low drift” number. It is a closed loop: own the dominant error terms, select a calibration ladder that matches real temperature behavior, and validate with independent checks so calibration does not “hide” wiring/protection/ADC issues.
A) What this page owns (deep coverage)
- Tempco decomposition: offset(T), gain(T), curvature/hysteresis boundaries, and when linear assumptions break.
- Calibration ladder: 1-point → 2-point → multi-point/LUT → tracking, with point planning and validation gates.
- Zero-drift (chopper) artifacts: ripple/alias-like behaviors that corrupt “drift” interpretation and how to handle them in calibration.
- Long-term drift tracking: aging-driven drift, re-calibration triggers, and coefficient/version governance.
B) What this page does not cover (linked, not repeated)
These topics often look like “drift” in the field, but they must be solved at their true owner pages to avoid cross-page overlap.
- Input clamp & leakage modeling (diode/TVS leakage and series-R effects) → Input Clamp & Leakage Budgeting
- Layout/grounding/guarding (leakage paths, return continuity, guards) → Layout & Grounding
- ADC drive / anti-alias filtering (settling, phase margin, AAF topology) → ADC Drive & Anti-Alias Filtering
- Production injection fixtures (how to build loopback/injection hardware) → Self-Test & Production Test
C) Error term → owner → quick test hook (use before calibrating)
| Error term | Typical symptom | Calibratable? | Owner | Quick test hook |
|---|---|---|---|---|
| Offset(T) | Reading shifts up/down with temperature at “zero input”. | Yes | This page | Short input; sweep T; plot intercept vs T; verify repeatability on return-to-room. |
| Gain(T) | Scale factor changes; ratios drift across temperature. | Yes | This page | Apply two known inputs; track slope vs T; validate with a third “holdout” point. |
| Chopper artifact | Looks like drift after filtering/averaging; shape changes with sampling. | Partly | This page | Change sample/average window; if “drift” morphs, treat as artifact—not tempco. |
| Leakage(T) | Offset grows with humidity/touch; worse with high source impedance. | No | Clamp & Leakage | Swap source-R; watch offset sensitivity; inspect “touch/cable move” dependence. |
| CMR residual | Output shifts when common-mode changes; worse with long leads/mismatch. | No | Layout | Step common-mode (safe range); observe ΔVout at constant differential input. |
| Aging(t) | Slow monotonic drift under same conditions over days/months. | Track | This page | Log periodic check at fixed T; set re-cal trigger when drift crosses guardband. |
Rule of thumb: calibrate only terms that stay correlated across temperature and time. Use this map as a gate: if the symptom follows humidity, cable touch, or common-mode steps, fix the owner page first; calibration will not make it reliable.
Tempco Taxonomy: What Actually Drifts (and Why It Matters)
“Low drift” is ambiguous unless drift is broken into budgetable objects. This section classifies drift by shape (offset vs gain vs curvature), dependency (temperature vs time vs environment), and actionability (2-point, multi-point/LUT, tracking, or solve elsewhere).
Offset(T)
- Looks like: baseline shifts with temperature at near-zero input.
- Dominates when: small signals, high gain, narrow input span.
- Quick check: short input; plot intercept vs T; repeat on return-to-room.
- Action: 1-point (narrow T) or 2-point (wide T) with holdout verification.
Gain(T)
- Looks like: scale factor changes; ratios drift across temperature.
- Dominates when: large span, ratiometric assumptions break, or external gain-set components drift.
- Quick check: apply two known inputs; track slope vs T; validate at a third point.
- Action: 2-point minimum; multi-point if curvature appears across the temperature range.
Curvature / hysteresis
- Looks like: 2-point fits endpoints but leaves mid-range residual; up-sweep ≠ down-sweep.
- Dominates when: wide temperature range, packaging stress, gradients, or component tempco mismatch.
- Quick check: add a mid-temperature “holdout” point and compare residual patterns.
- Action: multi-point/LUT with constraints + validation points; define sweep direction policy.
Chopper artifact
- Looks like: “drift” appears after filtering/averaging; symptom changes with sampling window.
- Dominates when: bandwidth is low, averaging is heavy, or sampling interacts with ripple components.
- Quick check: vary sampling/averaging; if the error morphs, treat it as artifact, not tempco.
- Action: manage ripple windows and validation methods; do not force static calibration to absorb it.
Leakage-driven pseudo drift (not owned here)
- Looks like: offset grows with humidity/touch/cable movement; stronger with high source impedance.
- Quick check: change source resistance; observe offset sensitivity and “touch dependence”.
- Action: fix clamp/leakage ownership first → Input Clamp & Leakage Budgeting
Aging(t)
- Looks like: slow monotonic drift at constant temperature over weeks/months.
- Quick check: periodic fixed-T reference checks; trend vs time; set guardband.
- Action: tracking + re-calibration triggers (time, drift threshold, event-based).
What to pull from a datasheet (5 items, with the “reading rules”)
- Offset drift (µV/°C): confirm gain setting, temperature range, and whether conditions include warm-up stabilization.
- Gain drift (ppm/°C): check gain configuration and whether external gain-set component drift is included or excluded.
- 0.1–10 Hz noise (pp): verify bandwidth assumptions; compare to wideband density to avoid false “drift” conclusions.
- Chopper ripple / residual modulation: note frequency/behavior; plan sampling/averaging so the ripple does not alias into DC.
- Long-term stability: check measurement duration and stress conditions; define re-cal policy from drift vs time, not from typical only.
“Which term dominates?” A practical ranking with triggers
- P1: Leakage-driven pseudo drift — triggers: touch/cable movement sensitivity, humidity dependence, high source impedance behavior.
- P1: Offset(T) — triggers: baseline moves with temperature at near-zero input (intercept changes).
- P2: Gain(T) — triggers: ratio error changes with temperature while baseline stays relatively stable (slope changes).
- P2: Chopper artifact — triggers: “drift” shape changes when sampling/averaging windows change.
- P3: Aging(t) — triggers: monotonic drift over time at fixed temperature; crossing guardband requires policy.
Calibration should be applied only after the dominant term is identified and owned. If the dominant term is not calibratable, calibration will appear to “work” in the lab but fail in the field.
A reliable calibration strategy starts with this decomposition: calibrate offset/gain, track aging, and do not use calibration to mask leakage/CMR/ADC-path ownership.
Build the Error Model Before Calibrating (Minimal Math, Maximum Use)
Calibration only works when the corrected terms remain correlated across temperature and time. A minimal model makes that correlation explicit, separates calibratable terms from non-calibratable ownership, and defines what must be measured and stored to keep accuracy stable in production and field operation.
A) Minimal expandable model (treat terms as owned objects)
Template:
Vout = (Vin · G(T) + Vos(T)) + ε(T, t)
- G(T): the temperature-dependent scale factor (2-point or LUT targets this).
- Vos(T): the temperature-dependent baseline shift (1-point/2-point targets this).
- ε(T, t): a container for terms that should not be silently absorbed by calibration.
Rule: only calibrate terms that pass correlation gates. If the error “changes shape” with humidity, touch, common-mode steps, or sampling/averaging settings, treat it as ownership outside static tempco calibration.
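The template model can be sketched as a correction routine. A minimal Python sketch, where `g_of_t`, `vos_of_t`, and the linear tempco coefficients are illustrative placeholders, not values from any datasheet:

```python
# Sketch of the model Vout = Vin*G(T) + Vos(T) + eps(T, t), with eps
# deliberately excluded: terms in eps are owned elsewhere, not fitted here.

def g_of_t(temp_c, g25=1.000, gain_tc_ppm=5.0):
    """Linear gain-vs-temperature model: G(T) = G25 * (1 + tc*(T - 25))."""
    return g25 * (1.0 + gain_tc_ppm * 1e-6 * (temp_c - 25.0))

def vos_of_t(temp_c, vos25_uv=10.0, vos_tc_uv_per_c=0.1):
    """Linear offset-vs-temperature model, returned in volts."""
    return (vos25_uv + vos_tc_uv_per_c * (temp_c - 25.0)) * 1e-6

def correct(vout, temp_c):
    """Invert the model: Vin_est = (Vout - Vos(T)) / G(T)."""
    return (vout - vos_of_t(temp_c)) / g_of_t(temp_c)
```

The point of the sketch is structural: only G(T) and Vos(T) are inverted; anything left in ε must be solved at its owner page rather than absorbed into these two functions.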
B) Pre-cal correlation gates (must pass before fitting coefficients)
Pass all gates → proceed to coefficient fitting. Fail a gate → stop and fix the ownership page; otherwise calibration becomes a fragile patch.
C) Error mapping to final units (framework, not application-specific)
The calibration model is evaluated in volts (or codes), then mapped to the final unit using a sensitivity interface. The conversion must remain an interface layer so this page does not overlap with application pages.
Error_unit = Error_V / Sensitivity_(V/unit)
Sensitivity_(V/unit) belongs to the application layer. This page focuses on ensuring Error_V stays stable vs temperature and time.
D) Error template table (fields to measure, fit, store, and govern)
| term | term_id | unit | depends on T? | depends on time? | calibratable? | valid_range | how to measure |
|---|---|---|---|---|---|---|---|
| Offset(T) | CAL_VOS | V (or code) | Yes | Weak | Yes | Temp span, input near zero | Short input; soak; average with fixed window; record vs T; re-sweep check. |
| Gain(T) | CAL_GAIN | V/V (or code/code) | Yes | Weak | Yes | Temp span, input span | Apply two known inputs; fit slope; validate with a third holdout point. |
| Chopper artifact | CAL_ARTF | V (or code) | Often | Yes | Not static | Sampling + average window | Vary sampling/averaging; if the residual changes shape, treat as artifact and manage windows. |
| Aging(t) | TRK_AGING | V (or code) | Weak | Yes | Track | Fixed T checkpoints | Log periodic check at fixed temperature; trigger recal when crossing guardband. |
Governance requirement: store term_id, valid_range, and version/CRC with coefficients so updates remain auditable and safe.
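The governance fields in the table can be carried in a single record. An illustrative Python sketch, where the class and field names are assumptions rather than a defined storage format:

```python
import json
import zlib
from dataclasses import dataclass, asdict

# Illustrative coefficient record carrying the governance fields from the
# table (term_id, valid_range, version, CRC). Not a defined storage schema.

@dataclass
class CalRecord:
    term_id: str          # e.g. "CAL_VOS"
    unit: str             # "V" or "code"
    coefficients: tuple   # fitted values, model-defined order
    valid_range: dict     # e.g. {"T_degC": (-20, 70)}
    cal_version: str

    def payload(self) -> bytes:
        # Canonical serialization so the CRC is reproducible across tools
        return json.dumps(asdict(self), sort_keys=True).encode()

    def crc32(self) -> int:
        return zlib.crc32(self.payload())
```

On load, recompute `crc32()` and compare against the stored value; a mismatch indicates corruption or a wrong-table load and should block use of the coefficients.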
The model’s purpose is governance: define which terms are fitted, which are tracked, and which must be solved elsewhere so coefficients remain valid across real temperature and time.
Calibration Ladder: None → 1-Point → 2-Point → Multi-Point/LUT → Tracking
Calibration strategy is a constrained decision. The correct “level” is determined by three gates: accuracy (budget closure), stability (repeatability and correlation), and cost (production time, storage, maintenance). Always verify with a holdout check so the chosen level does not overfit a narrow lab condition.
None (guardband only)
- Solves: no fitted terms.
- Use when: narrow temperature span and guardband covers drift.
- Verify: fixed-T repeatability + worst-case guardband check.
1-Point
- Solves: offset only.
- Use when: gain drift is negligible and drift is mostly baseline shift.
- Verify: re-sweep repeatability; ensure slope remains within budget.
2-Point
- Solves: offset + gain.
- Use when: behavior vs temperature is approximately linear over the operating range.
- Verify: add a holdout point (3rd point) to reveal curvature/hysteresis.
Multi-Point / LUT
- Solves: curvature and range-dependent behavior.
- Use when: 2-point fails the holdout residual test across temperature.
- Verify: cross-validate on points not used in fitting; enforce constraints (monotonic / limited order).
Tracking
- Solves: aging and environment-driven long-term drift via policy.
- Use when: unattended systems or long intervals between service require drift governance.
- Verify: fixed-T checkpoints + re-cal triggers (time / threshold / event-based).
Decision table (constraints → recommended ladder level)
| Constraint | None | 1-Point | 2-Point | LUT | Tracking |
|---|---|---|---|---|---|
| Accuracy target is loose | Fit | Optional | Optional | No | No |
| Temp span is wide | Risk | Risk | Fit | Fit | Policy |
| Holdout residual fails 2-point | N/A | N/A | No | Go | Maybe |
| Production time budget is tight | Fit | Fit | Risk | No | No |
| Storage / governance budget is limited | Fit | Fit | Fit | No | Risk |
| Long service interval (field drift matters) | Risk | Risk | Risk | Risk | Go |
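The decision table can be read as a gate sequence. A hypothetical Python encoding (the function name and flag set are illustrative, and no code replaces holdout verification and repeatability gates):

```python
# Hypothetical encoding of the ladder decision table. Inputs are the
# constraint gates; the return value is the minimal recommended level.

def choose_ladder(accuracy_loose, temp_span_wide,
                  holdout_fails_2pt, long_service_interval):
    """Return the lowest ladder level consistent with the constraint gates."""
    if accuracy_loose and not temp_span_wide:
        return "none"                       # guardband alone covers drift
    if not temp_span_wide and not holdout_fails_2pt:
        return "1-point"                    # baseline shift dominates
    level = "2-point" if not holdout_fails_2pt else "multi-point/LUT"
    if long_service_interval:
        level += " + tracking"              # field drift needs policy
    return level
```

This is a starting point for design reviews, not a substitute for the stability gates: if repeatability fails, the function's answer is invalid regardless of the constraints.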
Strategy selection should be gated by holdout verification and repeatability. If stability gates fail, do not move up the ladder—fix ownership first.
Use the gates as a rule: if stability is not proven, moving up the ladder increases complexity without improving reliability.
Two-Point Calibration Deep Dive (Point Choice, Sequence, Guardband)
Two-point calibration is reliable only when the fitted terms remain correlated across temperature, supply, and common-mode range. This section provides a do-able procedure: choose safe points, apply a fitting sequence, lock validity guardbands, and verify with an independent holdout point.
A) Point choice: pick safe, informative points (avoid boundary behavior)
Span endpoints
- Best for: maximum coverage over the operating span.
- Risk: close to rails / saturation / protection conduction.
- Rule: pull points inward by a guard margin if any boundary effect is possible.
Interior points
- Best for: stable behavior inside the real working zone.
- Risk: extrapolation errors at the span edges.
- Rule: define strict valid_range and do not extrapolate beyond it.
Hybrid (one stable anchor + one span point)
- Best for: keeping one point very stable while still spanning range.
- Rule: place the risky point away from rails and away from clamp/leakage knee regions.
- Must: validate with a holdout point near the center.
Boundary regions to avoid:
- Near input/output swing limits or near any saturation behavior.
- Near input common-mode limits or any region with degraded linearity.
- Near protection conduction / leakage knees (series R + clamp diode behavior changes).
B) Fitting sequence: default approach and exception cases
Default: Offset → Gain
- Why: reduces slope bias by first fixing the baseline.
- Requires: a stable near-zero input condition or a clean reference injection point.
- Recommended: record offset at each calibration temperature point before gain fitting.
Exception: joint two-point fit
- When: a stable zero condition is not available.
- Rule: always add an independent holdout point (third point) to avoid training-only success.
- Guard: reject fits that change substantially with sampling/averaging settings.
Decision rules:
- Stable zero/known reference available → use Offset → Gain.
- Only two known stimuli available → joint fit + mandatory holdout verify.
- Drift shape changes with averaging/window → treat as artifact; do not fit as tempco.
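The default offset-then-gain sequence can be sketched in a few lines. Stimulus values and the 1 µV acceptance threshold below are placeholders to be set from the system's own error budget:

```python
# Sketch of the "offset -> gain" two-point sequence with a holdout check.
# Numeric values are illustrative placeholders, not recommended stimuli.

def two_point_fit(v_zero_meas, vin_lo, vout_lo, vin_hi, vout_hi):
    """Step 1: take offset from a near-zero input measurement.
    Step 2: take gain as the slope between the two known points."""
    vos = v_zero_meas
    gain = (vout_hi - vout_lo) / (vin_hi - vin_lo)
    return gain, vos

def holdout_residual(gain, vos, vin_hold, vout_hold):
    """Residual at an independent third point (never used in fitting)."""
    return vout_hold - (gain * vin_hold + vos)

gain, vos = two_point_fit(0.00001, 0.02, 0.04001, 0.08, 0.16001)
# Accept the fit only if the holdout residual is inside the error budget:
assert abs(holdout_residual(gain, vos, 0.05, 0.10001)) < 1e-6
```

Note that the slope is unaffected by the offset estimate; fixing the baseline first matters when only one fit point is far from zero or when the zero measurement feeds a joint solve.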
C) Guardband: lock coefficient validity (prevent silent misuse)
D) Two-point procedure card (copy-ready workflow)
E) Pass criteria templates (field placeholders; set values by budget)
- Residual depends on touch/humidity/cable motion → leakage ownership; fix clamp/leakage paths first.
- Residual depends on common-mode steps → CMR ownership; fix I/O range and CM handling first.
- Residual changes with sampling/averaging window → artifact ownership; manage windows before fitting.
Fit quality must be judged at the holdout point. A fit that looks perfect at the two training points is not evidence of stability.
Multi-Point & LUT: When It’s Worth It (and How to Avoid Overfitting)
Multi-point calibration is justified only when coefficient behavior is stable and measurement uncertainty is well below the required residual target. Otherwise, added points increase complexity while encoding noise, artifacts, and unstable conditions into the correction table.
A) Triggers: upgrade beyond 2-point only with observable evidence
- Coefficient repeatability must meet a defined delta across re-sweeps.
- Measurement uncertainty must be clearly smaller than the target residual.
- Validation points must be independent and not reused as fit points.
B) Do vs Don’t (constraints and verification first)
Do:
- Prefer piecewise linear (PWL) when possible.
- Enforce monotonic and continuity constraints.
- Cross-validate with independent points across the operating span.
- Bind every table to valid_range and mode_id.
- Store with fw_version and CRC for governance.
Don’t:
- Do not raise polynomial order to chase training residual.
- Do not reuse validation points as fit points.
- Do not extrapolate outside the declared valid_range.
- Do not deploy without re-sweep repeatability checks.
- Do not ignore shape changes with sampling/averaging settings.
C) Overfitting recognition (shape-based checks, not training metrics)
D) Coefficient data structure (LUT-ready, production-governed)
| field | meaning | notes |
|---|---|---|
| coef_id | Table identifier | Unique per product + mode_id |
| temp_bin | Breakpoint or bin index | Use ordered breakpoints for PWL |
| gain | Scale coefficient | Define Q-format and scaling in metadata |
| offset | Baseline coefficient | Bind to measurement units (code or volts) |
| valid_range | Validity bounds | Include T, Vcm, supply/mode, input span |
| fw_version | Firmware compatibility | Block use on mismatch |
| CRC | Integrity check | Detect corruption and wrong-table loads |
Prefer a representation that supports constraints and validation: piecewise linear segments with explicit breakpoints are often easier to govern than high-order coefficients.
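A PWL table in the spirit of the structure above can be sketched as follows. The breakpoints and coefficients are invented placeholders; the points being illustrated are the hard valid_range refusal and interpolation between ordered breakpoints:

```python
# Piecewise-linear correction sketch: ordered temperature breakpoints with
# per-breakpoint gain/offset, hard-bounded by valid_range (no extrapolation).
# All numeric values are illustrative placeholders.

BREAKPOINTS = [-20.0, 0.0, 25.0, 50.0, 70.0]        # temp_bin edges, degC
GAIN   = [1.0002, 1.0001, 1.0000, 0.9999, 0.9998]   # per-breakpoint gain
OFFSET = [25e-6, 15e-6, 10e-6, 12e-6, 18e-6]        # per-breakpoint offset, V

def lut_coeffs(temp_c):
    """Interpolate (gain, offset) at temp_c; refuse out-of-range requests."""
    if not (BREAKPOINTS[0] <= temp_c <= BREAKPOINTS[-1]):
        raise ValueError("outside valid_range: refuse to extrapolate")
    for i in range(len(BREAKPOINTS) - 1):
        t0, t1 = BREAKPOINTS[i], BREAKPOINTS[i + 1]
        if temp_c <= t1:
            f = (temp_c - t0) / (t1 - t0)
            return (GAIN[i] + f * (GAIN[i + 1] - GAIN[i]),
                    OFFSET[i] + f * (OFFSET[i + 1] - OFFSET[i]))
```

The explicit breakpoint list makes the monotonicity and continuity constraints trivial to check in review, which is the governance advantage over a high-order polynomial.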
The key test is performance at independent validation points. A model that only reduces training residual is not a stable calibration strategy.
Temperature-Point Planning: Soak, Stability Detection, and Correlation
Temperature-point failures are often not algorithmic. The real root causes are unstable soak conditions, incorrect temperature correlation, and hidden hysteresis between warm-up and cool-down states. This section defines practical stability gates and correlation checks that prevent “bad points” from being fed into calibration.
A) Soak adequacy: use slope thresholds, not fixed time
- Lock measurement configuration (mode_id, sampling window, averaging).
- Pass dT/dt gate over the window.
- Pass dVerr/dt gate over the window (final authority).
- Only then: capture the calibration point and metadata.
B) Stability detection mini-algorithm card (script/firmware-ready fields)
Inputs:
- T(t): temperature samples
- Verr(t): error/output samples (or raw code)
- Δt: sample period
- N: window length (samples)
Compute:
- Estimate slope_T over window
- Estimate slope_V over window
- Estimate noise_V (RMS or robust spread)
- Set thresholds consistent with noise level
Outputs:
- Stable / Not stable
- Stable window start/end
- Recommended capture timestamp
- Gate reason (T-gate or V-gate failure)
A threshold smaller than the measurement noise floor turns the gate into randomness. Gate thresholds must be tied to observed noise, not to an arbitrary “nice-looking” number.
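The card above can be sketched as a windowed slope gate. A firmware-style Python sketch in which the threshold values are placeholders that must be derived from the observed noise floor:

```python
# Stability-detection sketch: least-squares slope over a window of evenly
# spaced samples, gated against dT/dt and dVerr/dt thresholds.

def slope(samples, dt):
    """Least-squares slope of evenly spaced samples (units per second)."""
    n = len(samples)
    t_mean = dt * (n - 1) / 2.0
    y_mean = sum(samples) / n
    num = sum((i * dt - t_mean) * (y - y_mean)
              for i, y in enumerate(samples))
    den = sum((i * dt - t_mean) ** 2 for i in range(n))
    return num / den

def stable(temps, verrs, dt, dtdt_max, dvdt_max):
    """Both gates must pass over the same window; dVerr/dt is final authority."""
    return abs(slope(temps, dt)) < dtdt_max and abs(slope(verrs, dt)) < dvdt_max
```

A least-squares slope is preferred over endpoint differencing because it averages down noise across the whole window, which keeps the gate meaningful when the threshold approaches the noise floor.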
C) Correlation minefields (3 common mistakes + quick checks)
D) Thermal hysteresis policy (warm-up vs cool-down)
The capture point is triggered only after both gates pass in a continuous window. This prevents unstable soak states from entering calibration.
Zero-Drift / Chopper Artifacts: Ripple, Sampling Interaction, and Mitigation
Zero-drift INAs can show excellent DC drift numbers while still producing ripple and artifact behavior that corrupts measurement and calibration. The key is to detect sampling interaction and aliasing signatures, then apply minimal mitigation (locking windows, avoiding sensitive zones, and validating residual stability).
A) What to treat as “artifact” (dynamic) versus “drift” (static)
Drift (static):
- Offset(T) and Gain(T) with repeatable shape
- Slow aging trend that is monotonic and trackable
- Behavior that does not change with sampling window
Artifact (dynamic):
- Ripple components tied to chop frequency
- Residual shape changes with Fs / averaging / window
- Alias signatures that move when sampling configuration changes
If residual changes materially with sampling window or averaging, it is not a stable tempco term. Lock the measurement window first.
B) Symptoms → Quick check → Mitigation (engineering triage table)
| symptom | quick check | mitigation |
|---|---|---|
| Residual changes after adjusting averaging/window | Sweep window length and record residual shape changes | Lock window config for calibration; reject fits that are window-dependent |
| Periodic “texture” or ripple-like error | Observe time-domain output with the intended sampling cadence | Apply minimal filtering/averaging and avoid sensitive sampling relationships |
| Good drift numbers but unstable calibration repeatability | Repeat calibration with identical conditions and compare coefficient deltas | Treat instability as artifact ownership; reduce model complexity; add verification points |
| Residual shifts when Fs changes slightly | Sweep Fs and log alias signatures (movement of error pattern) | Select a sampling plan that avoids alias-prone zones and keep it fixed during calibration |
- Lock sampling window and averaging settings for calibration measurements.
- Use validation points to confirm residual stability under the locked window.
- Prefer simple avoidance/synchronization policies over complex fitting.
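The first row of the triage table can be automated. In this sketch, `window_dependence` and its limit are hypothetical, and a mean over the first N samples stands in for whatever averaging the real acquisition chain applies:

```python
# Artifact triage sketch: estimate the "drift" value at several averaging
# window lengths; large spread across windows flags a window-dependent
# artifact rather than a static tempco term. Limit is a placeholder.

def window_dependence(samples, windows, limit):
    """Return (is_artifact, spread) from mean estimates at several windows."""
    means = [sum(samples[:n]) / n for n in windows]
    spread = max(means) - min(means)
    return spread > limit, spread
```

A residual whose estimate moves with the window length should be handled by locking the sampling policy, not by fitting a temperature coefficient to it.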
C) Do / Don’t summary (artifact-safe calibration behavior)
Do:
- Check window dependence before fitting any tempco.
- Use independent validation points under the same sampling policy.
- Log mode_id, Fs, window length, and averaging settings with coefficients.
Don’t:
- Do not increase model order to chase ripple-shaped residual.
- Do not mix sampling windows between calibration and verification.
- Do not treat alias movement as “temperature drift.”
When residual depends on sampling configuration, fitting it as static tempco is unstable. Lock the sampling policy, verify stability, then calibrate.
Long-Term Drift Tracking: Aging, Stress, and Recalibration Policy
Long-term accuracy is maintained by policy, evidence, and guardband—not by a one-time factory calibration. This section turns slow drift into measurable indicators, actionable triggers, and auditable records.
A) Drift taxonomy: temperature-driven vs time-driven vs event-driven
Temperature-driven:
- Curve shape repeats when returning to the same temperature point
- Dominated by Offset(T) / Gain(T) mapping and thermal correlation
- Best handled by stable point capture and valid-range enforcement
Time-driven:
- Monotonic drift at the same temperature point across days/weeks
- Often caused by aging, stress relaxation, humidity, or contamination
- Requires tracking indicators and scheduled or threshold-based maintenance
Event-driven:
- Step change after ESD/OVP, wiring faults, service, or cleaning events
- May alter slope or intercept abruptly (not a smooth tempco behavior)
- Handled by post-event recheck and forced revalidation rules
B) Minimal experiments to distinguish T-drift from t-drift
- Re-sweep check: revisit the same temperature point after a sweep. A repeatable curve indicates temperature-driven behavior.
- Multi-day check: repeat the same point on different days. A monotonic offset indicates time-driven drift (aging/contamination).
- Event check: compare before/after an event (service, wiring, ESD suspect). Step changes point to event-driven behavior.
C) Drift budget: convert “per-year drift” into a maintenance plan
Coefficients must remain valid under expected long-term drift. Guardband reserves error space for aging and stress, preventing “perfect day-0 fit” from failing in the field.
Policy fields:
- Long-term accuracy target
- Allowed drift margin (guardband)
- Recheck interval (calendar / usage)
- Trigger thresholds (indicator-based)
- Coefficient version and integrity (CRC)
- Traceability (device/lot/board rev)
Evidence records:
- Drift indicator history
- Reason-coded actions (recheck / recal / downgrade)
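Converting a per-year drift bound into a maintenance plan can be as simple as dividing the reserved guardband by the drift rate. A minimal sketch, where the units, safety factor, and the linear-drift assumption are placeholders for the real budget:

```python
# Drift-budget sketch: days until the aging guardband could plausibly be
# consumed, assuming roughly linear drift. Safety factor is a placeholder.

def recheck_interval_days(guardband_uv, drift_uv_per_year, safety=2.0):
    """Recheck no later than guardband / (drift rate * safety), in days."""
    if drift_uv_per_year <= 0:
        return float("inf")
    return 365.0 * guardband_uv / (drift_uv_per_year * safety)
```

The safety factor reserves room for lot spread and non-linear early-life drift; the computed interval should be treated as an upper bound, with indicator-based triggers allowed to pull recalibration earlier.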
D) Recalibration policy table (triggers → checks → actions → records)
| trigger | check method | limit | action | record fields |
|---|---|---|---|---|
| time-based | periodic verification at defined points | verification residual ≤ threshold | recheck → recal if failed | timestamp, point_id, residual, coef_id, cal_version |
| thermal dose | accumulate temperature exposure metric | dose ≤ limit | force verification or scheduled recal | thermal_dose, window_cfg, mode_id, fw_version |
| metric-based | track drift indicator at a stable checkpoint | indicator ≤ limit band | recalibrate or tighten guardband | indicator_value, limit, action_code, coef_crc |
| event-based | post-event verification and comparison to baseline | post-event residual within guardband | force recheck; recal if out-of-band | event_id, reason_code, before_after_delta, lot/board_rev |
Minimum record fields: device_id / lot / board_rev · fw_version / cal_version / coef_id · timestamp · temperature_point · mode_id / window_cfg · indicator_value · action_code · reason_code
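The policy table can be sketched as a trigger evaluator. Trigger names follow the table, while the limit keys and action codes are illustrative:

```python
# Recalibration-policy sketch: evaluate the four trigger classes from the
# table and return reason-coded actions. Limit keys are placeholders.

def evaluate_triggers(days_since_cal, thermal_dose, indicator,
                      event_flag, limits):
    """Return (trigger, action) pairs for every trigger that fired."""
    actions = []
    if days_since_cal > limits["max_days"]:
        actions.append(("time-based", "recheck"))
    if thermal_dose > limits["max_dose"]:
        actions.append(("thermal-dose", "force_verification"))
    if abs(indicator) > limits["indicator_limit"]:
        actions.append(("metric-based", "recalibrate"))
    if event_flag:
        actions.append(("event-based", "force_recheck"))
    return actions
```

Each returned pair should be logged with the record fields above so every recheck or recalibration is auditable back to the trigger that caused it.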
Drift is managed within a guardband. Recalibration is triggered by time, metrics, or events—then validated and recorded for traceability.
Engineering Checklist: Calibration-Ready Design and Bring-Up
This checklist is designed for design reviews, bring-up, and production readiness. Each item includes a pass-criteria placeholder to enforce measurable acceptance rather than subjective “looks good” decisions.
Keep the checklist outputs as artifacts: model_id, window_cfg, coef_id, verification residuals, and traceability records.
Pre-design checklist (Decide)
- Define long-term accuracy target and operating temperature range
- Lock the error model terms (calibratable vs non-calibratable)
- Define independent verification points (not fit points)
- Choose coefficient format (fixed-point, Q format)
- Define versioning fields (fw_version, cal_version, coef_id)
- Define integrity (CRC) and rollback policy
- Define recheck interval and trigger thresholds
- Define event triggers (service, ESD suspect, OVP)
- Define evidence fields and retention rules
Bring-up checklist (Measure → Fit → Verify)
- Stability gates pass (dT/dt and dV/dt) with continuous windows
- Sampling window and averaging are locked for calibration
- Repeatability meets target under identical conditions
- Fit offset and gain using defined points and sequence
- Keep coefficients within valid-range guardbands
- Use independent verification points for residual checks
- Log mode_id, window_cfg, temperature_point, direction_flag
- Log fw_version, cal_version, coef_id, coef_crc
- Log verification evidence and action_code
Production-ready checklist (Lock)
- Lock coefficient schema and compatibility rules
- Enforce CRC and reject invalid coefficient loads
- Define rollback/upgrade behavior by cal_version
- Bind coefficients to device_id / lot / board_rev
- Keep calibration timestamp and verification evidence
- Store action_code and reason_code for audit
- Define a minimum achievable uncertainty (fixture floor)
- Reject targets below the uncertainty floor without redesign
- Use the floor to set realistic pass thresholds
The workflow is enforced by artifacts: locked model, stable measurement gates, independent verification evidence, and traceable coefficient records.
Applications (Placed Late): Use-case → Dominant Error → Strategy
This section does strategy mapping only: identify the dominant drift/artifact bucket, choose the minimal calibration ladder level, then verify with an independent metric. Application circuit details (wiring, excitation, filtering topologies) belong to sibling pages.
A) How to use this map
- Pick a use-case bucket (bridge/RTD/TC/bio/high-Z).
- Declare the dominant error (Offset(T), Gain(T), Artifact, Leakage(t)).
- Select the ladder level (1-pt / 2-pt / Multi-pt / Tracking).
- Verify with an independent metric (not used in fitting), then lock a guardband policy.
B) Dominant error cheat-sheet (fast identification)
C) Strategy mapping table (copy into design reviews)
| Use-case | Dominant term | Recommended ladder | Verification metric | Reference examples (PNs) |
|---|---|---|---|---|
| Bridge / weighing (steady) | Offset(T) + Gain(T) | 2-point temp points | Independent point residual (not used in fit) + same-temp repeatability after a thermal cycle | INA333 · INA188 · AD8237 |
| Bridge (dynamic / fast transients) | Artifact + recovery interaction (window-sensitive) | 2-point window lock | Residual stability vs sampling window + step response repeatability under the same fixture | INA828 · AD8421 · INA826 |
| RTD (precision, slow bandwidth) | Offset(T) or Gain(T) (pick by error shape) | 1-pt → 2-pt | Residual shape (constant vs proportional) + same-temp repeatability after re-soak | INA188 · INA333 · AD8237 |
| Thermocouple (µV-level DC) | Offset(T) + warm-up behavior | 2-pt warm-up gate | Drift during the first minutes + independent validation point after stabilization | INA188 · INA333 · AD8237 |
| Bio-potential (ECG/EEG/EMG) | Artifact (low-freq window sensitivity) | 2-pt fixed window | Residual stability vs averaging/window + repeatability across sessions | AD8237 · INA333 · INA826 |
| High-Z / electrochemistry | Leakage(t) / contamination (same-temp drift) | Tracking policy | Daily checkpoint drift + event-triggered re-check (ESD/over-range) — leakage modeling belongs to the Clamp/Leakage page | INA116 |
Part numbers are provided as datasheet starting points. Strategy must be driven by the error model, stability gates, and verification metrics defined in earlier sections.
IC Selection Logic: Spec → Risk → What to Ask (Calibration-Centric)
Selection for stable accuracy is not about the lowest typical drift number. The goal is coefficient validity: stability across temperature, time, modes, and measurement windows. This section converts datasheet fields into failure modes and a copy-paste inquiry template.
A) Spec fields that decide calibration stability
- Vos drift (temperature) + long-term drift (time)
- Gain drift (temperature) + mode dependence (gain setting, power mode)
- Warm-up behavior (first minutes) and stabilization condition
- 0.1–10 Hz noise (low-frequency stability proxy)
- Chopper ripple (amplitude/frequency) or any internal auto-zero artifacts
- Output behavior vs averaging / filter pins / sample timing
- CMRR/PSRR vs frequency (curves + test conditions)
- Output swing vs load (headroom-induced curvature risk)
- Stability with capacitive loads / common filter networks
- Coefficient storage support (external NVM / internal memory, if any)
- Versioning fields (format, CRC, validity range)
- Recommended calibration conditions and repeatability claims
B) Spec → Risk events (why “good typical” still fails)
C) What to ask vendors (minimum evidence set)
- Temperature points + soak/stability definition (dT/dt, dV/dt, window length)
- Supply, common-mode range, input source impedance, output load and filter network
- Sampling/averaging window rules if zero-drift/auto-zero modes are involved
- Vos(T), Gain(T), and warm-up drift vs time
- CMRR/PSRR vs frequency (with conditions)
- Output swing vs load / headroom notes relevant to calibration points
- Lot-to-lot / unit-to-unit distribution for drift-critical fields
- Temperature hysteresis behavior (heat vs cool) and recommended handling
- Any long-term drift bounds or recommended recalibration interval
D) Reference examples (part numbers; datasheet starting points only)
These examples help speed up datasheet lookup and inquiry drafting. Selection must be driven by the Spec→Risk mapping above and the verification gates defined earlier.
E) Copy-paste inquiry fields (specify conditions to avoid mis-comparison)
| Category | Requested item | Required conditions | Format | Acceptance placeholder |
|---|---|---|---|---|
| Drift | Vos(T), Gain(T) | temp points, soak/stability rule, supply, CM, source-R, load | curve + table | independent-point residual < X (system budget) |
| Warm-up | drift vs time after power-on | ambient, airflow, load, measurement window | curve | abs(dV/dt) < Y for Z seconds |
| Artifact | chopper ripple / auto-zero artifacts | mode, sample timing, averaging/window, filter pins | note + curve | window-to-window residual change < X |
| Transfer | CMRR/PSRR vs frequency | source-R mismatch, CM range, supply ripple spectrum | curve | CM residual within budget across band |
| Lifecycle | lot spread + long-term drift bounds | temp history, humidity/contamination notes | report | recal interval ≤ N months (policy) |
FAQs: Tempco & Calibration Strategy (Short, Actionable)
These FAQs close long-tail questions without expanding the main body. Each answer uses a fixed 4-line format: Likely cause / Quick check / Fix / Pass criteria.