Tunability in Analog Signal Conditioning (Steps & Trims)
Tunability in an active filter/signal-conditioning chain means parameters (gain, cutoff, threshold) can be adjusted predictably and safely—with a known effective step size, proven monotonic behavior, and repeatable return to the same state. Tunability is only “real” when it holds targets across temperature and aging through calibration and leaves an auditable trail (versioned data, timestamps, and rollback-ready changes).
What “tunability” really means in analog front ends
Tunability is the ability to change a target parameter (gain, cutoff, Q, threshold, offset) with bounded, testable error, and to return to the same result on demand—despite temperature variation and long-term drift. “It can be adjusted” is not a spec; tunability becomes real only when the adjustment is repeatable and can be maintained via calibration/compensation.
- Range (coverage): how far the controlled parameter can move while staying inside safe operating bounds (no overload, no instability, no violating system constraints).
- Step (granularity): how finely it can move (effective step size), whether it is monotonic, and how “code → parameter” mapping error behaves across the range.
- Stability (repeatability & drift): whether the same setting produces the same behavior over repeated writes, across temperature, and after aging—plus whether calibration can restore the target without surprises.
Where tunability matters: use-cases and failure modes
Tunability becomes a hard requirement when a platform must stay consistent across sensor variability, temperature spread, board/batch differences, and field service events. The goal is not “more adjustability,” but controlled reuse: the same hardware can be deployed to multiple contexts without turning validation into a new project each time.
Four failure modes explain most “tunable but painful” systems: accuracy/map error, repeatability/hysteresis, tempco/drift/aging, and cross-coupling. When these are not separated and measured independently, teams often overcorrect in software, increase test time, or accept hidden yield and reliability losses.
| Use-case trigger (why tuning is needed) | What breaks first (typical failure mode) | Cost if ignored (how it shows up) |
|---|---|---|
| Multi-sensor / multi-range platform: one front end must cover different source impedances and amplitudes | Accuracy (code→parameter mismatch) and cross-coupling (tuning gain shifts bandwidth/phase) | Unexpected SNR/THD spread across SKUs, more test bins, field complaints that “same setting behaves differently” |
| Wide temperature operation: ambient or self-heating changes the operating point | Tempco/drift dominates; “room-temp OK” fails at extremes even with stable codes | Out-of-spec at hot/cold, repeated recalibration, conservative margins that reduce dynamic range or bandwidth |
| Manufacturing tolerance & batch variation: board-to-board spread must be tightened | Repeatability and hysteresis: settings cannot reliably land at the same point during test | Longer production test, higher rework, unstable binning, calibration data that cannot be trusted over time |
| Field service / replacement events: replacing a sensor or module should restore behavior quickly | Observability gap: tuning exists, but there is no safe way to verify or roll back a tuned state | “Works after reboot, breaks later” incidents, truck rolls, inability to reproduce failures due to missing logs/versioning |
| Low-latency or transient-sensitive systems: parameter changes can disturb the signal chain | Update transient: glitches/settling cause overloads, false triggers, or stability regressions | False alarms, audible/visible artifacts, protection trips, or “tuning must be disabled,” wasting the feature |
Tunability taxonomy: digital steps vs analog trims vs hybrid
Tunability is implemented through three practical patterns. Each pattern produces a different error shape and a different set of risks in production and field operation. Choosing the right pattern depends less on “how adjustable” it is, and more on whether it can be specified, tested, and maintained across temperature and time.
| Pattern | Strengths (why it’s used) | Typical pitfalls (what breaks) | Best used for | Spec focus |
|---|---|---|---|---|
| Digital steps (code / LSB / monotonic) | Deterministic control interface; easy to store and restore settings; supports guardrails (limits, rollback). | “Bit depth” is not effective resolution; code→parameter mapping may be non-linear or non-monotonic. Update events can inject transient disturbance if not managed. | Multi-SKU platforms, production repeatability, field service restore, configurations that must be versioned and audited. | Monotonicity, usable steps, update latency/settling. |
| Analog trims (trim range / sensitivity) | Fine adjustment around a nominal point; can correct static offsets and small drift when properly bounded. | Sensitive to temperature and aging; may change noise and distortion through coupling paths. Repeatability may depend on approach direction (hysteresis). | Fine alignment after coarse placement, compensating residual spread, tightening endpoints without large range changes. | Tempco on the controlled parameter, drift, repeatability under defined conditions. |
| Hybrid (coarse + fine / lockdown) | Coarse steps provide coverage; fine trim removes residual error. Enables “calibrate then lock” to stabilize behavior. | More states to validate; requires a clear calibration workflow and data governance (versioning, restore rules). | Wide range + tight accuracy, cross-temperature consistency, minimizing production time while enabling field restoration. | Calibration method, data retention (NVM), restore/rollback rules. |
Selection heuristic 1 — If safety matters, specify updates
When tuning can trigger overloads or false trips, require measurable limits on update latency, transient injection, and settling behavior.
Selection heuristic 2 — If temperature dominates, require compensation
Across temperature, “stable code” does not imply stable behavior. Tempco must be stated on the controlled parameter, not just components.
Selection heuristic 3 — If platform reuse dominates, require restore
Repeatable field restoration requires versioned settings, bounded ranges, and a defined calibration/verification workflow.
Vocabulary note — INL/DNL as mapping linearity
INL/DNL here describes code→parameter mapping linearity and step usability, not ADC conversion accuracy.
Spec definitions that prevent tunability misunderstandings
Tunability fails most often because teams do not share a measurable definition for usable steps, update behavior, and repeatability. The definitions below are written to be testable at validation, production, and field service.
Step size vs effective resolution
Step size is the nominal increment per code change. Effective resolution counts only steps that remain usable under noise, mapping non-linearity, and temperature drift.
Common trap: “12-bit register” is treated as 4096 usable steps even when mid-range codes compress or reverse.
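The distinction can be made mechanical. A minimal sketch, assuming a measured code→parameter sweep and a known measurement-noise σ (the function name and the 3σ usability criterion are illustrative choices, not a standard): a step counts as usable only when it moves with the dominant trend and clears the noise floor.

```python
import numpy as np

def effective_steps(codes, measured, noise_sigma, k=3.0):
    """Count steps in a code->parameter sweep that are usable:
    a step must move in the dominant trend direction and exceed
    k*sigma of the measurement noise. Returns (usable, total)."""
    deltas = np.diff(np.asarray(measured, dtype=float))
    direction = np.sign(np.median(deltas)) or 1.0  # dominant trend
    usable = (deltas * direction) > (k * noise_sigma)
    return int(usable.sum()), len(deltas)

# Example: a "4-bit" control whose mid-range codes compress below the noise floor
codes = list(range(16))
measured = [0.0, 1.0, 2.0, 3.0, 3.1, 3.15, 3.2, 4.2, 5.2,
            6.2, 7.2, 8.2, 9.2, 10.2, 11.2, 12.2]
print(effective_steps(codes, measured, noise_sigma=0.05))  # -> (12, 15)
```

Here 16 codes nominally give 15 steps, but three compressed mid-range steps fall below 3σ, leaving 12 usable ones.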
Monotonicity (step usability)
A control is monotonic when increasing code does not reverse the controlled parameter. Non-monotonic zones reduce usable steps and complicate calibration.
Common trap: average fit looks linear, but local reversals cause “unreachable” targets during closed-loop tuning.
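A local-reversal check on an up-sweep makes this trap visible. A small sketch (comparing each step against the sign of the median trend is an illustrative criterion, not a formal DNL test):

```python
import numpy as np

def find_reversals(measured):
    """Flag local reversals in a supposedly increasing
    code->parameter sweep: indices where a step moves against
    the overall (median) trend direction."""
    deltas = np.diff(np.asarray(measured, dtype=float))
    trend = np.sign(np.median(deltas)) or 1.0
    return [i for i, d in enumerate(deltas) if d * trend < 0]

# A sweep that fits a straight line well on average but reverses twice
sweep = [0, 1, 2, 3, 4, 3.8, 5, 6, 7, 6.9, 9, 10]
print(find_reversals(sweep))  # -> [4, 8]
```

Targets that fall inside a reversal zone are exactly the ones closed-loop tuning cannot reach reliably.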
Update latency
Time from a setting write to the moment the parameter starts changing. Predictable latency is required to schedule safe update windows.
Common trap: latency is ignored until field tuning collides with system state machines and triggers false alarms.
Settling time
Time from the start of change until the output enters a defined error band and stays there (no rebound).
Common trap: first crossing is counted as “settled,” causing sporadic errors when the response rings back out of tolerance.
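The “stays there, no rebound” definition can be sketched directly (the function name and sample trace are illustrative): settling is the first time after which the output never leaves the band again, not the first band crossing.

```python
import numpy as np

def settling_time(t, y, target, band):
    """Return the first time after which y stays inside
    target +/- band for the REST of the trace (no rebound),
    or None if the trace ends outside the band."""
    inside = np.abs(np.asarray(y, dtype=float) - target) <= band
    outside_idx = np.where(~inside)[0]
    if len(outside_idx) == 0:
        return t[0]                     # never left the band
    last_out = outside_idx[-1]
    if last_out == len(y) - 1:
        return None                     # ends outside the band
    return t[last_out + 1]

# Enters the band at t=3, rings back out at t=5, truly settles at t=6
t = list(range(10))
y = [0.0, 0.6, 0.9, 1.02, 0.99, 1.12, 1.01, 1.0, 1.0, 1.0]
print(settling_time(t, y, target=1.0, band=0.05))  # -> 6, not 3
```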
Glitch energy (update transient)
A measure of disturbance injected during an update. It predicts overload, false triggers, and visible/audible artifacts.
Common trap: stable steady-state specs are met, yet tuning causes one-shot spikes that break downstream detection logic.
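One hedged way to quantify the update transient from a captured window: peak deviation from the pre-update baseline plus the integrated |deviation| as a glitch-energy proxy. The metric choice here is an assumption for illustration, not a standardized definition.

```python
import numpy as np

def update_transient_metrics(y, baseline, dt):
    """Quantify the disturbance injected by a parameter update:
    peak deviation from the pre-update baseline, and the
    rectangle-rule integral of |deviation| as an energy proxy."""
    dev = np.abs(np.asarray(y, dtype=float) - baseline)
    peak = float(dev.max())
    area = float(dev.sum() * dt)
    return peak, area

# Capture around an update event: one-shot spike, then return to baseline
y = [0.0, 0.0, 0.8, 0.3, 0.1, 0.0, 0.0]
peak, area = update_transient_metrics(y, baseline=0.0, dt=1e-3)
print(peak, area)  # peak 0.8; area about 1.2e-3
```

Both numbers feed the `transient_peak` / glitch limits referenced later in the safe-tuning and validation sections.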
Repeatability (under defined conditions)
Ability to return to the same behavior after repeated writes to the same code—under specified temperature, load, and approach direction.
Common trap: repeatability is claimed without stating conditions, so production and field data cannot be compared.
Error model: separating accuracy, repeatability, drift, and hysteresis
A tunable control rarely fails for a single reason. Real systems combine a static mapping error, repeatability spread, slow drift, path dependence, and update transients. Separating these terms turns “tuning feels unreliable” into measurable requirements and a validation plan.
Accuracy (static map error)
Difference between the target and the settled result under a defined condition (same temperature, stable supply, stable load). Dominated by code→parameter mapping bias and systematic offsets.
Repeatability
Scatter when returning to the same setting under the same condition. Quantified by spread (σ or peak-to-peak) after the output is settled.
Drift (temperature / time / aging)
Slow movement of the parameter under a fixed code as temperature or time changes. Quantified by slope or curve vs temperature/time.
Hysteresis (path dependence)
Different settled outcomes depending on whether the setting is approached from above or below. Appears as a consistent Δ between up/down sweeps.
A practical engineering decomposition for a controlled parameter (gain, fc, Q, or threshold) can be stated as: Total deviation = static map error + temperature drift + aging drift + repeatability noise + update transient. The terms are not interchangeable; each one needs different data to isolate and fix.
| Error term to isolate | Minimum data required | How to compute it | What it often gets confused with |
|---|---|---|---|
| Accuracy (static) | Hold one temperature point; choose a few representative codes; wait for settling; measure the mean at each code. | Mean error vs target at each code (map bias). | Repeatability (random scatter) or drift (slow movement across time/temperature). |
| Repeatability | Same temperature; same supply/load; write the same code repeatedly (N cycles); measure after each settling. | σ or peak-to-peak of settled results at fixed code. | Hysteresis (direction-dependent offset) or update transient (pre-settle behavior). |
| Hysteresis | Same temperature; run an up-sweep and a down-sweep across the same code points; settle at each step. | Δ(up vs down) at the same code/target. | Repeatability scatter; hysteresis is systematic and correlates with approach direction. |
| Drift vs temperature | Measure the same code at multiple temperature points (cold/room/hot); allow soak; settle; record mean at each point. | Slope or curve of parameter vs temperature (tempco on the target parameter). | Accuracy (single-point bias) or repeatability (same-temperature scatter). |
| Drift vs time/aging | Repeat the same settled measurement at a fixed temperature after time intervals (hours/days) or after stress/soak. | Parameter change per unit time or per stress cycle. | Temperature drift if the thermal condition is not controlled tightly. |
| Update transient | Capture the time window around updates; quantify peak disturbance and time to return within tolerance band. | Peak/area of disturbance + settling time after each update. | Repeatability; transient is time-local to the update event. |
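The single-temperature terms in the table can be sketched for one code point (function and field names are illustrative; the drift terms would additionally need the multi-temperature and time-series data listed above):

```python
import numpy as np

def decompose_errors(up_sweep, down_sweep, repeats, target):
    """Separate three error terms at one code and temperature:
    - accuracy: mean settled error vs target (static map bias)
    - repeatability: sigma of repeated settled returns to the code
    - hysteresis: systematic delta between up- and down-sweep results
    up_sweep/down_sweep are single settled readings at this code;
    repeats is a list of settled readings from N return cycles."""
    repeats = np.asarray(repeats, dtype=float)
    return {
        "accuracy": float(repeats.mean() - target),
        "repeatability_sigma": float(repeats.std(ddof=1)),
        "hysteresis_delta": float(up_sweep - down_sweep),
    }

print(decompose_errors(up_sweep=1.010, down_sweep=1.004,
                       repeats=[1.008, 1.007, 1.009, 1.008],
                       target=1.000))
```

Here the +0.008 bias is a map-error fix (recalibrate the mapping), the ~0.8 mLSB sigma bounds achievable tolerance, and the 0.006 up/down delta is hysteresis that no amount of averaging removes.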
Temperature coefficient & compensation: tempco is not just “ppm/°C”
Temperature coefficient is only meaningful when expressed on the target parameter (gain, cutoff frequency, Q, threshold), not just on individual components. A system can be highly repeatable at a single temperature and still drift badly across temperature.
Tempco must be defined on the controlled parameter
Useful expressions: %/°C for gain or cutoff frequency, ΔQ/°C for Q, and mV/°C (or LSB/°C) for thresholds. Component ppm/°C is not a substitute for system-level sensitivity.
Repeatability ≠ cross-temperature consistency
Same-code repeatability at one temperature measures scatter. Tempco measures systematic shift vs temperature. Both must be specified and validated.
| Metric on target parameter | Dominant error term | Compensation option | Validation proof |
|---|---|---|---|
| Gain tempco (%/°C) | Temperature drift (system-level sensitivity) | LUT (feed-forward) using temperature input; or multi-point calibration by temperature zones. | Gain vs temperature after compensation stays inside tolerance band at defined points. |
| Cutoff tempco (%/°C) | Temperature drift + mapping non-linearity | LUT with zone interpolation; optional guardrails on update rate to avoid transient artifacts. | fc vs temperature curve flattens; verify no stability regressions during update windows. |
| Threshold tempco (mV/°C or LSB/°C) | Temperature drift + hysteresis sensitivity | Measure-and-correct (closed-loop) when an internal reference/loopback is available; otherwise multi-point calibration. | Threshold crossing error vs temperature meets system false-trip limits. |
| Q tempco (ΔQ/°C) | Temperature drift + coupling into dynamic behavior | Temperature-zone calibration; compensate only within safe operating windows to avoid transient-induced overshoot. | Q stays within bounds across temperature; step response metrics remain stable after compensation. |
| Cross-temperature repeatability | Combined drift + hysteresis + mapping changes | Hybrid strategy: coarse placement + fine correction, then lock with versioned NVM data. | Restore-to-spec after temperature cycling, with defined calibration version and rollback rules. |
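The LUT (feed-forward) option from the table can be sketched as a code offset interpolated between calibrated temperature zones. The three-point table and offset values below are hypothetical; a real table comes from the multi-point calibration described earlier.

```python
import numpy as np

def compensate_code(nominal_code, temp_c, lut_temps, lut_offsets):
    """Feed-forward LUT compensation: interpolate a code offset
    between calibrated temperature zones and apply it to the
    nominal code. Outside the table, np.interp clamps to the
    endpoint values (no extrapolation)."""
    offset = np.interp(temp_c, lut_temps, lut_offsets)
    return int(round(nominal_code + offset))

# Hypothetical 3-point calibration: +6 codes needed at -20 C,
# no correction at 25 C, -9 codes needed at 85 C
lut_temps = [-20.0, 25.0, 85.0]
lut_offsets = [6.0, 0.0, -9.0]
print(compensate_code(2048, 55.0, lut_temps, lut_offsets))  # -> 2044
```

Note the clamped-endpoint behavior outside the calibrated range: that is a deliberate guardrail against extrapolating a fitted slope into untested territory.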
Calibration architectures: factory trim vs in-field vs background
Calibration is not a single action; it is an architecture with defined triggers, observability requirements, safety guardrails, and traceable data outputs. The three common architectures differ mainly in cost, disruption to the signal chain, and the strength of proof they can provide over temperature and time.
| Architecture | Best fit | Trade-offs | Safety needs | Traceability output |
|---|---|---|---|---|
| Factory trim (one-time, at production) | Tight consistency per unit, predictable performance at shipment, minimal field complexity. | Limited coverage for long-term aging and wide temperature cycling unless multiple temperature points are added in production. | Controlled test environment, strong verification gating before locking data. | Cal table + version + timestamp + test station ID. |
| In-field (service, periodic) | Long lifetime, harsh environments, replaceable modules, performance restored after drift or service. | Requires maintenance windows and user workflows; validation must be robust against noisy environments. | Safe update windows; rollback to last-known-good settings; clear pass/fail criteria. | Cal record per run + operator/event reason + pass/fail + rollback count. |
| Background (in-run, autonomous) | Always-on systems where downtime is expensive and tuning must track drift continuously. | Highest requirements on observability and isolation (loopback/bypass); risk of disrupting the chain if guardrails are weak. | Strong guardrails: shadow updates, rate limits, verification after each change, automatic rollback. | Continuous telemetry: versioned tables, decision logs, metrics trend snapshots. |
Common calibration triggers
Temperature change (ΔT + soak), drift monitor threshold, runtime/aging timer, configuration/firmware change, maintenance mode entry, and user-initiated service actions.
Mandatory checklist (all architectures)
Trigger → Observables → Isolation → Verify → Store/Version → Rollback
Measurement & observability: what must be measurable to calibrate
Calibration is only as good as the system’s observability. Without measurable references, response metrics, stability indicators, and environmental state, calibration becomes guessing. Observability also enables traceability: every calibration decision should be reproducible from logged context.
Reference (ground truth)
A known reference point to anchor absolute accuracy (e.g., known stimulus, reference threshold, or known path). Used to correct static map error.
Response metrics (amplitude/phase proxy)
A measurable proxy for gain/frequency/threshold behavior to verify that tuning moved the parameter in the intended direction and magnitude.
Stability & noise indicators
A noise floor or short-term scatter metric to separate repeatability from drift, and to set drift-monitor thresholds.
Environment & state
Internal temperature (at minimum), plus system state/mode and supply status to avoid mis-attribution of drift.
| Category | Recommended fields (for traceability) |
|---|---|
| Configuration | tuning_code(s), mode/range, approach_direction (up/down), guardrail_state (safe_window used), update_rate_limit setting |
| Environment | internal_temperature, timestamp, uptime, supply_state (rail OK flags), operating_state (run/service/background) |
| Calibration identity | cal_version, algorithm_version, table_crc/hash, reason_code (temp step / drift threshold / timer / user request) |
| Quality metrics | post_cal_error (on target parameter), repeatability_sigma, hysteresis_delta (if measured), drift_slope_estimate, pass_fail |
| Safety & recovery | rollback_count, last_known_good_version, fault_flags during update, transient_peak metric, settle_time metric |
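The field categories above can be captured in a small record type. A sketch (the schema and the truncated SHA-256 “table hash” are illustrative, not a standard format): the point is that every calibration run emits one immutable, hashable record.

```python
from dataclasses import dataclass, asdict, field
import hashlib
import json
import time

@dataclass
class CalRecord:
    """Minimal traceability record mirroring the field categories
    above. Field names are illustrative, not a standard schema."""
    tuning_code: int
    internal_temperature: float
    cal_version: str
    reason_code: str          # e.g. temp_step / drift_threshold / timer / user
    post_cal_error: float     # on the target parameter
    pass_fail: bool
    timestamp: float = field(default_factory=time.time)

    def table_hash(self) -> str:
        """Stable content hash for integrity checks (table_crc/hash field)."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:16]

rec = CalRecord(tuning_code=2044, internal_temperature=55.0,
                cal_version="v3", reason_code="temp_step",
                post_cal_error=0.002, pass_fail=True)
print(rec.table_hash())
```

Storing the hash alongside the record lets production and field tooling detect a corrupted or hand-edited calibration table before trusting it.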
Safe tuning in real systems: avoiding pops, overloads, and stability regressions
Field tuning is risky because parameter updates are not “free.” A single change can inject transient energy, push stages into saturation, shift stability margins, or violate common-mode and dynamic-range limits. Safe tuning turns updates into a controlled procedure: gate by update windows, limit update slew, apply a safe order, verify each stage, and rollback on failure.
Transient shock (pops/clicks/spikes)
Abrupt parameter jumps can create short high-energy disturbances and long settle tails, corrupting samples or tripping protection.
Overload / saturation
Gain, threshold, or bias moves can collapse headroom and cause clipping or long recovery, even if steady-state looks acceptable.
Stability regressions
Updates can alter the dynamic response; ringing or slow settling often appears only under specific operating states.
Common-mode / dynamic-range violations
Output common-mode, bias points, or swing limits can be violated during transitions, raising distortion and triggering boundary faults.
| Risk | Typical trigger | Mitigation policy | Proof / monitors |
|---|---|---|---|
| Pop / transient spike | Large step update; updating while signal is active; insufficient settling | Slew-limited update (small steps + dwell); update window gating (quiet/idle/muted/clamped); shadow+commit with stage verification. | transient_peak, settle_time, sample_drop/blanking counters |
| Overload / clipping | Gain up before shaping; threshold move that reduces headroom; bias/CM shift | Safe order: risk-down first (gain↓ / clamp on) → shape change (fc/Q/threshold) → restore gain; enforce range checks prior to commit. | saturation_flag, clip_count, headroom_margin estimate |
| Stability regression | Parameter combination that changes dynamic behavior; updates during high load | Apply updates only in approved modes; verify using stability proxy metrics after each stage; keep a last-known-good configuration for rollback. | ringing_metric, settle_time tail, oscillation detector (threshold) |
| CM / DR violation | Mode switch; CM target change; large amplitude combined with retuning | Gate updates to silent/limited windows; verify CM-range and swing; use shadow register to prevent partial state exposure. | cm_range_ok, swing_ok, distortion proxy / error spikes |
| Bad update exposure | Partial write; re-entrant updates; interruption mid-change | Shadow registers + atomic switch; versioned commit; reject updates when in-progress flag is set; rollback on verify failure. | commit_version, update_state, rollback_count, fault flags |
Recommended update windows (when updates are allowed)
Idle/quiet segment, muted path, limited/clamped state, calibration/maintenance mode, or a defined “safe window” with sample blanking.
Recommended update denials (when updates should be blocked)
Active capture requiring continuity, near headroom boundaries, recent saturation events, high dynamic stress, or when verification resources are unavailable.
Recommended safe tuning state machine (operational steps)
- Pre-check: confirm update window is valid; reject if active capture or boundary flags are set.
- Arm guardrails: enable mute/clamp/limit and define sample blanking if used.
- Prepare shadow: load new parameters into shadow; run range checks and basic sanity rules.
- Stage update: apply in safe order (risk-down → shape change → restore) with slew-limited increments.
- Settle & verify: after each stage, check transient_peak, settle_time, saturation flags, and stability proxy metrics.
- Commit: promote shadow to active atomically; write version + reason code + metrics summary to the log.
- Rollback: on any failed verification, revert to last-known-good immediately and record rollback cause.
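The steps above can be sketched as one guarded update routine. All hooks (`window_ok`, `range_ok`, `verify`, `apply_slew`) are hypothetical system callbacks, not a real driver API; the sketch only shows the control flow: pre-check, shadow, staged slew-limited apply, verify each stage, commit or roll back.

```python
def safe_update(active, new_params, window_ok, range_ok, verify, apply_slew):
    """Guarded tuning update following the state machine above.
    Returns (resulting_config, status_string)."""
    if not window_ok():                       # Pre-check: valid update window?
        return active, "rejected:window"
    if not range_ok(new_params):              # Shadow prep: range/sanity rules
        return active, "rejected:range"
    last_known_good = dict(active)            # rollback target
    shadow = dict(new_params)
    for stage in apply_slew(active, shadow):  # Stage: slew-limited increments
        if not verify(stage):                 # Settle & verify each stage
            return last_known_good, "rolled_back"
    return shadow, "committed"                # Commit: promote shadow

# Toy hooks: every stage verifies OK, so the update commits
result, status = safe_update(
    {"gain": 4}, {"gain": 8},
    window_ok=lambda: True,
    range_ok=lambda p: 1 <= p["gain"] <= 16,
    verify=lambda stage: True,
    apply_slew=lambda a, b: [{"gain": g}
                             for g in range(a["gain"] + 1, b["gain"] + 1)],
)
print(result, status)  # {'gain': 8} committed
```

In firmware the same structure would sit behind shadow registers and an atomic commit; the sketch keeps only the decision logic.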
Validation & production test: proving tunability is real
“Tunability” becomes real only when it is provable across code sweeps, temperature points, repeat cycles, and update transients. Validation should be layered: R&D proves the full model, production screens efficiently, and field checks avoid disruption.
| Proof dimension | What to run | Acceptance idea | What failure looks like |
|---|---|---|---|
| Monotonicity & step consistency | Code sweep (coarse or dense), track target parameter trend | Direction-consistent change; step sizes within tolerance bands | Backtracking, dead zones, irregular steps, temperature-dependent breaks |
| Repeatability (return-to-code) | Repeated writes to the same codes (N cycles), settle then measure | Scatter (σ or pk-pk) within repeatability spec | Growing scatter, mode-dependent noise, direction-sensitive offsets |
| Temperature curves | Multi-temperature points with soak; measure same codes | Drift vs temperature within budget; compensation closes the gap | Non-smooth behavior, zone discontinuities, compensation instability |
| Non-disruptive tuning | Measure transient_peak and settle_time during updates | Update transient metrics within safe limits; no overload trips | Pop/spike events, long tails, saturation flags, stability alarms |
R&D checklist (full proof)
Dense sweeps + up/down direction; multi-temperature points + soak; high-N return-to-code repeats; update transient capture with guardrail stress cases.
Production checklist (fast screen)
Representative code points (low/mid/high + edges); short repeat cycles; quick monotonic segments; a small set of “sentinel” temperature checks if applicable.
Field checklist (non-disruptive proof)
Only tune in safe windows; small-step updates; verify transient_peak/settle_time and boundary flags; write full trace logs for later analysis.
Fast vs deep strategy
Production favors few points with strict guardrails; R&D/maintenance favors more points, more temperature coverage, and longer settle validation.
Selection checklist: compare “tunable” options with less vendor bias
A “tunable” AFE block is only valuable when it can change parameters safely, return to the same state repeatably, and hold targets across temperature/aging with auditable calibration evidence.
1) Pass/Fail gates (do not score until these are satisfied)
These are “red-line” requirements that prevent common failures: non-monotonic steps, unsafe updates, silent drift, and untraceable calibration. If any gate fails, selection should stop regardless of attractive headline specs.
- Monotonic guarantee: increasing code/setting must not reverse the controlled parameter (gain / cutoff / threshold).
- Hard bounds: parameter limits must be enforced (device-side or system-side) to prevent illegal/unstable states.
- Safe update path: “shadow → commit” (or equivalent) plus a defined update window (quiet / clamped / idle).
- Rollback: last-known-good restore path after bad tuning or failed calibration.
- Update transient limits: a measurable constraint for glitch / settling during parameter changes.
- Calibration traceability: versioned calibration tables + timestamp + temperature + firmware/algorithm version.
2) Weighted scorecard template (copy/paste and reuse)
Score each criterion from 0–5 with an explicit verification method. Apply weights to match the risk profile (medical/industrial measurement typically weights repeatability, drift, and safe updates higher than raw range).
| Criterion | Weight | Definition (what it really means) | How to verify (test method) | Evidence required (deliverables) |
|---|---|---|---|---|
| Step granularity & monotonicity | 15% | Effective resolution on the controlled parameter (not “register bits”). Must be monotonic over the usable range. | Code sweep up/down; measure parameter at each step; record non-monotonic events and step variance. | Plots/tables for sweep; monotonic statement + conditions; min/max step size. |
| Accuracy (map error) | 15% | Difference between commanded setting and achieved parameter under stated conditions (static mapping error). | Measure at key setpoints (endpoints + midpoints); validate across load/CM/output swing constraints. | Test conditions + error table; limits under worst-case corners. |
| Repeatability | 20% | Returning to the same code returns to the same parameter (distribution defined). | N cycles: change away → return; compute mean/σ and p99 deviation; do same at multiple temperatures. | Repeatability stats (σ, p99); defined settle time; sample count. |
| Tempco & drift on the controlled parameter | 20% | Temperature/aging sensitivity of gain / fc / threshold (not only resistor/cap ppm/°C). | Multi-temp points (≥3); fit slope + curvature; optional soak for aging drift trends. | Temp sweep data; drift vs time; compensation method (LUT/closed-loop) if used. |
| Calibration support & traceability | 15% | Factory trim / in-field calibration capability with versioned parameters and audit logs. | Verify NVM/parameter storage, calibration versioning, and recovery behavior after power loss. | Calibration record format; NVM spec; rollback path; required hooks (loopback, test injection). |
| Update transient limits (glitch/settling) | 10% | Peak disturbance + time-to-within-tolerance after updating a setting. | Step update under worst-case signal level; capture peak, overshoot, recovery; quantify settling definition. | Oscilloscope captures; numeric limits; recommended slew/blanking windows. |
| In-field safety (bounds/permissions/rollback) | 5% | Prevents unauthorized/unsafe tuning; supports rate limit and last-known-good restore. | Negative testing: illegal settings; fault injection; verify bounded behavior and recovery. | State machine description; permission model; logs for changes + reverts. |
Scoring guidance example: 0=not specified / not measurable; 3=typical-condition data exists; 5=clear limits + test method + corner coverage + reproducible evidence.
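The scorecard arithmetic is simple enough to automate so that procurement and engineering score from the same sheet. A sketch (criterion keys and the example scores are made up; the weights follow the table above):

```python
def weighted_score(scores, weights):
    """Weighted scorecard: each criterion scored 0-5, weights sum
    to 1. Returns a 0-100 total so options compare on one axis."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    total = sum(scores[k] * weights[k] for k in weights)
    return round(total / 5.0 * 100.0, 1)

weights = {"steps": 0.15, "accuracy": 0.15, "repeatability": 0.20,
           "tempco": 0.20, "calibration": 0.15, "transient": 0.10,
           "safety": 0.05}
option_a = {"steps": 4, "accuracy": 3, "repeatability": 5,
            "tempco": 4, "calibration": 3, "transient": 4, "safety": 5}
print(weighted_score(option_a, weights))  # -> 79.0
```

Because repeatability and tempco carry 40% of the weight here, a part with flashy range but weak drift data cannot win on headline specs alone, which is the intent of the template.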
Procurement view vs Engineering view (same table, different emphasis)
Procurement view: prefers auditable limits, defined test conditions, calibration traceability, and clear delivery acceptance criteria.
Engineering view: prioritizes safe update behavior, transient limits, monotonicity, and observability for debugging drift and field issues.
3) Evidence pack (what to request to make claims auditable)
- Code sweep dataset: up/down sweep; step variance; non-monotonic flags; settle time definition.
- Repeatability loop: N≥30 return cycles per temp; report σ + p99/p999 deviation.
- Temperature characterization: ≥3 temps + optional soak; report tempco on the controlled parameter.
- Update transient captures: peak glitch energy / overshoot / recovery time under worst-case signal.
- Calibration artifact: LUT/version ID, checksum, timestamp, internal temperature, and algorithm/firmware version.
- Change log fields: setting_id, old→new, reason, temp, time, cal_ver, rollback_event.
4) Concrete MPN examples (non-exhaustive reference list)
The part numbers below are practical “anchors” for comparing tunability approaches (digital steps, analog trims, clock-tunable SC filters, and VGAs). Use them to sanity-check claims and to build a shortlist that matches the scorecard.
| Category | MPN (examples) | Tunability mechanism | Best for | Checklist focus |
|---|---|---|---|---|
| Digitally-programmable gain (PGIA/PGA) | AD8250, AD8251; LTC6910-1/-2/-3; TI PGA280 | Digital gain codes (pins/SPI); discrete gain steps | Multi-range measurement chains; sensor front ends; DAQ range switching | Monotonicity, effective step size, update transients, repeatability |
| Digital potentiometer / rheostat (trim) | AD5272 / AD5274 (NVM); AD5290 (+30 V / ±15 V class); MCP41010 (SPI, volatile) | Wiper code → resistance ratio (analog trim) | Gain/threshold trim, offset nulling, calibration hooks, field service | INL/DNL vs “bits”, wiper noise, temp drift of controlled parameter, NVM/traceability |
| Clock-tunable switched-capacitor filters | MAX7400 family (8th-order LP SCF); LTC1060 (universal dual SC filter); LTC1068 (quad 2nd-order SC blocks) | External clock sets corner/center frequency | Anti-alias / reconstruction, tunable BP/notch, tracking filters | Clock feedthrough/alias risk, update windows, step/settling behavior, repeatability across temp |
| Universal active filter IC (fixed on-chip caps) | TI UAF42 | State-variable building block; tunability via external R ratios | Rapid prototyping of LP/HP/BP where on-chip capacitor tolerance helps consistency | Range vs component tolerance, repeatability, calibration hooks (system-level) |
| Variable gain amplifier (VGA) | AD8336 | Analog control voltage sets gain (linear-in-dB class) | AGC systems, wideband signal conditioning where continuous tuning is needed | Hysteresis/path dependence, control-to-gain repeatability, drift, safe update (slew-limited control) |
How to use these MPNs without “vendor lock-in”
- Use each MPN as a benchmark pattern (digital-step PGA, NVM trim digipot, clock-tunable SCF, analog VGA).
- Map competing parts to the same checklist rows and demand the same evidence pack.
- When two solutions score similarly, prefer the one with clearer update transient limits and better traceability.
Figure F11 — “Tunable” selection workflow: gates → score → evidence → decision. A compact workflow map for procurement and engineering alignment.
FAQs (Tunability)
These FAQs focus on practical tunability: effective steps, monotonic proof, repeatability, drift/hysteresis, safe updates, calibration strategy, and traceability—without relying on vendor marketing language.
Q1. Why does a “10-bit register” not equal 1024 usable tuning steps?
Q2. How can monotonic tuning be proven (no local back-jumps)?
Q3. Why can the same code produce different results during heat-up vs cool-down (hysteresis)?
Q4. How should repeatability be measured, and how many returns to the same code are meaningful?
Q5. What causes the most common field “pops / shocks / false trips” during tuning?
Q6. Why can LUT temperature compensation “work” at first while drift returns over time?
Q7. Factory one-time trim vs periodic in-field calibration: how should cost and risk be weighed?
Q8. How can tuning be made rollback-safe so field changes do not degrade performance?
Q9. Why is “component ppm/°C” not enough, and why measure tempco on the target parameter?
Q10. How can calibration data be made traceable (version / temperature points / timestamp)?
Q11. Why can a system be “very accurate” after tuning yet hard to reproduce across boards/batches?
Q12. Which three tunability indicators are most often overlooked during selection?
Tip: For procurement, treat any “tunable” claim without a measurable method and evidence pack as unproven.