
Mammography Detector Readout Chain


A mammography detector readout succeeds by maximizing low-dose stability—not just resolution—so noise floor, low-frequency drift, banding, gain transitions, and saturation recovery stay controlled across temperature and time.

This page explains how to design and verify the integrator, CDS/PGA timing, ADC choice, and temperature-aware calibration so artifacts are prevented and fixes remain robust in real operating corners.

H2-1 · What must be optimized is low-dose stability, not just “resolution”

Mammography readout quality is often limited first by stability at low dose: the image must keep a repeatable baseline and avoid structured artifacts (banding/shading) when temperature, time, and gain states change.

KPI 1) Noise floor (random + low-frequency)
Random noise sets the fine-grain limit, while low-frequency components often appear as shading or slow texture. A low RMS value is not sufficient if correlation creates visible structure.
KPI 2) Offset / gain drift (time + temperature)
Drift becomes image drift when it is not tracked at the same granularity as the readout chain (per channel / per tile / per gain state). Offset drift drives DSNU-like residuals; gain drift drives PRNU-like residuals.
KPI 3) Banding / shading (correlated error)
Banding is usually not “more noise,” but more correlation (timing-locked injection, settling residue, reference ripple sampled into line/row patterns). Correlation is what makes artifacts visible.
KPI 4) Lag / memory (recovery after large signals)
Lag is defined by how many frames are affected after saturation or a large step. Memory effects can come from incomplete reset, dielectric absorption, and charge injection histories.

What should be measured (so problems are caught early)

  • Noise floor: RMS plus low-frequency behavior (correlation / slow shading tendency), not only peak ENOB.
  • Drift: offset vs temperature/time and gain vs temperature/time, tracked per gain state.
  • Banding sensitivity: does striping change with timing edges, reset phase, or reference conditions (a strong hint of correlation)?
  • Lag: frames-to-recover after saturation; check dependence on temperature and gain state.
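The measurements above can be prototyped quickly. A minimal sketch (pure Python; `noise_metrics` is a hypothetical helper, not a standard API) splits a frame's noise into a total RMS and row/column-locked components, so structured error is flagged even when the RMS number looks fine:

```python
from statistics import fmean, pstdev

def noise_metrics(frame):
    """Split measured noise into total RMS and structured
    row/column components (a proxy for banding/shading tendency)."""
    rows = [fmean(r) for r in frame]        # per-row means (row-locked error)
    cols = [fmean(c) for c in zip(*frame)]  # per-column means
    flat = [v for r in frame for v in r]
    return {
        "rms_total": pstdev(flat),          # overall grain
        "row_structure": pstdev(rows),      # row-locked component
        "col_structure": pstdev(cols),      # column-locked component
    }

# A frame with a row-locked offset shows structure even at modest RMS:
frame = [[10.0, 10.1, 9.9], [12.0, 12.1, 11.9], [10.0, 9.9, 10.1]]
m = noise_metrics(frame)
```

In this example the row-structure metric dominates while column structure stays near zero, which is exactly the correlation signature the bullet list warns about.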
Common pitfall
Increasing ADC resolution can digitize an unstable baseline more precisely. Practical improvements often come first from stabilizing the integrator node (Cf/reset/leakage), controlling sampling timing, and using temperature-aware calibration with the correct granularity.
F1 — Mammography readout chain and stability drivers. Cover-level block diagram: sensor charge integration and reset, CDS and PGA, ADC, and digital correction, with side inputs for temperature sensing, reference/bias stability, and periodic dark frames for drift control. Takeaway: practical success is a stable baseline plus low correlated artifacts under low dose and temperature change.

H2-2 · Use one equation set to make the chain engineering-clear (Q→V, noise terms, drift mapping)

The goal is not heavy math. The goal is a compact checklist that links each physical contributor to the artifact it creates, so design knobs and validation can be prioritized.

(1) Charge-to-voltage (integrator)
    Vout ≈ Q / Cf

(2) Dominant error contributors (concept form)
    σV^2 ≈ (kT/Cf) + (en^2 · BW_eq) + ( (I_leak · Tint)/Cf )^2 + (V_inj/Cf)^2 + (1/f contribution)

(3) Drift mapping (separate additive vs multiplicative)
    Vmeas = G(T,t,gain_state) · Videal + Voffset(T,t,gain_state) + εrandom
        

How to interpret the terms (what they usually become in images)

  • kT/C on Cf: sets a reset-related floor. Timing (CDS) can reduce visibility, but aggressive edges and injection can reintroduce structured residue.
  • en over BW_eq: drives random grain. If this term dominates, better front-end noise and bandwidth control help more than complex drift models.
  • I_leak · Tint / Cf: a practical “slow error generator.” Even tiny leakage can become measurable baseline drift across the integration window, often showing as shading or temperature-dependent offsets.
  • V_inj / Cf (switch injection & feedthrough): often correlated with timing edges, so it tends to create banding rather than benign random noise.
  • Separate G(T,t,·) from Voffset(T,t,·): gain drift and offset drift require different calibration evidence and different acceptance checks (flat-field vs dark).
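As a sanity check on the drift model in (3), offset and gain estimates can be pulled from dark and flat captures separately. This is a sketch for a single channel and gain state, assuming the ideal flat-field level is known; `estimate_drift_terms` is an illustrative helper:

```python
from statistics import fmean

def estimate_drift_terms(dark_frames, flat_frames, videal_flat):
    """Separate additive offset drift from multiplicative gain drift,
    per the model Vmeas = G * Videal + Voffset + noise.
    dark_frames / flat_frames: mean values per capture for one
    channel / gain state; videal_flat: known ideal flat level."""
    v_off = fmean(dark_frames)            # Videal ≈ 0 in the dark
    v_flat = fmean(flat_frames)
    gain = (v_flat - v_off) / videal_flat # remove offset before gain
    return gain, v_off
```

Because the dark estimate is subtracted before the gain ratio is formed, offset drift cannot leak into the gain estimate, which is the separation the model calls for.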

Practical decision rules (what to fix first)

  • If banding dominates: treat it as correlation. Focus on reset/injection control, CDS phase margins, and reference/bias coupling before chasing “more bits.”
  • If drift dominates: separate offset vs gain. Add temperature sensing where drift is created, and build calibration tables indexed by gain state and temperature bins.
  • If random grain dominates: focus on front-end noise and bandwidth, and confirm the ADC choice supports the intended sampling and filtering without passband ripple artifacts.
  • If lag dominates: validate frames-to-recover after saturation and adjust reset strategy, node leakage, and “memory” contributors until recovery is within spec.
Validation tie-in
For each term above, create one measurement that can isolate it: e.g., temperature sweep for drift separation, timing-edge sensitivity for injection-locked banding, and step-to-recovery for lag.
F2 — Noise & drift source map to visible artifacts. Two-column map linking sources (kT/C, 1/f noise, leakage, injection, reference drift, thermal gradient) to visible artifacts (grain, shading, banding, gain-step seams, linearity residual, lag/ghost), with a bottom strip of key design knobs (Cf & Tint, reset edge / CDS phase, leakage control, Vref stability, temperature bins).

H2-3 · Integrator design: Cf, reset switch, leakage and injection

Low-leakage readout is not only about a small current number on a datasheet. In practice, leakage and reset behavior become visible when they create repeatable, timing-locked error (banding) or slow spatial residue (shading). This section turns Cf, reset edge and sampling phase into concrete design knobs.

Core relations (concept form)

1) Charge-to-voltage:
   Vout ≈ Q / Cf

2) Droop from leakage during integration:
   ΔVdroop ≈ (Ileak · Tint) / Cf

3) Sensitivity to injection/feedthrough:
   ΔVinjection ≈ Qinj / Cf   (or a timing-locked step on the summing node)
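Plugging illustrative numbers into relation (2) shows how quickly a “tiny” leakage current becomes a visible baseline error (the values are assumptions for illustration, not a specific design):

```python
def droop_volts(i_leak_a, t_int_s, cf_farad):
    """Baseline droop accumulated on Cf during integration:
    ΔVdroop = Ileak * Tint / Cf."""
    return i_leak_a * t_int_s / cf_farad

# Assumed values: 50 fA leakage, 100 ms integration, 1 pF feedback cap.
dv = droop_volts(50e-15, 0.1, 1e-12)  # -> 5 mV of baseline droop
```

Five millivolts of droop from 50 fA is easily measurable at low dose, which is why droop that scales with Tint is treated as a leakage signature in the checklist below.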
        
1) Cf selection (gain, kT/C, and artifact sensitivity)
  • Dynamic range vs sensitivity: smaller Cf increases conversion gain (Q→V), improving low-dose sensitivity but also amplifying injection and residual steps.
  • Reset-related floor: Cf directly impacts kT/C behavior and how much reset residue can survive into sampling windows.
  • Correlation risk: if a repeatable charge step is injected every row/phase, smaller Cf makes that step more visible as banding.
2) Droop and leakage (how “tiny current” becomes shading)
  • Droop scales with Tint: if shading worsens roughly in proportion to integration time, leakage (or bias-current paths) is a prime suspect.
  • Uniform vs spatially varying leakage: uniform droop can look like a baseline shift; spatial leakage gradients create non-uniform residuals that calibration must track.
  • Guarding is a stability tool: guard rings and controlled return paths reduce the chance that leakage becomes unpredictable or temperature-sensitive.
3) Reset edge and injection (why banding is often timing-locked)
  • Fast edges can couple into the summing node: parasitic feedthrough and switch injection create a repeatable step.
  • Banding happens when the step is sampled: if Sample A or Sample B is too close to reset, settling residue becomes a fixed-pattern error.
  • Practical knob: enlarge settle margin after reset and validate with a phase sweep (move Sample A later and see if banding falls).
4) Dielectric absorption / memory (lag and slow recovery)
  • Memory shows up after large steps: strong exposure or saturation can leave a slow tail that contaminates the next frames.
  • Not an ADC problem: if frames-to-recover depends on temperature or reset strategy, the integrator node is a likely root cause.
  • Validation: step-to-recovery curves (frames-to-baseline) across temperature bins and gain states.
Integrator validation checklist
  • Change Tint: does shading scale roughly with Tint (leakage signature)?
  • Sweep Sample A phase: does banding change with phase (injection/settling signature)?
  • Modify reset edge: does banding respond to edge shaping (feedthrough signature)?
  • Run step-to-recovery: does lag depend on temperature or gain state (memory signature)?
F3 — Integrator summing-node zoom: Cf, reset, guard, phases, and banding risks. Diagram zooming into the integrator node: sensor charge input, op-amp integrator, feedback capacitor Cf, reset switch with edge coupling, leakage paths, guard ring, and sampling phases A/B. Takeaway: keep sampling away from reset-edge residue and control leakage paths inside the guarded region.

H2-4 · CDS: when it helps, and when it creates artifacts

Correlated double sampling (CDS) is powerful when the unwanted term is nearly identical in the two samples. It becomes risky when the two samples see different settling residue or timing-locked injection. The safest CDS design is defined by phase placement and a guaranteed settling window.

CDS is effective when…
  • Offset and slow drift are nearly unchanged between Sample A and Sample B.
  • Sample points avoid edges and are taken after sufficient settling.
  • The A–B interval is short enough that drift does not evolve significantly.
CDS becomes dangerous when…
  • Sample A captures reset residue (injection or feedthrough) that does not match Sample B.
  • Settling is insufficient, so subtraction turns a repeatable residue into banding.
  • Gain-state changes alter settling or injection, but phases are not re-validated per state.
Why the settling window is a hard requirement
If Sample A is taken too close to reset, the measured value includes a timing-locked residue. Subtraction does not remove it; it preserves it as a structured pattern. A practical validation is a phase sweep: delaying Sample A should change the banding level if injection/settling is the root cause.
CDS validation checklist
  • Guarantee a settle margin after reset before Sample A.
  • Check banding sensitivity vs Sample A delay (phase sweep).
  • Repeat the sweep per gain state and temperature bin.
  • Confirm that the chosen A–B interval does not weaken drift cancellation.
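The phase-sweep logic can be modeled with a simple exponential-settling assumption: if Sample A captures injection residue that Sample B does not, the error surviving CDS decays with the Sample A delay. The time constant and step size below are illustrative assumptions:

```python
import math

def cds_band_error(sample_a_delay_us, tau_us=2.0, inj_step_mv=5.0):
    """Timing-locked error surviving CDS when Sample A captures
    reset-injection residue that has not settled (B assumed settled).
    Model: residue decays as inj_step * exp(-t / tau)."""
    return inj_step_mv * math.exp(-sample_a_delay_us / tau_us)

# Phase sweep: delaying Sample A should shrink the banding term.
sweep = [round(cds_band_error(t), 3) for t in (1, 2, 4, 8)]
```

If a real phase sweep shows this kind of monotonic fall-off, injection/settling is the likely root cause; if banding is insensitive to the Sample A delay, look elsewhere (reference ripple, rails).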
F4 — CDS timing: Reset → Settle → Sample A → Integrate → Sample B → Subtract. Timing diagram showing the CDS phases with a highlighted settling window after reset; Sample A and Sample B are marked and a subtraction block yields the CDS output (B − A). Takeaway: if Sample A is too close to reset, settling residue becomes timing-locked and may appear as banding, so keep a guaranteed settle window and validate by sweeping the Sample A delay.

H2-5 · PGA / multi-range: gain switching strategy and settling acceptance

Gain switching succeeds only when two conditions are met at the same time: continuity in the overlap region (no gain-step seam) and settled sampling after switching (no timing-locked residue that turns into banding). This section turns multi-range behavior into measurable acceptance checks.

1) Ranging rules (coverage + overlap + forbidden zones)
  • Define an overlap region: adjacent gain states must share a usable input window for continuity checks and stitching.
  • Keep “forbidden zones” away from switching: avoid boundaries near saturation and near the noise floor where any mismatch becomes visible.
  • Index calibration by gain state: offset and gain corrections should be stored and verified per range (not one global map).
2) Boundary strategy (hysteresis + hold-off to prevent thrashing)
  • Hysteresis: use separate up-switch and down-switch thresholds so the system does not bounce at the boundary.
  • Hold-off: after switching, freeze the gain decision for a minimum number of rows/frames to guarantee stable sampling.
  • Trigger on robust statistics: base switching on a windowed metric (peak count, mean, saturation flags), not a single sample.
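The hysteresis and hold-off rules can be sketched as a small state machine. Thresholds, the hold-off length, and the two-state range model are illustrative assumptions:

```python
class GainSwitcher:
    """Range decision with hysteresis (separate up/down thresholds)
    and a hold-off window that freezes decisions after a switch."""
    def __init__(self, up_th=0.85, down_th=0.30, hold_frames=4):
        self.up_th, self.down_th = up_th, down_th
        self.hold_frames = hold_frames
        self.state = 0   # 0 = high gain, 1 = low gain
        self.hold = 0
    def update(self, windowed_level):
        """windowed_level: a robust windowed statistic, not one sample."""
        if self.hold > 0:           # hold-off: no decisions yet
            self.hold -= 1
            return self.state
        if self.state == 0 and windowed_level > self.up_th:
            self.state, self.hold = 1, self.hold_frames
        elif self.state == 1 and windowed_level < self.down_th:
            self.state, self.hold = 0, self.hold_frames
        return self.state
```

Because the down-switch threshold sits well below the up-switch threshold and the hold-off drains before any new decision, a scene hovering at the boundary cannot thrash the range.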
3) Overload recovery (avoid post-saturation ghosts and seams)
  • Define frames-to-recover: after saturation or a large step, measure how many frames are required to return to the baseline envelope.
  • Protect the switching window: during recovery, suppress gain switching or force a conservative state until settling is confirmed.
  • Watch for memory tails: a slow tail that depends on temperature or gain state is a strong indicator of node memory, not “random noise.”
4) Settling acceptance (driver + ADC S/H + reference)
  • Sampling must occur after settling: if the driver, ADC sample-and-hold, or reference has not settled, the residue is sampled and can become banding.
  • Phase sweep is the fastest diagnosis: move the sample instant later and check whether seam/banding changes (a signature of settling residue).
  • Repeat per gain state: the hardest state often differs by load, swing, and reference dynamics.
Gain-switching acceptance checklist
  • In the overlap region, verify Δ(Out) between adjacent gains stays within limits across temperature bins.
  • Verify no boundary thrash: hysteresis and hold-off prevent repeated switching on similar scenes.
  • After switching, verify settling margin at the chosen sample instant (phase sweep sensitivity low).
  • After saturation, verify frames-to-recover and block switching during recovery if needed.
F5 — Gain ranging and error sources: overlap continuity and switching residue. Diagram showing multiple gain ranges (G0/G1/G2) with overlap regions, a continuity (seam) check in the overlap, and a source-to-artifact mapping for seams and banding (offset/gain mismatch, settling residue, reference transient, overload recovery tail). Takeaway: use overlap to prove continuity, then use phase/settling tests to prevent timing-locked banding.

H2-6 · ADC choice: ΣΔ vs SAR (real tradeoffs for mammography readout)

The best ADC choice is driven by low-dose stability and artifact risk, not by headline resolution alone. In mammography, the deciding factors are per-channel rate, tolerated latency, low-frequency behavior, linearity needs, and whether driver/reference settling can be proven under gain switching.

ΣΔ ADC: strong for low-frequency stability (with filter discipline)
  • Why it can fit: digital filtering can suppress wideband noise and support strong low-frequency behavior and consistent linearity.
  • What must be managed: group delay (latency) and passband ripple risks. Poor filter choices can introduce structured slow texture or response quirks tied to system cadence.
  • Validation focus: verify the filter response under the system’s timing cadence and confirm low-frequency residuals do not become shading patterns.
SAR ADC: strong for deterministic timing (if driver/reference settle is proven)
  • Why it can fit: low latency and predictable sampling behavior, useful for tight timing and high per-channel throughput.
  • Primary risk: strict requirements on driver and reference settling. If sampling occurs before settling, the residue is captured and can become banding, especially under switching.
  • Validation focus: phase-delay sensitivity, step response at the input/driver, and reference transient checks across gain states and temperature bins.
Fast decision checklist
  • Need very low latency? SAR is often favored if settling can be proven.
  • Low-frequency stability is the top KPI and latency is acceptable? ΣΔ is often favored with disciplined filter validation.
  • Gain switching is frequent? Prefer the option whose switching-settling verification is stronger and easier to guarantee.
  • Complexity budget: ΣΔ shifts complexity to filtering/verification; SAR shifts complexity to driver/reference/settling control.
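The fast decision checklist can be written down as a function so the choice, and the validation focus it obligates, stay explicit. This is a sketch mirroring the bullets above, not a complete selection tool:

```python
def choose_adc(latency_critical, lf_stability_top_kpi, settling_provable):
    """Mirror of the fast decision checklist; returns a suggested
    architecture plus the validation focus it must be paired with."""
    if latency_critical and settling_provable:
        return ("SAR", "phase sweep, driver step response, ref transient")
    if lf_stability_top_kpi and not latency_critical:
        return ("sigma-delta", "filter passband ripple, group delay")
    # ambiguous corners: take the option whose switching-settling
    # verification is easier to guarantee in this system
    return ("either", "prefer the stronger settling verification path")
```

Encoding the rule this way keeps the second element, the mandatory validation work, attached to the choice instead of being forgotten after the architecture is picked.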
F6 — ΣΔ vs SAR decision tree for mammography readout. Decision tree selecting a Sigma-Delta or SAR ADC based on per-channel rate, latency tolerance, low-frequency priority, driver/reference settling capability, and complexity budget; each outcome carries its validation focus (phase sweep and reference transient for SAR; filter ripple and group delay for ΣΔ). Takeaway: pick the ADC that makes artifact prevention verifiable under gain switching and temperature variation.

H2-7 · Reference & bias: many “drifts” are Vref / bias moving

Drift becomes solvable only after separating gain drift (multiplicative) from offset drift (additive). In mammography readout chains, Vref, bias networks and rail coupling often dominate “mysterious drift” because they can move slowly yet consistently, turning into shading or banding when temperature gradients exist.

1) Separate gain drift vs offset drift (fast root-cause split)
  • Gain drift signature: the error scales with signal level. Mid-gray flats shift proportionally and appear as contrast or global shading changes.
  • Offset drift signature: low-signal and dark regions shift more obviously. Baseline moves and fixed-pattern residue becomes visible near the floor.
  • Practical split test: compare drift behavior on a dark frame vs a mid-gray flat. Proportional change points to Vref/gain; additive shift points to offsets/bias.
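The split test reduces to two numbers: the shift seen on a dark frame (additive) and the extra, signal-proportional shift seen on a mid-gray flat (multiplicative). A minimal sketch for one channel, with `classify_drift` as an illustrative helper:

```python
def classify_drift(dark_shift, flat_shift, flat_level):
    """Split test: drift seen on a dark frame is additive (offset/bias);
    the remainder seen on a mid-gray flat, normalized by the flat
    level, is multiplicative (Vref/gain).
    Returns (offset_drift, fractional_gain_drift)."""
    offset_drift = dark_shift                       # Videal ≈ 0 in dark
    gain_drift = (flat_shift - dark_shift) / flat_level
    return offset_drift, gain_drift
```

A purely additive problem gives zero fractional gain drift; a flat-only shift points straight at Vref or gain scaling, which routes the investigation to the right monitor points.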
2) Place monitor points (no monitor points = no proof)
  • Source layer: Vref (and its buffer output), bias rails, analog rails (AVDD), and any rail that can modulate references.
  • Chain layer: PGA output, ADC driver node, ADC reference pins/decoupling neighborhood, and any switched node that can inject residue.
  • Result layer: gain/offset estimators, flat-field residual metrics, shading/banding indicators aligned with temperature logs.
3) Temperature coefficients are design variables
  • Reference TC matters twice: it changes gain directly and it can also shift bias points that influence offsets.
  • Rails can masquerade as “drift”: a rail moving with load or temperature can modulate Vref/bias and create slow structured changes.
  • Goal: make drift stable, monitorable, and modeled. Uncontrolled coupling leads to unpredictable residuals that look like shading.
Drift root-cause workflow (evidence chain)
  1. Run dark + mid-gray flat: decide additive vs multiplicative behavior.
  2. Align temperature vs time: check monotonic drift and warm-up shape.
  3. Correlate Vref and key rails with the measured gain/offset estimators.
  4. Check gain states: see whether a specific range magnifies the issue.
  5. Confirm with controlled perturbations (phase/edge/rail): a real root cause responds predictably.
F7 — Drift path map: Vref/rails/temperature → gain/offset → shading. Flow diagram mapping drift sources (Vref, bias, rails, temperature gradient/warm-up) through gain-drift (multiplicative, signal-proportional) and offset-drift (additive, floor-sensitive) mechanisms, with monitor points (PGA output, ADC input, ADC reference pins) and the resulting artifacts (shading, banding, baseline drift). Takeaway: diagnose drift by separating × (gain) from + (offset), then correlate monitor points with residuals.

H2-8 · Temperature drift compensation: match granularity, not one global curve

Temperature compensation fails most often because the model granularity does not match the real gradients. A single global coefficient cannot track per-zone and per-gain behavior. Effective compensation is a closed loop: measure temperature near the right components, index the right calibration tables, monitor residuals, and lock versions.

1) Temperature sensing must be co-located with drift sources
  • Near reference and bias networks: track what changes gain and offsets directly.
  • Near AFE/ADC hot spots: capture local self-heating and gradients that a corner sensor would miss.
  • Multiple points: use at least a small set of sensors so gradients can be modeled, not guessed.
2) Track at the same granularity (zone + gain state)
  • Per-zone: different regions see different gradients; apply zone-indexed corrections to avoid over/under-compensation.
  • Per gain state: gain ranges have different sensitivities and loading; store tables per range to prevent seams.
  • Prefer bin tables over a single curve: temperature bins make validation and rollback straightforward.
3) Warm-up behavior: stabilize first, then apply tight correction
  • Early drift is fastest: initial self-heating and reference stabilization can dominate the first minutes.
  • Use guarded modes: during warm-up, increase residual monitoring and avoid aggressive auto-switching if it magnifies artifacts.
  • Enter “stable mode”: apply the tightest compensation after the temperature slope falls below a defined threshold.
4) Residual monitoring + version lock (make compensation auditable)
  • Residual monitors: track flat-field residual, shading metric, and seam metric in the overlap region.
  • Gated updates: if residuals exceed thresholds, roll back to a safe table or trigger re-calibration.
  • Version lock: tables must be tagged by TableID, BinID, ZoneID, and GainState to support traceability and rollback.
Temperature compensation acceptance checklist
  • Temperature sensing covers reference + AFE/ADC hot spots (not only board average).
  • Corrections are indexed by zone + gain state + temperature bin.
  • Residual monitors are logged and thresholds are defined (seam + shading + warm-up slope).
  • Calibration tables are versioned and rollback-ready (TableID/BinID/ZoneID/GainState).
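A minimal sketch of bin-indexed correction with a last-known-good fallback. The bin edges, table keys, and two-term correction are illustrative; a real system would also carry TableID/version metadata per the checklist:

```python
def temp_bin(t_c, edges=(15.0, 25.0, 35.0)):
    """Map a temperature reading to a BinID (edges are assumptions)."""
    return sum(t_c >= e for e in edges)

class CalTables:
    """Offset/gain corrections indexed by (zone, gain_state, bin),
    with a neutral fallback when no table exists for that key."""
    def __init__(self):
        self.tables = {}  # (zone, gain, bin) -> {"offset":..., "gain":...}
        self.fallback = {"offset": 0.0, "gain": 1.0}
    def set(self, zone, gain_state, bin_id, offset, gain):
        self.tables[(zone, gain_state, bin_id)] = {
            "offset": offset, "gain": gain}
    def correct(self, raw, zone, gain_state, t_c):
        tab = self.tables.get(
            (zone, gain_state, temp_bin(t_c)), self.fallback)
        return (raw - tab["offset"]) / tab["gain"]
```

The key point is the index: one global curve cannot express the (zone, gain state, bin) granularity, whereas a keyed table makes per-bin validation and rollback mechanical.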
F8 — Temperature compensation and calibration closed loop. Closed-loop diagram: temperature sampling at multiple points (reference, AFE/ADC, board) feeds a bin indexer (BinID = f(T) plus ZoneID/GainState), selects calibration tables, applies corrections, monitors residuals (overlap seam Δ, shading residual, warm-up slope), and locks versions with an update gate and rollback. Takeaway: effective compensation is a versioned closed loop, indexed by bin + zone + gain state, with residual gates.

H2-9 · Calibration strategy: dark/flat/defect order and “over-calibration” risk

The safest calibration is not the most aggressive one. A stable sequence builds maps that represent repeatable behavior, then uses residual checks to prevent random noise, warm-up drift, or transient events from being baked into correction tables.

1) Map purpose (and what it must NOT absorb)
  • Dark map (offset): removes additive fixed pattern. It must not capture warm-up slope or short-lived drift.
  • Flat map (gain): removes multiplicative non-uniformity after offset is removed. It must not include offset residue.
  • Defect map: flags unstable pixels/rows/columns for replacement. It must not confuse random spikes with permanent defects.
  • Linearity/LUT: reduces structured nonlinearity. It must not “fit” noise into a curve.
2) Order that prevents noise from turning into stripes
  1. Acquire dark set → compute dark map: remove the additive layer first so gain is computed on the right baseline.
  2. Acquire flat set → compute flat map: compute multiplicative correction on offset-corrected frames.
  3. Build defect map: identify stable defects using statistics after dark/flat corrections reduce confusion.
  4. Apply linearity model (where defined): keep linearization disciplined and validate by residuals, not by perfect fitting.
3) Typical over-calibration traps
  • Too few frames: single-frame or small-sample maps bake random noise into fixed correction.
  • Warm-up not finished: early drift becomes a “map feature,” creating slow shading later.
  • No outlier handling: transient spikes become defects or gain distortions and show up as banding.
  • Mixed conditions: reusing one map across different gain states or temperature bins creates seams and discontinuities.
4) Residual validation (proof that maps did not amplify noise)
  • Dark residual: check for row/column structure that indicates offset instability or drift baked into the map.
  • Flat residual: confirm low-frequency residual decreases without creating new stripes or periodic patterns.
  • Seam residual (overlap): verify adjacent gain states agree in overlap regions to prevent visible steps.
  • Temperature consistency: confirm residuals stay bounded within each temperature bin; large bin-to-bin jumps require re-binning or model fixes.
5) Freeze version and define service update rules
  • Freeze: tag tables with TableID, temperature bin, GainState, acquisition conditions, and timestamp.
  • Update gate: allow updates only when residual thresholds are exceeded consistently or after service events.
  • Rollback: keep last-known-good tables for immediate rollback if new tables increase structured residuals.
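Steps 1–2 of the sequence can be sketched with outlier-resistant statistics: per-pixel medians for the dark map, then a gain map computed only on offset-corrected flats. Frame shapes and the mean-normalization are simplified assumptions:

```python
from statistics import median

def build_maps(dark_frames, flat_frames):
    """Dark map first (per-pixel median over many frames resists
    outliers), then a gain map on offset-corrected flats, normalized
    to the mean so it is purely multiplicative."""
    def per_pixel_median(frames):
        n_rows, n_cols = len(frames[0]), len(frames[0][0])
        return [[median(f[r][c] for f in frames) for c in range(n_cols)]
                for r in range(n_rows)]
    dark_map = per_pixel_median(dark_frames)
    flat_med = per_pixel_median(flat_frames)
    net = [[flat_med[r][c] - dark_map[r][c]
            for c in range(len(dark_map[0]))] for r in range(len(dark_map))]
    mean_net = sum(sum(row) for row in net) / (len(net) * len(net[0]))
    gain_map = [[v / mean_net for v in row] for row in net]
    return dark_map, gain_map
```

Using a median rather than a mean is one concrete defense against the “transient spike becomes a map feature” trap listed above; too few frames defeats either statistic.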
F9 — Calibration flow state machine: acquire, compute, validate, freeze, update rules. State machine: acquire dark/flat sets, compute maps (dark → flat → defect), validate residuals (dark/flat/seam) at a PASS/FAIL gate, freeze the version (TableID/Bin/Gain), and apply service update rules with rollback to last-known-good on failure. Takeaway: residual gates prevent over-calibration from turning noise or warm-up drift into fixed stripes.

H2-10 · Saturation and lag: measure, limit, and recover

Lag is a memory effect: after a bright or saturated condition, the baseline can return slowly. The only reliable way to control it is to measure a step response, quantify frames-to-recover, and apply recovery gates so the tail does not enter image data as a structured artifact.

1) Typical lag sources in readout chains
  • Integrator node memory: saturation drives nodes into regions where recovery is slow or nonlinear.
  • Charge injection / trapping: switch edges leave residual charge that decays over multiple frames.
  • Dielectric absorption: capacitive elements can release stored charge slowly, creating a long tail.
2) Verification method: step → frames-to-recover
  1. Apply a controlled step: dark → bright (saturate) → return to dark.
  2. Record offset residual per frame: measure how far baseline stays from the dark target.
  3. Compute frames-to-recover: number of frames required to return under a defined residual threshold.
  4. Repeat across conditions: temperature bins and gain states reveal worst-case recovery behavior.
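Step 3 reduces to a small helper: the first frame index after which the residual trace falls and stays inside the threshold (returning None when recovery never completes within the capture, so the condition is flagged rather than silently passed):

```python
def frames_to_recover(residuals, threshold):
    """First frame index from which the post-saturation residual
    trace stays within the threshold; None if it never recovers."""
    for i in range(len(residuals)):
        if all(abs(x) <= threshold for x in residuals[i:]):
            return i
    return None
```

The “stays within” condition matters: a trace that dips below the threshold and bounces back out (a memory tail) must not count as recovered.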
3) Define gates: a tail is acceptable only if it is bounded
  • Residual threshold: set a limit tied to the dark noise envelope so “invisible” tails remain below it.
  • Time/frames limit: define the maximum allowed frames-to-recover under worst-case conditions.
  • Fail action: if gates fail, enforce recovery policy (blanking/flush) and block risky switching during recovery.
4) Recovery actions that prevent ghost artifacts
  • Blanking / discard window: drop a defined number of frames/rows after saturation so tails do not enter images.
  • Stability hold: freeze gain switching and critical timing changes until recovery gates are satisfied.
  • Reset discipline: enforce a controlled reset/settle routine to minimize injection-driven residue.
Lag control acceptance checklist
  • frames-to-recover is measured and logged for each gain state and temperature bin.
  • pass/fail gates are defined (residual threshold + frame limit).
  • recovery policy (blanking + hold) is applied automatically on gate failure.
F10 — Recovery curve after saturation: frames-to-recover with pass/fail threshold. Plot-style diagram of offset residual vs frame index after a saturation event, with a residual threshold line and the measured frames-to-recover highlighted for PASS/FAIL gating; conditions (TempBin + GainState) are recorded to capture the worst case. Takeaway: quantify lag with a step test and enforce recovery gates so tails cannot become structured artifacts.

H2-11 · Verification checklist: catch “rework issues” before release

Pre-release acceptance is not “images look fine.” It is a gated matrix that quantifies noise, low-frequency behavior, stripes, temperature dependence, gain transitions, saturation recovery, and long-run drift—then freezes a versioned calibration set only after residual metrics pass under worst-case conditions.

How to use the test matrix
  • Rows = test items that commonly trigger late-stage rework.
  • Columns = conditions (temperature bins, gain states, stimulus levels, time) that reveal worst-case behavior.
  • Cells = output metrics that are computed, logged, and gated (PASS/FAIL) for release readiness.
Release checklist (what to measure and what to gate)
1) Noise floor
  • Condition: dark frames, representative integration time, multi-frame statistics.
  • Metrics: RMS noise, row/column projection, spatial correlation (structure vs random).
  • Gate: noise must remain random-dominant; structured row/column components must stay below defined limits.
2) Low-frequency (LF) noise
  • Condition: stable stimulus, long enough capture to expose 1/f and drift components.
  • Metrics: LF band power ratio, trend slope, residual vs temperature/time alignment.
  • Gate: LF ratio and drift slope must not grow into shading-class residuals under any temperature bin.
3) Banding (stripe risk)
  • Condition: flat-field, typical and stress readout modes (timing variants if applicable).
  • Metrics: row/column FFT peaks, band amplitude, lock-in stability (periodic + stable = high risk).
  • Gate: no new periodic components may appear after calibration; stable band peaks require root-cause closure.
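A minimal sketch of the FFT-peak metric, assuming a plain DFT over a row/column projection (the function name `band_peaks` and the 4x-median peak factor are illustrative): it flags frequency bins that stand out from the broadband floor, which is the "periodic + stable = high risk" signature called out above.

```python
import cmath

def band_peaks(profile, peak_factor=4.0):
    """Flag periodic banding in a row/column projection.

    Computes DFT magnitudes of the mean-removed profile and returns the
    frequency bins whose magnitude exceeds peak_factor x the median bin
    magnitude (the broadband floor estimate).
    """
    n = len(profile)
    mean = sum(profile) / n
    x = [v - mean for v in profile]
    mags = []
    for k in range(1, n // 2):  # skip DC and the redundant upper half
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    median = sorted(mags)[len(mags) // 2]
    return [k + 1 for k, mag in enumerate(mags) if mag > peak_factor * max(median, 1e-12)]
```

The release gate then compares the peak list before and after calibration: any new bin appearing post-calibration fails, and a bin that persists across timing variants requires root-cause closure.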
4) Temperature sweep (bins + warm-up)
  • Condition: cold/nominal/hot bins, warm-up phase vs steady-state phase.
  • Metrics: gain/offset estimators vs temperature, residual vs temperature, bin-to-bin discontinuity.
  • Gate: residual must remain bounded per bin; bin transitions must not create step-like seams.
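The bin-to-bin discontinuity gate can be as simple as the sketch below; `bin_seam_check` and its inputs (one correction-offset estimate per temperature bin, ordered cold to hot) are illustrative. The point is that the gate looks at adjacent-bin jumps, not at the absolute offset values, because a jump is what produces a step-like seam.

```python
def bin_seam_check(bin_offsets, max_step):
    """Check bin-to-bin continuity of per-bin correction offsets.

    bin_offsets: dict {bin_name: offset estimate}, insertion-ordered
    cold -> hot. Returns (worst_step, passed): the largest adjacent-bin
    jump and whether it stays below max_step.
    """
    names = list(bin_offsets)
    steps = [abs(bin_offsets[names[i + 1]] - bin_offsets[names[i]])
             for i in range(len(names) - 1)]
    worst = max(steps)
    return worst, worst <= max_step
```

The same check applies per gain state, since different gain states can drift differently and a seam may appear in only one of them.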
5) Gain transition (overlap continuity)
  • Condition: sweep stimulus through the overlap region; force range switching events.
  • Metrics: overlap seam Δ, settling tail after switch, switch-count correlation with artifacts.
  • Gate: overlap must remain continuous; any measurable step risk blocks release.
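The overlap seam Δ can be computed as sketched here, assuming the same stimuli are captured in both adjacent ranges and that dividing by each range's gain reconstructs input-referred values (`overlap_seam_delta` is an illustrative name): a continuous overlap means both ranges agree on the reconstructed input, and the worst disagreement is the seam metric.

```python
def overlap_seam_delta(low_codes, high_codes, low_gain, high_gain):
    """Worst-case seam between two gain ranges over the overlap region.

    low_codes/high_codes: ADC codes for the same stimuli captured in
    the lower- and higher-gain range; dividing by gain reconstructs
    input-referred values. Returns the maximum disagreement.
    """
    deltas = [abs(lc / low_gain - hc / high_gain)
              for lc, hc in zip(low_codes, high_codes)]
    return max(deltas)
```

Because the gate says any measurable step risk blocks release, the seam Δ must be evaluated per temperature bin and compared against the noise floor, not against an absolute tolerance alone.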
6) Saturation recovery (lag)
  • Condition: step test (dark → saturate → dark), across temperature bins and gain states.
  • Metrics: frames-to-recover, tail amplitude, tail shape stability (memory signature).
  • Gate: recovery must meet frames-to-recover limits; failing conditions must trigger blanking/flush policy.
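The frames-to-recover metric is straightforward to gate once the residual trace is available; a minimal sketch (function name and the "last frame still above threshold" definition are illustrative) follows. Scanning from the end guarantees the residual stays below threshold for good, so a tail that dips under and comes back is not counted as recovered.

```python
def frames_to_recover(residuals, threshold, limit):
    """Gate saturation recovery from a dark -> saturate -> dark step test.

    residuals: per-frame offset residual after the saturation event.
    Returns (frames, passed): number of frames until the residual stays
    below threshold permanently, compared against the release limit.
    """
    frames = 0
    for i in range(len(residuals) - 1, -1, -1):
        if abs(residuals[i]) >= threshold:
            frames = i + 1  # last frame still at/above threshold
            break
    return frames, frames <= limit
```

On a FAIL, the text above applies: the failing condition (TempBin + GainState) triggers the blanking/flush policy rather than a calibration tweak.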
7) Long-time drift (hours-scale stability)
  • Condition: multi-hour run (or overnight), with temperature and key rails logged.
  • Metrics: baseline drift, LF growth, structured residual emergence.
  • Gate: structured residual growth is not acceptable; any occurrence requires closure with monitor-point evidence and table rollback.
Release gates and traceability
  • Freeze a calibration set only after PASS: TableID, TempBin set, GainState set, acquisition conditions, timestamp.
  • Define fail actions: rollback to last-known-good tables, block risky switching modes, enforce recovery blanking.
  • Re-test rules: after any table update or service event, re-run the matrix items that touch the modified map or mode.
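One way to make the freeze record auditable is sketched below; the record fields (table_id, temp_bins, gain_states, firmware_id, timestamp) mirror the log items named above, but the class name `CalFreezeRecord` and the hash-digest scheme are illustrative. A stable digest over the frozen conditions lets rollback and re-test logic compare "current" against "last-known-good" without diffing whole tables.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class CalFreezeRecord:
    """Traceability record frozen together with a passing calibration set."""
    table_id: str
    temp_bins: tuple
    gain_states: tuple
    firmware_id: str
    timestamp: float

    def digest(self):
        # Deterministic hash over the record for audit/rollback comparison
        payload = json.dumps(self.__dict__, sort_keys=True, default=list)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]
```

Because the dataclass is frozen, a record cannot be mutated after PASS; any change to the conditions produces a new record with a new digest, which is exactly the property the re-test rules rely on.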
Example parts to support low-noise verification (not exhaustive)

These part numbers are practical reference points for building stable rails, references, switching, sensing, and logging, so that verification results stay reproducible across temperature bins and gain states.

References, buffers, and low-noise rails
  • Voltage reference: TI REF5050, ADI ADR4550, ADI ADR445
  • Reference class option: ADI LT6658 (family)
  • Low-noise LDO: ADI LT3042, ADI ADM7150, TI TPS7A4700
Precision / low-drift amplifiers
  • Zero-drift (LF stability): ADI ADA4522-2, ADI LTC2057
  • Low-noise precision: TI OPA211, TI OPA140
  • Ultra-low input bias option: ADI ADA4530-1
Low-leakage / low-injection switching
  • Analog switch / MUX: ADI ADG1209, ADI ADG1219
  • Alternative families: TI TMUX6111, TI TMUX6136
ADC examples for comparison in verification
  • ΣΔ: ADI AD7768-1, ADI AD7177-2, TI ADS127L01
  • SAR: ADI AD4003, ADI AD4630-24, ADI LTC2387-20
Temperature sensing and table logging
  • Digital temperature sensors: TI TMP117, ADI ADT7420, Maxim MAX31875
  • Multi-sensor measurement: ADI LTC2983
  • Table/version storage examples: Fujitsu MB85RC256V (FRAM), Microchip 24LC256 (EEPROM)
F11 — Verification test matrix: test item × conditions × output metrics, used for release gating.

Test item           | TempBin        | GainState       | Stimulus        | Duration | Output metrics
Noise floor         | Cold/Norm/Hot  | All ranges      | Dark            | Short    | RMS, row/col projection
LF noise            | Bins + warm-up | Key ranges      | Stable          | Long     | LF%, drift slope
Banding             | All bins       | Stress modes    | Flat            | Short    | FFT peaks, band amplitude
Temp sweep          | Cold↔Hot       | All ranges      | Dark/Flat       | Long     | Gain/offset vs T, Δbin
Gain transition     | All bins       | Adjacent ranges | Overlap sweep   | Short    | Seam Δ, settle tail
Saturation recovery | Bins           | All ranges      | Step (dark↔sat) | Short    | Frames-to-recover, tail
Long-time drift     | Bins           | Key ranges      | Dark/Flat       | Hours    | Residual, drift slope

PASS → freeze; FAIL → rollback / re-test. Log: TableID · TempBin · GainState · FirmwareID · Conditions. The matrix prevents late rework by enforcing worst-case conditions, computed metrics, and versioned gates.

H2-12 · Recommended internal links (links only)

These pages provide the deeper background for topics that are intentionally not expanded here.



H2-13 · FAQs (quick decisions + troubleshooting)

These FAQs focus on integrator stability, CDS/PGA timing, ADC tradeoffs, and temperature/calibration pitfalls. Each answer provides a practical rule, a verification action, and a common trap to avoid.

Quick index
Integrator node (Cf / reset / leakage / injection)
1) How should Cf be chosen for low-dose stability, not just “resolution”?
Choose Cf to meet low-dose stability first, then confirm headroom to avoid frequent saturation. Verify by measuring dark RMS noise, LF drift ratio, and droop rate at the target integration time across temperature bins. A common trap is shrinking Cf to raise gain, then discovering banding and drift dominate low-dose images.
2) Why can the reset switch create banding, and how is it confirmed?
Reset edges can inject charge into the integrator node, and a fixed readout cadence makes that error appear as stable stripes. Confirm by shifting the reset timing or edge rate and checking whether the band pattern moves or changes amplitude in a locked way. A common trap is averaging more frames, which hides random noise but keeps locked banding.
3) How is leakage-driven droop measured and gated before release?
Treat droop as a measurable slope at the integrator output during a controlled hold interval. Measure droop rate versus integration time and temperature bin, then set a pass/fail gate using the same residual metric used for calibration validation. A common trap is validating only at room temperature, then seeing drift and shading in cold or hot bins.
4) How can dielectric absorption be separated from saturation lag in practice?
Dielectric absorption behaves like a slow release of stored charge and often shows a multi-time-constant tail after a step. Run step tests at multiple amplitudes and compare tail shape and frames-to-recover; consistent shape across amplitude suggests DA-dominated behavior. A common trap is trying to “calibrate out” memory effects, which can overfit noise and worsen structured artifacts.
CDS / PGA (timing / switching / settling)
5) When does CDS help, and when can it create artifacts instead?
CDS helps when offset and low-frequency noise are dominant and sampling windows are stable and well-settled. Validate by sweeping the settle window and confirming that residuals become less structured without creating new periodic components. A common trap is enabling CDS as a universal fix, then discovering timing sensitivity turns injection and settling errors into visible stripes.
6) How can PGA gain switching avoid seams and “range-step” banding?
Gain switching must be designed around overlap continuity, not only per-range noise targets. Force transitions across the overlap region and measure seam delta and post-switch settling tail for each temperature bin and gain state. A common trap is validating only one stimulus point, then seeing discontinuities and stripes once real scenes cross the switching boundary.
7) Why can insufficient settling look like fixed banding rather than random error?
A fixed sampling cadence turns a repeatable settling shortfall into a repeatable row or column error, so the artifact becomes periodic and stable. Confirm by shifting sampling phase or delay and observing whether the band frequency or position moves predictably. A common trap is blaming “noise” and increasing averaging, which cannot remove cadence-locked settling artifacts.
ADC (ΣΔ vs SAR / reference / driver)
8) What is the shortest decision path for ΣΔ versus SAR in this scenario?
Choose ΣΔ when low-frequency noise and linearity dominate and latency is acceptable, and choose SAR when deterministic sampling and higher per-channel throughput are required. Validate by mapping requirements to per-channel rate, allowed latency, LF residual gate, and driver/reference settling burden. A common trap is selecting by resolution alone while ignoring filter ripple, latency, or settling constraints.
9) How can Vref drift masquerade as “front-end drift” or shading?
Reference drift changes gain and sometimes offset consistently, so residual shading can appear even when the integrator and CDS look stable. Confirm by logging reference-related monitor points and correlating residual changes with temperature sweeps and load conditions. A common trap is tuning calibration tables to hide a moving reference, which increases over-calibration risk and reduces cross-bin robustness.
10) How should SAR driver and sample-and-hold settling be verified?
Verify settling by forcing edge cases: step the input, vary the sampling rate, and check for cadence-locked error that grows with faster sampling or higher source impedance. Add tests across gain states and temperature bins because driver margin can shrink in corners. A common trap is validating only at a slow mode, then discovering distortion and noise growth when throughput is increased.
Temperature drift / Calibration (granularity / over-calibration)
11) Why must temperature compensation match granularity instead of using one global factor?
Different zones and gain states can drift differently, so a single global correction often creates seams and residual structure. Validate by binning temperature near critical components and checking bin-to-bin continuity and residual bounds for each gain state. A common trap is applying one coefficient to everything, which looks fine in one bin but fails in corners with visible shading and steps.
12) How can over-calibration be detected before it turns noise into stripes?
Over-calibration is present when residuals become more structured after correction rather than more random. Confirm by comparing row/column projections and FFT peaks before and after applying maps; any new or amplified periodic components should fail the gate. A common trap is adding more calibration complexity to chase residuals, which can bake transient noise and warm-up drift into fixed correction tables.