
SWIR/InGaAs Camera Front-End Design (ROIC, TIA/ADC, TEC)


This page stays at the detector-to-ADC boundary: InGaAs FPA, ROIC readout (TIA/Integrator/CDS), bias/reference integrity, and TEC cooling with temperature control. The goal is measurable stability, not subjective “image quality.”

H2-1. What SWIR/InGaAs Front-End Must Guarantee

Why this chapter exists

A SWIR/InGaAs front-end is accepted by engineering evidence: repeatable noise floor, predictable drift, stable temperature control, and a clean digital boundary that downstream blocks can trust. “Looks good” is not a requirement; measurable KPIs are.

Boundary definition: InGaAs FPA → ROIC (TIA/Integrator/CDS) → ADC → digital codes. Interfaces, ISP tuning, compression, and protocol details are out of scope.

Acceptance KPIs (what must be measured)

Use these KPIs as the page-wide checklist. Each KPI must map to a test condition, an observable signal, and a pass/fail rule.

  • Read noise (e⁻ rms / µV rms)
  • Dark current vs temperature (baseline rise)
  • Dynamic range (noise floor ↔ saturation)
  • Linearity (gain switching, residual error)
  • Bandwidth / frame boundary (timing limits)
  • Drift (offset/gain drift, short/long)

A practical rule: if a KPI cannot be tied to one waveform or one statistics plot, it is not ready for acceptance.

Minimum evidence per KPI (condition → measurement → discriminator)

Read noise
Condition: dark frame, fixed integration time, stable temperature
Measurement: code σ (per pixel / region), convert to e⁻ or µV
Discriminator: σ tracks ADC/reference ripple → power issue; σ rises with T → dark/shot-dominated
Drift
Condition: fixed T, long capture (minutes→hours)
Measurement: mean/baseline trend vs time
Discriminator: drift correlates with bias/reference wander → rail problem; correlates with T gradient → thermal loop/placement
Linearity
Condition: stepped illumination, multiple gain states (if used)
Measurement: code vs light fit residuals / INL proxy
Discriminator: kink at gain switch → ROIC/ADC transition; roll-off near high codes → full-well/saturation path
Dark current vs T
Condition: multi-temperature points (TEC on/off / setpoints)
Measurement: dark baseline vs T, DSNU trend
Discriminator: baseline follows TEC PWM artifacts → coupling; baseline follows T slowly → true dark current + drift

The page later expands on “how to fix,” but acceptance starts with “how to prove.”
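The read-noise row above can be turned into a minimal measurement script. A hedged sketch, assuming an (N, H, W) stack of dark codes and a conversion gain in e⁻/DN obtained separately (e.g. from a photon-transfer measurement); the synthetic frames stand in for captured data.

```python
# Hedged sketch (not this page's tooling): read noise from a dark-frame stack.
import numpy as np

def read_noise_from_dark_stack(frames: np.ndarray, k_e_per_dn: float):
    """frames: (N, H, W) dark codes at fixed Tint and fixed temperature.
    Returns (sigma_dn, sigma_e): temporal read noise in DN and electrons."""
    # Temporal sigma per pixel excludes fixed-pattern (DSNU) contributions.
    per_pixel_sigma = frames.std(axis=0, ddof=1)
    sigma_dn = float(np.median(per_pixel_sigma))  # median is robust to hot pixels
    return sigma_dn, sigma_dn * k_e_per_dn

# Synthetic stand-in for captured data: 100 frames, true temporal sigma = 2 DN.
rng = np.random.default_rng(0)
frames = 1000 + rng.normal(0.0, 2.0, size=(100, 32, 32))
sigma_dn, sigma_e = read_noise_from_dark_stack(frames, k_e_per_dn=3.5)  # k assumed
```

Using the per-pixel temporal σ (rather than one frame's spatial σ) is what separates read noise from DSNU in the acceptance evidence.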

Fastest two measurements (first response debug)

  • TP1: Bias/Reference integrity — measure ripple, spikes, and slow wander on the critical analog bias/ref rail(s).
  • TP2: Dark baseline stability — capture dark codes and track mean + σ over time and temperature setpoints.

Quick discriminator logic: rail ripple ↔ banding suggests power/return coupling; baseline step ↔ TEC PWM suggests cooler-drive injection; baseline slope ↔ temperature suggests dark current + thermal gradient.

Figure F1 — Front-end boundary & evidence hooks

Figure F1. System boundary for SWIR/InGaAs front-end acceptance: detector → ROIC readout → ADC → digital codes, with bias/reference, sampling clocks, and TEC temperature control as the primary evidence hooks.

H2-2. InGaAs Detector Realities That Drive the Analog Design

Engineering translation: physics → constraints → symptoms

InGaAs SWIR arrays are often limited by dark current, leakage sensitivity, and temperature-driven baseline motion. These are not “nice-to-have” considerations; they directly shape ROIC topology, bias strategy, and the need for stable cooling.

Focus of this chapter: only detector behaviors that directly force front-end electrical design decisions. It does not discuss ISP-side denoise or system interfaces.

Pixel equivalent model (what the ROIC actually sees)

The front-end input node is a sensitive summing junction. Small leakage currents or surface contamination can look like signal, and pixel capacitance sets the kTC floor and integration behavior.

  • Iph: photo-generated current (useful signal)
  • Idark: dark current (temperature-sensitive baseline lift)
  • Cpd: pixel/input capacitance (integration slope and kTC noise)
  • Rsh: shunt/leakage path (humidity/contamination/ESD device leakage can reduce effective Rsh)
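The four elements above can be dropped into a toy node model to see why Rsh matters. A minimal sketch with illustrative component values (assumptions, not from this page); it demonstrates only the integration slope and leakage compression, not a real ROIC input stage.

```python
# Toy model of the pixel equivalent circuit; component values are illustrative.
def integrate_node(iph_a, idark_a, cpd_f, rsh_ohm, t_int_s, steps=10_000):
    """Euler integration of the sensitive-node voltage over one Tint."""
    v, dt = 0.0, t_int_s / steps
    for _ in range(steps):
        # Photo + dark current charge Cpd; Rsh bleeds the node back down.
        v += (iph_a + idark_a - v / rsh_ohm) / cpd_f * dt
    return v

common = dict(iph_a=1e-12, idark_a=0.2e-12, cpd_f=50e-15, t_int_s=1e-3)
v_clean = integrate_node(rsh_ohm=1e15, **common)   # near-ideal shunt
v_leaky = integrate_node(rsh_ohm=1e9, **common)    # contaminated/humid surface
# Lower Rsh visibly compresses the integrated signal: leakage mimics lost light.
```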

Temperature sensitivity: why “baseline” becomes the main enemy

As temperature rises, Idark typically increases rapidly (roughly exponentially, so a linear plot shows a steep knee). Two practical consequences dominate front-end work:

  • Noise floor lifts because dark-related shot components increase with baseline current.
  • Non-uniformity worsens when temperature gradients exist (local baseline differences across the array).

Practical acceptance framing: a stable thermal setpoint is not only about “cooler performance” — it is about keeping baseline + noise statistics inside a repeatable envelope.
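One way to make the baseline problem concrete is the common rule of thumb that InGaAs dark current roughly doubles every ~7 °C near room temperature. The doubling interval and reference current below are assumptions for illustration, not measured values:

```python
# Rule-of-thumb sketch; doubling interval and reference point are assumptions.
def dark_current_pa(i_ref_pa, t_ref_c, t_c, double_every_c=7.0):
    """Extrapolate dark current (pA) from one measured reference point."""
    return i_ref_pa * 2.0 ** ((t_c - t_ref_c) / double_every_c)

i_at_25c = dark_current_pa(10.0, 25.0, 25.0)   # assumed 10 pA reference
i_at_m5c = dark_current_pa(10.0, 25.0, -5.0)   # TEC setpoint 30 C colder
# A 30 C TEC delta cuts the modeled dark current by roughly 19x, which is
# why the thermal setpoint envelope directly bounds the baseline.
```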

Leakage sensitivity: invisible paths that behave like “ghost signal”

The sensitive node can be corrupted by leakage paths that change with humidity, surface films, bias voltage, or temperature. Leakage often presents as slow drift, corner anomalies, or “sparkle” pixels that do not correlate with illumination.

Common leakage contributors:
  • PCB surface films & residues
  • Connector insulation leakage
  • ESD/protection device leakage
  • Bias resistor networks (high-impedance nodes)
  • Package/window moisture paths

Fast discriminators:
  • Anomaly tracks humidity/handling → suspect leakage/contamination
  • Anomaly steps with TEC PWM → suspect cooler-drive injection
  • Anomaly tracks absolute temperature slowly → suspect true Idark + gradient

Design actions forced by detector realities (bridge to next chapters)

  • Dark current & temperature → requires TEC loop design, sensor placement discipline, and stability proof.
  • Cpd & kTC → requires appropriate ROIC readout (CTIA/TIA/integrator) and CDS-compatible sampling.
  • Leakage & Rsh → requires guarding, cleanliness, humidity strategy, and bias network discipline.

The next chapters will formalize a noise budget and show how each circuit choice reduces a specific term on that budget.

Figure F2 — InGaAs pixel equivalent model (sensitive node)

Figure F2. InGaAs pixel equivalent seen by the ROIC: photo current (Iph) plus dark current (Idark), pixel capacitance (Cpd), and shunt/leakage (Rsh) at a sensitive node. Temperature increases dark current and noise.

H2-3. Noise Budget: From Shot/Dark to kTC and Read Noise

Goal: a “noise ledger” that justifies every design choice

A SWIR/InGaAs front-end must treat noise as a ledger: each term has a source, an injection point, a suppression knob, and a measurement method. Without this ledger, improvements are not repeatable and root-cause isolation becomes guesswork.

  • Shot noise (signal-related)
  • Dark-related noise (Idark + temperature)
  • kTC / reset (sampling + node capacitance)
  • 1/f (low-frequency readout)
  • ADC quantization (ENOB + full-scale mapping)

Engineering grouping: which knob suppresses which term

Temperature / dark-driven
Dominant when T rises or baseline current increases.
Primary knobs: TEC stability, thermal gradient control, integration strategy.
Typical symptom: baseline and σ change with temperature setpoint.
Readout / reset-driven
Dominant at low light and low temperature when Idark is controlled.
Primary knobs: CDS timing, reset strategy, input node handling, gain mapping.
Typical symptom: artifacts shift with sample phase, not with illumination.
Sampling / reference-driven
Dominant when clock/reference/power coupling injects deterministic components.
Primary knobs: reference integrity, clock cleanliness, layout/return control.
Typical symptom: banding at switching/clock-related frequencies.
Quantization-driven
Dominant when effective resolution is insufficient for the noise floor.
Primary knobs: ADC selection, full-scale usage, gain switching threshold.
Typical symptom: step-like granularity at low signal.

Noise ledger (source → suppression → how to measure → fingerprint)

Use this as a fixed “acceptance table header.” Each row must be tied to at least one observable waveform or statistics plot.

Each row reads: noise term — main driver — suppression knob — minimum measurement — fingerprint (fast discriminator).

  • Shot — driver: signal current / photons. Knob: increase illumination (if allowed), optimize integration usage. Measurement: σ vs mean under stepped light. Fingerprint: σ grows with mean; weak temperature dependency.
  • Dark-related — driver: Idark, temperature, gradient. Knob: TEC stability, reduce gradients, maintain setpoint envelope. Measurement: dark baseline + σ vs temperature points. Fingerprint: baseline and σ track temperature setpoint.
  • kTC / reset — driver: node capacitance, reset feedthrough. Knob: CDS, reset timing, sampling-capacitor strategy. Measurement: CDS on/off comparison, sample-phase sweep. Fingerprint: artifacts move with sampling phase, not with light.
  • 1/f — driver: low-frequency readout components. Knob: CDS, reduce low-frequency sensitivity, stabilize bias. Measurement: Allan-type drift proxy (mean vs time), low-frequency PSD proxy. Fingerprint: time averaging helps only up to a corner, then drift dominates.
  • Quantization — driver: ENOB, full-scale mapping. Knob: use more of full-scale, adjust gain-switch threshold. Measurement: histogram at low signal, code-step visibility. Fingerprint: granularity (stairs) persists even when rails are clean.

Practical separation rules: temperature correlation → dark-dominated; phase correlation → reset/CDS/clock coupling; banding at switch freq → reference/power injection.
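A minimal sketch of such a ledger as code, assuming independent terms expressed in electrons rms so they add in quadrature; the numbers are placeholders:

```python
# Minimal "noise ledger": independent terms in electrons rms, quadrature sum.
import math

def noise_ledger(terms_e_rms):
    """Return (total_e_rms, dominant_term_name) for a dict of noise terms."""
    total = math.sqrt(sum(v * v for v in terms_e_rms.values()))
    dominant = max(terms_e_rms, key=terms_e_rms.get)
    return total, dominant

total_e, dominant = noise_ledger(
    {"shot": 12.0, "dark": 30.0, "ktc": 8.0, "one_over_f": 5.0, "quant": 4.0}
)
# With these placeholder numbers the ledger says "dark" dominates, so TEC and
# gradient knobs should move the total before CDS or ADC tweaks can.
```

Because terms add in quadrature, shrinking anything but the dominant term barely moves the total; the ledger makes that visible before effort is spent.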

Figure F3 — Noise flow map (sources → injection → suppression hooks → SNR/DR)

Figure F3. Noise flow map: shot/dark/kTC/1/f/quantization enter the ROIC at different points. CDS, integration time, and gain/full-scale mapping are the primary suppression hooks before the ADC determines code-level SNR and dynamic range.

H2-4. ROIC Readout Architectures: CTIA vs TIA vs Integrator

What this chapter decides

Readout topology determines how the sensitive node behaves, how linear the current-to-code transfer is, and how well CDS and reset strategies can suppress kTC/1/f terms from the noise ledger.

Evaluation axes kept at the front-end boundary: noise floor, linearity, bandwidth/frame boundary, and CDS/reset compatibility.

CTIA (capacitive feedback amplifier)

  • Advantage: strong input node control, good linearity for charge integration, CDS-friendly behavior.
  • Cost: requires stable amplifier behavior and well-defined feedback capacitor; dynamic range ties to Cf and swing.
  • Best fit: low current / high sensitivity regimes where node stability and linearity are critical.
  • Common pitfall: reset feedthrough or sampling phase errors appear as repeatable offsets/ghost patterns.

Noise-ledger link: CTIA choices usually target kTC/reset artifacts and linearity control while keeping CDS effective.

TIA (resistive/active transimpedance)

  • Advantage: direct current-to-voltage conversion, bandwidth-friendly for faster readout needs.
  • Cost: input stability depends on loop gain and parasitics; more sensitive to coupling and reference/clock injection.
  • Best fit: scenarios that prioritize bandwidth or require continuous conversion behavior.
  • Common pitfall: oscillation/peaking or coupling shows up as deterministic banding or frequency-linked patterns.

Noise-ledger link: TIA decisions often trade bandwidth against coupling sensitivity; reference/clock integrity becomes mandatory.

Integrator (switched integration / sample-and-hold readout)

  • Advantage: integrates small currents naturally; can improve effective SNR by extending integration time.
  • Cost: highly sensitive to leakage paths and drift; reset and CDS timing must be disciplined.
  • Best fit: ultra-low signal regimes where integration time is the main lever.
  • Common pitfall: slow baseline motion (leakage/drift) dominates after longer integration, masking improvements.

Noise-ledger link: integrator choices amplify the need to control leakage and drift before expecting gains from longer Tint.

Selection rules (text decision tree)

  • If low-light performance is limited by kTC/reset artifacts → prioritize topologies with robust CDS compatibility (often CTIA-style behavior).
  • If requirements push for higher bandwidth/frame margin → consider bandwidth-friendly paths (often TIA-style), but treat coupling control as a first-class constraint.
  • If the dominant lever is longer integration time → integrator-style readout can help, but only after leakage/drift controls are proven.
  • If anomalies shift with sampling phase → suspect reset/CDS timing, not illumination or “random noise.”

Figure F4 — Topology comparison (same template, three columns)

Figure F4. Three readout archetypes drawn with the same template (node → core element → reset + sample cap) to make tradeoffs comparable: noise, linearity, bandwidth/frame margin, and drift sensitivity.

H2-5. Low-Leakage Design: Biasing, Guarding, and “Invisible” Error Paths

Why SWIR front-ends fail “silently”

In SWIR/InGaAs readout, the most damaging errors often come from invisible leakage currents rather than incorrect calculations. When the input node is high-impedance and the signal current is small, leakage can masquerade as dark current, baseline drift, or slow “ghost” motion that looks like sensor behavior but is actually board-level physics.

Engineering goal: treat leakage as a measurable error path. Each mitigation must be verified by a repeatable evidence test (humidity/temperature/bias sensitivity and time-series drift).

Six leakage classes (path → trigger → symptom)

  • Surface residue (flux/cleaning): leakage grows with humidity and can change after handling. Symptom: drift improves after drying or cleaning.
  • Connector/line leakage: motion/pressure changes the baseline. Symptom: touching or re-seating cables shifts offsets.
  • Protection leakage (ESD/TVS): temperature-dependent leakage. Symptom: worse at higher temperature, often slow recovery.
  • Bias networks: high impedance resistors create a leakage amplifier. Symptom: baseline changes strongly with bias voltage.
  • Humidity film: thin water layer forms a resistive path. Symptom: “good in lab, bad in rainy season.”
  • Guard discontinuity: guard exists but is ineffective. Symptom: local region behaves like a leakage antenna.

Fast discriminators (evidence fingerprints)

Humidity correlation
If baseline/σ improves after drying → prioritize PCB surface and connector leakage paths.
Temperature correlation
If errors worsen with temperature and recover slowly → suspect ESD/protection leakage or absorbent materials.
Bias correlation
If a bias change shifts baseline/drift strongly → focus on bias resistor network and hidden surface paths.
Touch / motion correlation
If handling cables changes behavior → treat connector/insulation contamination as primary.

Mitigation blocks: Guard ring drive + Clean/Conformal/Sealing

A guard ring is not “a ring of copper.” It is a driven or referenced shield that reduces the surface potential difference around the sensitive node, lowering the leakage driving force. Any discontinuity or wrong reference makes the guard ineffective.

  • Guard continuity: avoid breaks, vias-to-nowhere, and unguarded “gaps” around the node.
  • Guard reference: keep guard potential close to the sensitive node potential (minimize ΔV on the surface).
  • Clean / conformal / sealing: remove ionic residue, prevent moisture films, and isolate connector leakage points.

Verification must be paired with the same drift test: dark baseline vs time, plus humidity/bias/temperature sensitivity checks.
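To see why guard potential matters, a back-of-envelope sketch: with a surface potential difference ΔV across a contaminated film of resistance R, the leak current is simply ΔV/R. The 1 GΩ film and 1 pA photocurrent below are illustrative assumptions:

```python
# Back-of-envelope: leakage through a surface film vs the signal scale.
def leakage_to_signal_ratio(delta_v, r_surface_ohm, i_signal_a):
    """Leak current dV/R expressed as a multiple of the signal current."""
    return (delta_v / r_surface_ohm) / i_signal_a

# Assumed numbers: 2 V of surface potential difference across a 1 GOhm
# humidity film, compared against a 1 pA-scale photocurrent.
ratio = leakage_to_signal_ratio(delta_v=2.0, r_surface_ohm=1e9, i_signal_a=1e-12)
# ratio ~ 2000x: the invisible path dwarfs the signal, which is why the guard
# should hold the surrounding surface near the node potential (minimize dV).
```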

Figure F5 — Leakage path map (center node → 6 arrows → 2 countermeasures)

Figure F5. Leakage path map centered on the sensitive input node. Six outward arrows represent common “invisible” error paths. Countermeasures are treated as engineering blocks that must be verified with humidity/temperature/bias sensitivity tests.

H2-6. ADC Choices and Sampling Strategy for SWIR Readout

Front-end boundary: ADC is part of the noise ledger

ADC selection is not only a datasheet choice. In SWIR readout, ADC and sampling define how well CDS works, whether quantization becomes visible at low signal, and whether deterministic artifacts appear when clock/reference integrity is insufficient.

This chapter stays at the front-end boundary: ROIC output → S/H → ADC → codes. It does not cover link/protocol, compression, or ISP processing.

ADC families (front-end view)

  • SAR: clear sampling moments and throughput-friendly conversion; requires disciplined reference and sampling integrity.
  • ΣΔ: strong low-frequency resolution behavior; tradeoffs include bandwidth and latency envelopes that must fit the readout schedule.
  • Column-parallel: short analog path and massive parallelism; requires careful column-to-column consistency verification.

Practical rule: if “code granularity” is visible at low signal, quantization/ENOB is too close to the noise floor or full-scale mapping is inefficient.

On-ROIC column ADC vs off-chip ADC (when each becomes rational)

Prefer column ADC
High parallel throughput needs, shortest analog path, tight integration with readout timing.
Verification focus: column uniformity and deterministic coupling checks.
Prefer off-chip ADC
Centralized high-performance conversion, easier swap/upgrade, ROIC output quality is sufficient.
Verification focus: routing/return integrity and S/H behavior.

ENOB vs bandwidth: what “enough bits” means in SWIR

“Enough” effective bits means the quantization term stays below the target noise floor with margin. If low-signal histograms show step-like granularity, or if σ stops improving as expected while the analog chain is stable, quantization or code-domain artifacts may be dominating.

Evidence approach: compare (a) σ vs integration time and (b) histogram smoothness at low signal. Quantization dominance leaves persistent code steps even when temperature and leakage controls are proven.
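The "enough bits" criterion can be sketched numerically, assuming an ideal-quantizer noise of LSB/√12 and the common heuristic (an assumption, not a rule from this page) that analog noise should span at least ~1 LSB so code steps are dithered away:

```python
# Quantization-vs-noise-floor check; the ">= ~1 LSB of analog noise" target
# is a common heuristic assumed here, not a rule stated on this page.
import math

def quantization_check(full_scale_v, enob_bits, analog_noise_vrms):
    """Return (quant_noise_vrms, analog_noise_in_lsbs) for an ideal quantizer."""
    lsb = full_scale_v / 2.0 ** enob_bits
    return lsb / math.sqrt(12.0), analog_noise_vrms / lsb

q_noise_v, noise_lsbs = quantization_check(
    full_scale_v=2.0, enob_bits=12.0, analog_noise_vrms=200e-6  # assumed chain
)
# noise_lsbs < 1 here: low-signal histograms would show the "stairs"
# fingerprint, arguing for more effective bits or better full-scale usage.
```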

Sampling + CDS timing: where artifacts enter

SWIR readout commonly uses paired samples to suppress reset and low-frequency components. Sampling must be treated as part of the analog chain: sample phase and hold behavior can turn into deterministic patterns if clock edges or reference integrity inject energy into the sampling moment.

  • Reset → Integrate: defines charge accumulation window.
  • Sample1: captures baseline reference condition.
  • Sample2 (CDS): captures signal condition; subtraction suppresses kTC and low-frequency components when timing is consistent.
  • Convert: conversion must preserve the paired-sample intent (avoid coupling at conversion edges).
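The paired-sample idea above can be sketched statistically: both samples share the per-cycle reset (kTC) offset and the slow drift, so the subtraction cancels those terms at the cost of √2× the fast read noise. All amplitudes below are synthetic:

```python
# Statistical sketch of CDS: shared terms cancel in the paired-sample difference.
import numpy as np

rng = np.random.default_rng(1)
n_cycles = 5000
ktc = rng.normal(0.0, 50.0, n_cycles)        # reset noise (DN), shared per cycle
drift = np.linspace(0.0, 20.0, n_cycles)     # slow baseline drift, shared
signal = 100.0                               # DN, constant illumination

sample1 = ktc + drift + rng.normal(0.0, 2.0, n_cycles)            # baseline
sample2 = ktc + drift + signal + rng.normal(0.0, 2.0, n_cycles)   # signal
cds = sample2 - sample1                      # shared kTC + drift cancel

raw_sigma = float(sample2.std(ddof=1))   # dominated by kTC (~50 DN)
cds_sigma = float(cds.std(ddof=1))       # ~ sqrt(2) * 2 DN of read noise
```

The cancellation only holds while the two samples sit in the same reset cycle with consistent timing, which is why sample-phase errors reappear as repeatable patterns.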

Figure F6 — Sampling timeline + ADC location options (clock cleanliness highlighted)

Figure F6. Two ADC placement options at the front-end boundary (column ADC vs off-chip ADC) with a simplified CDS timeline. Clock cleanliness is explicitly treated as a first-class condition for avoiding deterministic artifacts.

H2-7. Power & Reference Integrity: Ripple, PSRR, and Bias Stability

What power noise becomes inside a SWIR front-end

In a SWIR/InGaAs chain, power and reference issues rarely appear as “random noise only.” They can translate into false signal (baseline shift), slow drift (offset/gain walk), and deterministic stripes (row/column-correlated artifacts).

Engineering mindset: treat ripple, PSRR limits, and reference drift as coupling paths that can be tested with minimal probes and confirmed by correlation rules (frequency, temperature, and load dependency).

Three coupling channels to watch (front-end boundary)

  • Bias rail → ROIC node (PSRR / return coupling)
  • Vref → ADC codes (global code drift)
  • AGND/DGND (shared impedance / bounce)

  • Bias coupling: ripple on analog bias rails can modulate the sensitive node and show up as baseline wobble or pattern noise.
  • Reference coupling: Vref drift maps directly into code space; even a stable ROIC output can look like “gain change.”
  • Return coupling: shared impedance between analog and digital returns can inject switching currents into sampling moments.

Minimum probing: the first two measurements

Measurement #1
Analog bias rail (probe near the ROIC/bias injection point)
Goal: capture ripple amplitude and dominant frequencies under real operating load.
Measurement #2
ROIC output baseline (dark condition, fixed integration time)
Goal: check whether baseline variation correlates with rail ripple / load state.

Optional tie-breakers when ambiguity remains: ADC Vref node (to confirm code-domain drift) and AGND↔DGND delta near the single-point connection (to expose return coupling).

Correlation criteria (fast discriminators)

  • Ripple frequency ↔ stripe signature: if stripe strength changes when a ripple tone changes, the coupling is power-related.
  • Temperature ↔ bias drift: if baseline drift tracks bias voltage drift across temperature, the bias/reference chain is dominant.
  • Load state ↔ baseline jump: if enabling a load (digital burst, TEC PWM state) causes synchronous baseline steps, suspect return coupling.
Practical habit: always record baseline vs time together with one rail waveform snapshot. A repeatable correlation is stronger evidence than a one-time “looks noisy” observation.
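That "repeatable correlation" habit can be quantified instead of eyeballed, e.g. with a Pearson coefficient between a rail waveform and the dark baseline. The tone frequency, coupling gain, and thresholds below are illustrative assumptions:

```python
# Quantified version of "repeatable correlation beats looks-noisy":
# Pearson coefficient between a rail waveform and the dark baseline.
import numpy as np

def rail_baseline_correlation(rail, baseline):
    return float(np.corrcoef(rail, baseline)[0, 1])

rng = np.random.default_rng(2)
t = np.arange(2000)
ripple_mv = 5.0 * np.sin(2.0 * np.pi * t / 50.0)             # rail ripple tone
coupled_dn = 0.8 * ripple_mv + rng.normal(0.0, 1.0, t.size)  # baseline tracks rail
clean_dn = rng.normal(0.0, 1.0, t.size)                      # baseline does not

r_coupled = rail_baseline_correlation(ripple_mv, coupled_dn)  # near 1: power path
r_clean = rail_baseline_correlation(ripple_mv, clean_dn)      # near 0: look elsewhere
```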

Stabilization priorities (repair order)

Fixes should follow the coupling path hierarchy. Start with the simplest local corrections and validate with the same two-probe evidence: bias rail ripple and ROIC baseline stability.

  • Local decoupling / filtering at the bias injection point (minimize impedance where the sensitive block draws current).
  • Return discipline: keep analog return clean; enforce single-point tie between AGND and DGND where intended.
  • Reference isolation: treat Vref as a precision signal (buffering/RC isolation as appropriate, and verify drift behavior).

Figure F7 — Power & reference coupling paths (bias, Vref, and returns)

Figure F7. Coupling paths that convert ripple and drift into false signal and stripes: bias rail modulation into the ROIC sensitive node, Vref drift into ADC codes, and AGND/DGND return strategy with an intentional single-point tie.

H2-8. Cooler Drive: TEC/Peltier Power Stage Without Polluting the Front-End

Why TEC drive is a top contamination source

TEC/Peltier control is a high-current power stage. Its switching edges and return currents can inject energy into the SWIR front-end as ripple, ground bounce, and near-field coupling. The result is often deterministic artifacts rather than random noise.

Engineering goal: the TEC loop must regulate temperature while keeping its switching energy out of the ROIC node and the ADC reference/sampling moments.

TEC stage options (risk-focused view)

  • PWM H-bridge: supports bidirectional heat pumping; risk centers on large switching loops and return complexity.
  • Synchronous buck (single-direction): simpler current path; risk centers on ripple tones and shared impedance into sensitive returns.

The front-end question is not the controller brand. It is whether switching energy finds a clean exit path without sharing impedance with analog bias and reference nodes.

Switching frequency selection (principles that prevent visible artifacts)

Avoid visible coupling bands
Choose fSW so ripple does not map into row/column patterns through beat effects with sampling and readout schedules.
Optional spread-spectrum
When deterministic tones are problematic, spreading energy can reduce narrowband correlation (verify with stripe correlation tests).
  • Rule: change fSW → check whether the artifact signature moves
  • Rule: change duty/load → check whether baseline steps follow
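The beat-avoidance principle can be sketched as a quick check of how far a candidate fSW sits from the nearest harmonic of the row sampling rate; the 30 krow/s schedule below is a placeholder assumption:

```python
# Quick fSW sanity check: distance to the nearest harmonic of the row rate.
# A near-zero beat means ripple can freeze into stationary row patterns.
def beat_to_row_rate(f_sw_hz, f_row_hz):
    """Offset (Hz) from f_sw to the closest harmonic of the row sampling rate."""
    n = round(f_sw_hz / f_row_hz)
    return abs(f_sw_hz - n * f_row_hz)

f_row = 30_000.0  # rows/s, a placeholder readout schedule
beat_bad = beat_to_row_rate(600_000.0, f_row)  # exactly the 20th harmonic
beat_ok = beat_to_row_rate(613_000.0, f_row)   # 13 kHz off: ripple averages out
```

A large beat offset makes ripple walk across rows frame to frame instead of freezing into a stationary stripe, which is exactly the "artifact signature moves" test above.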

Isolation essentials (layout & returns)

  • Separate return: TEC high-current return must not share impedance with ROIC analog return.
  • Loop area minimization: reduce magnetic coupling from inductor and switching loops into sensitive traces.
  • Switch node distance: keep dv/dt nodes physically away from the ROIC input neighborhood and Vref routing.
Evidence check: when TEC PWM is enabled/disabled, ROIC baseline and stripe strength should not step synchronously.

Filtering & shielding blocks (where they matter)

  • LC filter: place and tune to reduce the ripple component that correlates with artifacts (verify by frequency correlation).
  • Shielding: treat the power inductor/switch loop as a field source; reduce coupling into the ROIC/ADC region.

Any mitigation should be validated by repeating the same minimal probes: (1) TEC rail ripple / PWM signature and (2) ROIC baseline or artifact metric.

Figure F8 — TEC drive & isolation map (risk arrows + countermeasures)

Figure F8. TEC driver and isolation map. Risk coupling arrows highlight how switching energy can pollute ROIC/ADC. Countermeasures focus on returns, filtering, shielding, and (optional) spectrum spreading, verified via artifact correlation tests.

H2-9. Temperature Sensing & Control Loop: PID, Placement, and Stability Proof

What “temperature control pass” means in a SWIR front-end

Temperature control is a closed-loop system: sensor placement, thermal lag, controller behavior, and plant dynamics together decide whether the detector stays stable. A stable setpoint alone is not sufficient; acceptance must be based on measurable loop behavior and repeatable evidence.

  • ΔT ripple (steady fluctuation)
  • Settling time (step response)
  • Overshoot (thermal overrun)

Front-end linkage: temperature ripple and drift typically map into dark current behavior and baseline stability. Verification should correlate temperature evidence with a stable dark baseline statistic where applicable.

Sensor placement: 3 rules that prevent “measuring the wrong temperature”

  • Close to the controlled object: place the sensor on the cold plate / near the detector thermal path, not near switching heat sources.
  • Minimize thermal lag: keep the heat path to the sensor predictable (controlled contact, controlled interface material).
  • Make placement repeatable: define position, pressure, and attachment process so the loop behaves similarly across builds.
A placement that is “electrically convenient” but thermally incorrect can create a loop that appears stable while the detector temperature still drifts.

Thermal lag & hidden instability traps

The thermal plant has inertia and delay. If the sensed temperature lags the actual detector temperature, the controller can over-correct. Saturation (current limit / PWM limit) can also accumulate integral action and later release it as overshoot.

  • Lag-driven oscillation: delayed feedback makes corrective action arrive late, creating repeated over/under correction.
  • Integrator windup: output saturation hides error, integral term grows, then causes overshoot when saturation clears.
  • Placement mismatch: sensor reads a local hot/cold spot instead of the detector thermal state.

How to prove loop stability (evidence template)

Proof #1 — Step response
Apply a setpoint step (up/down). Record overshoot and settling time. Check for damped vs sustained oscillation.
Proof #2 — Steady-state ripple
Hold conditions constant. Measure ΔT ripple and verify it remains bounded with repeatable amplitude.
Proof #3 — Disturbance recovery
Introduce a thermal disturbance (ambient change or load change). Measure recovery time and confirm no secondary oscillation.

Practical discriminator: change sensor placement A↔B or adjust the plant boundary condition and verify whether the stability evidence remains consistent. A robust loop should not be “stable only in one mounting configuration.”
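Proof #1 can be rehearsed offline before touching hardware. A hedged sketch: a PI controller with output clamping and naive anti-windup driving a single-lag thermal plant; all gains and plant constants are illustrative assumptions, and a real TEC plant has more dynamics (delay, ambient coupling) than one pole:

```python
# Offline rehearsal of the step-response proof: PI + clamp + naive anti-windup
# on a first-order thermal plant. All constants are illustrative assumptions.
def cooling_step_response(setpoint_c=-5.0, kp=4.0, ki=0.8, tau_s=5.0,
                          plant_gain=0.5, u_limit=20.0, t_end_s=80.0, dt=0.01):
    """Return (final_temp_c, undershoot_c): end value and overrun past setpoint."""
    temp, integ, coldest = 0.0, 0.0, 0.0
    for _ in range(int(t_end_s / dt)):
        err = setpoint_c - temp
        u = kp * err + ki * integ
        u_sat = max(-u_limit, min(u_limit, u))
        if u == u_sat:               # anti-windup: integrate only while unsaturated
            integ += err * dt
        temp += (plant_gain * u_sat - temp) / tau_s * dt  # first-order plant
        coldest = min(coldest, temp)
    return temp, max(0.0, setpoint_c - coldest)

final_temp, undershoot = cooling_step_response()
# A pass here means: settled at the setpoint with no thermal overrun. Sustained
# oscillation or large undershoot would fail the step-response proof.
```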

Figure F9 — Thermal plant + sensor placement + PID loop (with acceptance bubbles)

Figure F9. Thermal plant + sensor placement + PID loop. Placement A/B changes the effective feedback signal. Stability proof uses step response, steady ripple, and disturbance recovery; acceptance bubbles highlight ΔT ripple and settling time.

H2-10. Anti-Dew, Packaging, and Environmental Edge Cases

Why dew and packaging issues look like “mysterious image failure”

SWIR systems often operate with a cold plate and a window close to the cold region. When the window or internal surfaces fall below the dew point, condensation can form as a thin film, causing rapid contrast loss and scattering. Packaging and contamination can also create slow drift and recurring defects that resemble electronics problems.

Engineering boundary: focus on conditions and countermeasures that directly change condensation risk and contamination-driven drift, without expanding into full reliability standards or external compliance procedures.

Dew risk logic (dew point and temperature margin)

  • Core condition: if window/cold-surface temperature drops below the dew point, condensation risk is active.
  • Margin mindset: evaluate how far the window temperature is from the dew point (“temperature margin”).
  • Evidence capture: log ambient conditions plus window/cold-plate temperature during failure reproduction.
Practical discriminator: if image degradation correlates with humidity changes or cold-start transients, prioritize dew/contamination checks before chasing analog noise.
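The "temperature margin" above can be computed from logged ambient conditions. One common approximation is the Magnus formula for dew point (constants a = 17.62, b = 243.12 °C are one widely used parameter set); function names are illustrative:

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Dew point via the Magnus approximation (a=17.62, b=243.12 degC)."""
    a, b = 17.62, 243.12
    gamma = a * temp_c / (b + temp_c) + math.log(rh_percent / 100.0)
    return b * gamma / (a - gamma)

def dew_margin_c(window_temp_c, ambient_temp_c, rh_percent):
    """Positive margin: window sits above the dew point (safe side)."""
    return window_temp_c - dew_point_c(ambient_temp_c, rh_percent)
```

Logging this margin alongside the failure-reproduction capture makes the humidity correlation in the discriminator above a number rather than an impression.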

Anti-dew countermeasures (what they solve and what they trade)

Window heater
Raises window temperature above dew point; trades power and requires careful thermal interaction with the cold plate.
Seal & enclosure
Limits moisture ingress; trades process consistency and introduces thermal stress considerations.
Dry gas / desiccant
Lowers internal humidity; trades lifetime/maintenance and relies on seal quality.
Conformal / surface protection
Reduces surface film/leakage and contamination effects; trades material selection risks and process control.

Contamination and “invisible” window issues

Not all field failures are liquid droplets. Window haze, molecular contamination, and cold-surface deposition can build up slowly and show up as persistent contrast loss or changing fixed-pattern behavior. These issues often improve temporarily after cleaning, which is a strong diagnostic clue.

  • Clue: slow degradation over days/weeks, sensitive to cleaning or enclosure opening.
  • Action boundary: focus on sealing, clean handling, and internal humidity control mechanisms.

Environmental edge cases: drift and defect recurrence

Thermal cycling and mechanical stress can shift alignment and contact conditions, creating repeatable drift signatures and defect recurrence. Validation should include temperature transitions and dark statistics checks to separate “environmental drift” from pure electronics noise.

Evidence output idea: record a dark baseline statistic before/after controlled thermal transitions and compare with the same temperature setpoint.

Figure F10 — Dew risk and countermeasure blocks (window, cold plate, humidity)

Figure F10. Dew risk concept: when cold surfaces or the window approach the dew point, condensation can form and rapidly degrade image quality. Countermeasures include a window heater, sealing, and internal humidity reduction (desiccant/dry gas).

H2-11. Calibration & NUC at the Front-End Boundary: What Must Be Stored and Proven

Boundary definition: what this page covers (and what it does not)

This chapter defines the calibration / NUC data assets the front-end must support at the hardware/firmware boundary: what must be captured, stored, versioned, and proven in validation. It does not describe ISP math or NUC algorithm internals.

Goal: prevent field failures caused by wrong/missing datasets, temperature drift mismatch, and untraceable updates.

Minimum dataset: what must be stored (deliverables)

Four deliverables: dark offset (dark frame / black offset), gain map (pixel gain non-uniformity), temp-drift table (offset/gain vs T), and bad pixel map + version.
  • Dark offset: stabilizes baseline and prevents “false signal” under dark conditions.
  • Gain map: reduces fixed-pattern brightness differences across pixels/columns at a defined operating mode.
  • Temp-drift table: maintains calibration validity across cold-start and ambient shifts without requiring algorithm detail here.
  • Bad pixel map: provides a controlled defect list with traceable lifecycle (generated/updated/expired).

Optional extension (still boundary-safe): multiple calibration sets keyed by integration time / gain mode / readout mode, each with a dataset ID.

Data must be structured (header + payload), not “loose files”

The most common calibration failure mode is not “bad math,” but using the wrong dataset under the right-looking conditions. The boundary should define a rigid dataset structure.

Header (metadata)
dataset_id, mode keys, temperature range, timestamp, CRC, revision.
Payload (maps/tables)
offset map, gain map, drift table, bad pixel map (and generation conditions).

Mode keys should include only what the front-end truly controls (example: gain mode, integration time, readout mode). Avoid embedding ISP-only concepts here.
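A minimal sketch of the header + payload idea, assuming a hypothetical fixed binary layout (field names, sizes, and the `CAL1` magic are illustrative, not a vendor format); the CRC32 binds the header to its payload so "wrong dataset under right-looking conditions" fails loudly:

```python
import struct
import zlib

# magic, dataset_id, revision, gain_mode, readout_mode, tint_us,
# t_min_c, t_max_c, timestamp, payload CRC32 -- little-endian, 30 bytes
HEADER_FMT = "<4sIHBBHffII"

def pack_header(dataset_id, revision, gain_mode, readout_mode, tint_us,
                t_min_c, t_max_c, timestamp, payload):
    """Build a header whose CRC field covers the map/table payload."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return struct.pack(HEADER_FMT, b"CAL1", dataset_id, revision,
                       gain_mode, readout_mode, tint_us,
                       t_min_c, t_max_c, timestamp, crc)

def verify(header, payload):
    """Reject wrong magic or any payload that fails the stored CRC."""
    magic, *_rest, crc = struct.unpack(HEADER_FMT, header)
    return magic == b"CAL1" and (zlib.crc32(payload) & 0xFFFFFFFF) == crc
```

The mode keys (gain mode, readout mode, integration time) live in the header precisely so runtime selection can match on them before the payload is ever touched.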

Temperature drift table: why the boundary must support it

SWIR/InGaAs behavior is temperature sensitive. Even with a stable setpoint, the system experiences cold-start transients, mounting variation, and environmental disturbances. The boundary must therefore support:

  • Temperature readout used as a key (sensor location and meaning must be defined).
  • Lookup behavior (select a drift table entry or bracket range).
  • Correction hook (apply a pre-defined drift adjustment block without specifying ISP math).

Proof target: drift behavior vs temperature should remain bounded when the correct table is applied, and should degrade predictably when mismatched.
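The lookup behavior above ("select a drift table entry or bracket range") can be sketched as a bracketing lookup with linear interpolation between entries; the table format and the hold-at-edge policy outside the table range are assumptions for illustration:

```python
from bisect import bisect_left

def drift_offset(table, temp_c):
    """Offset correction from a (temperature_C, offset_codes) table.

    table must be sorted by temperature. Between entries the correction is
    linearly interpolated; outside the table range the edge entry is held
    (no extrapolation), mirroring the 'bracket range' behavior above.
    """
    temps = [t for t, _ in table]
    i = bisect_left(temps, temp_c)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (t0, o0), (t1, o1) = table[i - 1], table[i]
    return o0 + (o1 - o0) * (temp_c - t0) / (t1 - t0)
```

The "degrade predictably when mismatched" proof target then becomes concrete: feed the lookup a deliberately wrong table and confirm the drift bound is exceeded in the expected direction.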

Bad pixel map lifecycle: versioning is part of the deliverable

  • Generation event: factory calibration, rework, or controlled service procedure.
  • Update trigger: after thermal cycling, long runtime, or controlled self-check (policy-defined).
  • Version meaning: the same sensor SN can legitimately have multiple bad-pixel map versions over time.
Boundary rule: bad pixel map must be stored with a version and a generation condition tag, otherwise field analysis becomes non-repeatable.
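The boundary rule above amounts to a small versioned record. A sketch, assuming illustrative field names (sensor SN, monotonic version, generation-condition tag, pixel list):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BadPixelMap:
    """Versioned bad-pixel map with a generation-condition tag.

    The same sensor SN can legitimately carry several versions over time;
    generated_by records *why* this map exists (factory / rework / service).
    """
    sensor_sn: str
    version: int
    generated_by: str                    # "factory" | "rework" | "service"
    pixels: frozenset = field(default_factory=frozenset)  # (row, col) pairs

def latest(maps):
    """Pick the newest map for one sensor; mixed SNs are a data error."""
    sns = {m.sensor_sn for m in maps}
    if len(sns) != 1:
        raise ValueError("mixed sensor SNs: %r" % sns)
    return max(maps, key=lambda m: m.version)
```

With this shape, "which bad-pixel map was active" is answerable from logs without inference, which is exactly the repeatability the boundary rule demands.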

Traceability block: minimum fields that must be stored and readable

Identity
sensor SN / lot, board revision, ROIC revision (if applicable).
Software & dataset
firmware version, dataset_id, dataset version, timestamp, CRC.

Practical requirement: field logs must be able to answer “which dataset was used” without inference.

Evidence: what must be proven (proof, not promise)

  • Dark repeatability: dark baseline statistic remains stable after applying the correct offset dataset.
  • Temperature sweep: drift remains bounded across the defined temperature range when using the drift table.
  • Version replay: loading a prior dataset version reproduces expected behavior (traceability works).
  • NVM integrity: CRC/consistency checks pass across power cycles and controlled brownout scenarios.
Minimal instrumentation approach: temperature log + dataset ID log + one baseline statistic log is often enough to prove correctness.
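The dark-repeatability proof above can be reduced to one pass/fail statistic over repeated captures. A sketch, where the 2-code acceptance bound is an example value, not a spec:

```python
import statistics

def dark_repeatable(baselines, limit_codes=2.0):
    """Boundary-level dark repeatability check.

    baselines: per-run dark-frame means captured at the same setpoint with
    the correct offset dataset applied. Passes when the run-to-run spread
    stays inside limit_codes (example acceptance bound).
    """
    spread = max(baselines) - min(baselines)
    return spread <= limit_codes, spread, statistics.mean(baselines)
```

Logging the returned spread alongside temperature and dataset_id gives exactly the three-log minimal instrumentation described above.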

Concrete MPN examples (calibration storage + temperature readout)

The part numbers below are examples commonly used for calibration dataset storage and temperature measurement. Selection should follow capacity, endurance, operating temperature grade, and interface availability.

NVM for calibration datasets (examples)

  • SPI NOR Flash: Winbond W25Q64JV, W25Q128JV
  • SPI NOR Flash: Macronix MX25L12835F
  • SPI NOR Flash: Micron MT25QL128ABA (family example)
  • I²C EEPROM: Microchip 24LC512 / 24AA512
  • SPI EEPROM: ST M95M02 (family example)
  • FRAM (high endurance): Infineon/Cypress FM25V02A
  • FRAM (high endurance): Fujitsu MB85RS2MT
Quick chooser: NOR Flash for larger maps; EEPROM for smaller datasets; FRAM when frequent updates or power-cut robustness is prioritized.

Temperature readout components (examples)

  • Digital temp sensor (I²C): TI TMP117, TMP116
  • Digital temp sensor (I²C): Analog Devices ADT7420
  • RTD front-end (for PT100/PT1000): Maxim MAX31865
Placement matters more than sensor brand: log the sensor ID/location as part of dataset metadata to avoid cross-build mismatch.
Optional (only if dataset authenticity is required at the boundary): Microchip ATECC608B or Infineon OPTIGA™ Trust M can be used for key storage and signing. Keep the implementation details on the Security subpage.

Figure F11 — Calibration dataset flow (capture → store → runtime lookup)

Figure F11. Boundary-level calibration flow: capture and compute datasets, store maps/tables in NVM with CRC and dataset versioning, then use runtime temperature readout to select drift entries and apply boundary correction hooks. Traceability fields make field analysis repeatable.


H2-12. FAQs (×12)

Each answer follows the same proof path: First two checks → Discriminator → First fix. Scope is limited to this page’s evidence chain (noise/leakage/power/TEC/temp control/calibration/NVM).

Dark image still looks “bright” — is it dark current or offset drift?

First two checks: log cold-plate temperature and capture a dark frame mean/median; also measure analog bias rail drift (DMM or scope).

Discriminator: if brightness tracks temperature, suspect dark current; if it tracks bias/boot-to-boot baseline, suspect offset drift or wrong dataset.

First fix: rebind correct offset dataset/version; tighten bias/reference (e.g., ADM7150 + ADR4525) and validate across a temperature sweep.

Noise increases after enabling TEC — coupling or ground return?

First two checks: probe TEC current waveform and ROIC output noise (RMS/PSD) in dark; capture before/after TEC enable.

Discriminator: if noise peaks at TEC switching frequency/harmonics, it is coupling; if broadband rises with load, suspect ground return/shared impedance.

First fix: separate TEC return, add LC at TEC stage, and consider TEC controller ADN8834/MAX1968; verify noise delta with TEC PWM shifted away from sampling windows.

Vertical banding appears only at certain TEC PWM duty — why?

First two checks: log banding frequency in image statistics and scope TEC PWM/rail ripple simultaneously.

Discriminator: if banding strength correlates with PWM duty or ripple amplitude, it is deterministic coupling into bias/reference or readout timing.

First fix: move PWM frequency, add post-filtering, and synchronize “quiet time” around sampling; upgrade low-noise rails (LT3042/TPS7A47) and validate duty sweep.

Only corner pixels drift with temperature — leakage or thermal gradient?

First two checks: compare corner vs center dark statistics across temperature steps; log sensor temperature at two points (near FPA vs near edge).

Discriminator: if drift follows local temperature mismatch, it is thermal gradient; if drift worsens with humidity/handling, suspect leakage/contamination paths.

First fix: improve sensor placement and thermal contact; enforce guarding/cleaning and sealing. Add a drift table keyed to the correct sensor location (TMP117/ADT7420).

Responsivity changes day-to-day — window contamination or bias drift?

First two checks: run a repeatable flat-field capture and log bias/reference voltages at the same operating mode.

Discriminator: if bias/reference is stable but flat-field scale changes, suspect window haze/condensation history; if scale tracks Vbias/Vref drift, suspect bias stability.

First fix: add anti-dew margin (window heater + seal + desiccant) and tighten references (ADR4525/ADR4540). Track changes with timestamped dataset IDs.

After ESD, image becomes noisier — input leakage path or ROIC damage?

First two checks: measure input/bias leakage (dark frame offset shift) and check rail ripple/noise; compare to a known-good unit if available.

Discriminator: sudden offset increase with humidity sensitivity suggests new leakage paths; persistent noise/linearity degradation suggests device-level damage.

First fix: inspect/clean/coat sensitive nodes and replace leaky protectors; use low-leak ESD parts (TI TPD1E10B06, Nexperia PESD5V0S1UL) and verify offset recovery.

Non-uniformity gets worse at low temperature — cal map mismatch or condensation?

First two checks: capture dark + flat at low temperature and log dew risk (ambient RH + window/cold-plate temp).

Discriminator: if artifacts appear near dew conditions or change after warming, suspect condensation; if stable but wrong across modes, suspect wrong dataset (gain/offset) for that temperature/mode.

First fix: enforce anti-dew controls and bind the correct temp-drift table version. Store per-mode datasets in NVM with CRC (W25Q128JV / FM25V02A).

Linearity breaks near highlights — full-well, ADC, or gain switching?

First two checks: run a controlled illumination ramp and log ADC code vs exposure; monitor gain-switch control state (or ROIC mode flag).

Discriminator: early saturation points to full-well/CTIA headroom; a sharp kink at a threshold points to gain switching; code-step patterns suggest ADC range/clock issues.

First fix: adjust headroom/bias, validate gain-switch timing, and verify ADC reference integrity (ADR4525). Keep correction at boundary level, not ISP math.

Noise improves with longer integration but then worsens — shot vs 1/f vs drift?

First two checks: sweep integration time and plot noise vs time; log temperature and baseline drift simultaneously.

Discriminator: initial improvement indicates read noise averaging/shot dominance; later worsening indicates 1/f, drift, or leakage/thermal instability.

First fix: stabilize temperature loop (step/settling proof), tighten bias/reference rails, and reduce leakage paths. If drift correlates with TEC action, rework coupling and PWM timing.

Random “sparkle” pixels — bad pixel map, EMI injection, or ADC metastability?

First two checks: compare sparkle locations over repeated frames and log if positions repeat; probe clock/rail integrity during events.

Discriminator: fixed locations suggest bad pixels; random bursts aligned with switching/IO activity suggest EMI injection; rare single-sample glitches during timing edges suggest sampling/ADC metastability.

First fix: update bad-pixel map versioning, add filtering/grounding, and harden clocks. Add common-mode filtering where needed (TDK ACM2012 series) and re-validate.

Two cameras behave differently with same ROIC — sensor lot or bias tolerance?

First two checks: compare dark current trend vs temperature for both units and log actual bias/reference voltages under load.

Discriminator: if curves diverge strongly with temperature, suspect sensor lot variation or mounting thermal differences; if differences track Vbias/Vref, suspect tolerance or regulation/PSRR limits.

First fix: tighten bias limits and add per-unit dataset IDs with traceability (SN/lot/fw). Use low-drift references (ADR4525) and validate using the same calibration procedure.

Calibration doesn’t stick after reboot — NVM integrity or version mismatch?

First two checks: read back dataset header (dataset_id, version, CRC) after reboot; verify power-down/brownout behavior on the NVM rail.

Discriminator: CRC mismatch or partial headers indicate write/hold-up issues; consistent CRC but wrong behavior indicates selecting the wrong dataset key (mode/temperature mapping).

First fix: add power-loss protection/hold-up, use robust storage (FM25V02A FRAM), and enforce strict dataset keying + rollback rules.

Figure F12 — Front-end FAQ decision loop (2 checks → discriminator → first fix)

Figure F12. A repeatable front-end debug loop: measure two signals, decide by correlation, apply the smallest first fix, then verify and log dataset identity (dataset_id/version/CRC).