CT Detector Front-End for Photodiode Array Readout
This page explains how CT detector front-ends turn tiny photodiode currents into stable, time-aligned digital data with enough dynamic range, low noise and robust calibration to support modern low-dose CT imaging. It also shows which IC roles, design checks and safeguards are needed to avoid long-term artifacts and reliability issues.
CT Detector Front-End in the CT Imaging Chain
A CT scanner converts controlled X-ray exposure into diagnostic images through a chain of X-ray tube and high-voltage supply, beam collimation, patient or phantom, a curved detector array and a data acquisition system that feeds the reconstruction engine. The detector front-end sits on the detector side of the gantry, where X-ray photons are turned into electrical signals that can be digitised reliably.
In this context the detector module can be viewed as a consolidated block: scintillator + photodiode array + TIA + ADC + clock/sync + digital output. The scintillator and photodiodes convert X-ray dose into current, the TIA chain converts that current into a voltage suitable for high-resolution ADCs, clock and synchronisation signals define integration windows, and the digital interface transfers formatted data to the rotating or stationary DAS and reconstruction system.
Modern CT detectors span from a few dozen channels in compact systems to several hundred or even a few thousand channels in multi-slice and cone-beam designs. Elements are typically arranged in a curved or ring-shaped geometry around the patient, sometimes in multiple rows, which drives tight requirements on front-end density, routing, synchronisation and thermal management inside the gantry.
From this position in the chain, the CT detector front-end must deliver high linearity and wide dynamic range from air scans and high-dose shots down to heavily attenuated signals, maintain very low noise and stable offset to avoid image artifacts, and stay precisely synchronised while remaining calibratable over temperature cycling and long-term ageing.
Scintillator and Photodiode Array Characteristics
A CT detector tile typically combines a scintillator material such as GOS or LYSO with a silicon photodiode array. The scintillator converts incident X-ray photons into visible light, while the photodiodes translate the resulting light flux into an electric current. The geometry and tiling of these cells, including one or multiple rows arranged along a curved gantry, define the number of electrical channels that the front-end must handle.
Photocurrent levels can span several orders of magnitude. Low-dose scans and heavily attenuated beams can produce currents in the pA to tens of pA range, while air scans or high-dose conditions can drive currents up to hundreds of nA or a few microamps. Integration windows are typically on the order of tens to hundreds of microseconds, sometimes up to about one millisecond depending on gantry speed and slice configuration, so both peak current and integrated charge must be considered.
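The current and timing ranges above can be turned into a quick signal-range check. The sketch below, using purely illustrative corner values, computes the integrated charge per view at the low-dose and air-scan extremes and the resulting dynamic range:

```python
import math

# Rough signal-range check for the current and timing ranges quoted above.
# All corner values are illustrative assumptions, not device specifications.

def charge_per_view(i_photo_a: float, t_int_s: float) -> float:
    """Integrated charge (coulombs) for a constant photocurrent over one view."""
    return i_photo_a * t_int_s

# Low-dose corner: 10 pA over a 100 us integration window -> 1 fC
q_low = charge_per_view(10e-12, 100e-6)

# Air-scan corner: 2 uA over the same window -> 200 pC
q_high = charge_per_view(2e-6, 100e-6)

dr_db = 20 * math.log10(q_high / q_low)
print(f"low: {q_low*1e15:.1f} fC, high: {q_high*1e12:.0f} pC, range: {dr_db:.0f} dB")
```

A span of roughly 106 dB between these corners illustrates why single-gain, single-range readout is rarely sufficient on its own.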
Real detectors are also influenced by non-ideal effects. Dark current and leakage increase with temperature and can dominate the signal at low dose unless compensated. Scintillator afterglow introduces a decaying tail after each exposure, so part of the previous view can leak into the next integration window. Optical and electrical cross-talk between neighbouring cells impacts spatial resolution and channel-to-channel matching and must be managed at both sensor and layout level.
These physical characteristics drive hard requirements on the TIA and ADC chain: wide dynamic range with carefully chosen gain settings, low input-referred noise for low-dose scans, predictable reset and recovery behaviour to control afterglow influence between views, and access to temperature and reference information so that dark current and drift can be calibrated over the life of the system.
CT Detector TIA Architecture and Key Metrics
The front-end of a CT detector typically uses inverting transimpedance amplifiers where photodiode current flows into a high-value feedback resistor and capacitor around a low-noise operational amplifier. This structure converts small detector currents into voltages that can be digitised while controlling bandwidth and stability. In many detector ASICs the same topology appears in dozens or hundreds of channels, with programmable feedback gain and options such as dual-range or bipolar support to cover low-dose and high-dose scan conditions.
Key sizing decisions tie the expected photodiode current range, transimpedance gain and ADC full-scale together. Currents can span from pA-level signals during low-dose scans up to microamp-level signals during air scans or high-dose protocols. Feedback resistance must therefore be high enough to resolve weak signals without burying them in noise, yet low enough that the output voltage stays safely within the ADC input range under worst-case dose. Dual-gain or multi-gain schemes and carefully chosen integration windows are often used so that both extremes remain inside a usable dynamic range.
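A simple planning script can make this sizing exercise concrete. The sketch below assumes a resistive-feedback TIA (output voltage = photocurrent × feedback resistance) and checks hypothetical gain settings against an assumed ADC full scale; all numeric values are placeholders to illustrate the trade-off, not recommendations:

```python
# Hypothetical gain-planning check: for each candidate feedback resistor,
# verify the worst-case output stays inside an assumed ADC full scale and
# report the voltage produced by the weakest expected signal.

ADC_FULL_SCALE_V = 4.096     # assumed ADC input range (V)
HEADROOM = 0.9               # keep worst case at <= 90% of full scale

I_MIN_A = 10e-12             # low-dose corner (A)
I_MAX_A = 2e-6               # air-scan corner (A)

for r_f in (1e6, 10e6, 100e6):          # candidate transimpedance gains (ohm)
    v_max = I_MAX_A * r_f               # worst-case output swing
    v_min = I_MIN_A * r_f               # weakest-signal output level
    fits = v_max <= HEADROOM * ADC_FULL_SCALE_V
    print(f"Rf = {r_f:.0e} ohm: Vmin = {v_min*1e6:.2f} uV, "
          f"Vmax = {v_max:.3f} V, fits = {fits}")
```

With these assumed corners, only the lowest gain keeps the air-scan signal inside the ADC range, while it leaves the low-dose signal at only tens of microvolts; this is exactly the tension that dual-gain or multi-gain schemes are designed to resolve.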
Noise, bandwidth and input capacitance form a coupled design space. Total noise includes contributions from detector shot noise, amplifier input voltage and current noise and feedback resistor thermal noise. The effective bandwidth should be high enough to allow the signal to settle within the integration window but not so high that unnecessary wideband noise is integrated. Large input capacitance from the photodiode, PCB routing and amplifier input pins challenges stability and can introduce overshoot or ringing unless feedback capacitance, phase margin and layout are chosen with the CT detector environment in mind.
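The contributors listed above can be combined in a back-of-envelope budget. The sketch below uses assumed values for the feedback resistor, bandwidth and amplifier noise densities, and sums the contributions as a root-sum-square referred to the input:

```python
import math

# Back-of-envelope input-referred noise budget for a resistive-feedback TIA.
# All component values and noise densities are illustrative assumptions.

K_B, T = 1.380649e-23, 300.0       # Boltzmann constant (J/K), temperature (K)
Q_E = 1.602176634e-19              # electron charge (C)

R_F = 1e6            # feedback resistor (ohm)
BW = 10e3            # effective noise bandwidth (Hz)
I_SIG = 10e-12       # low-dose photocurrent (A)
E_N = 5e-9           # amplifier input voltage noise density (V/sqrt(Hz))
I_N = 100e-15        # amplifier input current noise density (A/sqrt(Hz))

i_shot = math.sqrt(2 * Q_E * I_SIG * BW)             # detector shot noise
i_rf   = math.sqrt(4 * K_B * T * BW / R_F)           # feedback resistor thermal noise
i_amp  = math.sqrt(((E_N / R_F)**2 + I_N**2) * BW)   # amplifier contributions

i_total = math.sqrt(i_shot**2 + i_rf**2 + i_amp**2)  # root-sum-square total
print(f"total input-referred noise: {i_total*1e12:.1f} pA rms")
```

With these assumed numbers the 1 MΩ resistor's thermal noise dominates and swamps a 10 pA low-dose signal, which is one reason practical CT front-ends favour larger effective transimpedance or switched-capacitor integrators rather than a plain resistive TIA at this gain.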
In a multi-channel detector ASIC, tens to hundreds of TIA channels share the same package and often common supplies and references. Channel density and matching become as important as single-channel noise. Gain and offset trims, accessible through digital interfaces, help align channels and reduce ring artifacts in the reconstructed image. Provision for per-channel calibration, reference monitoring and test modes further simplifies factory and in-system calibration strategies.
Large signals or stray X-ray events can saturate the TIA and drive the amplifier and protection structures into non-linear regions. Protection mechanisms and controlled recovery behaviour are therefore critical so that overload events do not damage the front-end and so that subsequent views can return to linear operation quickly enough to avoid persistent image artifacts.
ΣΔ and SAR ADC Roles and Trade-offs in CT Detectors
CT detector front-ends rely on high-resolution ADCs to convert TIA output voltages into digital data for reconstruction. Sigma-delta ADCs provide excellent noise performance and effective resolution by oversampling and digital filtering, which suits integration-based readout where each detector channel produces one or a small number of samples per view. Successive-approximation ADCs offer higher sampling rates and good energy efficiency, making them attractive in multiplexed architectures or specialised high-speed scan modes.
Typical CT detector requirements span 14-bit to 20-bit resolution depending on system class and imaging performance targets. For high-end systems, effective resolution and linearity over a wide dynamic range are key to avoiding subtle ring artifacts and preserving low-contrast details at low dose. The number of detector channels, the number of views per rotation and the desired gantry speed combine to set the overall throughput, which in turn drives whether per-channel converters or multiplexed solutions are viable within the available power and area budget.
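The throughput arithmetic in the previous paragraph is straightforward to write down. The sketch below uses assumed channel counts, view rates and word widths purely to illustrate how quickly aggregate data rates grow:

```python
# Illustrative aggregate-throughput estimate from the system parameters
# discussed above; all numbers are assumptions, not from any datasheet.

channels      = 1000      # detector channels served by one DAS
views_per_rot = 2000      # projections per gantry rotation
rot_per_s     = 3.0       # gantry speed (rotations per second)
bits_per_samp = 20        # ADC output word width

samples_per_s = channels * views_per_rot * rot_per_s
data_rate_bps = samples_per_s * bits_per_samp

print(f"{samples_per_s/1e6:.1f} MS/s aggregate, {data_rate_bps/1e6:.0f} Mbit/s raw")
```

Even these modest assumed numbers yield a 6 MS/s, 120 Mbit/s raw stream before framing and error protection, which is why converter placement and link architecture are first-order decisions.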
With per-channel converters, each TIA output connects directly to its own ADC, often a sigma-delta, providing simple timing and minimal channel interaction at the cost of increased silicon area, digital interface width and total power. Multiplexed architectures instead route several TIA outputs through a low-crosstalk sample-and-hold and analogue multiplexer into one or a few high-speed SAR ADCs. This reduces converter count and can improve energy efficiency but demands careful design of the multiplexer, timing control and calibration to limit inter-channel coupling and maintain view-to-view alignment.
Another dimension is the placement of converters. Integrating sigma-delta ADCs and digital filters on the detector ASIC shortens the analogue path and simplifies transmission across the slip-ring by sending digital data only, but places more power dissipation and complexity inside the gantry. Locating ADCs on a separate DAS board eases cooling and can simplify future upgrades, yet requires analogue signal routing from the detector modules and more attention to noise, grounding and shielding along that path.
Whatever ADC architecture is chosen, its full-scale range, reference stability, input common-mode and power supply behaviour must align with the TIA chain. Maximum expected TIA output swing, including overload margins, should remain within the ADC input range while still providing enough codes for low-dose signals. The ADC reference and supplies must be routed and decoupled to avoid coupling digital switching noise into the analogue path, and the overall timing must stay compatible with the integration windows and synchronisation strategy used across the detector array.
Clock & Sync Distribution: View-to-View Consistency and Jitter Budget
In a CT detector front-end, clock and synchronisation signals tie together X-ray exposure, gantry angle, detector integration windows and data capture across multiple ASICs, ADCs and processing stages. Each view must correspond to a well-defined X-ray pulse and gantry angle, and all detector channels need integration windows that start and stop in a tightly aligned manner so that reconstruction receives a consistent data set for every projection.
Typical synchronisation objects include the X-ray tube exposure or pulse timing, the gantry rotation encoder that reports angle or view index, detector integration start and reset signals, and data framing events shared between detector modules, ADCs, FPGAs and the reconstruction engine. Frame sync, line sync and integration start/stop lines together define when each detector element is light-sensitive, when charge is integrated and when conversion and readout occur relative to gantry motion and X-ray emission.
The clock distribution network typically starts from a low-jitter system reference, such as a low-phase-noise VCXO or PLL-based clock source. This master clock fans out through low-jitter buffers to detector modules, DAS boards and central processing FPGAs, often in a hierarchical structure that balances trace length, loading and skew. Alongside the clock tree, dedicated sync and trigger lines convey frame sync, integration start and other timing markers so that detector ASICs, ADCs and logic devices share a common time base and view index.
Sampling clock jitter and inter-module skew directly affect image quality. For rapidly varying signals or short integration windows, clock jitter converts into amplitude uncertainty and constrains achievable SNR and effective resolution. Skew between detector modules shifts integration windows in time, degrading geometric consistency across the ring and potentially creating ring artifacts or blurring. A clear jitter and skew budget, expressed relative to integration window width and system resolution targets, provides a concrete design goal for clock tree and PCB layout.
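One way to make the jitter and skew budget numerical is the standard jitter-limited-SNR formula for a sampled sinusoid, SNR = −20·log10(2π·f·t_j). The sketch below applies it with assumed numbers, together with a skew limit expressed as a fraction of the integration window; integration-based readout relaxes the jitter constraint somewhat, so treat this as an upper-bound check:

```python
import math

# Jitter-limited SNR for a sampled sinusoid: SNR = -20*log10(2*pi*f*t_j).
# Signal frequency, jitter and window values below are assumptions.

def jitter_snr_db(f_signal_hz: float, t_jitter_s: float) -> float:
    """SNR ceiling imposed by sampling-clock jitter alone."""
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * t_jitter_s)

# e.g. 10 kHz signal content sampled with 20 ps rms clock jitter
snr = jitter_snr_db(10e3, 20e-12)
print(f"{snr:.0f} dB jitter-limited SNR")

# Skew check: keep inter-module skew below 5% of a 200 us integration window
t_int = 200e-6
skew_limit = 0.05 * t_int
print(f"skew budget: {skew_limit*1e6:.0f} us")
```

At these assumed values jitter is far from limiting, but the same formula shows how fast-changing signal content or picosecond-class sampling in high-speed modes can tighten the budget.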
Crossing the slip-ring or other rotating interface adds further constraints. Differential signalling such as LVDS or optical links is typically used to carry clocks and sync across the rotating boundary, with careful attention to common-mode noise and contact variation. Redundant sync lines, error detection and re-synchronisation mechanisms at the receiving FPGA help protect against intermittent disturbances so that detector front-ends maintain stable view-to-view timing over the system lifetime.
Calibration, Linearity and Long-Term Stability
CT detector performance depends not only on low-noise and wide-dynamic-range hardware but also on regular calibration that maintains offset, gain and linearity over time. A complete strategy typically includes dark offset calibration under zero X-ray conditions, gain calibration using air scans or reference phantoms and multi-point linearity calibration across representative dose levels. These procedures reduce channel mismatch, suppress ring artifacts and preserve quantitative accuracy across operating modes.
Dark offset calibration measures each channel’s response when the detector is shielded or when X-ray emission is disabled. Under these conditions the residual output reflects dark current, TIA offset and ADC offset. A per-channel offset value derived from this measurement can be subtracted digitally or applied through on-chip trimming so that low-dose and low-contrast regions are not dominated by fixed pattern offsets from the analogue front-end.
Gain calibration relies on air scans or reference phantoms at one or more dose settings to expose detector elements to uniform radiation paths. Differences in response between channels under the same conditions are attributed to gain mismatch in the scintillator, photodiode, TIA and ADC chain. Per-channel gain coefficients derived from these measurements align channel responses and reduce striping or ring artifacts. Additional calibration points at multiple dose levels support linearity calibration, allowing reconstruction or front-end logic to compensate for non-linear behaviour across the dynamic range.
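The dark-offset and gain steps described above amount to a per-channel flat-field correction. The minimal sketch below simulates per-channel dark and air-scan means and applies the correction; channel counts, code values and spread are all illustrative assumptions:

```python
import random

# Minimal per-channel offset/gain correction sketch: a dark scan gives the
# offset, an air scan gives the gain, and each raw view is normalised to a
# common target level. All simulated values are illustrative.

random.seed(0)
n_ch = 8

dark = [100.0 + random.gauss(0, 2.0) for _ in range(n_ch)]            # dark-scan means (codes)
gain = [60000.0 * (1 + random.gauss(0, 0.02)) for _ in range(n_ch)]   # air minus dark, per channel
target = sum(gain) / n_ch            # align every channel to the mean response

def correct(raw_view):
    """Dark-offset subtraction followed by gain normalisation, per channel."""
    return [(r - d) * (target / g) for r, d, g in zip(raw_view, dark, gain)]

# A uniform exposure at 50% flux should come out flat after correction
raw = [d + 0.5 * g for d, g in zip(dark, gain)]
flat = correct(raw)
print(all(abs(f - 0.5 * target) < 1e-6 for f in flat))   # True
```

Real systems extend this linear correction with multi-point linearity tables and temperature tagging, but the structure — subtract a dark map, divide by a gain map, scale to a common target — is the same.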
Hardware support for calibration is critical. Controlled reference current or voltage injection paths near the photodiode or TIA input allow test signals to be applied without X-ray exposure, simplifying production and service calibration. On-chip temperature sensors and reference monitors provide additional context, linking calibration results to die temperature and reference voltage or current levels. Digital registers or OTP memory store per-channel coefficients so that calibration data persist across power cycles and product lifetime, while still allowing field updates if required.
Temperature and ageing drive long-term drift. Photodiode leakage and dark current increase with temperature, TIA gain and offset vary with amplifier parameters and resistor tempco, and ADC references carry their own temperature coefficients. Over months and years, scintillator light yield, photodiode sensitivity and passive components can change, especially in high-dose or high-temperature environments. Periodic recalibration or background monitoring is therefore needed to keep offset, gain and linearity within bounds defined by image quality requirements.
At the IC level, it is advantageous to reserve dedicated monitoring points such as reference monitors, calibration multiplexers and test modes that expose internal nodes for measurement. These hooks enable manufacturing teams and system integrators to implement efficient calibration flows without intrusive probing, and they give service tools a path to verify detector health, linearity and stability over the lifetime of the CT system.
Noise, Dynamic Range and Dose Management Trade-offs
CT detector front-ends must cover signal levels ranging from very low-dose protocols, where patient dose is minimised, up to high-contrast scans and air scans used for calibration. Across this span, effective SNR and dynamic range determine how well low-contrast details are preserved and how strongly ring or streak artifacts appear. The front-end therefore needs a signal chain that keeps noise low at the bottom end while avoiding early saturation at the top end, with dose management strategies that respect these limits.
Transimpedance gain is one of the primary levers. High TIA gain improves resolution for weak currents in low-dose scans, but reduces the available headroom for high-dose or air-scan conditions and can drive the amplifier or ADC into saturation. Lower gain extends input range but leaves low-dose signals closer to the noise floor and more sensitive to offset. Many multi-channel detector front-end ASICs, such as devices in the ADAS1128, ADAS1134 or ADAS1256 families, address this tension by offering multiple gain ranges or dual integration paths per channel so that both low-dose and high-dose regimes are covered without sacrificing linearity.
Integration time forms a second axis of trade-off. Longer integration windows collect more charge per view and improve SNR for a given dose, but widen the effective angular span of each projection on a rotating gantry and make the system more vulnerable to patient motion and cardiac or respiratory cycles. Shorter integration windows improve temporal resolution and reduce motion blur, yet require lower front-end noise and often higher ADC throughput to maintain SNR. The integration window must also remain compatible with the clock and sync jitter budgets defined for the detector and DAS timing chain.
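The angular cost of a longer integration window follows directly from gantry speed. The sketch below computes the arc swept during one view for assumed rotation rates and window lengths:

```python
# Angular span of one view as a function of integration time and gantry
# speed, illustrating the temporal-resolution trade-off described above.
# Rotation speed and window lengths are assumed example values.

def view_angle_deg(t_int_s: float, rot_per_s: float) -> float:
    """Gantry arc swept during one integration window, in degrees."""
    return 360.0 * rot_per_s * t_int_s

# Assumed example: a 0.5 s/rotation gantry (2 rotations per second)
print(f"{view_angle_deg(100e-6, 2.0):.3f} deg at 100 us")
print(f"{view_angle_deg(1e-3, 2.0):.3f} deg at 1 ms")
```

Going from a 100 µs to a 1 ms window widens the swept arc tenfold (0.072° to 0.72° at this assumed speed), which is the geometric side of the SNR-versus-motion trade-off.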
ADC resolution and architecture complete the picture. Higher nominal resolution and ENOB improve the ability to resolve small changes at low dose, but they cost silicon area and power, especially in systems with hundreds or thousands of channels. Multi-channel SAR or pipeline ADCs such as members of the ADS5294 or AD9257 families can deliver high aggregate throughput at moderate resolution, while multi-channel sigma-delta ADCs such as devices in the AD7779 class favour resolution and noise performance. The chosen converter architecture must balance per-channel performance with overall power and data bandwidth constraints in the gantry.
Practical designs often combine several techniques. Dual-gain or dual-range readout captures high-gain and low-gain views in parallel, with an overlap region that allows cross-calibration. Auto-ranging schemes switch gain or range based on estimated signal level, but require careful design of the switching thresholds and transition behaviour so that additional noise or small discontinuities do not leak into reconstructed images. Overload and saturation flags from the front-end give the system clear feedback when dose or configuration pushes channels beyond their linear range, allowing reconstruction and dose control algorithms to react.
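The dual-range combination logic can be sketched in a few lines. The thresholds, gain ratio and code widths below are assumptions chosen only to illustrate the switch-with-margin idea, not values from any particular device:

```python
# Sketch of dual-range sample combination: use the high-gain path until it
# nears saturation, then fall back to the low-gain path. Gain ratio,
# resolution and switch threshold are illustrative assumptions.

GAIN_RATIO = 16.0                    # high-gain path relative to low-gain path
HG_FULL_SCALE = 2**20 - 1            # high-gain ADC code at saturation
HG_THRESHOLD = 0.9 * HG_FULL_SCALE   # switch point, with margin before clipping

def combine(hg_code: int, lg_code: int) -> float:
    """Return one linearised sample (in low-gain units) from both paths."""
    if hg_code < HG_THRESHOLD:
        return hg_code / GAIN_RATIO  # high-gain path, rescaled to low-gain units
    return float(lg_code)            # high-gain path near saturation: trust low gain

print(combine(160000, 10000))        # weak signal: served by the high-gain path
print(combine(1048575, 65536))       # saturated high-gain path: low-gain value used
```

In practice the two paths also need cross-calibration in the overlap region so that the hand-off introduces no step discontinuity, which is exactly the auto-ranging pitfall noted above.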
All of these choices sit under strict power and thermal constraints inside the rotating gantry. High dynamic range TIAs, high-resolution ADCs and dense clock trees increase power density and temperature, which in turn drive leakage, offset and reference drift. The optimal operating point is therefore a combination of adequate dynamic range, suitable calibration hooks and a dose management strategy that respects both image quality requirements and the thermal realities of the detector module.
IC Role Mapping and Typical Combinations
A CT detector front-end is built from a set of specialised IC roles that together implement current-to-digital conversion, timing, isolation and health monitoring. Understanding these roles and their key specifications helps define realistic device requirements and choose compatible combinations. This section focuses on roles and example part numbers rather than any specific vendor preference so that the architecture remains flexible over time.
Multi-channel PD array TIA ASIC
The multi-channel TIA ASIC interfaces directly with the photodiode array, providing low-noise, low-drift transimpedance conversion with selectable gain and, in some devices, dual-range or dual-integration support. Important specifications include input current range, transimpedance gain options, channel density, noise current, channel-to-channel matching and on-chip calibration features such as offset and gain trims. Example devices in this class include ADAS1128, ADAS1134 and ADAS1256, as well as detector front-ends such as AFE2256 for flat-panel and X-ray applications.
Sigma-delta and SAR ADCs
Sigma-delta and SAR converters digitise TIA outputs and define effective resolution and throughput. Sigma-delta converters favour high resolution and noise performance, while multi-channel SAR or pipeline ADCs provide higher sampling rates and good energy efficiency for many channels. Key metrics are resolution and ENOB, INL/DNL, per-channel sampling rate, input interface and reference voltage requirements. Representative multi-channel devices include the ADS5294 and ADS5292 pipeline families on the high-speed side, and multi-channel sigma-delta converters such as AD7779 or similar high-resolution front-end ADCs.
Clock buffers, jitter cleaners and PLLs
Clock distribution ICs fan out low-jitter references to detector ASICs, ADCs and FPGAs and often integrate PLL or jitter-cleaning functions. Low RMS jitter, good phase noise, adequate output formats and channel counts, and flexible input reference options are critical. Example parts include timing devices such as AD9528 or AD9517 families, and clock jitter cleaners or distribution chips similar to LMK04828 or LMK04832 used in high-performance imaging systems.
Voltage references and temperature sensors
Precision references define ADC full-scale and TIA bias points, while temperature sensors tag calibration data with die or module temperature. Reference temperature coefficient, long-term drift and noise determine how often recalibration is required. Typical reference devices include ADR4550 or REF5050 series parts for stable voltage generation. Digital temperature sensors such as ADT7420 or TMP117 provide accurate temperature data over the operating range of detector modules.
Digital isolators, LVDS and serializers
Digital isolators and serializers carry data, clocks and control signals across isolation barriers and slip-ring or fibre links. Isolation withstand voltage, CMTI, data rate, propagation delay and power consumption are important. Example isolators include ADuM14xx and ADuM64xx series parts and ISO7842-class devices. Channel-link serializers and LVDS drivers such as DS90CR218A or DS90CR285 families are typical for aggregating parallel data into high-speed serial links.
Monitoring and housekeeping ICs
Monitoring and housekeeping ICs supervise supply rails, sequencing, currents and fault conditions in detector modules. They provide health information for calibration and field diagnostics. Desired features include multi-rail voltage and current monitoring, programmable thresholds, logging and digital interfaces such as I²C, SPI or PMBus. Examples include multi-rail power managers like LTC2977 or LTC2974, hot-swap and monitor devices such as ADM1177, and current or power monitors similar to INA219 or INA226 used on key supply rails.
Typical IC combinations
A compact detector module often uses a highly integrated multi-channel front-end ASIC such as ADAS1128, ADAS1134 or ADAS1256 that combines TIA and ADC functions, along with a small precision reference and modest clock buffer. A more modular design separates TIA stages from multi-channel ADCs like ADS5294 or AD9257 and adds a dedicated jitter cleaner such as LMK04828 plus high-accuracy references and sensors. High-end systems may combine integrated detector ASICs with external jitter cleaners, multi-rail monitors such as LTC2977, redundant serializer paths and richer on-board diagnostics to support advanced calibration and reliability strategies.
Clocking, isolation and power management will be expanded from a system perspective in other medical electronics pages, while this section keeps the focus on the CT detector front-end role and the way its IC building blocks fit together in practical combinations.
Design Checklist & Common Pitfalls (Engineer-Friendly)
This section is intended as a practical self-check page for CT detector front-end design. Engineers can use the checklist to verify that key analyses and hooks are in place from photodiode all the way to system monitoring, and use the common pitfalls list to avoid issues that frequently appear late in integration or during image reconstruction.
Design checklist: key questions to verify
Photodiode array and detector physics
- Capacitance quantified: has typical and worst-case photodiode capacitance per pixel been measured or specified, including package and routing parasitics, and fed into TIA stability and noise analysis?
- Dark current vs. temperature: is dark current characterised across the relevant temperature range and included in the noise and offset budgets for low-dose operation?
- Channel matching: are photodiode matching and spread documented so that expected fixed-pattern artifacts and required calibration depth are understood?
- Afterglow and cross-talk: have scintillator afterglow and optical or capacitive cross-talk between pixels been evaluated for impact on baseline and neighbour channels?
TIA: noise, gain and stability
- Complete noise budget: does the design include a written noise budget combining photodiode shot noise, TIA voltage and current noise, feedback resistor noise and ADC quantisation noise, with specific SNR targets at low-dose conditions?
- Gain and input range mapping: are transimpedance gain settings mapped to expected current ranges for low-dose, normal and air-scan conditions, with clear margins to saturation?
- Loop stability with real capacitance: has loop stability been analysed with realistic input capacitance (photodiode + wiring + amplifier input), across temperature and device spread?
- Large-signal behaviour: has TIA recovery time from saturation or overload been measured or simulated, and is it compatible with view rate and rotation speed without causing ghost artifacts?
ADC: resolution, throughput and input interface
- Resolution vs. SNR target: is required ENOB derived from image quality and low-dose SNR targets, rather than assuming that an 18- or 20-bit converter is automatically sufficient?
- Input range matching: does the TIA output swing make efficient use of the ADC input range and comply with its common-mode and differential input requirements?
- Channel count and view throughput: for the chosen view rate and samples per view, does the ADC architecture provide comfortable throughput margin across all channels, not just at the data-sheet limit?
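The first checklist item above — deriving ENOB from an SNR target rather than assuming a headline bit count — can be written down directly with the standard full-scale-sinusoid relation ENOB = (SNR − 1.76)/6.02. A small sketch with illustrative targets:

```python
# Deriving required ENOB from a target low-dose SNR, per the checklist item
# above: ENOB = (SNR_dB - 1.76) / 6.02 for a full-scale sinusoid.
# The SNR targets used here are illustrative, not system requirements.

def required_enob(snr_db: float) -> float:
    """Effective bits needed to support a given SNR target."""
    return (snr_db - 1.76) / 6.02

print(f"{required_enob(90):.1f} bits for a 90 dB target")
print(f"{required_enob(100):.1f} bits for a 100 dB target")
```

A 90 dB target needs roughly 14.7 effective bits and 100 dB roughly 16.3, which is why a nominal 18- or 20-bit converter only helps if its *effective* resolution, after noise and non-idealities, actually reaches the derived figure.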
Clock and synchronisation
- Jitter budget quantified: has a numerical jitter budget been derived from sampling rate and integration timing, and are all contributors from clock source, PLL, fan-out and links accounted for?
- Inter-module skew limits: are maximum allowed skew values between detector modules or ASICs defined and verified against integration window width?
- Slip-ring redundancy and monitoring: do clock, sync and data links across the rotating interface have redundancy or error detection, and is there a defined re-synchronisation behaviour after disturbances?
Calibration hooks and data handling
- Calibration injection paths: are there controlled injection points near the photodiode or TIA input that allow dark, gain and linearity calibration without requiring X-ray exposure?
- Coefficient storage strategy: where are per-channel offset, gain and linearity coefficients stored (on-chip OTP, EEPROM or external memory), and is the power-up load and update flow clearly defined?
- Temperature and reference tagging: are die temperature and reference readings captured alongside calibration data so that drift sources can be separated and tracked over time?
Reliability, drift and long-term behaviour
- Extreme operating points: are leakage, offset and noise evaluated at maximum temperature and dose conditions, and linked to thermal simulation results of the gantry environment?
- Ageing and radiation drift monitoring: is there a plan for periodic air or phantom scans and trends of calibration coefficients to detect long-term drift?
- Degradation and fall-back modes: if a subset of channels degrades, is there a defined degraded operating mode or redundancy path, rather than a full module outage?
Common pitfalls to avoid
- No guard or clean layout around PD + TIA inputs: sensitive high-impedance nodes near the photodiode and TIA input are routed without guard rings, clear separation from noisy nets or strict cleanliness control. Residual contamination or parasitic conduction introduces extra leakage and coupling, degrading low-dose noise and increasing fixed-pattern artifacts.
- Ignoring large-signal recovery when scanning fast: design focuses only on small-signal frequency response and neglects recovery time from saturation or overload. Under high-dose views or fault flashes, TIA outputs recover too slowly, causing baseline shifts and ghost-like artifacts in subsequent views.
- Uneven clock distribution creating channel-to-channel skew: the clock tree inside the gantry has very different path lengths and buffering between modules, with no quantified skew limit. Integration windows no longer align in time, leading to geometric inconsistencies and ring artifacts in reconstructed images.
- No monitoring of slip-ring errors or timing drift: high-speed LVDS or serializer links across the slip-ring operate without error detection, statistics or re-synchronisation mechanisms. Bit errors or timing drift accumulate silently, and only appear later as streaks or noisy bands in images that are hard to debug.
- Factory-only calibration with no field path: the design assumes a single factory calibration is sufficient for the entire lifetime, with no support for field recalibration or coefficient updates. Temperature, ageing and reference drift gradually erode image quality until a major service intervention is required.
- Reference and temperature monitors not integrated into the flow: reference monitors and temperature sensors are present in the schematic but not integrated into calibration, diagnostics or logging. As a result, it is difficult to distinguish detector drift from reference or thermal issues during troubleshooting.
- Noise budget based only on typical conditions: noise analysis uses typical device parameters at room temperature and does not consider worst-case process, temperature and layout parasitics. Low-dose performance in real systems falls short of expectations, forcing heavier image processing or dose increases that could have been avoided with a more conservative design.
Using this checklist and pitfalls list early in the design and layout phases helps catch weak spots while changes are still inexpensive, instead of discovering them only after the first images are reconstructed.
FAQs – CT Detector Front-End
This FAQ summarises practical questions that CT detector front-end designers often ask. Each question maps back to earlier sections on architecture, dynamic range, timing, calibration and reliability, and the snapshot table at the end turns several of them into quantitative starting targets that can be refined for a specific platform.
1) When is a dedicated CT detector TIA ASIC preferable to discrete op amp TIAs?
2) How much dynamic range is typically required for modern CT detectors, and how is it achieved?
3) How do sigma-delta and SAR ADCs compare for different CT scan modes and detector architectures?
4) What jitter and skew levels are acceptable for CT detector clocks to avoid visible image artifacts?
5) How often should dark and gain calibration be performed in a CT system?
6) What layout practices help reduce leakage and crosstalk in dense photodiode plus TIA arrays?
7) How should integration time and TIA gain be adjusted when migrating to lower-dose CT protocols?
8) What safeguards are needed when CT detector photodiode arrays see unexpected high dose or overload conditions?
9) How can reference drift and temperature effects be monitored over the life of a CT detector front-end?
10) What common failure modes are seen in field returns for CT detector front-ends?
11) How should CT detector front-end data be packaged for transmission across a slip-ring?
12) What test modes should be built into CT detector front-end ICs to simplify manufacturing and system test?
Key quantitative design targets snapshot
These values are indicative starting points only. Final targets should be refined with system-level image quality, safety and regulatory requirements.
| Metric | Typical target range | Related FAQs |
|---|---|---|
| Effective detector dynamic range | ≈ 90–100 dB using dual-gain paths and high-ENOB ADCs | Q2, Q7 |
| Detector clock RMS jitter | Low tens of picoseconds for main sampling clocks | Q4, Q11 |
| Inter-module integration skew | < 5–10% of the integration window width | Q4 |
| Dark calibration cadence | On start-up and after ≈ 5–10 °C temperature change | Q5, Q9 |
| Gain calibration cadence | Daily or weekly, and after detector or board replacement | Q5, Q10 |
| Reference voltage tempco | ≈ 3–5 ppm/°C for precision detector references | Q2, Q9 |
| Typical system-level ENOB | ≈ 14–16 effective bits after noise and non-idealities | Q2, Q3 |
| Slip-ring link reliability target | Bit error rates in the 10⁻¹² class or better with CRC | Q11, Q10 |