
Fluoroscopy / DSA Acquisition Chain: ADCs & Sync


A fluoroscopy/DSA acquisition chain succeeds when timing is deterministic and every frame is traceable—so subtraction stays stable, latency stays predictable, and artifacts can be diagnosed with logs instead of guesswork. Focus on controlling sync points, budgeting jitter/throughput, and proving “no drops” with counters, timestamps, and buffer watermarks.

H2-1 · What this chain does in fluoroscopy vs DSA

The acquisition chain turns X-ray exposure timing and detector readout into repeatable frames that downstream processing can trust. Fluoroscopy prioritizes real-time usability (low end-to-end delay and stable live appearance), while DSA prioritizes subtraction stability (frame-to-frame consistency so small vessel contrast is not buried by drift, timing misalignment, or non-repeatable offsets).

How the engineering pressure changes (fluoro vs DSA)

  • Dynamic range: Fluoro needs a stable live image across anatomy and tool movement; DSA needs dynamic range plus stable linear behavior so subtraction does not leave background residue.
  • Stability: Fluoro stability is “no pumping/flicker”; DSA stability is “the baseline cancels cleanly,” making slow drift (offset/gain/reference/trigger skew) disproportionately visible after subtraction.
  • Frame-to-frame consistency: Fluoro inconsistency looks like shimmer; DSA inconsistency becomes structured artifacts (ghosting, edge echoes, residual fixed-pattern signals). Deterministic frame start, exposure gate, and readout window alignment is mandatory.
  • Latency: Fluoro is highly sensitive to end-to-end delay and delay jitter (operator feedback). DSA can tolerate slightly more latency, but cannot tolerate inconsistent processing paths that change frame alignment.

Practical design intent for this page: define the acquisition chain "control points" that must be deterministic and measurable—exposure trigger, frame sync, sampling clock quality, frame IDs/timestamps, and the end-to-end latency budget—so real-time ISP and DSA subtraction operate on repeatable inputs.

[Figure F1 diagram: Exposure-to-Display Acquisition Chain (Fluoro + DSA). Block diagram from exposure pulse/gate through detector readout, AFE/ADC, and FPGA/ISP pre-processing to display/record, with an optional DSA subtraction branch. Sync points and trigger/gate controls are marked, along with the exposure-to-display (E2D) latency bracket and monitor points: frame start, exposure gate, ADC sample clock, frame ID/timestamp, counters + logs.]

Figure F1: Use the marked sync/trigger points and the E2D bracket to keep live fluoroscopy responsive and DSA subtraction repeatable.

H2-2 · End-to-end signal & timing budget (the numbers that matter)

Budgeting turns “image quality and real-time feel” into measurable constraints. The minimum set is: throughput (data rate headroom), timing windows (exposure/integration/readout alignment), and exposure-to-display latency (including latency jitter). These budgets prevent hidden frame drops, misalignment, and subtraction instability.

Budget primitives (define these first)

  • Nx, Ny: active pixels (or ROI) used by the acquisition path
  • bpp: bits per pixel (including packing/overhead as implemented)
  • fps: frame rate (steady for fluoro; consistent sequencing for DSA)
  • Texp: exposure pulse / integration window
  • Tread: detector readout window aligned to the frame
  • Te2d: exposure-to-display latency (track both average and jitter)

Throughput budget (practical, not idealized)

Start with Throughput ≈ Nx × Ny × bpp × fps, then account for line/blanking overhead, framing headers, alignment padding, and bursty readout peaks. Headroom matters: when throughput is tight, systems often show latency wobble and intermittent frame instability long before “obvious drops” appear.

Item | What to include | Evidence / hook
Raw data rate | Nx × Ny × bpp × fps | Config + counter sanity checks
Protocol overhead | Headers, padding, blanking, alignment | Link utilization, payload ratio
Peak vs average | Burst readout and buffer refill behavior | FIFO fill level over time
Headroom target | Margin to avoid frame-time coupling | Drop counter = 0, stable latency distribution
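As a quick sanity check, the budget above can be scripted. A minimal sketch, assuming illustrative numbers (2k × 2k detector, 16 bpp, 30 fps), a placeholder overhead factor, and a hypothetical 10 Gb/s link budget:

```python
# Minimal throughput-budget sketch (illustrative numbers, not a real detector spec).
def throughput_budget(nx, ny, bpp, fps, overhead=1.15, link_capacity_gbps=10.0):
    """Return raw rate, effective rate, and headroom against a link budget."""
    raw_bps = nx * ny * bpp * fps                 # raw payload rate (bits/s)
    eff_bps = raw_bps * overhead                  # headers, padding, blanking (assumed factor)
    headroom = 1.0 - eff_bps / (link_capacity_gbps * 1e9)
    return raw_bps, eff_bps, headroom

raw, eff, headroom = throughput_budget(nx=2048, ny=2048, bpp=16, fps=30)
print(f"raw      = {raw / 1e9:.2f} Gb/s")
print(f"eff      = {eff / 1e9:.2f} Gb/s (with overhead)")
print(f"headroom = {headroom:.1%}  (keep clear margin to avoid frame-time coupling)")
```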

Timing window & latency budget (measure what matters)

  • Timing window: keep exposure gating aligned to the intended integration/readout windows so frame start and exposure timing do not drift.
  • E2D latency: split the exposure-to-display path into measurable contributors (ADC pipeline → ISP stages → buffers → output).
  • Observability: place a frame ID + timestamp at acquisition entry and at presentation output; add event counters for sync loss, buffer overflow, and trigger miss to make instability diagnosable.
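To make the observability bullet concrete, here is a minimal sketch that pairs acquisition-entry and presentation timestamps by frame ID and reports average latency plus jitter. Field names, the millisecond units, and the sample values are illustrative:

```python
import statistics

# Minimal E2D latency sketch: pair input/output timestamps by frame ID,
# then report average latency, a coarse p95, jitter, and missing frames.
def e2d_latency(input_log, output_log):
    """input_log/output_log: dicts of frame_id -> timestamp in milliseconds."""
    latencies = sorted(output_log[fid] - input_log[fid]
                       for fid in input_log if fid in output_log)
    missing = set(input_log) - set(output_log)    # frames that never presented
    return {
        "avg_ms": statistics.mean(latencies),
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],  # coarse nearest-rank p95
        "jitter_ms": statistics.pstdev(latencies),
        "missing_frames": sorted(missing),
    }

inp = {1: 0.0, 2: 33.3, 3: 66.6, 4: 99.9}
out = {1: 42.0, 2: 75.0, 3: 109.5}                # frame 4 never reached display
print(e2d_latency(inp, out))
```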
[Figure F2 diagram: E2D Latency Budget (Exposure → Display). Timeline of contributors (exposure gate, readout window, ADC pipeline, ISP stages, buffers, output path) with measurement hooks (input and presented timestamps + frame IDs, sync/trigger miss counters, latency histogram with average and jitter) and jitter watch-outs (buffer waterline coupling, non-deterministic processing paths, trigger misalignment).]

Figure F2: Use a contributor-based latency budget and measurement hooks to keep fluoroscopy responsive and DSA frames repeatably aligned.

H2-3 · High-speed ADC selection: resolution vs speed vs linearity

In fluoroscopy and DSA, the “best” ADC is the one that meets effective resolution at the actual operating point. A nominal 16-bit label is meaningless unless ENOB and SFDR hold at the target sampling rate and input spectrum. For DSA subtraction, repeatable noise floor and linear behavior matter as much as raw speed, because non-repeatable offsets and stable spurs can survive subtraction as structured residue.

Selection rules that prevent late-stage surprises

  • Anchor ENOB to conditions: require ENOB at your sampling rate (Fs) and representative input frequencies (fin), not only a low-frequency datasheet headline.
  • Use SFDR as a subtraction risk indicator: stable spurs and nonlinearity products can remain visible after DSA subtraction, even when average noise looks acceptable.
  • Check linearity where it matters: INL/DNL and harmonic behavior drive how cleanly background cancels during subtraction.
  • Validate the input chain as a system: driver bandwidth/distortion, input common-mode, and anti-alias filtering can degrade ENOB/SFDR more than the ADC itself.
  • Choose speed vs ENOB deliberately: higher sampling rate helps fast motion capture; higher ENOB/linearity helps subtraction stability and low-contrast vessel visibility.
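As a numeric anchor for the first rule above, the standard conversion from measured SINAD to effective bits is ENOB = (SINAD − 1.76 dB) / 6.02. A minimal sketch; the example SINAD-vs-fin values are illustrative, not datasheet figures:

```python
# Standard SINAD-to-ENOB conversion: ENOB = (SINAD_dB - 1.76) / 6.02.
def enob(sinad_db):
    return (sinad_db - 1.76) / 6.02

# Illustrative: a "16-bit" part whose SINAD degrades with input frequency.
for fin_mhz, sinad_db in [(1, 78.0), (10, 74.0), (50, 68.0)]:
    print(f"fin = {fin_mhz:>2} MHz  SINAD = {sinad_db:.1f} dB  "
          f"ENOB = {enob(sinad_db):.2f} bits")
```

The output (roughly 12.7, 12.0, and 11.0 bits) shows why the nominal bit label means little until ENOB is confirmed at the operating Fs/fin.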

Pipeline vs SAR: practical boundaries

ADC family | Typically chosen when | DSA/fluoro relevance | What to verify
Pipeline | Higher throughput and wider input bandwidth are needed; some pipeline latency is acceptable. | Strong option for high-rate acquisition; must still prove SFDR/linearity so subtraction does not leave residue. | ENOB/SFDR at target fin, latency stability, channel matching, clock-jitter sensitivity.
SAR (multi-channel) | Deterministic conversion timing and efficiency are needed; parallel channels cover throughput. | Can be attractive for stable, repeatable behavior; ensure input bandwidth and distortion remain adequate at speed. | THD/SFDR at operating amplitude, driver settling, input kickback, and per-channel gain/offset stability.

Key reminder: The label "14–16 bit" is not the requirement; the requirement is ENOB + SFDR + linearity at the actual Fs/fin and across the operating range (temperature, gain settings, and frame timing).

A reliable decision pattern is to start from the imaging mode: if motion/real-time feedback dominates, prioritize sampling bandwidth and deterministic throughput. If subtraction stability and low-contrast visibility dominate, prioritize ENOB/linearity and spur cleanliness at the operating point—even when that means not chasing the highest sampling rate.

[Figure F3 diagram: ADC Architecture Impact Matrix (High-Speed Acquisition). Pipeline vs SAR compared on ENOB at speed, latency, power, linearity/SFDR, and integration complexity; compare at the operating point (ENOB at Fs/fin + SFDR + linearity). For DSA, spur cleanliness and linearity often matter more than chasing maximum Fs.]

Figure F3: A quick decision matrix—tie ENOB/SFDR to operating conditions, then choose architecture based on throughput, latency, and subtraction stability.

H2-4 · Aperture jitter: why “clock cleanliness” dominates high-frequency SNR

Aperture jitter turns sampling time uncertainty into amplitude error. The key rule is simple: the higher the input frequency, the more clock jitter behaves like a noise source. As fin rises, jitter can set a hard SNR ceiling even when the ADC’s nominal resolution looks sufficient.

Practical budget and acceptance (no PLL deep dive)

  • Budget total RMS jitter for the sampling clock at the ADC pins (source + distribution + fanout).
  • Validate with SNR vs fin: if SNR drops predictably as fin increases, jitter is likely the dominant limit.
  • Prove improvement by substitution: swapping to a cleaner clock source/tree should move the SNR ceiling upward if jitter was the bottleneck.
  • Document evidence: record test conditions (Fs, fin, amplitude, temperature range) and provide an SNR/SFDR snapshot (or key points) as the acceptance proof.
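The budget and the SNR-vs-fin check both follow from the standard jitter-limited SNR bound for a full-scale sine input, SNR = −20·log10(2π·fin·tj). A minimal sketch tabulating the ceiling; the jitter and frequency values are illustrative:

```python
import math

# Jitter-limited SNR ceiling for a full-scale sine: SNR = -20*log10(2*pi*fin*tj).
def snr_jitter_db(fin_hz, tj_rms_s):
    return -20.0 * math.log10(2.0 * math.pi * fin_hz * tj_rms_s)

for tj_s in (100e-15, 500e-15, 1e-12):            # total RMS jitter at the ADC pins
    for fin in (1e6, 10e6, 50e6):                 # representative input frequencies
        print(f"tj = {tj_s * 1e15:>5.0f} fs  fin = {fin / 1e6:>4.0f} MHz  "
              f"SNR ceiling = {snr_jitter_db(fin, tj_s):6.1f} dB")
```

At 1 ps RMS and fin = 50 MHz the ceiling is about 70 dB (roughly 11.3 effective bits), regardless of the ADC's nominal resolution, which is why the ceiling moves with clock cleanliness rather than bit depth.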

Common misreads to avoid

  • “The ADC bit depth is not enough”: at higher fin, jitter can cap SNR well below the nominal bit level.
  • “PLL spec looks fine”: effective jitter at the ADC depends on the full clock path and must be verified at the sampling point.
[Figure F4 diagram: Aperture Jitter → Sampling Error → SNR Loss. Clock RMS jitter at the ADC pins produces sampling-time uncertainty Δt; at higher fin this becomes amplitude error, raising the noise floor and reducing SNR. Proof hooks: measure SNR/SFDR at multiple fin points, swap to a cleaner clock tree and look for an SNR ceiling shift, confirm effective jitter at the sampling point, and record Fs, fin, amplitude, and temperature as acceptance evidence.]

Figure F4: Jitter creates time uncertainty; at higher fin it becomes amplitude noise and raises the effective noise floor, reducing SNR.

H2-5 · Clock tree & sync points you must control (without turning into a timing textbook)

A stable acquisition chain is defined by specific sync points, not by a single “good clock.” These points must be controllable and observable so every frame can be aligned, verified, and reproduced—especially for DSA subtraction where small timing drift becomes visible residue. This section focuses on what must be controlled and proven, not internal PLL design or network timing protocols.

Minimum sync/monitor checklist (use as a spec template)

Point | Control | Monitor | If it slips | Acceptance proof
Frame start | Deterministic frame boundary | Frame counter continuity | DSA echo, intermittent misalignment | No drop/reorder counters; stable phase
Line start | Stable line cadence | Line marker / line counter | Banding, tearing, scan artifacts | No drift under fixed conditions
Exposure pulse | Trigger and gate timing | Trigger miss / duplicate count | Brightness pumping, global flicker | Pulse↔integration phase is stable
ADC sample clock | Low effective RMS jitter at ADC pins | SNR/SFDR vs fin checkpoints | High-frequency detail loss | Meets SNR at target fin/Fs
ISP boundaries | Deterministic pipeline path | Per-stage latency, FIFO level | Latency wobble, subtraction instability | Latency histogram is narrow
Frame ID / timestamp | Traceability end-to-end | Input↔output mapping | Hidden reorder/duplication | One-to-one mapping, error = 0

A practical rule: every “must-control” sync point should also have a “must-monitor” hook. When a symptom appears (DSA residue, flicker, banding, latency jitter), the fastest diagnosis is to correlate image behavior with event counters and timestamps captured at these points. Cross-device time sync and protocol details belong on the dedicated timing page; the focus here is the acquisition chain’s internal determinism.
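To show what a monitor hook can look like in practice, here is a minimal sketch of FrameID continuity checking at an output boundary. The counter names and the sample sequence are illustrative:

```python
# Minimal FrameID continuity check: count gaps, duplicates, and reorders
# in the ID sequence observed at an output boundary (names illustrative).
def check_continuity(frame_ids):
    seen, counters = set(), {"gap": 0, "dup": 0, "reorder": 0}
    prev = None
    for fid in frame_ids:
        if fid in seen:
            counters["dup"] += 1                  # same ID presented twice
        elif prev is not None:
            if fid < prev:
                counters["reorder"] += 1          # ID arrived behind a later one
            elif fid > prev + 1:
                counters["gap"] += 1              # one event per discontinuity
        seen.add(fid)
        prev = fid if prev is None else max(prev, fid)
    return counters

print(check_continuity([1, 2, 3, 5, 6, 6, 4, 7]))  # {'gap': 1, 'dup': 1, 'reorder': 1}
```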

[Figure F5 diagram: Sync Points (●) and Monitor Points (■) in the Acquisition Chain. Exposure pulse/gate, detector readout, AFE/ADC sample, FPGA/ISP stages, and output display, with sync markers (ExpPulse, FrameStart, LineStart, ADCclk, StageBoundary, FrameID/TS) paired to monitor hooks (trigger counter, frame/line counters, SNR/SFDR check, latency + FIFO logs, presented FrameID/TS). A sync point without a monitor hook becomes a "black box" during DSA artifact investigations.]

Figure F5: Mark and instrument these sync/monitor points to keep frame alignment deterministic without turning the design into a timing textbook.

H2-6 · Trigger & exposure gating: aligning detector integration with X-ray pulses

With pulsed X-ray, image stability depends on a strict relationship between three windows: exposure pulse, detector integration, and readout gate. When these drift or overlap incorrectly, the symptom is not subtle—DSA can show subtraction residue, and fluoroscopy can show global flicker, ghosting, or brightness pumping. The goal is to keep the overlap deterministic and to log when alignment slips.

Correct window relationships (what “aligned” means)

  • Integration covers exposure: the intended integration window must contain the X-ray pulse with defined margin.
  • Blanking removes transients: blank during switching/settling so partial or invalid samples do not enter the frame.
  • Readout follows predictably: the readout gate should occur in a fixed phase relationship after integration so frame-to-frame alignment does not drift.
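A minimal sketch of these three rules as an automated check, treating each window as a (start, end) pair in microseconds; the margin and phase tolerances are illustrative:

```python
# Windows as (start_us, end_us) tuples. Checks the alignment rules above
# with illustrative margin and readout-phase tolerances.
def check_alignment(exposure, integration, readout, margin_us=5.0,
                    expected_phase_us=100.0, phase_tol_us=1.0):
    errors = []
    # 1) Integration must contain the exposure pulse with defined margin.
    if not (integration[0] + margin_us <= exposure[0]
            and exposure[1] <= integration[1] - margin_us):
        errors.append("exposure not contained in integration with margin")
    # 2) Readout must follow integration at a fixed phase (no drift).
    phase = readout[0] - integration[1]
    if abs(phase - expected_phase_us) > phase_tol_us:
        errors.append(f"readout phase slipped: {phase:.2f} us vs {expected_phase_us} us")
    return errors or ["aligned"]

print(check_alignment(exposure=(120, 180), integration=(100, 200), readout=(300, 500)))
print(check_alignment(exposure=(98, 180), integration=(100, 200), readout=(305, 505)))
```

In a real system the same containment and phase checks would run against captured gate timestamps per frame, with each violation logged as a window-slip event.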

Fast diagnosis: exposure mismatch vs readout-window drift

  • Global brightness pumping (whole-frame mean level jumps) usually points to exposure/integration misalignment or trigger misses/duplicates. Verify pulse counters and integration gate phase.
  • Edge ghosting / subtraction echoes (structured residue) often points to frame/readout window drift or inconsistent frame alignment. Verify frame start markers and readout gate phase stability.
  • Lag-like residue that worsens with motion suggests alignment errors are being magnified by scene changes. Compare a static phantom vs motion to confirm.

The most reliable proof uses both image-domain evidence and event-domain evidence: track per-frame mean/variance (for flicker and pumping) while logging trigger miss counts, window-slip events, and frame ID/timestamp mapping from acquisition entry to displayed output.

[Figure F6 diagram: Pulse / Integration / Readout Alignment (Pulsed X-ray). Stacked timing tracks (exposure pulse, integration window, blanking, readout gate) comparing correct overlap on the left with misalignment drift on the right; typical symptoms are ghost/lag, brightness pumping, and DSA residue. Correlate frame mean/variance with trigger-miss and window-slip counters to separate exposure misalignment from readout drift.]

Figure F6: Keep exposure, integration, blanking and readout gates in a fixed phase relationship; drift produces visible flicker and subtraction artifacts.

H2-7 · Real-time ISP pre-processing: what must be deterministic for DSA subtraction

DSA subtraction does not forgive “almost the same” frames. Any frame-to-frame variation in correction path, parameters, timing, or geometry becomes a visible residue after subtraction. The goal on the acquisition side is not to maximize processing, but to keep processing repeatable: same path, same parameters, same latency, and stable noise statistics.

Determinism checklist (the minimum DSA-safe behavior)

  • Fixed correction path: frames must not take different pipelines based on load or scene conditions.
  • Stable parameter set: offset/gain and defect handling rules must remain constant for a given acquisition mode.
  • Stationary noise: noise behavior should be stable across frames (no periodic spikes or drifting baseline).
  • Geometric stability: processing must not introduce frame-dependent scaling/cropping shifts inside the subtraction window.
  • Traceability: embed or log FrameID + timestamp + ISP config signature so subtraction failures can be correlated to events.

What “acquisition-side ISP” must keep stable

Stage | Why it matters for subtraction | If not deterministic | Proof / check
Offset / black level | Prevents low-frequency residue and drifting background | Gray haze, unstable baseline in diff frames | Frame mean stays stable under fixed scene
Gain consistency | Keeps contrast response constant across mask/live frames | Brightness pumping; subtraction strength varies | Per-frame histogram/variance is stable
Defect pixel handling | Avoids flicker-like points that subtraction amplifies | Sparkle, jumping dots, localized residue | Fixed map/rules; no frame-dependent toggling
Flat-field (concept-level) | Reduces fixed-pattern effects only if reference stays stable | Structured shadows/stripes that remain after subtraction | Diff-frame energy does not show periodic peaks

Minimal acceptance method (fast and reproducible)

  • Use a fixed scene (phantom) and record FrameID + timestamp + ISP config signature per frame.
  • Track frame mean/variance and diff-frame energy (frame n − frame n−1). Look for drift or periodic spikes.
  • If switching to a “more adaptive” ISP increases diff-frame energy or residue, determinism is being violated even if images look fine frame-by-frame.
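A minimal sketch of this acceptance method using numpy on synthetic frames; the injected baseline step, frame sizes, and noise levels are illustrative stand-ins for a fixed-scene phantom capture:

```python
import numpy as np

# Minimal determinism check: per-frame mean plus diff-frame energy over a
# fixed-scene capture (synthetic frames; thresholds/values illustrative).
rng = np.random.default_rng(0)
frames = [1000 + rng.normal(0, 2, size=(64, 64)) for _ in range(50)]
frames[30] += 5.0                                  # injected baseline step ("drift event")

means = np.array([f.mean() for f in frames])
diff_energy = np.array([np.mean((frames[i] - frames[i - 1]) ** 2)
                        for i in range(1, len(frames))])

print(f"mean drift (max-min): {means.max() - means.min():.2f} DN")
print(f"diff energy median:   {np.median(diff_energy):.2f}")
print(f"diff energy max:      {diff_energy.max():.2f}  "
      f"(spikes flag non-deterministic frames)")
```

The injected step shows up as a clear spike in diff-frame energy even though each individual frame still "looks fine," which is exactly the failure mode the checklist targets.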
[Figure F7 diagram: Real-time ISP Pre-processing for DSA (Acquisition Side). Pipeline from RAW frames (FrameID + TS) through offset/gain correction (stable params), defect pixel handling (fixed rules), and flat-field (stable reference) into subtraction and output; a bracket marks the stages that must be stable (same path, same params, same latency), with evidence hooks for FrameID + timestamp continuity, config signature (params version), and diff-frame energy/residue counters.]

Figure F7: Subtraction amplifies mismatch—keep acquisition-side corrections deterministic and prove it with FrameID/TS and stable statistics.

H2-8 · Artifact map & quick triage: where banding/lag/ghosting usually comes from

Fast triage works best when it starts with what can be disproven quickly. The recommended order is Timing/throughput → Analog chain → Algorithm/processing. Many "algorithm-looking" artifacts are timing drift, dropped/reordered frames, or non-deterministic ISP parameters.

Quick triage table (symptom → likely source → what to measure)

Symptom | Most likely source | Minimal measurements | Fast falsification
Banding / stripes | Line timing drift, readout cadence, sync marker instability | Line counter/marker stability, readout window phase, frame-internal periodicity | If markers and phase are stable, move to analog noise and clipping checks
Lag / after-image | Exposure/integration misalignment, blanking/readout gating errors, frame inconsistency | Pulse/integration/readout waveforms, trigger-miss & window-slip counters, diff-frame energy | If waveforms align and counters stay zero, check analog settling and saturation
Ghosting / edge echo (DSA) | Frame alignment drift, non-deterministic ISP params/path, hidden reorder | FrameID/TS one-to-one mapping, ISP config signature stability, latency histogram | If IDs and latency are deterministic, investigate analog chain stability next
Dropped frames / jumps | Throughput shortfall, buffer overflow, trigger jitter, path switching under load | Drop/reorder counters, buffer watermark, end-to-end latency distribution | If counters show events, fix timing/throughput before tuning processing

Triage principle: lock the domain first

  • Timing domain: dropped/reordered frames, phase slip, window drift, sync loss (counters + waveforms).
  • Analog domain: noise floor changes, clipping, drift, settling issues (SNR/variance + saturation flags).
  • Algorithm/processing domain: only after timing and analog causes are disproven (avoid “tuning around” a drift problem).
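A minimal sketch of the domain-lock order as a decision helper that inspects the evidence named above; all field names and thresholds are illustrative:

```python
# Triage in the fixed order: timing -> analog -> algorithm (names illustrative).
def lock_domain(evidence):
    """evidence: dict of measured facts from counters, waveforms, and stats."""
    if (evidence.get("drop_count", 0) or evidence.get("reorder_count", 0)
            or evidence.get("window_slip_events", 0) or evidence.get("sync_loss", 0)):
        return "timing"       # counters/waveforms disprove determinism first
    if (evidence.get("noise_floor_shift_db", 0.0) > 1.0
            or evidence.get("saturation_flags", 0)):
        return "analog"       # SNR/variance shifts or clipping evidence
    return "algorithm"        # only after timing and analog are disproven

print(lock_domain({"drop_count": 0, "reorder_count": 2}))   # -> timing
print(lock_domain({"noise_floor_shift_db": 2.5}))           # -> analog
print(lock_domain({}))                                      # -> algorithm
```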
[Figure F8 diagram: Artifact Map → Root Causes → Minimal Checks. Artifacts (banding/stripes, lag/after-image, ghosting/edge echo, dropped frames/jumps) mapped to root-cause classes (timing & sync: phase slip, window drift, reorder; analog chain: noise, drift, clipping, settling; determinism & ISP: params change, path switching, latency wobble) and minimal checks (drop/miss/slip counters, pulse/integrate/readout waveforms, SNR/variance stability, FrameID mapping).]

Figure F8: A triage tree that narrows artifacts to timing, analog, or determinism causes using the smallest set of measurements.

H2-9 · Data path throughput & buffering: avoid hidden frame drops

Many acquisition chains do not “announce” frame drops. Under load they often switch to silent behaviors: buffering grows, latency wobbles, frames get duplicated/reordered, or a drop happens only occasionally. For subtraction stability, the requirement is simple: prove end-to-end determinism with watermarks + frame counters + event logs, not just “the image still moves.”

Observability trio (minimum set that catches hidden drops)

  • Buffer watermarks: log FIFO fill level statistics (min/avg/max) plus high-water events. Watermarks reveal backpressure and sustained overload.
  • Frame counters (one-to-one mapping): track input FrameID and output FrameID. Count gaps, duplicates, and reorder events explicitly.
  • Event log (evidence chain): record buffer_overflow, backpressure_assert, frame_drop, frame_repeat, reorder_detected, late_frame, deadline_miss. Correlate each event with timestamp and current watermark.

Fast falsification: throughput issue or not?

  • If FIFO fill level repeatedly hits or sticks near High WM and frame counters show gaps/dup/reorder, the root cause is data path overload/backpressure, not “image processing.”
  • If watermarks are stable and counters are clean but artifacts remain, move to timing alignment and acquisition-side determinism checks.
  • Latency should be treated like a signal: a multi-peak latency histogram often indicates buffering mode switches under load.

Minimal acceptance (quick but meaningful)

  • Run a sustained capture at target frame rate and processing configuration.
  • Pass condition: 0 drop, 0 reorder, 0 dup; watermark does not reach High WM; latency distribution remains narrow and single-mode.
  • If any of the above fails, fix throughput/backpressure behavior before tuning subtraction or ISP parameters.
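The pass condition can be scripted directly. A minimal sketch over a captured log bundle; the field names, watermark values, and jitter limit are illustrative:

```python
# Minimal sustained-run acceptance: counters clean, watermark headroom kept,
# latency distribution narrow (names and thresholds illustrative).
def accept_run(counters, fifo_max_level, high_wm, latencies_ms, jitter_limit_ms=2.0):
    failures = []
    for name in ("drop", "reorder", "dup"):
        if counters.get(name, 0) != 0:
            failures.append(f"{name} != 0")
    if fifo_max_level >= high_wm:
        failures.append("FIFO reached High WM (backpressure risk)")
    spread = max(latencies_ms) - min(latencies_ms)
    if spread > jitter_limit_ms:
        failures.append(f"latency spread {spread:.1f} ms (multi-mode suspect)")
    return ("PASS", []) if not failures else ("FAIL", failures)

print(accept_run({"drop": 0, "reorder": 0, "dup": 0},
                 fifo_max_level=512, high_wm=768,
                 latencies_ms=[41.8, 42.1, 42.4, 42.0]))
```

A fuller version would test the latency histogram for multi-modality rather than raw spread, but the same principle holds: the run passes on recorded evidence, not on "the image still moves."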
[Figure F9 diagram: Watermark + Frame Counter: Detect Hidden Drops. Panel A: FIFO fill level vs time with Low/High watermark lines, backpressure, and overflow-risk regions. Panel B: FrameID continuity over time with gap and duplicate/reorder markers correlated to watermark peaks; FIFO level and FrameID continuity must agree.]

Figure F9: When FIFO watermarks rise, frame counters must still remain one-to-one; otherwise hidden drops or duplicates are present.

H2-10 · Calibration & drift control that matters for subtraction stability

Subtraction turns slow drift into visible residue. Even when each frame looks acceptable, offset, gain, temperature drift, and reference movement can cause mask/live mismatch, background haze, or low-frequency artifacts after subtraction. A robust system treats calibration as an auditable control loop: sources → apply → verify → log/version.

What drift matters most (in subtraction terms)

  • Offset drift: baseline moves → subtraction leaves gray haze and low-frequency residue.
  • Gain drift: response changes → contrast pumping and unstable vessel visibility.
  • Reference drift: global scale shifts → mask/live mismatch that grows over time or temperature.

Engineering strategy (concept + acceptance)

Control layer | Purpose | Evidence / acceptance
Power-on self-check | Catch gross offset/gain/reference issues before acquisition | PASS/FAIL + CalVersionID recorded; baseline within limits
Periodic calibration | Correct time-dependent drift during long sessions | Frame mean/variance stays stable; diff-frame residue does not grow
Temperature segmentation | Keep parameters stable within temperature zones | TempZoneID + CalVersionID; stable subtraction residue across zones

Traceability rule (non-negotiable for triage)

Every calibration update should produce a CalVersionID (plus timestamp and temperature zone), and every acquired frame should be traceable to the active CalVersionID. If subtraction residue appears, the first question becomes answerable: what changed—temperature, time, or parameters.
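A minimal sketch of an event-based refresh rule plus the per-frame traceability record described above; the temperature-zone width, residue threshold, and CalVersionID format are illustrative:

```python
import time

# Event-based calibration refresh: re-run when the temperature zone changes
# or subtraction residue exceeds a limit (zones/thresholds illustrative).
def temp_zone(temp_c, zone_width_c=5.0):
    return int(temp_c // zone_width_c)

def needs_refresh(temp_c, active_zone, residue_energy, residue_limit=1.5):
    return temp_zone(temp_c) != active_zone or residue_energy > residue_limit

# Per-frame traceability record: every frame maps to the active calibration.
def frame_record(frame_id, cal_version_id, temp_c):
    return {"frame_id": frame_id, "ts": time.time(),
            "cal_version_id": cal_version_id, "temp_zone_id": temp_zone(temp_c)}

print(needs_refresh(temp_c=37.2, active_zone=temp_zone(33.0),
                    residue_energy=0.8))   # True: temperature zone crossed
print(frame_record(frame_id=1042, cal_version_id="CAL-2024-07-v3", temp_c=37.2))
```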

[Figure F10 diagram: Calibration Control Loop (Subtraction Stability). Sources (internal reference, dark/baseline, known level) → apply (offset table, gain table, temp zone) → verify (mean/variance, diff-energy limits) → log & version (CalVersionID, timestamp, TempZoneID, PASS/FAIL), closing the loop; each frame records FrameID + TS + CalVersionID for traceability.]

Figure F10: Calibration becomes a stable control loop only when verification results are logged and versioned, and frames remain traceable to CalVersionID.

H2-11 · Validation checklist (bring-up → production → field)

A fluoroscopy/DSA acquisition chain is “validated” only when each claim is measurable, recorded, and reproducible. The checklist below is written as Measure → Log → Pass/Fail so failures can be triaged to timing/sync, throughput/buffering, or drift/calibration without guesswork.

Bring-up (lock determinism first)

  • Clock lock & reference health → measure lock/LOS/LOL stability over time → log lock events + timestamps → PASS: no lock-loss in sustained run; FAIL: any lock-loss is time-correlated to artifacts.
  • Trigger-to-frame alignment → measure exposure pulse / integration / readout gates alignment → log trigger-miss, late-trigger counters → PASS: zero misses and stable phase; FAIL: phase slip or intermittent misses.
  • FrameID one-to-one mapping → measure input FrameID vs output FrameID continuity → log gap/dup/reorder counters + snapshot at failure → PASS: drop=0, dup=0, reorder=0; FAIL: any non-zero counter.
  • Known pattern & dark-field stability → measure frame mean/variance + banding indicators → log temperature + mode + CalVersionID → PASS: stable statistics; FAIL: drift or periodic spikes.
  • Latency distribution → measure exposure→display latency histogram → log p50/p95/p99 + mode switches → PASS: single-mode, narrow distribution; FAIL: multi-peak wobble indicates buffering mode changes.

Production (consistency + boundary conditions)

Test focus | What to record | Pass/Fail evidence
Unit-to-unit consistency (same mode, same fps/exposure) | watermark min/avg/max; latency p99; counters (drop/dup/reorder); dark stats; CalVersionID | distribution within limits; any outlier includes full log bundle (mode/temp/config snapshot)
Boundary conditions (fps / exposure window / temperature zones) | per-boundary run length (frames/time); counters; lock events; watermark headroom | PASS requires measurable headroom and clean counters; FAIL requires reproducible steps and captured artifacts
Mode transitions (fps switch, exposure mode change) | transition timestamps; short-window counters; latency spikes; CalVersionID changes | PASS: no drops/reorder and no lock-loss; FAIL: any transition-induced event is tagged and repeatable

Field (observability and alarms)

  • Event log bundle: sync-loss, lock-lost, deadline-miss, buffer-overflow, frame drop/dup/reorder (with timestamp + watermark + mode).
  • Drop statistics: per-hour and per-session counters; exportable summary for service triage.
  • Alarms that point to root cause: “sync lost” and “frame continuity broken” are actionable; include last known lock state and buffer headroom.
[Figure F11 diagram: Validation Checklist: Bring-up → Production → Field. Three card columns (bring-up: lock determinism; production: consistency; field: observability), each item tagged Measure / Log / Pass-Fail; attach timestamp + mode + watermark snapshot to every failure so issues can be reproduced quickly.]

Figure F11: A validation checklist is only useful when every line maps to a recorded measurement and a reproducible PASS/FAIL decision.

H2-12 · IC / BOM selection cues (what to ask suppliers)

Supplier conversations move faster when questions demand evidence. The table below is structured as Question → Why it matters → Pass/Fail evidence. It stays focused on the acquisition chain and avoids security/compression components by design.

Example part-number cues (for RFQ shortlists)

These are typical families used in high-speed acquisition chains. Final selection depends on channel count, interface, and the validated jitter/latency budget.

Block | Example IC / part number | Why it is relevant
ADC | AD9653, AD9656, AD9253, AD9680; TI ADS42LB69 | Multi-channel high-speed ADC families commonly used for parallel acquisition and stable timing/latency characterization.
Clocking | LMK04828, LMX2594; AD9528; Si5345 | Jitter cleaner / clock tree devices where integrated jitter and output skew can be specified and monitored.
Trigger / sync I/O | SN65LVDS104; LMK00304; ADN4604 | Differential fanout / distribution / routing devices used to keep trigger propagation deterministic and observable.

Procurement inquiry table (ask for evidence)

ADC
Questions to ask:
1) At the target sample rate, what are ENOB/SNR/SFDR at the stated input frequency points?
2) What input bandwidth and driver requirements apply to maintain linearity at the target rate?
3) Is latency fixed, and does it change across modes or temperature?
4) What are the channel-to-channel gain/phase/latency matching specs for multi-channel alignment?
5) What integrated jitter requirement is recommended for the sampling clock to preserve SNR at the target fin?
Why it matters: DSA subtraction amplifies frame-to-frame noise and mismatch. ADC performance must be specified at the actual rate/fin and remain stable across temperature and mode.
Pass/Fail evidence: Test report with conditions (rate/fin/temp), plots or tables, configuration notes, and a reproducible bring-up script. PASS requires data at the target points, not "typical" marketing values.

Clocking
Questions to ask:
1) What is the integrated jitter (with the integration bandwidth stated) at the required output frequency?
2) What output-to-output skew/phase alignment can be guaranteed (typ/max)?
3) What lock/LOS/LOL monitoring is available, and how is it exposed (pins/status/counters)?
4) What happens on reference loss (holdover/auto-switch behavior), and what gets logged?
5) What is the recommended measurement method so lab results match vendor claims?
Why it matters: Clock cleanliness dominates high-frequency SNR and cross-channel coherence. Monitoring prevents "silent degradation" in long runs.
Pass/Fail evidence: Phase-noise plot plus a jitter integration statement; skew report; lock-loss test log example. PASS requires a documented integration bandwidth and monitoring outputs.

Trigger I/O
Questions to ask:
1) What is the propagation delay (typ/max) and delay jitter across voltage/temperature?
2) Is the trigger path deterministic (no adaptive buffering or mode-dependent latency)?
3) What observability exists (counters, timestamps, monitor outputs) to detect miss/late/glitch events?
4) What input thresholds and noise-immunity guidance are provided for real cabling environments?
5) What test setup is recommended to validate deterministic timing in-system?
Why it matters: Exposure gating and subtraction stability depend on deterministic timing. Without observability, field failures become "mystery artifacts."
Pass/Fail evidence: Delay/jitter characterization report; example counter/log definitions; in-system validation steps. PASS requires measurable counters and repeatable tests.
[Figure F12 diagram: Procurement RFQ: Ask for Evidence. Three blocks (ADC, Clocking, Trigger I/O), each showing example questions, why they matter, and the required evidence (plots, reports, logs, configs, tests); claims without plots/reports/logs at the target conditions are not accepted as pass/fail evidence.]

Figure F12: A supplier RFQ becomes effective when questions require measurable evidence at the target operating points.


H2-13 · FAQs (Fluoroscopy / DSA Acquisition Chain)

1) How do you pick ADC resolution vs sampling rate for fluoroscopy vs DSA?
For fluoroscopy, prioritize sampling rate and deterministic latency so motion stays smooth and timing stays aligned under real-time constraints. For DSA, prioritize stable ENOB/SFDR and drift control because subtraction amplifies frame-to-frame noise and mismatch. A practical rule is: increase rate to meet timing/throughput first, then raise resolution only after sync and buffering are proven stable.
2) What ENOB/SFDR is “enough” for subtraction stability in DSA (and what is wasted)?
ENOB/SFDR is “enough” when subtraction residuals stop improving because the dominant error has moved to timing, drift, or buffering. It is wasted if frame counters show gaps/reorder, if latency is multi-peaked from hidden buffering, or if baseline offset/gain drifts with temperature. Prove determinism and drift control first, then evaluate whether higher ENOB reduces residual energy in repeated runs.
3) When does aperture jitter become the dominant limit, and how do you budget it?
Aperture jitter becomes dominant when input frequency content is high enough that sampling-time uncertainty looks like noise, even if the ADC has strong static specs. Budget jitter by splitting it into reference, cleaner/PLL, distribution, and ADC aperture contributions, then allocating margin for temperature and mode changes. Validate using integrated jitter with a stated integration band and correlate it to SNR changes at representative input frequencies.
4) Why can a “good PLL spec” still produce visible banding or frame-to-frame noise?
A PLL can look “good” on a headline jitter number while still injecting deterministic spurs, reference coupling, or mode-dependent behavior that shows up as banding or periodic residuals. Visible artifacts often track a repeatable period, not random noise. Require the full phase-noise plot, the integration bandwidth for jitter, and a lock/LOS/LOL event record. If banding correlates with sync points or spur frequencies, treat it as a determinism issue, not “more bits.”
5) Which sync points must be deterministic (frame start, exposure pulse, ADC clock), and how do you verify alignment?
Treat these as “must-control” points: frame start, line start, exposure pulse, integration/readout gates, ADC sampling clock, ISP stage boundaries, and FrameID/timestamp insertion. Verify alignment using a three-layer method: observe key pins, prove FrameID continuity end-to-end (no drop/dup/reorder), and log any sync-loss or late-trigger events with timestamps and buffer watermarks. Determinism is proven only when all three agree over long runs.
6) How do you distinguish exposure-gating mismatch vs readout-window drift using only captured frames?
Exposure-gating mismatch typically causes frame-to-frame brightness pumping or ghosting that tracks pulse timing, while readout-window drift often creates structured banding tied to line or frame timing. Use simple frame statistics: per-frame mean, per-line mean profiles, and consecutive-frame subtraction residuals. If artifacts correlate with trigger counters or lock events, suspect gating. If artifacts evolve slowly with temperature or mode dwell time, suspect drift or window misplacement.
7) What pre-processing must be stable before DSA subtraction (offset/gain/flat-field), and what can be adaptive later?
Before subtraction, keep offset correction, gain correction, defect handling, and flat-field-style normalization stable and versioned, because any frame-varying change becomes subtraction residue. After subtraction, display-side mapping and mild enhancement can be adaptive as long as it does not alter the subtraction inputs. Practical guidance is: anything that changes the baseline or scale of raw frames must be fixed, logged, and tied to a CalVersionID during acquisition.
8) What are the top causes of ghosting/lag in DSA, and what quick tests isolate timing vs drift vs processing?
Common causes include exposure/integration misalignment, residual drift in offset/gain, and hidden buffering that breaks determinism. Isolate quickly with a minimal test set: fixed exposure timing, dark-field capture, a short temperature step, and a repeatable live-mask sequence. If the issue tracks trigger events or lock/LOS/LOL logs, it is timing. If it trends with temperature or time, it is drift. If latency distribution becomes multi-peaked, it is buffering or backpressure.
9) How do hidden buffers create “no obvious drops but unstable latency,” and what counters/logs catch it?
Hidden buffers can absorb overload temporarily, then release pressure in bursts, producing stable-looking video but unstable exposure-to-display latency and frame-to-frame timing. This breaks subtraction consistency without obvious “dropped frames.” Catch it with buffer watermarks (min/avg/max and time series), deadline-miss counters, and FrameID/timestamp continuity at multiple boundaries. If watermark oscillates near high thresholds and latency shows multiple peaks, determinism is already compromised.
10) What measurements prove “no frame drops”: frame counters, timestamps, CRCs, or watermarks—and where to place them?
The minimum proof set is FrameID at input and output boundaries plus timestamps at key stage boundaries, because they detect drops, duplicates, reorders, and latency jitter. Watermarks explain why timing changes happen under load and should be logged continuously. CRC helps detect corruption but cannot replace FrameID continuity. Place these at acquisition entry, after deterministic ISP stages, and before display/record, so any mismatch is localized to one segment.
11) How often should offset/gain calibration be refreshed, and what drift symptoms indicate it is overdue?
Refresh cadence should be driven by measured drift, not a fixed calendar. Use periodic refresh plus event-based refresh when temperature crosses defined zones, when modes switch, or when subtraction residual energy rises beyond a threshold. Symptoms include baseline mean drift, growing dark-field variance, residual banding that increases with time, and failure to return to a stable baseline after a temperature step. Always log CalVersionID, TempZoneID, and the refresh trigger reason.
12) What are the best supplier questions for ADC/clock/trigger parts to avoid late-stage image artifacts?
Ask for evidence at your target operating points. For ADCs: ENOB/SFDR at the target sample rate and representative input frequencies, plus channel matching and fixed latency across temperature and modes. For clocking: phase-noise plot, integrated jitter with stated bandwidth, output skew, and monitoring of LOS/LOL events. For trigger I/O: deterministic propagation delay and delay jitter, plus counters or monitor outputs that prove alignment over long runs.