HDR/WDR Imaging Chain
Multi-exposure scheduling → motion-aware fusion → de-flicker/de-banding → HDR-aware denoise → tone mapping & gamma consistency. This page stays focused on the HDR/WDR chain and its measurable evidence—transport, lighting-driver topology, and the full ISP pipeline are covered by reference only.
H2-1. Definition & Boundary: What the HDR/WDR chain owns
Goal: make the HDR/WDR scope unambiguous, and lock the evidence chain (what is visible ↔ what is measurable) so field arguments end with data.
Extractable definition (45–55 words)
An HDR/WDR imaging chain schedules multiple exposures (segmented or interleaved) and fuses them to preserve highlight detail while recovering usable shadow SNR. It then suppresses flicker/banding and noise before tone mapping and gamma tuning for a stable, natural-looking output under motion and lighting constraints.
What this page owns (deep scope)
- Exposure scheduling: segmented vs interleaved/staggered timing, exposure ratio trade-offs, and how timing creates/avoids artifacts.
- Fusion: merge weighting, motion/ghosting suppression, saturation handling, and “what debug stats prove it”.
- De-flicker / de-banding: 50/60 Hz stripe formation (beat with line timing), and practical mitigation knobs.
- HDR-aware denoise: where NR belongs after fusion, and how to avoid plasticky textures and temporal trails.
- Tone mapping & gamma consistency: highlight roll-off, black-crush avoidance, and stable color/gamma across exposure ratios.
References only (one-line link, no deep dive)
- Full ISP pipeline modules (DPC/LSC/distortion, AE/AWB/AF algorithm deep dive) → Image Signal Processor (ISP)
- Strobe/lighting driver hardware and topology → Vision Lighting Controller
- Interface/transport and host capture (CoaXPress/10GigE/USB3/PCIe DMA) → Machine-Vision Interfaces / Frame Grabber
- System-level timing hubs (PTP/1588 architecture) → Sync/Trigger & Timing Hub
- Calibration parameter storage/versioning mechanics → Calibration & NVM
Evidence chain: visible symptom ↔ measurable proof
- Banding / stripes ↔ line timing + flicker frequency + measured stripe period (beat signature).
- Ghosting ↔ motion-mask hit rate + merge confidence / weight-map summary (motion areas rejected or blended).
- Highlight clip ↔ saturation pixel ratio in short exposure + highlight histogram pile-up.
- Black crush ↔ shadow histogram compression + shadow ROI SNR/noise floor after lifting.
- Brightness pumping ↔ exposure window drift + strobe alignment error + per-frame exposure-ID logs.
- Color shift ↔ fused chroma drift vs exposure ratio + tone-curve/gamma mismatch across scenes.
H2-2. When to choose multi-exposure HDR vs alternatives
Goal: prevent “HDR for HDR’s sake”. Multi-exposure is justified only when it recovers information that single exposure + tone mapping cannot, without creating worse motion or flicker artifacts.
3-question decision gate (fast, field-friendly)
- How large is the scene dynamic range? If highlights saturate while shadows are unreadable, HDR/WDR is on the table.
- How much motion exists? Faster motion increases ghosting risk and demands stronger motion-aware fusion (or fewer exposures).
- Is lighting flicker present? 50/60 Hz mains flicker or PWM lighting can dominate banding; scheduling must align or compensate.
When single exposure + tone mapping is enough
- Highlights rarely saturate: histogram right edge does not pile up; specular peaks are small and non-critical.
- Shadows remain usable: shadow ROI SNR is adequate (information is noisy but readable after gentle lift).
- Motion is high: the cost of multi-exposure ghosting exceeds DR benefit (especially with rolling artifacts).
- Flicker is uncontrolled: until flicker/banding evidence is characterized, multi-exposure can amplify stripes.
When dual/triple exposure is justified (must show measurable gains)
- Single-exposure failure proof: persistent highlight saturation ratio (specular/metal/weld points) + lost texture that cannot be reconstructed.
- Shadow readability proof: shadow ROI becomes readable only if long exposure increases effective SNR (not just brightness).
- Information loss proof: key ROI (codes/textures/edges) is missing in single exposure even after best tone mapping.
- Stability proof: with fixed scheduling, banding/brightness pumping decreases (not increases) across representative lighting.
Alternatives (referenced only, no deep dive here)
- Sensor-side WDR modes (device-dependent) → reference: Global-Shutter CMOS Image Sensor
- Controlled illumination (stable strobe/flicker sync) → reference: Vision Lighting Controller
- System transport constraints (bandwidth/latency/host capture) → reference: Machine-Vision Interfaces / Frame Grabber
H2-3. Exposure scheduling: segmented vs interleaved (frame/line level)
Core control surface of HDR/WDR: how exposures are inserted, how the S/L ratio is chosen, and when timing turns into banding or motion artifacts. This chapter focuses on timing geometry and measurable evidence (not sensor pixel/ROIC circuit details).
Scheduling families (what “segmented” and “interleaved” really mean)
- Frame-sequential / Segmented: short (S) and long (L) exposures appear as separate frames or large time blocks. Simple, but the time gap increases ghosting risk under motion.
- Line-interleaved / Staggered: S/L exposures are interleaved within one frame at line level (or small slices). Lower motion gap, but more sensitive to flicker/beat interactions with line timing.
- Hybrid: segmented blocks with interleaving inside each block. Often used to balance motion, bandwidth, and tuning complexity.
Key knobs (what changes, what breaks)
- Exposure ratio (L/S): higher ratio expands dynamic range potential, but increases the chance of hard blending boundaries (halo/edge discontinuity) and motion mismatch.
- Integration times (tS, tL): longer tL improves shadow SNR but increases motion disparity; too-short tS preserves highlights but may raise noise in the “highlight detail” path.
- Temporal spacing (Δt between S and L samples): larger Δt raises ghosting probability; interleaving reduces Δt but can amplify beat banding if flicker is not aligned.
- Readout overlap / blanking budget: overlap affects achievable frame rate and the stability of exposure windows; unstable windows often show up as brightness pumping.
- Rolling shutter geometry (no circuit deep dive): different lines integrate at different times, so flicker/strobe misalignment becomes spatial stripes inside a single frame.
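The exposure-ratio knob above has a simple upper bound worth keeping in mind: each doubling of L/S adds at most one stop of dynamic-range potential. A minimal sketch (hypothetical helper name; ideal-case math that ignores noise floors and blending overlap):

```python
import math

def hdr_extension_stops(t_long_us: float, t_short_us: float) -> float:
    """Upper-bound estimate of extra dynamic range (in stops) from an
    exposure ratio. Real gain is lower once noise floors and the
    blending transition band are accounted for."""
    if t_short_us <= 0 or t_long_us < t_short_us:
        raise ValueError("expect t_long >= t_short > 0")
    return math.log2(t_long_us / t_short_us)

# Example: 16 ms long vs 1 ms short exposure -> 4 extra stops (ideal)
print(hdr_extension_stops(16000, 1000))  # 4.0
```

This is why "smallest ratio that meets the metrics" (see the decision questions later on this page) is usually the right target: the stops gained grow only logarithmically while ghosting and banding risk grow with the ratio.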
Two predictable risks (and when they become inevitable)
- Banding / stripes becomes likely when the sampling windows (line timing + exposure windows) do not align with the lighting waveform (50/60 Hz or PWM). The visible stripe period often matches a beat signature between flicker frequency and line/frame timing.
- Ghosting becomes likely when Δt is large and the scene has motion; the merge must choose between preserving highlight detail and preventing double edges. Large L/S ratios worsen “winner-takes-all” merging near edges.
Beat-banding verification (no formulas, fully testable)
- Measure: flicker frequency (mains/PWM) + line time / frame rate + exposure window placement.
- Observe: stripe period and whether it changes predictably when frame rate or window placement changes.
- Confirm: adjust scheduling (segmented↔interleaved / window shift) and verify stripes move or collapse as expected.
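The verification steps above can be pre-computed: for line-sequential readout, one flicker cycle spans roughly 1/(flicker frequency × line time) rows, so the predicted stripe period should track line-time changes. A sketch under that first-order model (hypothetical helper name):

```python
def stripe_period_lines(flicker_hz: float, line_time_s: float) -> float:
    """Predicted spatial period (in sensor rows) of flicker banding for
    rolling/line-sequential readout: one flicker cycle maps onto
    1 / (flicker_hz * line_time) rows. Note that mains lighting flickers
    at twice the mains frequency (100 Hz for 50 Hz mains)."""
    return 1.0 / (flicker_hz * line_time_s)

# 100 Hz flicker (50 Hz mains), 10 us line time -> stripes every 1000 rows
print(stripe_period_lines(100.0, 10e-6))   # 1000.0
# Halving the line time should double the stripe period -> confirms beat coupling
print(stripe_period_lines(100.0, 5e-6))    # 2000.0
```

If the measured stripe period does not move as this model predicts when line time or FPS changes, suspect the lighting waveform (e.g. PWM) rather than scan/window coupling.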
Evidence checklist (settle arguments with data)
- Timing evidence: line period / frame period, exposure-active window(s), strobe timing, lighting flicker frequency.
- Statistic evidence: histogram/RAW stats per exposure ID (S vs L), stripe period measurement vs timing change, merge debug summaries (if available).
- Minimum “first capture” set: (1) exposure-active + frame-valid waveform, (2) strobe/trigger waveform, (3) S/L histograms, (4) stripe period estimate.
H2-4. Sync hooks that matter (trigger, strobe alignment, rolling artifacts)
HDR/WDR reliability depends on a few simple sync hooks: trigger, strobe alignment, and exposure notification. Many “banding problems” are timing misalignment problems—measurable on waveforms—before any algorithm tuning. This chapter does not cover PTP/1588 timing-hub systems; it stays at the interface hooks level.
Sync hooks (what must be observable and alignable)
- Trigger-in (external trigger): defines frame start / exposure schedule anchor. Prevents schedule drift and enables repeatable regression.
- Free-run (internal timing): simplest wiring, but schedule stability depends on internal clocks; drift shows as brightness pumping or inconsistent banding patterns.
- Strobe-out / light sync: ensures illumination happens inside the intended exposure window (especially for S exposure preserving highlights).
- Exposure notify / exposure ID: logs which exposure (S/L) is active per frame/segment; required to correlate artifacts with schedule state.
Alignment goals → failure signatures
- Goal: strobe pulse fully inside the target exposure window → if misaligned: banding, brightness pumping, local over/under exposure.
- Goal: stable frame counter and exposure ID → if unstable: fusion weights “chase” timing drift, causing inconsistent output across frames.
- Goal: consistent scheduling across cameras (same exposure ratio and exposure ID cadence) → if inconsistent: multi-camera brightness/color mismatch and stereo/3D inconsistency.
Note: Multi-camera consistency here means identical exposure cadence and alignment. Time-distribution network design belongs to the Sync/Trigger & Timing Hub page.
Evidence capture (2 waveforms + 3 logs)
- Waveform A: Trigger-in or Strobe pulse.
- Waveform B: Sensor exposure-active / frame-valid (or equivalent timing pin / debug output).
- Log 1: frame counter (monotonic, no jumps).
- Log 2: exposure ID (S/L or segment index per frame).
- Log 3: dropped frames / schedule error counters (if available).
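The three logs above lend themselves to an automated sanity check. A sketch assuming a hypothetical log format (one frame counter and one exposure ID per captured frame; real log fields and cadences vary by vendor):

```python
def check_schedule_logs(frame_counters, exposure_ids, cadence=("S", "L")):
    """Validate schedule logs: frame counter must be monotonic with no
    jumps (Log 1), and exposure IDs must follow the expected S/L cadence
    (Log 2). Returns human-readable findings; empty list means clean."""
    findings = []
    # Log 1: detect dropped/duplicated frames.
    for prev, cur in zip(frame_counters, frame_counters[1:]):
        if cur != prev + 1:
            findings.append(f"frame counter jump: {prev} -> {cur}")
    # Log 2: detect cadence slips (fusion weights would "chase" these).
    for i, eid in enumerate(exposure_ids):
        expected = cadence[i % len(cadence)]
        if eid != expected:
            findings.append(f"frame {i}: exposure ID {eid}, expected {expected}")
    return findings

# Clean capture: no findings.
print(check_schedule_logs([10, 11, 12, 13], ["S", "L", "S", "L"]))  # []
# A dropped frame and a cadence slip are both flagged.
print(check_schedule_logs([10, 11, 13], ["S", "L", "L"]))
```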
H2-5. Fusion pipeline: merge strategy and ghosting suppression
5 fusion modules (each must expose evidence)
- Input normalization: bring S/L into a comparable space (gain/black-level consistency, per-exposure stats labeling).
- Alignment: geometric + temporal consistency so edges land on the same pixels before blending.
- Motion detection: produce a reliability mask (not just “moving/not moving”) to prevent double edges.
- Weighting & decision: a weight map with a controlled transition band (avoid hard seams).
- Saturation & transition: handle near-saturation regions with smooth roll-off (avoid white halos and clipped edges).
Merge weights (what to tune, what breaks)
- What it does: pushes highlight detail toward S and shadow SNR toward L, but the transition band decides whether the merge looks natural.
- What to measure: weight curve shape (smooth vs hard knee), transition width, and whether weights jump at strong edges.
- Failure signatures: halo (bright outline), edge tearing (hard seam), unnatural highlight roll-off.
- First fix: widen/soften the transition band before increasing local contrast or sharpening.
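A soft transition band of the kind described above can be as simple as a smoothstep between two luma thresholds. A sketch with illustrative thresholds (not vendor tuning values):

```python
def long_exposure_weight(luma, band_lo=0.6, band_hi=0.85):
    """Per-pixel blend weight toward the long (L) exposure, with a smooth
    transition band instead of a hard knee. Below band_lo the L exposure
    dominates (shadow SNR); above band_hi the short (S) exposure dominates
    (highlight detail)."""
    if luma <= band_lo:
        return 1.0
    if luma >= band_hi:
        return 0.0
    # smoothstep: C1-continuous roll-off avoids hard seams and halos
    t = (luma - band_lo) / (band_hi - band_lo)
    return 1.0 - (3 * t * t - 2 * t * t * t)

print(long_exposure_weight(0.3))    # 1.0 (shadows -> L)
print(long_exposure_weight(0.725))  # 0.5 (mid-transition)
print(long_exposure_weight(0.95))   # 0.0 (highlights -> S)
```

Widening the [band_lo, band_hi] interval is exactly the "widen/soften the transition band" first fix: it trades a little local contrast near the seam for freedom from halos.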
Motion mask (how to suppress ghosting without killing texture)
- What it does: marks where multi-exposure blending is unreliable; motion areas must become more conservative (often closer to single-exposure behavior).
- What to measure: motion-mask hit-rate (frame/ROI), overlap with edges, and frame-to-frame stability (mask jitter causes pumping).
- Failure signatures: ghosting (double edges), edge drift, texture wash-out (over-conservative masking).
- First fix: stabilize the mask (reduce jitter) before making it more sensitive.
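One common way to build such a reliability mask is to normalize the short exposure to the long exposure's level and threshold the residual difference; the mask mean directly yields the motion-mask hit-rate mentioned above. A linear-domain sketch with an illustrative threshold:

```python
import numpy as np

def motion_reliability(short_img, long_img, exposure_ratio, thresh=0.05):
    """Reliability-mask sketch: scale S to L's exposure, then flag pixels
    whose normalized difference exceeds a threshold as motion (unreliable
    for multi-exposure blending). Inputs are linear-domain float arrays
    in [0, 1]; the threshold is illustrative, not a tuned value."""
    s_norm = np.clip(short_img * exposure_ratio, 0.0, 1.0)
    diff = np.abs(s_norm - long_img)
    reliable = diff <= thresh
    hit_rate = 1.0 - reliable.mean()   # the "motion-mask hit-rate" above
    return reliable, hit_rate

s = np.array([[0.10, 0.10], [0.10, 0.20]])
l = np.array([[0.40, 0.40], [0.40, 0.40]])
mask, rate = motion_reliability(s, l, exposure_ratio=4.0)
print(rate)  # 0.25 -> one of four pixels flagged as motion
```

In production the raw mask is then temporally smoothed; thresholding alone produces exactly the frame-to-frame mask jitter that the "first fix" warns against.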
Saturation handling (why highlight transitions often look wrong)
- What it does: treats “near saturation” as a soft zone (not a hard on/off) so the blend rolls off smoothly at specular edges.
- What to measure: saturated pixel map per exposure ID, near-saturation band size, and whether weights jump at saturated boundaries.
- Failure signatures: clipped highlights, white/gray halos, harsh “cut-out” specular edges.
- First fix: implement soft saturation thresholds + smooth weight constraints around saturation neighborhoods.
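A soft saturation threshold can be a simple linear confidence ramp instead of a hard flag. A sketch with illustrative thresholds:

```python
def saturation_confidence(value, soft_start=0.9, hard_clip=1.0):
    """Soft saturation zone: instead of a hard on/off saturation flag,
    confidence ramps linearly from 1 at soft_start to 0 at hard_clip, so
    blend weights can roll off smoothly at specular edges. Thresholds are
    illustrative, not vendor values."""
    if value <= soft_start:
        return 1.0
    if value >= hard_clip:
        return 0.0
    return (hard_clip - value) / (hard_clip - soft_start)

print(saturation_confidence(0.5))   # 1.0 (fully trusted)
print(saturation_confidence(0.95))  # ~0.5 (inside the soft zone)
print(saturation_confidence(1.0))   # 0.0 (clipped, excluded from merge)
```

Multiplying merge weights by this confidence is the smallest implementation of "smooth weight constraints around saturation neighborhoods."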
Artifact-to-metric mapping (fast root-cause isolation)
- Ghosting dominates → motion-mask hit-rate too low/unstable, large Δt between S and L, edge coverage gaps.
- Halo dominates → transition band too narrow, weights change too aggressively near strong gradients, saturation neighborhood mishandled.
- Edge tearing dominates → alignment error + hard decision (weights snap at edges).
- Brightness pumping → weights/masks drift frame-to-frame (often tied to schedule instability; correlate with exposure ID logs).
H2-6. De-flicker & De-banding: why stripes happen and how to kill them
Mechanism (sampling + scanning, not a lighting-hardware lecture)
- Lighting modulation provides a time-varying brightness waveform (mains flicker or PWM envelope).
- Rolling / line scanning samples that waveform at different times per line, mapping time variation into spatial stripes within a frame.
- Multi-exposure does not create flicker, but it can amplify mismatch: S/L windows can land at different flicker phases, producing stronger stripe contrast or pumping.
Priority order (what to fix first)
- 1) Align exposure windows: shift/lock windows relative to flicker phase; choose interleaving/hybrid scheduling to reduce phase gaps.
- 2) Stabilize the sampling strategy: hold the exposure cadence (exposure ID / ratio) for regression; avoid AE-driven window drift while diagnosing.
- 3) De-banding estimation/filter: only after the dominant stripe frequency/beat behavior is confirmed; aggressive filtering can damage texture.
Evidence chain (make banding measurable)
- Stripe period measurement: quantify stripe spacing and how it moves when frame rate / line time changes.
- Timing parameters: line time, frame rate, exposure windows, and any strobe timing reference.
- Simplified spectrum idea: use frequency peaks to identify a dominant flicker/beat component, then validate on the time axis.
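The "simplified spectrum idea" can be prototyped in a few lines: collapse the frame to a per-row brightness profile and pick the strongest non-DC FFT peak. A sketch (assumes roughly horizontal stripes and a static scene):

```python
import numpy as np

def dominant_stripe_period(frame):
    """Estimate the dominant horizontal-stripe period (in rows) from the
    FFT of the per-row mean brightness, taking the strongest non-DC peak."""
    profile = frame.mean(axis=1)          # collapse columns -> row profile
    profile = profile - profile.mean()    # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(profile))
    peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
    return len(profile) / peak_bin        # period in rows

# Synthetic frame: 256 rows with a stripe every 32 rows
rows = np.arange(256)
frame = 0.5 + 0.1 * np.sin(2 * np.pi * rows / 32.0)[:, None] * np.ones((1, 64))
print(dominant_stripe_period(frame))  # 32.0
```

Run this on captures before and after a small FPS/line-time change: if the reported period shifts as the beat model predicts, the coupling is deterministic and timing alignment (not filtering) is the right first fix.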
Contrast experiments (strongest proof)
- Locked exposure vs Auto exposure: if stripes worsen or “walk” under AE, window drift is a primary contributor.
- Shift strobe/window delay: if stripe phase shifts predictably, alignment dominates (fix timing before tuning fusion/NR).
- Change FPS / line time: if stripe period changes predictably, it confirms deterministic beat coupling.
H2-7. Denoise under HDR: keep detail, avoid plasticky look
Why HDR noise is harder than single-exposure
- Shadow lift amplifies noise: tone and fusion can push low-SNR regions upward, making grain and chroma speckle obvious.
- Non-uniform contribution: different pixels are dominated by S or L exposure depending on weights and saturation handling, producing patchy noise behavior.
- Motion coupling: temporal NR can turn fusion mismatch into trails and “water-like” temporal artifacts when motion reliability is low.
Spatial vs Temporal NR (HDR-coupled guidance)
- Spatial NR: stable and predictable, but risk of over-smooth (texture collapse) if applied uniformly.
- Temporal NR: strong SNR gain, but must follow motion/reliability signals; otherwise trails dominate perceived quality.
- Rule of thumb: in motion-uncertain regions, reduce temporal blending first; then use chroma-heavy spatial NR to control color speckle.
Luma/Chroma separation (a practical way to avoid “plastic”)
- Chroma-first: chroma noise is highly visible yet less critical to fine detail—stronger suppression is usually safer.
- Luma-conservative: luma carries texture and perceived sharpness; keep luma NR weaker in textured/edge regions.
- Edge/texture awareness: use a texture/edge indicator to prevent luma smoothing from collapsing micro-contrast.
Evidence chain (what to measure before changing strength)
- ROI SNR: measure in shadows/midtones/highlights separately; HDR failures often hide in lifted shadows.
- Detail metrics: texture retention (local contrast / texture energy) and edge contrast (before/after NR).
- Temporal artifacts: trail length/strength in motion sequences; correlate with motion-reliability and temporal blend ratio.
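ROI SNR as used above is often measured temporally: capture N frames of a static ROI and compare the per-pixel temporal mean (signal) to the temporal standard deviation (noise). A sketch of that measurement (hypothetical helper name):

```python
import numpy as np

def roi_snr_db(frames_roi):
    """Temporal ROI SNR sketch: stack N captures of a static ROI, use the
    per-pixel temporal mean as signal and temporal std as noise.
    frames_roi is an (N, H, W) float array of a static, stable scene."""
    stack = np.asarray(frames_roi, dtype=np.float64)
    signal = stack.mean(axis=0).mean()
    noise = stack.std(axis=0).mean()
    if noise == 0:
        return float("inf")
    return 20.0 * np.log10(signal / noise)

# Synthetic: mean level 0.2 with noise sigma 0.02 -> roughly 20 dB
rng = np.random.default_rng(0)
frames = 0.2 + 0.02 * rng.standard_normal((50, 16, 16))
print(round(roi_snr_db(frames), 1))
```

Measure shadows, midtones, and highlights as separate ROIs; a lifted-shadow ROI that loses SNR after an NR change is the quantitative form of the "dirty shadows" complaint.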
H2-8. Tone mapping, gamma, and color consistency after fusion
Tone curve essentials (knee / shoulder / gamma)
- Shadows: too aggressive lift → gray haze + noise visibility; too strong suppression → black crush.
- Midtones: insufficient slope → flat image; excessive local boost → unnatural contrast.
- Highlights: hard shoulder → highlight clip; overly soft shoulder → washed highlights (loss of “specular punch”).
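A global curve with a soft shoulder can be sketched with the extended Reinhard operator, where the shoulder parameter is the linear value mapped to white; the optional lift term illustrates the gray-haze risk noted above. Illustrative parameters, not a tuned production curve:

```python
def tone_map(x, shoulder=4.0, lift=0.0):
    """Global tone-curve sketch: extended Reinhard roll-off with an
    optional shadow lift. x is a linear HDR value >= 0; 'shoulder' is the
    linear level mapped to white. Too much 'lift' produces the gray-haze
    failure described above; lift=0 keeps black at black."""
    rolled = x * (1.0 + x / (shoulder * shoulder)) / (1.0 + x)
    return min(1.0, lift + (1.0 - lift) * rolled)

print(tone_map(0.0))  # 0.0      (black preserved: no crush, no haze)
print(tone_map(1.0))  # 0.53125  (midtone placement)
print(tone_map(4.0))  # 1.0      (shoulder value reaches white smoothly)
```

Sweeping `shoulder` reproduces the shoulder trade-off in the list above: a small value clips highlight separation early, a very large value washes out specular punch.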
Global vs Local tone mapping (HDR-specific risk)
- Global: stable and predictable; fewer halos and fewer temporal surprises, but may under-deliver “local punch” in extreme scenes.
- Local: stronger local contrast, but higher risk of halo, edge glow, and region-to-region inconsistency—especially after multi-exposure fusion.
- Practical guardrail: if customers report “glowing edges” or “unnatural micro-contrast,” reduce local strength before re-tuning fusion.
Color consistency under multi-exposure (no AWB deep dive)
- Exposure-mix sensitivity: as L/S weights change across the image, any path mismatch becomes visible as hue shifts.
- Highlight roll-off consistency: color can drift in highlights if tone/gamma treatment differs per channel or per exposure contribution.
- Skin and neutral stability: track gray/skin patches across different exposure ratios; instability is a fusion+tone integration issue, not only “white balance.”
Evidence chain (make “premium look” testable)
- Histogram before/after: confirm shadow lift and highlight compression are intentional (not accidental clipping/crush).
- Gray card trend: observe neutral patches across exposure ratios; watch for consistent bias direction (systematic drift).
- Color card trend: compare color patch shifts as L/S ratio changes; large ratio sensitivity indicates integration instability.
H2-9. Artifact library: symptoms → likely causes (HDR/WDR-specific)
Symptom: Ghosting (motion double image)
Likely modules: Fusion motion decision, exposure S/L timing gap (Δt), weight transition hardness.
First evidence: Motion-mask hit-rate (frame/ROI) + S/L exposure timeline (Δt) for the same output frame.
First fix: Reduce fusion aggressiveness in motion regions (more conservative weights) and shrink Δt by scheduling (segmentation/interleaving choice).
Symptom: Halo (edge glow / bright outline)
Likely modules: Fusion blending around edges, local tone mapping strength, saturation neighborhood handling.
First evidence: Weight-map transition width summary near edges + highlight saturation distribution around edges.
First fix: Smooth / widen the weight transition first, then reduce local tone mapping strength (avoid masking fusion instability with contrast).
Symptom: Banding / Stripe (horizontal/vertical stripes)
Likely modules: Exposure window alignment, flicker coupling, de-banding estimation, timing mismatch.
First evidence: Stripe period measurement + line/frame timing (row time, FPS) to check beat frequency behavior.
First fix: Lock exposure scheduling (stable window/phase) and verify stripe moves predictably when FPS/row time changes.
Symptom: Color shift (hue drift / skin tone instability)
Likely modules: Fusion→tone integration consistency (mixing changes response), channel-consistent tone/gamma application.
First evidence: Gray/color card trend vs exposure ratio (L/S mix sweep) + per-exposure-ID histogram/RAW stats.
First fix: Sweep exposure ratio under a fixed scene and check systematic drift; stabilize fusion-to-tone consistency before “more contrast” tuning.
Symptom: Flicker (brightness pumping / frame-to-frame jumps)
Likely modules: Exposure scheduling stability (exposure ID/ratio jitter), temporal NR stability, local tone temporal consistency.
First evidence: Exposure ID/ratio log across frames + output histogram mean/percentile drift (frame-to-frame).
First fix: Lock scheduling cadence (and exposure ratio) to separate “control loop” instability from tone/temporal artifacts.
Symptom: Black crush / Highlight clip (muddy shadows / blown highlights)
Likely modules: Tone curve (knee/shoulder), fusion saturation handling, inconsistent highlight roll-off.
First evidence: Histogram endpoints (shadow floor / highlight ceiling) + saturation pixel ratio per exposure ID (S/L separated).
First fix: Adjust shoulder/roll-off to avoid hard clipping, then verify shadow region is not crushed before lifting (avoid noise explosion).
H2-10. Measurements that settle arguments (what to capture first)
Case: “Banding is from lighting” vs “banding is from scheduling/window phase”
First 2 captures: (1) Stripe period measurement, (2) line/frame timing (row time, FPS, exposure window placement).
One discriminator: Change FPS/row time slightly: does stripe period shift according to beat-frequency prediction?
First fix: Lock exposure window phase (stable schedule), then re-check stripe stability before applying stronger de-banding filters.
Case: “Fusion algorithm is weak” vs “S/L Δt is too large”
First 2 captures: (1) S/L exposure timeline for the same output frame (Δt), (2) motion mask ratio (frame/ROI).
One discriminator: Reduce Δt (scheduling) while keeping merge weights: does ghosting drop significantly?
First fix: Make motion regions more conservative (weights) and shrink Δt; confirm by replaying the same motion clip.
Case: “Brightness pumping is AE/control loop” vs “tone/temporal instability”
First 2 captures: (1) exposure ID/ratio log across frames, (2) output histogram mean/percentiles across frames.
One discriminator: Lock scheduling cadence and exposure ratio: if pumping disappears, root cause is control/scheduling.
First fix: Stabilize scheduling first; only then tune tone/local/temporal settings for consistency.
Case: “Color drift is white balance” vs “fusion→tone integration inconsistency”
First 2 captures: (1) gray/color card trend vs exposure ratio sweep, (2) per-exposure-ID histogram/RAW stats (S/L separated).
One discriminator: If drift correlates strongly with exposure ratio, integration consistency is the primary suspect.
First fix: Fix fusion-to-tone consistency under ratio changes before adding stronger local contrast or aggressive NR.
Case: “Shadows look dirty because NR is weak” vs “shadows are lifted/crushed incorrectly”
First 2 captures: (1) shadow ROI SNR, (2) tone curve shadow segment (lift level) + histogram shadow endpoint.
One discriminator: Change only shadow lift (not NR): does the “dirty” perception track the curve change?
First fix: Set shadow lift to a stable baseline first, then suppress chroma noise; keep luma NR conservative on texture.
H2-11. Validation & tuning plan (bring-up → golden scene → regression)
Step 1: Bring-up baseline (before any “beauty tuning”)
- Freeze the environment: fixed scene + fixed lighting + stable temperature (avoid chasing drift).
- Expose the HDR state: log exposure ID, S/L ratio, shutter times, and frame counters per frame.
- Verify scheduling integrity: confirm S/L windows occur where expected (Δt, overlap, segmented vs interleaved).
- Enable debug summaries: motion-mask ratio and weight-map summaries (no full dump needed).
- Start with “fail fast” checks: banding, ghosting, highlight clip, black crush, pumping.
Rule: if artifacts exist in the baseline, do not tune tone/NR to hide them. Fix scheduling/fusion stability first.
Step 2: Golden scenes (repeatable by anyone)
- Backlight DR: bright window + dark interior (tests highlight roll-off + shadow usability).
- Specular highlight: metal / solder / reflective label (tests saturation handling + halo risk).
- Low-light texture: dark fabric / printed text (tests shadow SNR + “plasticky” NR risk).
- Flicker stress: 50/60 Hz or PWM lighting (tests beat banding + exposure-phase stability).
- Motion stress: moving edges/people/robot arm (tests ghosting + motion-mask decisions).
Golden scenes must be re-shootable: fixed camera pose, fixed exposure mode, and saved lighting settings.
Step 3: Metrics that matter (quantify what users complain about)
- Dynamic range: usable stops (clip threshold + shadow SNR threshold).
- Shadow quality: ROI SNR + texture preservation (avoid “mud” vs “plastic”).
- Color consistency: ΔE trend on chart across exposure ratios (detect systematic drift).
- Banding score: stripe amplitude/period stability vs FPS/row time changes.
- Ghosting score: motion-mask hit-rate + residual double-edge energy (motion regions).
- Tone stability: histogram percentiles drift frame-to-frame (detect pumping).
Key rule: always compute stats per exposure ID (S/L separated), otherwise evidence becomes ambiguous.
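The per-exposure-ID rule above is easy to enforce in tooling: group every per-frame statistic by exposure ID before summarizing. A sketch over a hypothetical per-frame mean-brightness log (real logs carry more fields):

```python
import numpy as np

def stats_per_exposure_id(frame_means, exposure_ids):
    """Group a per-frame statistic (here: mean brightness) by exposure
    ID, so S and L frames are summarized separately and the evidence is
    unambiguous. Returns {id: (mean, std)} across frames."""
    out = {}
    for eid in sorted(set(exposure_ids)):
        vals = np.array([m for m, e in zip(frame_means, exposure_ids) if e == eid])
        out[eid] = (float(vals.mean()), float(vals.std()))
    return out

means = [0.12, 0.48, 0.11, 0.52, 0.13, 0.50]
ids = ["S", "L", "S", "L", "S", "L"]
print(stats_per_exposure_id(means, ids))
# Summarizes L and S separately: L mean near 0.50, S mean near 0.12.
# A pooled mean of ~0.31 would describe no real frame at all.
```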
Step 4: Parameter sweep plan (controlled changes, measurable outcomes)
- Exposure scheduling: segmented vs interleaved, S/L ratio sweep, Δt minimization, window phase lock.
- Fusion: weight curve sweep (highlight vs shadow priorities), motion threshold sweep, transition width sweep.
- De-banding: flicker estimation strength and lock strategy (validate with stripe period predictability).
- NR under HDR: luma/chroma balance, temporal strength (watch for trails vs detail loss).
- Tone mapping: knee/shoulder sweep, highlight roll-off smoothness, shadow lift baseline.
Sweep discipline: change one family at a time and store results with a run ID; never “turn five knobs” blindly.
Step 5: Lock a versioned HDR profile (traceable, reviewable)
- Create a profile ID: HDR_PROFILE_ID + scene set + firmware hash + sensor lot tag.
- Freeze the baseline: store golden captures + metric baselines (per-scene) for future diffs.
- Define pass bands: acceptable ranges for banding score, ghosting score, ΔE drift, clip ratios.
- Keep parameter sets human-reviewable: “what changed and why” notes for each locked release.
Storage/packing format and NVM layout belong to the Calibration & NVM page; keep this page focused on the SOP.
Step 6: Regression checklist (what forces a rerun)
- Firmware/ISP change: exposure scheduler, fusion thresholds, NR/tone kernels.
- Sensor lot change: black level drift, gain curve shifts, saturation behavior shifts.
- Operating envelope: temperature corners, different flicker frequencies, different motion patterns.
- Factory/field modes: any “auto” re-enabled must be verified for stability (pumping, banding).
Regression principle: rerun a small “high-sensitivity subset” first (flicker + motion + backlight). Expand only if failures appear.
Procurement-ready Golden scene assets & instruments (example MPNs)
- Dynamic range chart: DSC Labs Xyla 21 — CDX1-81W (21-step, built-in light source).
- Color reference: X-Rite ColorChecker Classic — MSCCC (24-patch target).
- Resolution/MTF & tonal response chart: Imatest SFRplus chart — QI-SFR10-P-RM (example chart code).
- Flicker measurement: UPRtek MF250N handheld spectral flicker meter (FFT + flicker metrics).
These items make the SOP repeatable across teams: one DR target, one color target, one sharpness/tonal chart, and one flicker meter.
H2-12. FAQs ×12 (Evidence-based, no scope creep)
1. “HDR ON makes it more flickery.” Should the first suspect be row timing or lighting frequency?
First 2 captures: (1) flicker/stripe period (time or spatial), (2) row time/FPS + exposure window phase (where S/L windows land).
Discriminator: slightly change FPS/row time—if the flicker/stripe period shifts predictably, it’s scan/window coupling (beat behavior). If it stays locked, it’s mostly lighting-frequency driven.
First fix: lock the exposure scheduling cadence/phase first, then tune de-flicker/de-banding strength.
2. “Moving edges show ghosting.” Is it fusion weights or a loose motion mask?
First 2 captures: (1) motion-mask hit-rate (frame/ROI), (2) S/L Δt on the timeline for the same output frame (how far apart exposures are).
Discriminator: reduce Δt via scheduling without changing weights—if ghosting drops sharply, Δt/scheduling is the driver. If not, mask/weight policy is the driver.
First fix: make motion regions conservative (weighting) and shrink Δt; re-test on the same motion clip.
3. “Shadows are visible but the image looks plasticky.” Is NR too strong or is tone mapping flattening it?
First 2 captures: (1) shadow ROI SNR + texture/detail retention indicator, (2) tone curve summary (shadow/mid segments; lift vs compression).
Discriminator: change only the tone curve (keep NR fixed). If the “plasticky” feel tracks the curve, tone is the root. If it doesn’t, NR policy is the root.
First fix: set a stable tone baseline first, then limit NR (especially luma) while controlling chroma noise in deep shadows.
4. “Highlights don’t clip, but color turns yellow/green.” Is it fusion color consistency or gamma/curve?
First 2 captures: (1) gray/color chart ΔE trend vs exposure ratio sweep, (2) per-exposure-ID stats (S/L separated) to see which exposure dominates highlights.
Discriminator: if the color shift correlates strongly with L/S ratio, fusion-to-tone consistency is likely. If it correlates weakly, tone/gamma channel consistency and roll-off policy are likely.
First fix: stabilize highlight roll-off and fusion→tone consistency before “more contrast” tuning.
5. “Banding appears only under certain lamps.” How to quickly confirm 50/60 Hz beat behavior?
First 2 captures: (1) stripe period (spatial or temporal), (2) row time/FPS + exposure window placement (segmented vs interleaved phase).
Discriminator: adjust window phase or FPS slightly. Beat-coupled banding will shift in a predictable way as scan timing changes.
First fix: align/lock exposure windows to a stable phase first; then apply de-banding estimation if needed.
6. “How to set exposure ratio? What goes wrong if S/L differs too much?”
First 2 captures: (1) highlight saturation ratio (does S protect highlights?), (2) shadow ROI SNR (does L add usable info?).
Discriminator: sweep ratio under a fixed scene. If ratio increases ghosting/banding risk and latency while adding little usable shadow information, it’s too aggressive.
First fix: choose the smallest ratio that meets highlight protection and shadow usability metrics.
7. “Is interleaved always better than segmented?” What constraints decide?
First 2 captures: (1) Δt/overlap between S and L windows, (2) banding sensitivity vs phase (stripe period stability).
Discriminator: if motion ghosting is the dominant failure mode, interleaving (smaller Δt) often wins. If beat-coupled banding dominates, segmented with strong phase control may be safer.
First fix: pick a baseline mode and make window phase lockable for regression.
8. “HDR increases latency.” Where to suspect first?
First 2 captures: (1) per-stage timing (scheduler→fusion→NR→tone) or module counters, (2) whether temporal steps add buffering (e.g., motion handling, temporal NR).
Discriminator: downgrade one class (e.g., temporal NR or complex fusion) and re-measure. If latency drops immediately, buffering/compute is the driver; otherwise frame structure/scheduling is the driver.
First fix: create a low-latency baseline profile and compare quality loss on golden scenes.
9. “Same params look good in daytime but soft at night.” Is it parameter range or missing golden scenes?
First 2 captures: (1) confirm golden scene coverage includes night low-light + high-contrast point lights, (2) metric deltas at night (SNR, ghosting score, banding score).
Discriminator: if a missing scene can reproduce and quantify the failure consistently, the scene set is incomplete. If failures vary unpredictably, the control/tuning range may be unstable.
First fix: expand the golden scene set first, then sweep parameters under controlled conditions.
10. “Debanding removes stripes but also kills detail.” How to find the balance?
First 2 captures: (1) banding score (stripe amplitude/period), (2) texture/detail retention indicator (edge/texture energy).
Discriminator: check diminishing returns: if extra de-banding yields little stripe improvement but linear detail loss, it’s beyond the optimal point.
First fix: lock the minimum acceptable banding score, then keep NR conservative to preserve texture.
11. “Brightness jumps periodically.” Is it exposure scheduling or strobe alignment?
First 2 captures: (1) Trigger/Strobe waveform, (2) Exposure-active/Frame-valid waveform (actual exposure windows).
Discriminator: if the strobe phase drift relative to the exposure window matches the brightness jump period, it’s alignment. If not, it’s scheduling/control loop instability.
First fix: align and lock strobe to the exposure window first, then re-check for residual pumping.
12. “How to turn HDR tuning into a regression-ready versioned asset?”
First 2 captures: (1) locked golden captures per scene, (2) locked metric baselines + pass bands (banding/ghosting/ΔE/clip ratios).
Discriminator: a change (FW/sensor lot) must produce a clear pass/fail outcome on the same scene+metric set. If it does, the tuning is assetized.
First fix: define HDR_PROFILE_ID + baseline pack (scenes + metrics + key parameter summary). Reference “Calibration & NVM” only for storage mechanics.