HDR/WDR Imaging Chain

Multi-exposure scheduling → motion-aware fusion → de-flicker/de-banding → HDR-aware denoise → tone mapping & gamma consistency. This page stays focused on the HDR/WDR chain and its measurable evidence—transport, lighting-driver topology, and full ISP encyclopedias are referenced only.

H2-1. Definition & Boundary: What the HDR/WDR chain owns

Goal: make the HDR/WDR scope unambiguous, and lock the evidence chain (what is visible ↔ what is measurable) so field arguments end with data.

Extractable definition (45–55 words)

An HDR/WDR imaging chain schedules multiple exposures (segmented or interleaved) and fuses them to preserve highlight detail while recovering usable shadow SNR. It then suppresses flicker/banding and noise before tone mapping and gamma tuning for a stable, natural-looking output under motion and lighting constraints.

What this page owns (deep scope)

  • Exposure scheduling: segmented vs interleaved/staggered timing, exposure ratio trade-offs, and how timing creates/avoids artifacts.
  • Fusion: merge weighting, motion/ghosting suppression, saturation handling, and “what debug stats prove it”.
  • De-flicker / de-banding: 50/60 Hz stripe formation (beat with line timing), and practical mitigation knobs.
  • HDR-aware denoise: where NR belongs after fusion, and how to avoid plasticky textures and temporal trails.
  • Tone mapping & gamma consistency: highlight roll-off, black-crush avoidance, and stable color/gamma across exposure ratios.

References only (one-line link, no deep dive)

Evidence chain: visible symptom ↔ measurable proof

  • Banding / stripes ↔ line timing + flicker frequency + measured stripe period (beat signature).
  • Ghosting ↔ motion-mask hit rate + merge confidence / weight-map summary (motion areas rejected or blended).
  • Highlight clip ↔ saturation pixel ratio in short exposure + highlight histogram pile-up.
  • Black crush ↔ shadow histogram compression + shadow ROI SNR/noise floor after lifting.
  • Brightness pumping ↔ exposure window drift + strobe alignment error + per-frame exposure-ID logs.
  • Color shift ↔ fused chroma drift vs exposure ratio + tone-curve/gamma mismatch across scenes.
Figure F1 — HDR/WDR chain boundary and evidence hooks. Focus stays on exposure scheduling, fusion, de-banding/NR, and tone mapping/gamma consistency.
Cite this figure: HDR/WDR Imaging Chain — Ownership Map (F1) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F1, accessed YYYY-MM-DD.”

H2-2. When to choose multi-exposure HDR vs alternatives

Goal: prevent “HDR for HDR’s sake”. Multi-exposure is justified only when it recovers information that single exposure + tone mapping cannot, without creating worse motion or flicker artifacts.

3-question decision gate (fast, field-friendly)

  • How large is the scene dynamic range? If highlights saturate while shadows are unreadable, HDR/WDR is on the table.
  • How much motion exists? Faster motion increases ghosting risk and demands stronger motion-aware fusion (or fewer exposures).
  • Is lighting flicker present? 50/60 Hz mains flicker or PWM lighting can dominate banding; scheduling must align or compensate.
Trade-off reminder: adding exposures raises DR ↑, but also ghosting risk ↑, banding risk ↑, compute/latency ↑, and tuning complexity ↑.
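The three questions above can be folded into a rough gate function. This is a sketch only: the thresholds, field names, and return strings are hypothetical illustrations, not recommended production values.

```python
# Hypothetical decision-gate sketch; thresholds and labels are
# illustrative, not taken from any camera SDK or this page's data.

def hdr_mode_gate(sat_ratio, shadow_snr_db, motion_px_per_frame, flicker_hz):
    """Suggest a coarse exposure mode from measured evidence.

    sat_ratio           : fraction of pixels saturated in the best single exposure
    shadow_snr_db       : SNR of the shadow ROI after a gentle lift
    motion_px_per_frame : dominant scene motion
    flicker_hz          : 0 if lighting is flicker-free (DC or locked strobe)
    """
    # Highlights barely saturate and shadows are readable: stay single.
    if sat_ratio < 0.01 and shadow_snr_db >= 20.0:
        return "single + tone mapping"
    # Uncharacterized flicker: collect banding evidence before adding exposures.
    if flicker_hz > 0:
        return "characterize flicker first"
    # High motion: ghosting cost tends to exceed the DR benefit.
    if motion_px_per_frame > 8.0:
        return "single + tone mapping (motion-limited)"
    # Multi-exposure is on the table; severe clipping suggests triple.
    return "triple exposure" if sat_ratio > 0.10 else "dual exposure"
```

The exact numbers matter less than the order of the checks: flicker and motion veto multi-exposure before the DR question is even asked.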

When single exposure + tone mapping is enough

  • Highlights rarely saturate: histogram right edge does not pile up; specular peaks are small and non-critical.
  • Shadows remain usable: shadow ROI SNR is adequate (information is noisy but readable after gentle lift).
  • Motion is high: the cost of multi-exposure ghosting exceeds DR benefit (especially with rolling artifacts).
  • Flicker is uncontrolled: until flicker/banding evidence is characterized, multi-exposure can amplify stripes.

When dual/triple exposure is justified (must show measurable gains)

  • Single-exposure failure proof: persistent highlight saturation ratio (specular/metal/weld points) + lost texture that cannot be reconstructed.
  • Shadow readability proof: shadow ROI becomes readable only if long exposure increases effective SNR (not just brightness).
  • Information loss proof: key ROI (codes/textures/edges) is missing in single exposure even after best tone mapping.
  • Stability proof: with fixed scheduling, banding/brightness pumping decreases (not increases) across representative lighting.

Alternatives (referenced only, no deep dive here)

Practical cost reminders: Multi-exposure increases RAW bandwidth and buffering pressure, adds fusion latency, and expands the tuning surface. If the use-case does not demand recoverable detail in both highlights and shadows, simpler pipelines often outperform “complex HDR” in stability and throughput.
Figure F2 — Decision flow to choose single vs dual vs triple exposure, based on measurable DR, motion risk, and flicker/banding constraints.
Cite this figure: HDR/WDR Decision Flow (F2) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F2, accessed YYYY-MM-DD.”

H2-3. Exposure scheduling: segmented vs interleaved (frame/line level)

Core control surface of HDR/WDR: how exposures are inserted, how the S/L ratio is chosen, and when timing turns into banding or motion artifacts. This chapter focuses on timing geometry and measurable evidence (not sensor pixel/ROIC circuit details).

Scheduling families (what “segmented” and “interleaved” really mean)

  • Frame-sequential / Segmented: short (S) and long (L) exposures appear as separate frames or large time blocks. Simple, but the time gap increases ghosting risk under motion.
  • Line-interleaved / Staggered: S/L exposures are interleaved within one frame at line level (or small slices). Lower motion gap, but more sensitive to flicker/beat interactions with line timing.
  • Hybrid: segmented blocks with interleaving inside each block. Often used to balance motion, bandwidth, and tuning complexity.

Key knobs (what changes, what breaks)

  • Exposure ratio (L/S): higher ratio expands dynamic range potential, but increases the chance of hard blending boundaries (halo/edge discontinuity) and motion mismatch.
  • Integration times (tS, tL): longer tL improves shadow SNR but increases motion disparity; too-short tS preserves highlights but may raise noise in the “highlight detail” path.
  • Temporal spacing (Δt between S and L samples): larger Δt raises ghosting probability; interleaving reduces Δt but can amplify beat banding if flicker is not aligned.
  • Readout overlap / blanking budget: overlap affects achievable frame rate and the stability of exposure windows; unstable windows often show up as brightness pumping.
  • Rolling shutter geometry (no circuit deep dive): different lines integrate at different times, so flicker/strobe misalignment becomes spatial stripes inside a single frame.
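As a quick sanity check on the ratio knob, the potential highlight headroom added by an L/S exposure ratio can be estimated in decibels. This is a back-of-envelope figure only: usable gain is capped by short-exposure noise and merge quality, as the bullet list notes.

```python
import math

def dr_extension_db(ratio_l_over_s):
    # Each doubling of the L/S ratio adds ~6 dB of potential highlight
    # headroom; the real usable gain is smaller because the short
    # exposure's "highlight detail" path carries more noise.
    return 20.0 * math.log10(ratio_l_over_s)

# e.g. a 16:1 ratio offers at most ~24 dB of extra headroom
```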

Two predictable risks (and when they become inevitable)

  • Banding / stripes becomes likely when the sampling windows (line timing + exposure windows) do not align with the lighting waveform (50/60 Hz or PWM). The visible stripe period often matches a beat signature between flicker frequency and line/frame timing.
  • Ghosting becomes likely when Δt is large and the scene has motion; the merge must choose between preserving highlight detail and preventing double edges. Large L/S ratios worsen “winner-takes-all” merging near edges.

Beat-banding verification (no formulas, fully testable)

  • Measure: flicker frequency (mains/PWM) + line time / frame rate + exposure window placement.
  • Observe: stripe period and whether it changes predictably when frame rate or window placement changes.
  • Confirm: adjust scheduling (segmented↔interleaved / window shift) and verify stripes move or collapse as expected.
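Although the page avoids formulas, the measure/observe/confirm loop above can be backed by a tiny predictor. It is a sketch under the standard rolling-shutter assumption that each row samples the lighting waveform one line time later; note that for mains lighting the intensity ripple is at twice the mains frequency (100/120 Hz).

```python
def stripe_period_rows(f_ripple_hz, t_line_s):
    """Predicted stripe spacing in rows for a rolling readout.

    Each row samples the lighting waveform t_line_s later, so one full
    ripple cycle spans 1 / (f_ripple * t_line) rows. Changing line time
    or FPS should move this period predictably; that is the beat test.
    """
    return 1.0 / (f_ripple_hz * t_line_s)

# Example: 100 Hz ripple (50 Hz mains), 20 us line time
# -> 1 / (100 * 20e-6) = 500 rows per stripe cycle
```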

Evidence checklist (settle arguments with data)

  • Timing evidence: line period / frame period, exposure-active window(s), strobe timing, lighting flicker frequency.
  • Statistic evidence: histogram/RAW stats per exposure ID (S vs L), stripe period measurement vs timing change, merge debug summaries (if available).
  • Minimum “first capture” set: (1) exposure-active + frame-valid waveform, (2) strobe/trigger waveform, (3) S/L histograms, (4) stripe period estimate.
Practical first-fix sequence: If banding dominates, fix scheduling alignment first (window placement / interleaving strategy / frame rate) before touching fusion weights. If ghosting dominates, reduce Δt (interleave or reduce ratio) and strengthen motion rejection before increasing denoise strength.
Figure F3 — Scheduling geometry (segmented vs interleaved) and why flicker + line timing can create deterministic beat banding. Use timing and stripe-period evidence before changing fusion weights.
Cite this figure: Exposure Scheduling vs Flicker (F3) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F3, accessed YYYY-MM-DD.”

H2-4. Sync hooks that matter (trigger, strobe alignment, rolling artifacts)

HDR/WDR reliability depends on a few simple sync hooks: trigger, strobe alignment, and exposure notification. Many “banding problems” are timing misalignment problems—measurable on waveforms—before any algorithm tuning. This chapter does not cover PTP/1588 timing-hub systems; it stays at the interface hooks level.

Sync hooks (what must be observable and alignable)

  • Trigger-in (external trigger): defines frame start / exposure schedule anchor. Prevents schedule drift and enables repeatable regression.
  • Free-run (internal timing): simplest wiring, but schedule stability depends on internal clocks; drift shows as brightness pumping or inconsistent banding patterns.
  • Strobe-out / light sync: ensures illumination happens inside the intended exposure window (especially for S exposure preserving highlights).
  • Exposure notify / exposure ID: logs which exposure (S/L) is active per frame/segment; required to correlate artifacts with schedule state.

Alignment goals → failure signatures

  • Goal: strobe pulse fully inside the target exposure window → if misaligned: banding, brightness pumping, local over/under exposure.
  • Goal: stable frame counter and exposure ID → if unstable: fusion weights “chase” timing drift, causing inconsistent output across frames.
  • Goal: consistent scheduling across cameras (same exposure ratio and exposure ID cadence) → if inconsistent: multi-camera brightness/color mismatch and stereo/3D inconsistency.

Note: Multi-camera consistency here means identical exposure cadence and alignment. Time-distribution network design belongs to the Sync/Trigger & Timing Hub page.

Evidence capture (2 waveforms + 3 logs)

  • Waveform A: Trigger-in or Strobe pulse.
  • Waveform B: Sensor exposure-active / frame-valid (or equivalent timing pin / debug output).
  • Log 1: frame counter (monotonic, no jumps).
  • Log 2: exposure ID (S/L or segment index per frame).
  • Log 3: dropped frames / schedule error counters (if available).
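A minimal sanity-checker for Logs 1 and 2 might look like the following. The list-based log format and the S/L cadence tuple are illustrative assumptions, not a specific SDK's output.

```python
def check_schedule_logs(frame_counters, exposure_ids, cadence=("S", "L")):
    """Hypothetical log checker: returns human-readable problems.

    frame_counters : per-frame counter values (Log 1)
    exposure_ids   : per-frame exposure IDs, e.g. "S"/"L" (Log 2)
    cadence        : the expected repeating exposure-ID pattern
    """
    problems = []
    # Log 1: frame counter must be monotonic with no jumps.
    for prev, cur in zip(frame_counters, frame_counters[1:]):
        if cur != prev + 1:
            problems.append(f"frame counter jump: {prev} -> {cur}")
    # Log 2: exposure ID must follow the expected S/L cadence.
    for i, eid in enumerate(exposure_ids):
        if eid != cadence[i % len(cadence)]:
            problems.append(f"exposure ID off-cadence at index {i}: {eid}")
    return problems
```

Running this on every capture session makes "fusion weights chase timing drift" arguments short: either the cadence log is clean or it is not.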
Fast isolation tips: If stripes change predictably when strobe delay shifts, alignment is the dominant root cause. If ghosting changes predictably with exposure spacing (Δt) while stripes do not, scheduling geometry dominates. Lighting-driver hardware topology details belong to Vision Lighting Controller.
Figure F4 — Trigger, exposure windows, and strobe pulses. Correct alignment places strobe inside the intended window; misalignment produces banding and brightness pumping that are visible on waveforms and logs.
Cite this figure: Trigger / Exposure Window / Strobe Alignment (F4) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F4, accessed YYYY-MM-DD.”

H2-5. Fusion pipeline: merge strategy and ghosting suppression

Fusion is not one “magic blend.” It is an engineering pipeline with measurable knobs: input normalization, alignment, motion detection, weight/decision logic, and saturation handling. The goal is stable highlights, usable shadows, and minimal motion artifacts (ghosting / halo / edge tearing).

5 fusion modules (each must expose evidence)

  • Input normalization: bring S/L into a comparable space (gain/black-level consistency, per-exposure stats labeling).
  • Alignment: geometric + temporal consistency so edges land on the same pixels before blending.
  • Motion detection: produce a reliability mask (not just “moving/not moving”) to prevent double edges.
  • Weighting & decision: a weight map with a controlled transition band (avoid hard seams).
  • Saturation & transition: handle near-saturation regions with smooth roll-off (avoid white halos and clipped edges).

Merge weights (what to tune, what breaks)

  • What it does: pushes highlight detail toward S and shadow SNR toward L, but the transition band decides whether the merge looks natural.
  • What to measure: weight curve shape (smooth vs hard knee), transition width, and whether weights jump at strong edges.
  • Failure signatures: halo (bright outline), edge tearing (hard seam), unnatural highlight roll-off.
  • First fix: widen/soften the transition band before increasing local contrast or sharpening.
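The "widen/soften the transition band" fix can be made concrete with a smoothstep weight curve. Knee position and band width below are illustrative defaults; real pipelines tune them per scene class.

```python
import numpy as np

def long_exposure_weight(luma, knee=0.6, band=0.2):
    """Weight for the long (L) exposure as a function of normalized luma.

    Below the knee the L exposure dominates (shadow SNR); above it the
    short exposure takes over (highlight detail). 'band' is the
    transition width: widening it is the usual first fix for halos and
    hard seams. Values here are placeholders, not tuned settings.
    """
    # Smoothstep gives a C1-continuous transition instead of a hard knee.
    t = np.clip((luma - (knee - band / 2.0)) / band, 0.0, 1.0)
    return 1.0 - (t * t * (3.0 - 2.0 * t))  # 1 in shadows, 0 in highlights
```

Plotting this curve (weight vs luma) is exactly the "weight curve shape" evidence the bullet list asks for: a smooth S-shape rather than a step.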

Motion mask (how to suppress ghosting without killing texture)

  • What it does: marks where multi-exposure blending is unreliable; motion areas must become more conservative (often closer to single-exposure behavior).
  • What to measure: motion-mask hit-rate (frame/ROI), overlap with edges, and frame-to-frame stability (mask jitter causes pumping).
  • Failure signatures: ghosting (double edges), edge drift, texture wash-out (over-conservative masking).
  • First fix: stabilize the mask (reduce jitter) before making it more sensitive.

Saturation handling (why highlight transitions often look wrong)

  • What it does: treats “near saturation” as a soft zone (not a hard on/off) so the blend rolls off smoothly at specular edges.
  • What to measure: saturated pixel map per exposure ID, near-saturation band size, and whether weights jump at saturated boundaries.
  • Failure signatures: clipped highlights, white/gray halos, harsh “cut-out” specular edges.
  • First fix: implement soft saturation thresholds + smooth weight constraints around saturation neighborhoods.
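A soft saturation threshold is simply a confidence ramp instead of a hard cut. The band edges below are placeholder values for a sketch.

```python
import numpy as np

def saturation_confidence(pixel, soft_start=0.85, hard_clip=0.98):
    """Confidence that a normalized pixel value is usable (not clipping).

    Instead of a hard on/off threshold, confidence ramps down linearly
    across a near-saturation band, so downstream weights roll off
    smoothly at specular edges rather than snapping. Applied per
    exposure before merge weighting. Band edges are illustrative.
    """
    return np.clip((hard_clip - pixel) / (hard_clip - soft_start), 0.0, 1.0)
```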

Artifact-to-metric mapping (fast root-cause isolation)

  • Ghosting dominates → motion-mask hit-rate too low/unstable, large Δt between S and L, edge coverage gaps.
  • Halo dominates → transition band too narrow, weights change too aggressively near strong gradients, saturation neighborhood mishandled.
  • Edge tearing dominates → alignment error + hard decision (weights snap at edges).
  • Brightness pumping → weights/masks drift frame-to-frame (often tied to schedule instability; correlate with exposure ID logs).
Scope boundary: This chapter treats fusion as a pipeline and its measurable outputs. A full ISP algorithm catalog belongs to Image Signal Processor (ISP).
[Figure F5 diagram — Fusion Pipeline (Merge + Ghosting Suppression): S exposure (highlight path) and L exposure (shadow-SNR path) → alignment (geometric + temporal) → motion detect (stable, edge-aware reliability mask) → weight map (smooth transition, no hard seams) → merge/decision (saturation roll-off, motion-safe fallback) → fused output, validated against ghosting/halo/edge tearing. Side panels: evidence metrics (mask hit-rate, mask jitter, weight transition width, saturated pixel map per exposure), fast isolation rules, and first fixes (stabilize mask, widen transition band, soft saturation roll-off, re-check alignment).]
Figure F5 — Fusion is a measurable pipeline. When artifacts appear, map them back to motion-mask stability, weight transition behavior, saturation neighborhoods, and alignment.
Cite this figure: Fusion Pipeline (F5) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F5, accessed YYYY-MM-DD.”

H2-6. De-flicker & De-banding: why stripes happen and how to kill them

Banding is not “mysterious noise.” It is a deterministic coupling between lighting modulation (50/60 Hz or PWM), exposure windows, and rolling/line sampling. Multi-exposure HDR can amplify the visibility if windows are not aligned. The most reliable approach is: align first, stabilize sampling second, and filter/estimate last.

Mechanism (sampling + scanning, not a lighting-hardware lecture)

  • Lighting modulation provides a time-varying brightness waveform (mains flicker or PWM envelope).
  • Rolling / line scanning samples that waveform at different times per line, mapping time variation into spatial stripes within a frame.
  • Multi-exposure does not create flicker, but it can amplify mismatch: S/L windows can land at different flicker phases, producing stronger stripe contrast or pumping.

Priority order (what to fix first)

  • 1) Align exposure windows: shift/lock windows relative to flicker phase; choose interleaving/hybrid scheduling to reduce phase gaps.
  • 2) Stabilize sampling strategy: hold exposure cadence (exposure ID / ratio) for regression; avoid AE drifting windows while diagnosing.
  • 3) De-banding estimation/filter: only after the dominant stripe frequency/beat behavior is confirmed; aggressive filtering can damage texture.

Evidence chain (make banding measurable)

  • Stripe period measurement: quantify stripe spacing and how it moves when frame rate / line time changes.
  • Timing parameters: line time, frame rate, exposure windows, and any strobe timing reference.
  • Simplified spectrum idea: use frequency peaks to identify a dominant flicker/beat component, then validate on the time axis.
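The "simplified spectrum idea" can be sketched in a few lines: average each row to suppress scene texture, then find the strongest frequency in the row-mean profile. This is a sketch; real captures benefit from windowing and a flat or defocused target scene.

```python
import numpy as np

def dominant_stripe_period(frame):
    """Estimate the dominant horizontal-stripe period in rows.

    Averages each row (suppressing scene texture), removes the DC
    component, and picks the strongest FFT peak of the row-mean
    profile. Validate by changing FPS/line time and confirming the
    period moves as the beat prediction expects.
    """
    profile = frame.mean(axis=1)            # one brightness value per row
    profile = profile - profile.mean()      # drop the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    k = int(np.argmax(spectrum[1:])) + 1    # skip the zero-frequency bin
    return len(profile) / k                 # period in rows

# Synthetic check: 600 rows carrying a 100-row stripe cycle
rows = np.arange(600)
frame = 0.5 + 0.1 * np.sin(2 * np.pi * rows / 100.0)[:, None] * np.ones((1, 64))
```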

Contrast experiments (strongest proof)

  • Locked exposure vs Auto exposure: if stripes worsen or “walk” under AE, window drift is a primary contributor.
  • Shift strobe/window delay: if stripe phase shifts predictably, alignment dominates (fix timing before tuning fusion/NR).
  • Change FPS / line time: if stripe period changes predictably, it confirms deterministic beat coupling.
Scope boundary: This chapter addresses what causes stripes in the imaging chain and how to suppress them via timing alignment and measurable estimation. Lighting-driver topology details belong to Vision Lighting Controller.
[Figure F6 diagram — Beat Banding (Flicker × Line Sampling → Stripes): (1) lighting modulation at f_flicker (50/60 Hz mains or PWM envelope); (2) rolling/line sampling at f_line, each line sampling at a different time; (3) the beat envelope makes the stripe period T_stripe visible. Verify: measure stripe period, change FPS/line time, confirm a predictable shift. Fix priority: align windows first → stabilize cadence → estimate/filter last.]
Figure F6 — Beat banding is deterministic. Use stripe-period measurements and timing changes (FPS/line time/window shift) to prove coupling, then fix alignment before heavy filtering.
Cite this figure: Beat Banding Mechanism (F6) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F6, accessed YYYY-MM-DD.”

H2-7. Denoise under HDR: keep detail, avoid plasticky look

After HDR fusion, noise becomes non-uniform: shadows are lifted (noise becomes visible), chroma noise can dominate, and motion-safe fusion creates region-dependent artifacts. Noise reduction must be placed and constrained by measurable evidence (ROI SNR, texture retention, edge contrast, temporal trails), not by “stronger looks cleaner.”

Why HDR noise is harder than single-exposure

  • Shadow lift amplifies noise: tone and fusion can push low-SNR regions upward, making grain and chroma speckle obvious.
  • Non-uniform contribution: different pixels are dominated by S or L exposure depending on weights and saturation handling, producing patchy noise behavior.
  • Motion coupling: temporal NR can turn fusion mismatch into trails and “water-like” temporal artifacts when motion reliability is low.

Spatial vs Temporal NR (HDR-coupled guidance)

  • Spatial NR: stable and predictable, but risk of over-smooth (texture collapse) if applied uniformly.
  • Temporal NR: strong SNR gain, but must follow motion/reliability signals; otherwise trails dominate perceived quality.
  • Rule of thumb: in motion-uncertain regions, reduce temporal blending first; then use chroma-heavy spatial NR to control color speckle.
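The rule of thumb above (reduce temporal blending first in motion-uncertain regions) can be sketched as a reliability-gated blend. The gating rule and the history cap are assumptions for illustration.

```python
import numpy as np

def temporal_blend(prev, cur, motion_reliability, max_alpha=0.8):
    """Motion-safe temporal NR blend.

    The history weight is gated per pixel by a motion-reliability map
    (1 = static, 0 = unreliable). 'max_alpha' caps how much history is
    kept even when fully static; both the linear gating and the cap
    value are illustrative choices, not a mandated design.
    """
    alpha = max_alpha * np.clip(motion_reliability, 0.0, 1.0)
    # Unreliable (moving) pixels fall back toward the current frame,
    # trading temporal SNR gain for trail suppression.
    return alpha * prev + (1.0 - alpha) * cur
```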

Luma/Chroma separation (a practical way to avoid “plastic”)

  • Chroma-first: chroma noise is highly visible yet less critical to fine detail—stronger suppression is usually safer.
  • Luma-conservative: luma carries texture and perceived sharpness; keep luma NR weaker in textured/edge regions.
  • Edge/texture awareness: use a texture/edge indicator to prevent luma smoothing from collapsing micro-contrast.
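A minimal sketch of the chroma-first split follows. The 4-neighbor averaging is a toy stand-in for a real edge-aware kernel, and the pass counts are illustrative; the point is only that chroma gets a stronger filter than luma.

```python
import numpy as np

def _blur(img, passes):
    # Tiny 4-neighbor average, repeated; wraps at edges for brevity.
    # Stands in for a real edge-aware NR kernel in this sketch.
    for _ in range(passes):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img

def denoise_yuv(y, u, v, luma_passes=1, chroma_passes=4):
    """Chroma-first NR: chroma channels get a stronger filter than
    luma, which carries texture and perceived sharpness. Pass counts
    are placeholders, not tuned values."""
    return _blur(y, luma_passes), _blur(u, chroma_passes), _blur(v, chroma_passes)
```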

Evidence chain (what to measure before changing strength)

  • ROI SNR: measure in shadows/midtones/highlights separately; HDR failures often hide in lifted shadows.
  • Detail metrics: texture retention (local contrast / texture energy) and edge contrast (before/after NR).
  • Temporal artifacts: trail length/strength in motion sequences; correlate with motion-reliability and temporal blend ratio.
First-fix order: (1) make temporal blending motion-safe (reduce trails), (2) suppress chroma noise, (3) keep luma NR conservative on texture/edges, (4) apply light detail restore last. A full ISP NR module catalog belongs to Image Signal Processor (ISP).
[Figure F7 diagram — HDR Denoise Placement (Luma/Chroma + Temporal Safety): fusion output (non-uniform noise, lifted shadows, chroma speckle) → NR core with split processing: edge/texture-safe luma NR, stronger chroma NR, and temporal blending controlled by a motion/reliability gate, with light detail restore applied last (avoid "sharpening noise"). KPIs: ROI SNR (shadows/mids/highlights), texture retention / edge contrast, trail score in motion sequences. Validation: low-light texture + motion clip, change one knob at a time, regression against KPIs.]
Figure F7 — HDR denoise should be split (luma/chroma) and motion-safe (temporal gating). Track ROI SNR, texture/edge retention, and motion trails to avoid a plasticky look.
Cite this figure: HDR Denoise Placement (F7) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F7, accessed YYYY-MM-DD.”

H2-8. Tone mapping, gamma, and color consistency after fusion

Tone mapping decides whether HDR looks premium or broken. The job is not “make it brighter,” but to compress highlights (natural roll-off), lift shadows without crushing blacks, and keep color stable as exposure ratios change. Evidence must come from histograms and gray/color card trends, not subjective tuning alone.

Tone curve essentials (knee / shoulder / gamma)

  • Shadows: too aggressive lift → gray haze + noise visibility; too strong suppression → black crush.
  • Midtones: insufficient slope → flat image; excessive local boost → unnatural contrast.
  • Highlights: hard shoulder → highlight clip; overly soft shoulder → washed highlights (loss of “specular punch”).
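One way to get a soft shoulder without a hard clip is an extended Reinhard-style curve followed by display gamma. This is a sketch of the knee/shoulder idea, not this page's mandated curve; the parameter values are illustrative.

```python
def tone_map(x, shoulder=4.0, gamma=2.2):
    """Global tone-curve sketch with a soft highlight shoulder.

    x is scene-linear, normalized so 1.0 is single-exposure white.
    The extended-Reinhard term maps x == shoulder exactly to 1.0 and
    rolls off smoothly below it (no hard clip); display gamma is
    applied afterwards. 'shoulder' and 'gamma' are placeholder values.
    """
    compressed = x * (1.0 + x / (shoulder * shoulder)) / (1.0 + x)
    return compressed ** (1.0 / gamma)
```

A harsh-highlight complaint maps to this curve's shoulder region: soften it there before reaching for denoise or local contrast.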

Global vs Local tone mapping (HDR-specific risk)

  • Global: stable and predictable; fewer halos and fewer temporal surprises, but may under-deliver “local punch” in extreme scenes.
  • Local: stronger local contrast, but higher risk of halo, edge glow, and region-to-region inconsistency—especially after multi-exposure fusion.
  • Practical guardrail: if customers report “glowing edges” or “unnatural micro-contrast,” reduce local strength before re-tuning fusion.

Color consistency under multi-exposure (no AWB deep dive)

  • Exposure-mix sensitivity: as L/S weights change across the image, any path mismatch becomes visible as hue shifts.
  • Highlight roll-off consistency: color can drift in highlights if tone/gamma treatment differs per channel or per exposure contribution.
  • Skin and neutral stability: track gray/skin patches across different exposure ratios; instability is a fusion+tone integration issue, not only “white balance.”

Evidence chain (make “premium look” testable)

  • Histogram before/after: confirm shadow lift and highlight compression are intentional (not accidental clipping/crush).
  • Gray card trend: observe neutral patches across exposure ratios; watch for consistent bias direction (systematic drift).
  • Color card trend: compare color patch shifts as L/S ratio changes; large ratio sensitivity indicates integration instability.
Fast isolation tips: If highlights look harsh, adjust shoulder/roll-off before adding more denoise. If shadows look “muddy,” check for black crush and midtone slope changes before pushing local contrast. If skin/neutral drift changes with exposure ratio, verify fusion-to-tone integration consistency (not an AWB lecture).
[Figure F8 diagram — Tone Mapping & Consistency (Curve + Evidence): input/output tone curve with knee and shoulder marking the black-crush and highlight-clip zones across shadows/midtones/highlights; histogram panel comparing before vs after tone mapping; gray/color card trend tracking drift vs exposure ratio (gray stable, color drift?). Rule: if exposure-ratio changes cause gray/color drift, fix fusion→tone consistency before "more contrast" tuning; validate with histograms and card trends rather than hiding errors with aggressive local tone mapping.]
Figure F8 — Tone mapping is a controlled curve (knee/shoulder/gamma) plus evidence. Use histogram shifts and gray/color card trends across exposure ratios to ensure stable blacks, natural highlights, and consistent color.
Cite this figure: Tone Curve & Consistency Evidence (F8) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F8, accessed YYYY-MM-DD.”

H2-9. Artifact library: symptoms → likely causes (HDR/WDR-specific)

This artifact library is a fast triage map. Each symptom points to HDR/WDR-owned modules (Exposure scheduling, Fusion, De-banding, NR, Tone mapping) and a first evidence that can be captured quickly. The goal is to end debates with measurable signals, then apply the fastest “first fix” to confirm root cause direction.

Symptom Ghosting (motion double image)

Likely modules Fusion motion decision, exposure S/L timing gap (Δt), weight transition hardness.

First evidence Motion-mask hit-rate (frame/ROI) + S/L exposure timeline (Δt) for the same output frame.

First fix Reduce fusion aggressiveness in motion regions (more conservative weights) and shrink Δt by scheduling (segmentation/interleaving choice).

Symptom Halo (edge glow / bright outline)

Likely modules Fusion blending around edges, local tone mapping strength, saturation neighborhood handling.

First evidence Weight-map transition width summary near edges + highlight saturation distribution around edges.

First fix Smooth / widen the weight transition first, then reduce local tone mapping strength (avoid masking fusion instability with contrast).

Symptom Banding / Stripe (horizontal/vertical stripes)

Likely modules Exposure window alignment, flicker coupling, de-banding estimation, timing mismatch.

First evidence Stripe period measurement + line/frame timing (row time, FPS) to check beat frequency behavior.

First fix Lock exposure scheduling (stable window/phase) and verify stripe moves predictably when FPS/row time changes.

Symptom Color shift (hue drift / skin tone instability)

Likely modules Fusion→tone integration consistency (mixing changes response), channel-consistent tone/gamma application.

First evidence Gray/color card trend vs exposure ratio (L/S mix sweep) + per-exposure-ID histogram/RAW stats.

First fix Sweep exposure ratio under a fixed scene and check systematic drift; stabilize fusion-to-tone consistency before “more contrast” tuning.

Symptom Flicker (brightness pumping / frame-to-frame jumps)

Likely modules Exposure scheduling stability (exposure ID/ratio jitter), temporal NR stability, local tone temporal consistency.

First evidence Exposure ID/ratio log across frames + output histogram mean/percentile drift (frame-to-frame).

First fix Lock scheduling cadence (and exposure ratio) to separate “control loop” instability from tone/temporal artifacts.

Symptom Black crush / Highlight clip (muddy shadows / blown highlights)

Likely modules Tone curve (knee/shoulder), fusion saturation handling, inconsistent highlight roll-off.

First evidence Histogram endpoints (shadow floor / highlight ceiling) + saturation pixel ratio per exposure ID (S/L separated).

First fix Adjust shoulder/roll-off to avoid hard clipping, then verify shadow region is not crushed before lifting (avoid noise explosion).

Library rule: Each symptom must map to one owned module and one “first evidence.” Avoid generic “ISP blame.” This page stays within HDR/WDR chain modules (Exposure/Fusion/De-band/NR/Tone).
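The library above can be carried in the field as a small lookup table. The structure below is hypothetical shorthand mirroring this page's modules and evidence, not any tool's API.

```python
# Hypothetical triage table: symptom -> (owned module, first evidence).
FAULT_MAP = {
    "ghosting":    ("fusion motion decision", "motion-mask hit-rate + S/L timeline (delta-t)"),
    "halo":        ("weight transition / local tone", "weight transition width near edges"),
    "banding":     ("exposure window alignment", "stripe period vs line/frame timing"),
    "color shift": ("fusion-to-tone integration", "gray/color card trend vs L/S ratio"),
    "flicker":     ("scheduling stability", "exposure ID/ratio log + histogram drift"),
    "crush/clip":  ("tone curve knee/shoulder", "histogram endpoints + saturation ratio per ID"),
}

def triage(symptom):
    # One symptom -> one owned module and one first evidence; no "ISP blame".
    module, first_evidence = FAULT_MAP[symptom]
    return f"debug '{module}' first; capture: {first_evidence}"
```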
[Figure F9 diagram — HDR/WDR Fault Map (Symptom → Module): symptoms (ghosting, halo, banding/stripe, color shift, flicker, crush/clip) mapped to HDR/WDR-owned modules: exposure scheduling (windows, ratio, Δt), fusion (weights, motion mask), de-banding/de-flicker (beat frequency, estimation), NR (luma/chroma, temporal), tone mapping (knee, shoulder, gamma). Entry point is always one "first evidence": timeline (Δt/window), per-ID histogram, mask ratio, or stripe period.]
Figure F9 — A symptom-to-module map for HDR/WDR debugging. Use a single “first evidence” capture to pick the module and stop guesswork.
Cite this figure: HDR/WDR Fault Map (F9) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F9, accessed YYYY-MM-DD.”

H2-10. Measurements that settle arguments (what to capture first)

Turn arguments into captures. Each case uses a fixed format: First 2 captures → One discriminator → First fix. The captures are HDR/WDR-specific: timing windows, per-exposure-ID stats, fusion summaries, and stripe period vs scan timing.

Case “Banding is from lighting” vs “banding is from scheduling/window phase”

First 2 captures (1) Stripe period measurement, (2) line/frame timing (row time, FPS, exposure window placement).

One discriminator Change FPS/row time slightly: does stripe period shift according to beat-frequency prediction?

First fix Lock exposure window phase (stable schedule), then re-check stripe stability before applying stronger de-banding filters.

Case “Fusion algorithm is weak” vs “S/L Δt is too large”

First 2 captures (1) S/L exposure timeline for the same output frame (Δt), (2) motion mask ratio (frame/ROI).

One discriminator Reduce Δt (scheduling) while keeping merge weights: does ghosting drop significantly?

First fix Make motion regions more conservative (weights) and shrink Δt; confirm by replaying the same motion clip.

Case “Brightness pumping is AE/control loop” vs “tone/temporal instability”

First 2 captures (1) exposure ID/ratio log across frames, (2) output histogram mean/percentiles across frames.

One discriminator Lock scheduling cadence and exposure ratio: if pumping disappears, root cause is control/scheduling.

First fix Stabilize scheduling first; only then tune tone/local/temporal settings for consistency.

Case “Color drift is white balance” vs “fusion→tone integration inconsistency”

First 2 captures (1) gray/color card trend vs exposure ratio sweep, (2) per-exposure-ID histogram/RAW stats (S/L separated).

One discriminator If drift correlates strongly with exposure ratio, integration consistency is the primary suspect.

First fix Fix fusion-to-tone consistency under ratio changes before adding stronger local contrast or aggressive NR.

Case “Shadows look dirty because NR is weak” vs “shadows are lifted/crushed incorrectly”

First 2 captures: (1) shadow ROI SNR, (2) tone curve shadow segment (lift level) + histogram shadow endpoint.

One discriminator: change only the shadow lift (not NR). Does the “dirty” perception track the curve change?

First fix: set shadow lift to a stable baseline first, then suppress chroma noise; keep luma NR conservative on texture.
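The shadow ROI SNR referenced above can be computed from a flat patch. A minimal Python sketch, assuming the ROI is a uniform gray patch so texture does not get counted as noise:

```python
import numpy as np

def roi_snr_db(roi: np.ndarray) -> float:
    """SNR of a flat shadow patch in dB: mean signal level over
    the noise standard deviation within the patch."""
    mu = float(roi.mean())
    sigma = float(roi.std())
    return 20.0 * np.log10(mu / sigma)
```

Tracking this number across shadow-lift settings, with NR frozen, separates "the curve is exposing noise" from "NR is genuinely too weak".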

Minimum capture set for most debates: (A) Trigger/Strobe waveform + (B) Exposure-active/Frame-valid waveform, plus (C) per-exposure-ID histogram/RAW stats + (D) fusion summary (mask ratio, weight map summary).
Figure F10 — Capture four items first: two waveforms (Trigger/Strobe and Exposure-active/Frame-valid) plus two stats panels (per-exposure-ID histogram/RAW stats and fusion summary). Most HDR/WDR debates end here.
Cite this figure: 2 Waveforms + 2 Stats Capture Grid (F10) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F10, accessed YYYY-MM-DD.”

H2-11. Validation & tuning plan (bring-up → golden scene → regression)

This SOP turns HDR/WDR tuning into a repeatable pipeline: bring-up baseline → golden scenes → metric-driven tuning → versioned HDR profile lock → regression. It focuses on HDR/WDR-owned knobs (exposure scheduling, fusion, de-banding, NR, tone/gamma consistency). Storage implementation details belong to the Calibration & NVM subpage.

Step 1 Bring-up baseline (before any “beauty tuning”)

  • Freeze the environment: fixed scene + fixed lighting + stable temperature (avoid chasing drift).
  • Expose the HDR state: log exposure ID, S/L ratio, shutter times, and frame counters per frame.
  • Verify scheduling integrity: confirm S/L windows occur where expected (Δt, overlap, segmented vs interleaved).
  • Enable debug summaries: motion-mask ratio and weight-map summaries (no full dump needed).
  • Start with “fail fast” checks: banding, ghosting, highlight clip, black crush, pumping.

Rule: if artifacts exist in the baseline, do not tune tone/NR to hide them. Fix scheduling/fusion stability first.

Step 2 Golden scenes (repeatable by anyone)

  • Backlight DR: bright window + dark interior (tests highlight roll-off + shadow usability).
  • Specular highlight: metal / solder / reflective label (tests saturation handling + halo risk).
  • Low-light texture: dark fabric / printed text (tests shadow SNR + “plasticky” NR risk).
  • Flicker stress: 50/60Hz or PWM lighting (tests beat banding + exposure-phase stability).
  • Motion stress: moving edges/people/robot arm (tests ghosting + motion-mask decisions).

Golden scenes must be re-shootable: fixed camera pose, fixed exposure mode, and saved lighting settings.

Step 3 Metrics that matter (quantify what users complain about)

  • Dynamic range: usable stops (clip threshold + shadow SNR threshold).
  • Shadow quality: ROI SNR + texture preservation (avoid “mud” vs “plastic”).
  • Color consistency: ΔE trend on chart across exposure ratios (detect systematic drift).
  • Banding score: stripe amplitude/period stability vs FPS/row time changes.
  • Ghosting score: motion-mask hit-rate + residual double-edge energy (motion regions).
  • Tone stability: histogram percentiles drift frame-to-frame (detect pumping).

Key rule: always compute stats per exposure ID (S/L separated), otherwise evidence becomes ambiguous.
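The per-exposure-ID rule can be enforced with a small helper that never mixes S and L pixels. A Python sketch (the 10-bit saturation level and the stat names are assumptions, not a fixed schema):

```python
import numpy as np

def per_exposure_stats(frames, exposure_ids, sat_level=1023):
    """Histogram percentiles and saturation ratio per exposure ID,
    keeping S and L evidence strictly separated.
    sat_level defaults to a 10-bit RAW full scale (assumption)."""
    stats = {}
    for eid in sorted(set(exposure_ids)):
        pix = np.concatenate([f.ravel() for f, i in zip(frames, exposure_ids)
                              if i == eid])
        stats[eid] = {
            "p1": float(np.percentile(pix, 1)),
            "p50": float(np.percentile(pix, 50)),
            "p99": float(np.percentile(pix, 99)),
            "sat_ratio": float((pix >= sat_level).mean()),
        }
    return stats
```

Feeding mixed-ID pixels into one histogram is exactly the ambiguity this helper is meant to prevent.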

Step 4 Parameter sweep plan (controlled changes, measurable outcomes)

  • Exposure scheduling: segmented vs interleaved, S/L ratio sweep, Δt minimization, window phase lock.
  • Fusion: weight curve sweep (highlight vs shadow priorities), motion threshold sweep, transition width sweep.
  • De-banding: flicker estimation strength and lock strategy (validate with stripe period predictability).
  • NR under HDR: luma/chroma balance, temporal strength (watch for trails vs detail loss).
  • Tone mapping: knee/shoulder sweep, highlight roll-off smoothness, shadow lift baseline.

Sweep discipline: change one family at a time and store results with a run ID; never “turn five knobs” blindly.
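Run-ID discipline can be automated so sweep results stay diffable across sessions. A Python sketch using a content hash of the parameter set (the naming and ID length are illustrative):

```python
import hashlib
import json

def run_id(param_family: str, params: dict) -> str:
    """Deterministic run ID for one sweep point: the same family
    and parameter values always map to the same ID, so results
    stored under it can be compared across machines and dates."""
    blob = json.dumps({"family": param_family, "params": params},
                      sort_keys=True).encode()
    return hashlib.sha1(blob).hexdigest()[:12]
```

Storing captures and metrics under `run_id(...)` makes "what changed between these two runs" a file diff instead of a memory test.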

Step 5 Lock a versioned HDR profile (traceable, reviewable)

  • Create a profile ID: HDR_PROFILE_ID + scene set + firmware hash + sensor lot tag.
  • Freeze the baseline: store golden captures + metric baselines (per-scene) for future diffs.
  • Define pass bands: acceptable ranges for banding score, ghosting score, ΔE drift, clip ratios.
  • Keep parameter sets human-reviewable: “what changed and why” notes for each locked release.

Storage/packing format and NVM layout belong to the Calibration & NVM page; keep this page focused on the SOP.
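Pass bands become useful once they are machine-checkable. A minimal Python sketch (metric names are placeholders, not a fixed schema):

```python
def check_pass_bands(metrics: dict, pass_bands: dict) -> dict:
    """Per-metric pass/fail against the locked profile's (lo, hi)
    bands. Any metric outside its band flags the run for review."""
    return {name: lo <= metrics[name] <= hi
            for name, (lo, hi) in pass_bands.items()}
```

A locked profile then ships as data: parameter set, golden captures, metric baselines, and these bands.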

Step 6 Regression checklist (what forces a rerun)

  • Firmware/ISP change: exposure scheduler, fusion thresholds, NR/tone kernels.
  • Sensor lot change: black level drift, gain curve shifts, saturation behavior shifts.
  • Operating envelope: temperature corners, different flicker frequencies, different motion patterns.
  • Factory/field modes: any “auto” re-enabled must be verified for stability (pumping, banding).

Regression principle: rerun a small “high-sensitivity subset” first (flicker + motion + backlight). Expand only if failures appear.
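The subset-first principle is trivial to encode so CI runs it consistently. A Python sketch (the scene names mirror the golden scenes above but are otherwise arbitrary labels):

```python
# High-sensitivity scenes that catch most regressions first (assumed labels).
HIGH_SENSITIVITY = ("flicker_stress", "motion_stress", "backlight_dr")

def regression_order(all_scenes):
    """Return (subset, rest): run the high-sensitivity subset first,
    and schedule the remaining scenes only if the subset fails."""
    subset = [s for s in HIGH_SENSITIVITY if s in all_scenes]
    rest = [s for s in all_scenes if s not in HIGH_SENSITIVITY]
    return subset, rest
```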

Procurement-ready Golden scene assets & instruments (example MPNs)

  • Dynamic range chart: DSC Labs Xyla 21 — CDX1-81W (21-step, built-in light source).
  • Color reference: X-Rite ColorChecker Classic — MSCCC (24-patch target).
  • Resolution/MTF & tonal response chart: Imatest SFRplus chart — QI-SFR10-P-RM (example chart code).
  • Flicker measurement: UPRtek MF250N handheld spectral flicker meter (FFT + flicker metrics).

These items make the SOP repeatable across teams: one DR target, one color target, one sharpness/tonal chart, and one flicker meter.

Figure F11 — A repeatable HDR/WDR tuning workflow: bring-up baseline, run golden scenes, compute metrics, do controlled sweeps, lock a versioned HDR profile, and execute regression on every meaningful change.
Cite this figure: Validation & Tuning SOP Flow (F11) — Use: “ICNavigator — HDR/WDR Imaging Chain, Figure F11, accessed YYYY-MM-DD.”


H2-12. FAQs ×12 (Evidence-based, no scope creep)

Each answer follows the same engineering pattern: First 2 captures → One discriminator → First fix. Every item maps back to H2-1…H2-11 so the FAQ stays inside the HDR/WDR chain (exposure scheduling, fusion, de-banding, NR, tone/gamma, validation).
1. “HDR ON makes it more flickery.” Should the first suspect be row timing or lighting frequency?

First 2 captures: (1) flicker/stripe period (time or spatial), (2) row time/FPS + exposure window phase (where S/L windows land).

Discriminator: slightly change FPS/row time—if the flicker/stripe period shifts predictably, it’s scan/window coupling (beat behavior). If it stays locked, it’s mostly lighting-frequency driven.

First fix: lock the exposure scheduling cadence/phase first, then tune de-flicker/de-banding strength.

Maps to: H2-6 (De-flicker/De-banding), H2-10 (Measurements)
2. “Moving edges show ghosting.” Is it fusion weights or a loose motion mask?

First 2 captures: (1) motion-mask hit-rate (frame/ROI), (2) S/L Δt on the timeline for the same output frame (how far apart exposures are).

Discriminator: reduce Δt via scheduling without changing weights—if ghosting drops sharply, Δt/scheduling is the driver. If not, mask/weight policy is the driver.

First fix: make motion regions conservative (weighting) and shrink Δt; re-test on the same motion clip.

Maps to: H2-5 (Fusion pipeline), H2-9 (Artifact library)
3. “Shadows are visible but the image looks plasticky.” Is NR too strong or is tone mapping flattening it?

First 2 captures: (1) shadow ROI SNR + texture/detail retention indicator, (2) tone curve summary (shadow/mid segments; lift vs compression).

Discriminator: change only the tone curve (keep NR fixed). If the “plasticky” feel tracks the curve, tone is the root. If it doesn’t, NR policy is the root.

First fix: set a stable tone baseline first, then limit NR (especially luma) while controlling chroma noise in deep shadows.

Maps to: H2-7 (Denoise under HDR), H2-8 (Tone/gamma consistency)
4. “Highlights don’t clip, but color turns yellow/green.” Is it fusion color consistency or gamma/curve?

First 2 captures: (1) gray/color chart ΔE trend vs exposure ratio sweep, (2) per-exposure-ID stats (S/L separated) to see which exposure dominates highlights.

Discriminator: if the color shift correlates strongly with L/S ratio, fusion-to-tone consistency is likely. If it correlates weakly, tone/gamma channel consistency and roll-off policy are likely.

First fix: stabilize highlight roll-off and fusion→tone consistency before “more contrast” tuning.

Maps to: H2-8 (Tone mapping & color consistency)
5. “Banding appears only under certain lamps.” How to quickly confirm 50/60 Hz beat behavior?

First 2 captures: (1) stripe period (spatial or temporal), (2) row time/FPS + exposure window placement (segmented vs interleaved phase).

Discriminator: adjust window phase or FPS slightly. Beat-coupled banding will shift in a predictable way as scan timing changes.

First fix: align/lock exposure windows to a stable phase first; then apply de-banding estimation if needed.

Maps to: H2-6 (Why stripes happen), H2-3 (Exposure scheduling)
6. “How to set exposure ratio? What goes wrong if S/L differs too much?”

First 2 captures: (1) highlight saturation ratio (does S protect highlights?), (2) shadow ROI SNR (does L add usable info?).

Discriminator: sweep ratio under a fixed scene. If ratio increases ghosting/banding risk and latency while adding little usable shadow information, it’s too aggressive.

First fix: choose the smallest ratio that meets highlight protection and shadow usability metrics.

Maps to: H2-3 (Scheduling & ratio), H2-2 (When multi-exposure is worth it)
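The "smallest passing ratio" rule can be scripted over a ratio sweep's measurements. A Python sketch (the thresholds are example values, not recommendations; inputs are per-ratio measurements from the fixed scene):

```python
def smallest_passing_ratio(candidates, sat_ratio_by_r, shadow_snr_by_r,
                           sat_max=0.001, snr_min_db=20.0):
    """Pick the smallest S/L exposure ratio whose measured highlight
    saturation ratio and shadow SNR both meet their thresholds.
    sat_ratio_by_r / shadow_snr_by_r are dicts keyed by ratio."""
    for r in sorted(candidates):
        if sat_ratio_by_r[r] <= sat_max and shadow_snr_by_r[r] >= snr_min_db:
            return r
    return None  # nothing passes: revisit the scene or the thresholds
```

Choosing the smallest passing ratio keeps ghosting/banding risk and latency at their minimum for the required image quality.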
7. “Is interleaved always better than segmented?” What constraints decide?

First 2 captures: (1) Δt/overlap between S and L windows, (2) banding sensitivity vs phase (stripe period stability).

Discriminator: if motion ghosting is the dominant failure mode, interleaving (smaller Δt) often wins. If beat-coupled banding dominates, segmented with strong phase control may be safer.

First fix: pick a baseline mode and make window phase lockable for regression.

Maps to: H2-3 (Segmented vs interleaved)
8. “HDR increases latency.” Where to suspect first?

First 2 captures: (1) per-stage timing (scheduler→fusion→NR→tone) or module counters, (2) whether temporal steps add buffering (e.g., motion handling, temporal NR).

Discriminator: downgrade one class (e.g., temporal NR or complex fusion) and re-measure. If latency drops immediately, buffering/compute is the driver; otherwise frame structure/scheduling is the driver.

First fix: create a low-latency baseline profile and compare quality loss on golden scenes.

Maps to: H2-5 (Fusion), H2-11 (Validation plan)
9. “Same params look good in daytime but soft at night.” Is it parameter range or missing golden scenes?

First 2 captures: (1) confirm golden scene coverage includes night low-light + high-contrast point lights, (2) metric deltas at night (SNR, ghosting score, banding score).

Discriminator: if a missing scene can reproduce and quantify the failure consistently, the scene set is incomplete. If failures vary unpredictably, the control/tuning range may be unstable.

First fix: expand the golden scene set first, then sweep parameters under controlled conditions.

Maps to: H2-11 (Golden scenes & regression)
10. “Debanding removes stripes but also kills detail.” How to find the balance?

First 2 captures: (1) banding score (stripe amplitude/period), (2) texture/detail retention indicator (edge/texture energy).

Discriminator: check diminishing returns: if extra de-banding yields little stripe improvement but linear detail loss, it’s beyond the optimal point.

First fix: lock the minimum acceptable banding score, then keep NR conservative to preserve texture.

Maps to: H2-6 (De-banding), H2-7 (NR placement/strength)
11. “Brightness jumps periodically.” Is it exposure scheduling or strobe alignment?

First 2 captures: (1) Trigger/Strobe waveform, (2) Exposure-active/Frame-valid waveform (actual exposure windows).

Discriminator: if the strobe phase drift relative to the exposure window matches the brightness jump period, it’s alignment. If not, it’s scheduling/control loop instability.

First fix: align and lock strobe to the exposure window first, then re-check for residual pumping.

Maps to: H2-4 (Sync hooks), H2-10 (Measurements)
12. “How to turn HDR tuning into a regression-ready versioned asset?”

First 2 captures: (1) locked golden captures per scene, (2) locked metric baselines + pass bands (banding/ghosting/ΔE/clip ratios).

Discriminator: a change (FW/sensor lot) must produce a clear pass/fail outcome on the same scene+metric set. If it does, the tuning is assetized.

First fix: define HDR_PROFILE_ID + baseline pack (scenes + metrics + key parameter summary). Reference “Calibration & NVM” only for storage mechanics.

Maps to: H2-11 (Validation & regression), plus a brief reference to Calibration & NVM (no storage details here)