
Structured-Light 3D: Drivers, Pattern Timing & Depth ISP


Core idea: Structured-Light 3D works by projecting known coded patterns and decoding them with tight camera–projector synchronization to reconstruct depth. Most failures reduce to measurable evidence—clip/contrast/confidence, PAT_ID alignment, and calibration drift—so the fastest path is “measure → isolate → fix” instead of guessing parameters.

H2-1. What Structured-Light 3D Is (and what it is not)

Definition (extractable): Structured-light 3D is active triangulation where a projector emits known coded patterns, a camera captures each pattern with deterministic timing, and a compute pipeline decodes correspondences to triangulate depth. A production-grade output is not only a depth map, but also a confidence map and an invalid mask that explain where depth is unreliable.

Key terms: Active triangulation · Coded patterns · PAT_ID alignment · Depth + Confidence + Invalid mask

Core loop (with engineering artifacts):

  • Project a known pattern set → outputs: STROBE gate, PAT_ID (pattern index), projector frame sync
  • Capture each pattern deterministically → outputs: exposure window, FRAME_VALID, timestamp (preferred)
  • Decode per-frame observations → outputs: decoded code/phase maps, invalid mask candidates
  • Correspond camera pixels to projector coordinates → outputs: correspondence map + confidence score
  • Triangulate using calibrated geometry → outputs: raw depth + confidence
  • Post-filter and finalize → outputs: depth map, confidence map, invalid mask (and optional hole fill)
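
The loop above can be sketched as a minimal stage chain. This is a sketch only; the stage callbacks and field names are illustrative, not a real driver or SDK API:

```python
from dataclasses import dataclass

@dataclass
class DepthResult:
    """Final artifacts: depth plus the evidence that explains it."""
    depth: list
    confidence: list
    invalid_mask: list

def run_cycle(patterns, project, capture, decode, triangulate, post_filter):
    """One structured-light cycle; each capture is bound to its PAT_ID."""
    frames = []
    for pat_id, pattern in enumerate(patterns):
        project(pat_id, pattern)              # STROBE gate + PAT_ID out
        frames.append((pat_id, capture()))    # deterministic exposure window
    codes, invalid = decode(frames)           # code/phase maps + candidates
    depth, conf = triangulate(codes)          # calibrated geometry
    return post_filter(depth, conf, invalid)  # depth + confidence + mask
```

The point of the shape is that the invalid-mask candidates and confidence travel with depth from decode onward instead of being bolted on at the end.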

Common coding variants (why they exist):

  • Binary Gray code: robust indexing, but many frames; most sensitive to motion-induced mis-decode.
  • Phase shifting: high precision; most sensitive to nonlinearity and saturation (wavy depth / banding).
  • Hybrid (Gray + Phase): Gray for coarse identity, phase for subpixel; requires consistent threshold + confidence fusion.

Rule of thumb: every additional frame increases decode robustness in static scenes, but increases vulnerability to motion and timing drift.

What it is not (scope lock):

  • Not stereo-only depth: no reliance on passive texture; structured light injects a known signal (the pattern) to create correspondences.
  • Not ToF 3D (dToF/iToF): no phase/time-of-flight TDC chain is required; the critical path is pattern identity + exposure timing + decode correctness.
  • Not an interface/protocol tutorial: transport links can vary (GigE/USB3/CoaXPress), but structured-light failures typically originate in timing, photon budget, or calibration.

Three failure archetypes (symptom → fastest discriminator):

  • Motion / frame mismatch: depth looks “patchy” or jumps between frames → fastest check: compare PAT_ID sequence vs captured frame order; look for mis-aligned pattern index.
  • Sync / rolling-shutter shear: diagonal banding or spatial “tilt” artifacts → fastest check: verify exposure window vs strobe gate overlap; check trigger jitter and rolling shutter timing.
  • Ambient / saturation: holes on shiny/black regions or near sunlight → fastest check: image histogram + clipped pixel map + confidence collapse signature.

These three archetypes map directly to later chapters: timing (H2-6), photon/robustness (H2-9), calibration (H2-8), and field evidence (H2-11).
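
The first archetype's fastest check (PAT_ID sequence vs captured frame order) is mechanical enough to automate. A minimal sketch, with hypothetical names:

```python
def check_pat_id_alignment(expected_ids, captured_ids):
    """Compare the projected PAT_ID sequence against the captured frame
    order. Returns (ok, first_bad_index); a length mismatch indicates a
    dropped or duplicated frame."""
    for i, (exp, got) in enumerate(zip(expected_ids, captured_ids)):
        if exp != got:
            return False, i  # first misaligned pattern index
    if len(expected_ids) != len(captured_ids):
        return False, min(len(expected_ids), len(captured_ids))
    return True, None
```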

Figure F1. Structured-light 3D concept loop (pattern → capture → decode → depth + confidence).
Use this figure to explain the minimum closed loop: each capture must be aligned to a known PAT_ID, and the system must output confidence and invalid mask alongside depth.

H2-2. System Architecture Decomposition (Projector–Camera–Compute–Sync)

Purpose: A structured-light 3D system is easiest to build and debug when every block exposes a small set of observable signals. The architecture below splits responsibilities into four chains—Projector, Camera, Compute, and Sync—so later design and debug steps can reference the same template without repeating the system explanation.

Projector chain (photon emission + pattern integrity):

  • Light source & driver: constant-current or pulsed driver, microsecond-class strobe gating, current/temperature telemetry.
  • Modulator: DLP/LCoS/MEMS/galvo that turns an input pattern into a spatial output; must provide frame/blanking timing hooks.
  • Optics/DOE: forms the pattern at the scene; engineering focus is on repeatability (not full lens design).
  • Minimum observable outputs: STROBE gate, PAT_ID or frame index, FAULT, TEMP, (optional) optical feedback.

Camera chain (deterministic sampling of the pattern set):

  • Sensor + shutter model: global shutter simplifies capture; rolling shutter requires careful exposure gating and line timing awareness.
  • Exposure/gain control: exposure window must be aligned to the strobe/pattern timing; saturation behavior is a first-order depth reliability driver.
  • Pre-processing (only what affects decode): black level, normalization, clipping detection, and per-frame metadata.
  • Minimum observable outputs: TRIG_IN, FRAME_VALID/VSYNC, exposure start/end (or proxy), timestamp (preferred), gain/exposure settings.

Compute chain (decode-to-depth with confidence):

  • Decode: produce code/phase maps, invalid candidates, and a confidence score per pixel.
  • Correspondence: map camera pixels to projector coordinates; mismatches are usually visible as low-confidence islands.
  • Triangulation: uses calibrated geometry (camera/projector intrinsics + extrinsics) to compute depth.
  • Filter: remove speckle/outliers and produce final depth + confidence + invalid mask.
  • Minimum observable outputs: decoded maps, confidence map, invalid mask, stage latency counters.

Sync chain (the system’s “contract”):

  • Trigger distribution: defines the master timing (projector-master, camera-master, or external hub).
  • PAT_ID alignment: each captured frame must be tied to a known pattern index; this is the most common root cause of “looks random” depth failure.
  • Exposure gating: ensures the camera integrates during the valid pattern window (especially important for rolling shutter).
  • Timestamps: allow correlation of depth errors with temperature, current droop, jitter, or dropped frames.

Non-negotiable invariants: (1) PAT_ID ↔ captured frame order is consistent, (2) exposure window overlaps the correct strobe/pattern window, (3) critical telemetry is logged with timestamps.
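
Invariant (2) reduces to an interval-overlap computation on two timestamp pairs. A minimal sketch (times in microseconds, names illustrative):

```python
def exposure_strobe_overlap(exp_start, exp_end, strobe_start, strobe_end):
    """Duration for which the exposure window overlaps the strobe/pattern
    window; 0.0 means the sync contract is broken for that frame."""
    return max(0.0, min(exp_end, strobe_end) - max(exp_start, strobe_start))
```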

Figure F2. Four-chain architecture with required observables (TRIG, PAT_ID, STROBE, TS, TEMP/FAULT).
This map defines the integration contract: if PAT_ID, exposure window, and timestamps are measurable and aligned, most “random depth” failures become diagnosable with logs plus two waveforms.

H2-3. Pattern Coding Strategy (Gray / Phase / Hybrid) and Error Modes

Why multiple codes exist: Pattern coding is a trade between frame budget, motion tolerance, and decode robustness under saturation/low contrast. A practical structured-light system chooses a code family based on measurable failure signatures, then tunes the pattern set so each exposure has a deterministic PAT_ID and a stable decode path.

Key terms: Gray (robust index) · Phase (high precision) · Hybrid (coarse + subpixel) · PAT_ID per exposure

Pattern types → frames → strengths → failure signatures (field-readable):

  • Gray code: many frames; strong against texture-poor scenes. Dominant failure: motion between frames. Signature: “jumping islands” of decoded index and fragmented invalid mask. Fast check: verify PAT_ID sequence matches captured frame order.
  • Phase shifting: fewer patterns for fine depth; high subpixel precision. Dominant failure: nonlinearity and saturation. Signature: wavy depth / banding, phase unwrap discontinuities, confidence collapse near clipped regions. Fast check: clipped-pixel map + phase residual histogram.
  • Hybrid (Gray + Phase): Gray for coarse identity, phase for subpixel. Dominant failure: inconsistent thresholds/weights across the two stages. Signature: correct coarse geometry but unstable fine detail (subpixel jitter). Fast check: compare coarse index stability vs phase residual consistency per PAT_ID.
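
For the Gray-code variant, the binary-reflected code is a two-line bit trick: adjacent column indices differ by exactly one bit, so a single threshold error corrupts one bit rather than the whole index. A minimal encode/decode sketch:

```python
def to_gray(i):
    """Binary-reflected Gray code of index i."""
    return i ^ (i >> 1)

def from_gray(g):
    """Invert the Gray code to recover the projector column index."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i
```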

Pattern-set design knobs (what to tune, what it costs):

  • Frame count: more frames improve decode robustness in static scenes, but increase motion sensitivity and PAT_ID misalignment risk.
  • Spatial frequency: higher frequency raises precision but is more fragile under blur, defocus, MTF limits, and rolling-shutter shear.
  • Contrast & exposure: higher contrast improves threshold margin, but increases saturation and nonlinearity errors (especially in phase decode).
  • Guard bands / sync frames: extra blanking or guard regions reduce boundary mis-decode, at the cost of frame budget and overall cycle time.
  • Per-frame metadata: logging exposure/gain and PAT_ID for every capture converts “random depth” into a diagnosable sequence problem.
  • Validation thresholding: confidence and invalid-mask rules should be tuned against measurable metrics (valid %, RMSE, banding rate).

PAT_ID and sequencing (the integration contract): Every exposure must be bound to a unique PAT_ID (pattern-set ID + frame index). If dropped frames, buffering, or trigger jitter can reorder PAT_ID ↔ frame mapping, the decode stage may look like noise even when optics are fine. For a production system, PAT_ID and timestamps should be recorded alongside depth outputs so failures can be correlated to motion, saturation, or timing drift.
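
The per-exposure binding can be captured as a small record type that travels with every frame. A sketch only; the field names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class FrameMeta:
    """Per-capture record binding each exposure to its pattern identity."""
    pattern_set_id: str  # which pattern profile was active
    pat_id: int          # frame index within the pattern set
    timestamp_us: int    # hardware timestamp of exposure start
    exposure_us: int
    gain_db: float

def metadata_log(frames):
    """Flatten FrameMeta records for storage alongside depth outputs."""
    return [asdict(f) for f in frames]
```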

Figure F3. Pattern timeline (P0…Pn) and decode chain (threshold → unwrap → validate → correspondence).
Use this figure to explain why failures look different: motion breaks frame-to-frame consistency (timeline), saturation breaks threshold margin, and nonlinearity breaks phase unwrapping/validation.

H2-4. Projector / Laser Driver Design (Strobe, Current Control, Thermal)

Why the driver matters: In structured light, the driver is not “just a light source.” It defines the photon dose per PAT_ID and the timing overlap between strobe and exposure. If pulse shape, peak current, or temperature drift changes across frames, decode thresholds and phase residuals drift with it—showing up as holes, banding, or unstable confidence.

Key terms: µs-class strobe · Repeatable peak current · Low droop · Temp-aware logs

Driver types (selection by timing + repeatability):

  • Constant-current buck/boost: efficient for mid/high power; suitable when pulse width is moderate and thermal headroom is needed.
  • Linear current driver: low noise and fast response, but higher dissipation; useful when repeatability matters more than efficiency.
  • Pulsed strobe driver: high peak current with tight pulse control; preferred for short exposures and ambient suppression.

Structured-light priority is repeatability (frame-to-frame current and timing), not maximum brightness alone.

Key specifications (KPI → what breaks when it is out of spec):

  • Peak current & repeatability: mismatch across frames reduces contrast margin → decode islands + confidence collapse.
  • Rise/fall time: slow edges shrink the effective strobe window → pattern-dependent low SNR, especially with rolling shutter gating.
  • Pulse width accuracy & jitter: width drift changes photon dose → threshold drift (Gray) or phase residual drift (Phase).
  • Droop / regulation bandwidth: intra-pulse droop creates spatial/temporal bias → banding and “wavy” depth in phase decode.
  • Temperature coefficient: warm-up changes optical output and electronics behavior → depth drift + invalid-mask growth over time.

Protection and monitoring (as validation hooks, not a certification guide):

  • Protection: OCP/OTP, open/short detection; interlock signals should be captured in logs when a depth anomaly occurs.
  • Monitoring: sense resistor (Rsense) current waveform, diode/board temperature (NTC), and optional optical feedback (photodiode).
  • Logging minimum: PAT_ID, strobe enable, peak current estimate, temperature, fault flags.

First 2 measurements (fastest evidence, minimal tools):

  • Waveform #1 — STROBE vs Exposure: probe strobe gate and camera exposure (or frame-valid proxy). Pass condition: stable overlap and stable timing across PAT_ID frames.
  • Waveform #2 — Rsense current pulse: measure the current pulse for multiple PAT_ID values. Pass condition: repeatable peak and pulse width, minimal droop, no temperature-correlated drift.

If these two measurements are stable, most remaining depth instability originates in pattern decoding/validation or calibration, not in photon emission timing.

Figure F4. Driver chain with strobe gating, current sensing, and temperature/fault telemetry.
The most valuable observables are the strobe-to-exposure overlap and the Rsense pulse repeatability across PAT_ID frames; both determine decode margin and confidence stability.

H2-5. Projector Modulation & Pattern Generation (DLP/LCoS/MEMS timing hooks)

Bridge from “emitting photons” to “showing a known pattern at a known time”: The projector path must prove that a specific spatial pattern (P0…Pn) was actually presented during a defined valid window. Structured light becomes diagnosable only when the projector exposes timing hooks (frame/blank/valid) and a traceable PAT_ID that can be correlated with captured frames and depth outputs.

Key terms: Pattern LUT → Modulator · FRAME_SYNC / BLANK · Settle / VALID · PAT_ID trace

Pattern source chain (what must exist, regardless of DLP/LCoS/MEMS choice):

  • Pattern LUT / sequence table: defines P0…Pn with deterministic ordering and per-frame metadata.
  • Pattern scheduler: advances the sequence by trigger or an internal clock and emits a stable PAT_ID for each exposure.
  • Modulator interface stage: moves the pattern into the modulator pipeline; any buffering or retries must not reorder PAT_ID.

Key requirement: PAT_ID must be observable (pin/event/log) so missing or repeated frames are detectable.

Pattern timing budget (the windows that decide whether decoding is valid):

  • Exposure window: the camera’s integration window, which must fall fully inside the projector’s pattern-valid interval.
  • Settle time: pattern transitions require a settle interval (mirrors/LC/scan); capturing inside settle creates mixed patterns.
  • Blanking interval: a “do not capture / do not strobe” region that prevents partial frames and transient states.
  • Pattern-valid window: the only interval where the spatial pattern is guaranteed stable; this is what exposure gating must target.
  • PAT_ID latch moment: define when PAT_ID changes and when it becomes valid; ambiguity here creates frame-to-pattern mismatch.
  • Strobe gate: defines when the driver is allowed to emit; should coincide with pattern-valid and the desired exposure overlap.
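
The containment rule these windows imply (exposure fully inside pattern-valid) reduces to a simple interval check. A sketch assuming a [settle][valid][blank] frame layout and microsecond units, which may not match a given modulator's actual timing model:

```python
def exposure_inside_valid(exp_start, exp_end, frame_start, settle_us, valid_us):
    """True when the camera integration window [exp_start, exp_end] falls
    fully inside the projector's pattern-valid interval. Assumed frame
    layout: [settle][valid][blank] starting at frame_start (all in µs)."""
    valid_start = frame_start + settle_us
    valid_end = valid_start + valid_us
    return valid_start <= exp_start and exp_end <= valid_end
```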

Pattern integrity (only as observed artifacts, mapped to measurable evidence):

  • Gamma / nonlinearity: increases phase residuals and banding risk. Evidence: per-PAT_ID intensity histogram drift and residual distribution changes.
  • Contrast loss / stray light: reduces threshold margin and confidence. Evidence: per-frame contrast metric and invalid-mask growth near bright backgrounds.
  • Flare / ghosting: creates secondary edges and “ghost layers.” Evidence: correlation peaks at multiple disparities / depth layers during decoding.

These checks stay within structured-light scope: they validate that the displayed pattern matches the intended code assumptions.

Figure F5. Projector pattern chain with timing hooks (PAT_ID / FRAME_SYNC / BLANK / VALID / STROBE_GATE).
The only safe capture region is the pattern-valid window. If exposure overlaps settle/blanking, decoding becomes unstable even when the driver current looks correct.

H2-6. Capture Synchronization (Rolling/Global Shutter, Trigger, Exposure Gating)

Make sync deterministic: Structured light fails first when projector and camera are not locked. A deterministic system can answer: which PAT_ID corresponds to which exposure window, whether that exposure was fully inside the projector’s pattern-valid window, and whether timing errors are dominated by jitter (must be reduced) or fixed latency (can be calibrated).

Key terms: PAT_ID ↔ Exposure · Global vs Rolling · Trigger topology · Jitter vs latency

Global vs rolling shutter (why rolling creates shear):

  • Global shutter: all pixels integrate together; the main requirement is stable overlap between exposure and pattern-valid (plus correct PAT_ID mapping).
  • Rolling shutter: different lines integrate at different times; if the projected pattern changes during the line scan, the image contains a time-varying pattern, causing shear and decoding mismatch.

Rolling-shutter mitigation stays within structured-light scope: line-synchronized projection or exposure gating that shortens strobe to approximate a global capture moment.

Sync topologies (choose the master, then define the contract):

  • Projector-master: projector advances PAT_ID and FRAME_SYNC; camera triggers from those events. Best when the projector exposes clean timing hooks.
  • Camera-master: camera defines exposure cadence; projector is triggered to show the matching pattern. Useful when camera exposure signals are easiest to observe.
  • External timing hub: distributes TRIG and programmable delays to both sides, and logs timestamps. Best for multi-camera rigs and strict repeatability.

Minimum observable set: TRIG, PAT_ID, STROBE, EXPOSURE (or proxy), FRAME_VALID.

Hardware timestamps (turn “random depth” into an auditable sequence):

  • Per-frame record: PAT_ID + exposure start/end timestamp (or frame timestamp proxy).
  • Counters: dropped/repeated frame counters for both projector and camera pipelines.
  • Correlation: attach timestamp/PAT_ID to depth + confidence so anomalies can be grouped by pattern index and timing state.

Jitter budget checklist (what must be small vs what can be calibrated):

  • Trigger jitter: random edge timing variation directly changes overlap each frame → must be minimized.
  • Driver edge variability: rise/fall variation behaves like jitter in photon timing → must be minimized.
  • Sensor timing uncertainty: exposure start/end variation changes integration alignment → must be minimized.
  • Fixed latency: stable trigger-to-exposure delay can be measured and compensated in calibration.

If depth instability tracks frame-to-frame variance, suspect jitter; if it is a constant offset, suspect fixed latency/calibration.
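
The jitter-vs-latency discriminator at the end of this checklist can be computed directly from logged trigger-to-exposure delays. A minimal sketch; the 2 µs budget is an illustrative default, not a spec:

```python
from statistics import mean, stdev

def classify_timing_error(trig_to_exp_delays_us, jitter_budget_us=2.0):
    """Separate what must be reduced (jitter = frame-to-frame variance)
    from what can be calibrated out (fixed latency = stable mean delay)."""
    fixed_latency = mean(trig_to_exp_delays_us)
    jitter = stdev(trig_to_exp_delays_us)
    verdict = "jitter" if jitter > jitter_budget_us else "fixed-latency"
    return fixed_latency, jitter, verdict
```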

Figure F6. Timing lanes for TRIG / PAT_ID / STROBE / EXPOSURE / FRAME_VALID, plus rolling-shutter shear concept.
Deterministic sync means PAT_ID remains stable through the exposure window and strobe overlaps only the projector’s pattern-valid region. Rolling shutter adds a line-time ramp that can shear patterns unless gated or line-synchronized.

H2-7. Depth ISP Pipeline (Decode → Correspondence → Triangulation → Filtering)

Pipeline goal: Convert pattern frames (P0…Pn) into Depth with an auditable quality trail. A structured-light pipeline is not “a generic ISP”; it is a pattern-decoding + geometry pipeline that must output four first-class artifacts: decoded index, phase, confidence, and an invalid mask. These artifacts allow depth failures to be grouped by PAT_ID and diagnosed without guessing.

Key terms: Index map · Phase map · Confidence · Invalid mask

Pipeline stages with outputs (each stage must produce a diagnosable artifact):

  • Pre-processing (decode-critical only): black-level / offset alignment, normalization, and optional shading correction (only when it changes decode margin).
    Outputs: normalized intensity proxy, clip map, per-frame mean/contrast stats.
  • Decode: Gray thresholds / phase unwrap / hybrid validation to recover pattern index and subpixel phase.
    Outputs: index map, phase map, invalid mask, confidence (or residual score).
  • Correspondence: map each camera pixel to a projector coordinate using decoded index+phase.
    Outputs: correspondence map, ambiguity flag (multi-solution / low-margin), match validity.
  • Triangulation: use camera/projector geometry and extrinsics to convert correspondence into metric depth.
    Outputs: raw depth map, optional point export (diagnostic only).
  • Filtering: speckle removal, edge-preserving smoothing, and optional temporal stabilization for multi-frame sequences.
    Outputs: filtered depth map, stability metric (if temporal).

Engineering rule: keep raw depth and filtered depth as separate outputs so filtering cannot hide root causes.
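
As one concrete decode example, the classic 4-step phase shift (shifts of 0°, 90°, 180°, 270°) recovers wrapped phase with an atan2 and yields the modulation amplitude as a natural per-pixel confidence. A per-pixel sketch under those assumptions, not the only valid formulation:

```python
import math

def decode_four_step(i0, i1, i2, i3, min_modulation=1e-3):
    """4-step phase-shift decode for intensities I_k = A + B*cos(phi + k*pi/2).
    Returns (wrapped_phase, confidence, valid): the modulation amplitude B
    doubles as the confidence score, and low modulation marks the pixel
    invalid instead of emitting a garbage phase."""
    num = i3 - i1                            # 2*B*sin(phi)
    den = i0 - i2                            # 2*B*cos(phi)
    modulation = 0.5 * math.hypot(num, den)  # = B, the confidence proxy
    if modulation < min_modulation:
        return 0.0, modulation, False        # invalid-mask candidate
    phase = math.atan2(num, den)             # wrapped to (-pi, pi]
    return phase, modulation, True
```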

Failure signatures → fastest evidence (use artifacts, not guesses):

  • Holes grow / invalid expands: inspect invalid mask + clip map + per-PAT_ID contrast. Common causes: low contrast, saturation, stray light, motion.
  • Striping / banding in depth: inspect phase map + confidence distribution. Common causes: nonlinearity, unwrap breaks, timing jitter.
  • Depth shifts as a constant offset: compare raw depth across a plane test with a fixed rig. Common causes: extrinsics/baseline drift, fixed latency calibration mismatch.
  • Depth “tears” near edges: compare correspondence validity vs confidence; check if filtering is over-smoothing. Common causes: ambiguity from repeated patterns, specular highlights.

Figure F7. Structured-light compute pipeline with explicit output artifacts (index/phase/confidence/invalid/depth).
Treat confidence and invalid masks as primary outputs. They allow depth failures to be grouped by PAT_ID and separated into decode, correspondence, geometry, or filtering causes.

H2-8. Calibration Essentials (Geometry, Distortion, Projector-as-Inverse-Camera)

Calibration is the depth “scale contract”: Structured light lives or dies by geometry. Triangulation only produces metric depth when the camera intrinsics/distortion, the projector model (as an inverse camera), and the camera–projector extrinsics are consistent with the physical rig. Calibration must be measurable (quick sanity tests) and traceable (version + checksum), otherwise field issues cannot be reproduced.

Key terms: Camera intrinsics · Distortion · Projector model · Extrinsics R|t

What must be calibrated (minimum checklist):

  • Camera intrinsics: focal / principal point plus distortion so correspondence is not biased at the edges.
  • Projector as an inverse camera: projector intrinsics (equivalent “pixel rays”) and its distortion model if applicable.
  • Extrinsics / baseline: rigid transform (R|t) between camera and projector frames; this sets depth scale and tilt.
  • Pattern-to-projector mapping: the pattern coordinate system must match projector pixel coordinates, or triangulation will warp.

Rule of thumb: if a parameter affects where a projector pixel ray intersects a camera pixel ray, it belongs in calibration.

Drift triggers (when recalibration is required):

  • Thermal warm-up drift: depth plane residual changes after temperature stabilization.
  • Optics change: focus, lens swap, aperture change, or mechanical re-mounting alters intrinsics/extrinsics.
  • Rig shock/vibration: baseline or alignment shifts produce systematic tilt or curvature in plane tests.

If the effect is a stable offset, suspect extrinsics or a fixed-latency calibration mismatch; if it varies over time, suspect thermal/mechanical drift.

Quick sanity tests (fast proofs that calibration is consistent):

Flat plane

Look at: plane-fit residual + tilt/curvature trend.
Proves: extrinsics + distortion consistency.
Artifacts: raw depth + confidence + invalid mask.

Step gauge

Look at: scale correctness and edge behavior.
Proves: baseline scale + filtering impact.
Artifacts: raw vs filtered depth comparison.

Target / checkerboard

Look at: edge-region consistency and reprojected error distribution.
Proves: intrinsics/distortion quality.
Artifacts: correspondence validity + confidence.

Version & checksum

Look at: calibration ID, optics config ID, hash/checksum.
Proves: correct parameter set loaded.
Artifacts: logs bound to depth frames.
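
The flat-plane test reduces to a least-squares plane fit plus an RMS residual to trend over time. A stdlib-only sketch for z = a·x + b·y + c, which assumes the plane is not near-vertical in z (fine for a fronto-parallel plane rig):

```python
def _det3(m):
    """Determinant of a 3x3 matrix (row-major nested lists)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c via the 3x3 normal
    equations, solved with Cramer's rule. Returns (a, b, c)."""
    sxx = sxy = syy = sx = sy = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y; sz += z
        sxz += x * z; syz += y * z
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(len(points))]]
    v = [sxz, syz, sz]
    d = _det3(m)
    sol = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]      # replace column with RHS (Cramer)
        sol.append(_det3(mc) / d)
    return tuple(sol)

def plane_rms_residual(points, a, b, c):
    """RMS of z-residuals: the number to trend for extrinsics/thermal drift."""
    sq = sum((z - (a * x + b * y + c)) ** 2 for x, y, z in points)
    return (sq / len(points)) ** 0.5
```

Trending the residual (and the fitted tilt a, b) across warm-up is what distinguishes thermal drift from a stable extrinsics error.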

Calibration storage (traceability contract, minimal):

  • Version: calibration version/ID tied to device serial and optics configuration.
  • Checksum/hash: prevents silent mismatch and enables audit in field logs.
  • Link to outputs: attach calibration ID/hash to depth + confidence so anomalies can be traced back to a parameter set.

Figure F8. Camera and projector frames, baseline, extrinsics (R|t), and a calibration target used to fit geometry.
Calibration aligns camera rays and projector rays to a shared world reference (board/plane). Extrinsics and distortion errors typically appear as systematic plane tilt/curvature or edge-region depth bias.

H2-9. Robustness: Ambient Light, Surface Materials, Motion, Multipath Artifacts

Intent: Real scenes fail in repeatable ways. This chapter turns “messy reality” into detectable signatures and first-fix knobs. Robustness work should start from pipeline artifacts (clip map, contrast, confidence, invalid mask, PAT_ID grouping), not from guesswork.

Key terms: Signature · Fastest test · First fix · Knobs

Mini playbook (Symptom → likely cause → fastest test → first fix):

Ambient washout (sunlight / strong room lighting)

Symptom: completeness drops (% valid ↓) and confidence shifts down globally.
Likely cause: pattern contrast collapses inside the exposure window.
Fastest test: compare per-PAT_ID mean/contrast + confidence histogram across (lights off → indoor → sunlight).
First fix: narrowband optical filter + shorter exposure + stronger strobe; if using background subtraction patterns, keep raw/after-subtract artifacts for audit.

Saturation / clipping (high reflectance or too much strobe)

Symptom: phase unwrap breaks; depth shows striping/banding; specular holes appear.
Likely cause: clipped pixels destroy phase linearity and threshold margin.
Fastest test: sweep exposure at fixed strobe; plot clip% versus confidence collapse point.
First fix: reduce exposure or pulse width; cap peak current; prefer hybrid codes where coarse index can reject bad phase.

Motion artifacts (object moves during P0…Pn)

Symptom: invalid mask becomes “islands”; correspondence jumps; depth tears at edges.
Likely cause: inter-frame mismatch (PAT_ID sequence no longer represents the same surface).
Fastest test: switch to fewer frames (hybrid / reduced set) and compare islanding + edge tearing; group artifacts by PAT_ID.
First fix: reduce frames, increase strobe to shorten exposure; for rolling shutter, tighten gating so projection overlaps the valid exposure window.

Multipath / inter-reflections (concave scenes, shiny corners)

Symptom: “ghost layers” or inconsistent depth in corners; confidence patterns repeat by geometry.
Likely cause: multiple light paths create ambiguous correspondence.
Fastest test: add a light baffle / matte insert and check whether double-depth collapses; inspect ambiguity flags and confidence masks.
First fix: tighten invalid rules (reject ambiguous matches), adjust pattern frequency/guard bands to reduce multi-solution regions.

Material extremes (black + glossy; subsurface scatter)

Symptom: local holes around highlights + low SNR on dark areas; completeness becomes material-dependent.
Likely cause: simultaneous low contrast (dark) and clipping (glossy) within one scene.
Fastest test: compare clip map + contrast map on a two-material target (matte black + glossy patch).
First fix: dual-profile exposure/strobe settings per scene class; enforce confidence-based masking over “best-effort” fill-in.

Mitigation knobs (keep them measurable):

  • Exposure / gain: watch clip% + contrast stats + confidence histogram.
  • Strobe strength / pulse width: watch per-PAT_ID mean/contrast, and motion islanding reduction.
  • Code choice (Gray / Phase / Hybrid): watch invalid islanding, ambiguity flags, and phase residual.
  • Mask policy: trade completeness for correctness; report both % valid and error on valid points.
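
The “keep them measurable” rule implies computing clip% and a contrast statistic for every frame. A minimal sketch over a flat pixel list, assuming an 8-bit clip level and a Michelson-style contrast on unclipped pixels:

```python
def frame_stats(pixels, clip_level=255):
    """Per-frame evidence for the knobs above: clip fraction, mean, and a
    simple Michelson-style contrast computed on unclipped pixels."""
    clipped = sum(1 for p in pixels if p >= clip_level)
    usable = [p for p in pixels if p < clip_level]
    if not usable:
        return {"clip_pct": 100.0, "mean": float(clip_level), "contrast": 0.0}
    lo, hi = min(usable), max(usable)
    contrast = (hi - lo) / (hi + lo) if (hi + lo) > 0 else 0.0
    return {"clip_pct": 100.0 * clipped / len(pixels),
            "mean": sum(usable) / len(usable),
            "contrast": contrast}
```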

Figure F9. Robustness artifact gallery and the first-fix knobs (Exposure/Strobe/Code/Mask).
Recognize the artifact signature first (clip/contrast/confidence/invalid), then pull one knob at a time and re-check the same artifacts to confirm causality.

H2-10. Validation Metrics & Test Plan (What to Measure, How to Prove It Works)

Intent: A structured-light depth system is only “ready” when it passes a repeatable test matrix with traceable logs. Validation should produce a compact report that ties metrics to artifacts and to configuration identifiers (code profile + calibration hash + capture settings).

Key terms: Metrics · Fixtures · Matrix · Logging

Metrics (grouped so they can be measured and interpreted):

  • Accuracy: plane error / step error / RMSE versus reference geometry.
  • Precision: per-point σ under a static scene (noise on valid points).
  • Repeatability: same fixture across time/temperature; drift visibility in plane residual.
  • Completeness: % valid (from invalid mask) and its dependence on materials/lighting.
  • Edge fidelity: step-edge position bias and edge transition width (filter impact).
  • Latency (pipeline): capture-to-depth delay; keep it tied to PAT_ID/timestamps.

Report both error on valid points and % valid; a system can “look accurate” by discarding hard pixels.
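
The both-numbers rule can be enforced in one small metric function. A sketch over flat depth/reference lists:

```python
def depth_metrics(depth, reference, invalid_mask):
    """Report BOTH %-valid and RMSE on valid points, so accuracy cannot be
    inflated by silently discarding hard pixels."""
    valid = [(d, r) for d, r, inv in zip(depth, reference, invalid_mask)
             if not inv]
    pct_valid = 100.0 * len(valid) / len(depth)
    if not valid:
        return {"pct_valid": 0.0, "rmse_valid": None}
    rmse = (sum((d - r) ** 2 for d, r in valid) / len(valid)) ** 0.5
    return {"pct_valid": pct_valid, "rmse_valid": rmse}
```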

Fixtures (repeatable and minimal):

  • Flat plane: baseline accuracy + distortion/extrinsics consistency.
  • Step height gauge: scale + edge fidelity + filtering impact.
  • Calibration target: intrinsics/distortion quality and correspondence consistency.
  • Optional motion stage: quantify motion sensitivity across pattern sets.

Minimum test matrix (fast → medium → full):

Fast (30–60 min)

Scope: indoor stable lighting; static scene.
Fixtures: plane + step.
Outputs: baseline accuracy/precision/completeness; raw vs filtered depth snapshots.
Gate: no striping, stable confidence distribution, completeness within expected baseline.

Medium (half day)

Scope: add temperature points (cold/warm) and one material stress (dark/glossy).
Fixtures: plane + step + material target.
Outputs: repeatability across temp; completeness sensitivity to materials.
Gate: plane residual does not drift beyond tolerance after warm-up; no ghost layers on glossy corners.

Full (1–2 days)

Scope: sunlight/strong ambient, vibration/motion, multipath scenes (concave corners).
Fixtures: plane + step + corner/multipath setup; optional motion stage.
Outputs: robustness envelope (completeness vs ambient; motion islanding vs frame count).
Gate: artifacts remain classifiable (confidence/invalid explain failures) and mitigations move the right artifact in the right direction.

Data to log (field-proof contract):

Per frame

PAT_ID / pattern profile ID
timestamps (exposure start/end or equivalent)
exposure, gain, frame index
strobe current / pulse width
clip%, mean/contrast stats
confidence stats (mean/percentiles)
invalid% and mask summary

Per session

calibration version + checksum/hash
device serial + optics config ID
code mode (Gray/Phase/Hybrid) + frame count
sync topology label (camera-master / projector-master / external)
temperature (camera/projector/board)

If calibration ID/hash and code profile are not bound to depth frames, later failures cannot be reproduced or compared.
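One minimal shape for this logging contract, sketched with Python dataclasses; the field names are illustrative and simply mirror the per-frame and per-session lists above:

```python
from dataclasses import dataclass, asdict
import hashlib

@dataclass(frozen=True)
class FrameLog:
    pat_id: int
    t_exposure_start_us: int
    t_exposure_end_us: int
    exposure_us: int
    gain_db: float
    strobe_ma: float
    pulse_us: float
    clip_pct: float
    contrast: float
    conf_p50: float
    invalid_pct: float

@dataclass(frozen=True)
class SessionLog:
    calibration_version: str
    calibration_sha256: str
    device_serial: str
    code_mode: str       # "gray" | "phase" | "hybrid"
    frame_count: int
    sync_topology: str   # "camera-master" | "projector-master" | "external"

def calibration_hash(cal_blob: bytes) -> str:
    """Bind the exact calibration bytes to every session (and hence frame)."""
    return hashlib.sha256(cal_blob).hexdigest()
```

Hashing the calibration blob (rather than trusting a version string) is what makes a field failure reproducible: the replayed frames provably used the same tables.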

[Figure F10 placeholder — Validation SOP flow: Setup (fixtures) → Capture (PAT_ID) → Compute (depth) → Compare (reference) → Report (gate). Per-frame log: PAT_ID, timestamps, exposure/gain, strobe current/pulse, clip%, contrast, confidence stats, invalid%. Config: code profile, calibration ID, hash/checksum. Report: RMSE / plane error, σ / repeatability, % valid, edge fidelity.]
Figure F10. Validation SOP flow with traceable logging and a compact metric report.
A validation report is only meaningful when metrics are tied to artifacts (confidence/invalid/clip/contrast) and to configuration identifiers (code profile + calibration hash).

H2-11. Field Debug Playbook (Symptom → Evidence → Isolate → Fix)

Intent: Fastest path from a field complaint to root cause using minimal tools. Each symptom card is evidence-first and ends with one or two changes that can be validated by the same artifacts (clip/contrast/confidence/invalid/PAT_ID).


Rule: change one knob at a time, then re-check the same artifact(s) to confirm causality.

Symptom 1 — “Depth holes / missing regions”

  • Evidence (first 2 checks): (1) clip% / histogram headroom and whether holes correlate with highlights; (2) confidence + invalid-mask spatial pattern (clustered holes vs scattered islands).
  • Isolate (discriminator): holes track bright patches → saturation/overdrive; holes expand globally with lights/sun → ambient washout; holes repeat in corners/concave geometry with stable lighting → multipath/inter-reflections; holes only on shiny patches → specular ambiguity.
  • First fix (validate by artifact change): reduce exposure or strobe pulse width until clip% drops and confidence recovers; tighten invalid-mask rules to reject ambiguous matches (prefer correctness over “best-effort fill”).
  • Hardening (prevent recurrence): always log clip% + confidence percentiles + invalid% per PAT_ID; report both “error on valid points” and “% valid” so performance cannot be inflated by discarding difficult pixels.
Example MPNs (implementation hooks):
• Current-sense amplifier for strobe/laser/LED current evidence: TI INA240
• LED/laser constant-current controller (driver architecture building block): Analog Devices LT3755 / LT3756
• High-power boost stage for headroom (if pulsed emission needs it): TI TPS61088
• ESD/TVS to reduce “random holes after events” on trigger/control lines: TI TPD4E05U06
• Accurate temperature sensor to correlate drift/holes with thermal state: TI TMP117
MPNs are examples—final choice must match peak current, pulse width, thermal limits, and optical safety constraints.
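The first two evidence checks for this symptom reduce to a few statistics per frame. A minimal sketch, assuming an 8-bit intensity image and a per-pixel confidence map; the saturation threshold and names are illustrative:

```python
import numpy as np

def frame_evidence(img: np.ndarray, conf: np.ndarray, sat_level: int = 250):
    """Per-frame evidence for the 'depth holes' playbook: clip%, contrast,
    and confidence percentiles. Log these per PAT_ID before touching a knob."""
    clip_pct = 100.0 * float((img >= sat_level).mean())
    contrast = float(img.max()) - float(img.min())
    p10, p50 = np.percentile(conf, [10, 50])
    return {"clip_pct": clip_pct, "contrast": contrast,
            "conf_p10": float(p10), "conf_p50": float(p50)}
```

If clip% is high and falls (with confidence recovering) after one exposure or pulse-width change, the causality check in the playbook is satisfied.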

Symptom 2 — “Wavy depth / banding (striping)”

  • Evidence (first 2 checks): (1) band direction/period stability (does it align with sensor scan direction?); (2) rolling-shutter shear signature and timing evidence (trigger/strobe/exposure overlap, PAT_ID-to-exposure alignment).
  • Isolate (discriminator): bands lock to scan direction + change with speed → rolling-shutter timing overlap problem; bands remain in static scenes and move with trigger source changes → trigger jitter/clock instability; bands appear as fixed phase offsets by PAT_ID → pattern index/timing slip.
  • First fix (validate by artifact change): enforce deterministic gating (strobe only inside the valid exposure window) and reduce timing uncertainty; if needed, change sync topology (camera-master / projector-master / external hub) so PAT_ID and exposure boundaries are unambiguous.
  • Hardening (prevent recurrence): define and test a jitter budget (trigger jitter, strobe rise/fall, exposure timing); bind PAT_ID + timestamps to frames so banding can be replayed and compared across builds.
Example MPNs (timing & isolation):
• Jitter-cleaning PLL / clock tree stabilizer: Silicon Labs Si5341
• Multi-output clock generator/synthesizer (system clock distribution): TI LMK04828
• Digital isolator for trigger lines to reduce ground-bounce induced jitter: TI ISO7721
• I²C isolator for noisy sensor sidebands (if control bus couples into timing): Analog Devices ADuM1250
• ESD protection on external trigger/strobe connectors: ST USBLC6-2SC6
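The deterministic-gating rule for this symptom ("strobe only inside the valid exposure window") reduces to a timestamp check. A minimal sketch; the guard band value is an assumption that should come from the jitter budget:

```python
def strobe_inside_exposure(exp_start_us: int, exp_end_us: int,
                           strobe_on_us: int, strobe_off_us: int,
                           guard_us: int = 5) -> bool:
    """Banding debug check: the strobe pulse must sit entirely inside the
    valid exposure window, with a guard band absorbing residual jitter."""
    return (strobe_on_us >= exp_start_us + guard_us and
            strobe_off_us <= exp_end_us - guard_us)
```

Running this check over logged per-frame timestamps separates gating-overlap banding (violations present) from trigger-jitter banding (gating clean, timestamp deltas unstable).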

Symptom 3 — “Depth drifts with temperature (warm-up / ambient changes)”

  • Evidence (first 2 checks): (1) correlate depth bias/plane residual with temperature logs (camera, projector, board); (2) re-check a flat plane or step gauge at two temperatures to see if drift is global (scale/offset) or localized (distortion pattern shift).
  • Isolate (discriminator): global offset/scale drift → baseline/mechanics/geometry shift; localized residual growth → optics/distortion model mismatch; confidence collapses as temp rises → emission/driver repeatability or sensor noise rise affecting decode margin.
  • First fix (validate by artifact change): apply temperature compensation (select calibration tables by temperature bin) and enforce a recalibration threshold; stabilize emission (constant-current repeatability) so confidence distribution remains stable across temperature.
  • Hardening (prevent recurrence): store calibration with versioning + checksum and bind it to each session; include a temperature sweep in the “medium/full” matrix so drift is measured before field deployment.
Example MPNs (thermal + calibration storage):
• High-accuracy digital temperature sensor (cal + drift correlation): TI TMP117
• FRAM for calibration tables with high write endurance: Infineon/Cypress FM24CL64B
• EEPROM option for calibration + ID (traceability workflows): Microchip 24AA02E64
• Efficient buck for stable multi-rail supply (reduces thermal/supply-induced drift): TI TPS62130
• Current-sense amplifier for emission repeatability monitoring: TI INA240
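The temperature-binned calibration fix can be sketched as a simple bin lookup plus a recalibration gate. The bin edges, table contents, and threshold below are illustrative assumptions:

```python
import bisect

class TempBinnedCalibration:
    """Select a calibration table by temperature bin and flag when plane
    residual exceeds the recalibration threshold (values illustrative)."""
    def __init__(self, bin_edges_c, tables, recal_threshold_mm=0.5):
        self.bin_edges_c = bin_edges_c      # e.g. [20.0, 35.0, 50.0]
        self.tables = tables                # one table per bin: len(edges) + 1
        self.recal_threshold_mm = recal_threshold_mm

    def select(self, temp_c: float):
        """Pick the table whose temperature bin contains temp_c."""
        return self.tables[bisect.bisect_right(self.bin_edges_c, temp_c)]

    def needs_recal(self, plane_residual_mm: float) -> bool:
        """Gate: plane residual beyond tolerance triggers recalibration."""
        return plane_residual_mm > self.recal_threshold_mm
```

Binning by measured device temperature (not ambient) keeps the selection consistent with the drift evidence logged during the medium/full test matrix.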

Symptom 4 — “Fails only on one line speed / conveyor condition”

  • Evidence (first 2 checks): (1) bucket results by speed and track invalid% + “islanding” growth; (2) verify PAT_ID group consistency (does a frame group still represent the same surface) and whether exposure time is too long for the motion.
  • Isolate (discriminator): invalid islands explode with speed → too many frames / motion sensitivity; shear increases with speed (rolling) → gating overlap issue; failures are intermittent at fixed speed → trigger drop/jitter or strobe timing instability.
  • First fix (validate by artifact change): reduce frame count (hybrid coding / shorter pattern set) and increase strobe strength to shorten exposure; confirm that islanding decreases and confidence stabilizes at the target speed.
  • Hardening (prevent recurrence): create an acceptance curve: speed vs (frame count, exposure, invalid%); lock a “line-speed profile” and log it by profile ID to prevent silent parameter drift.
Example MPNs (pulsed emission + evidence):
• Pulsed LED strobe driver (µs-class pulses in compact designs): TI LM3644 / TI LM3642
• Constant-current controller for higher-power emitter stages: Analog Devices LT3756
• Current-sense amplifier for verifying pulse repeatability: TI INA240
• Jitter-cleaning clock (speed-sensitive timing margins): Silicon Labs Si5341
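The acceptance-curve hardening step can be sketched as bucketing invalid% by line speed against a pass gate. The sample format and gate value are illustrative assumptions:

```python
from collections import defaultdict
from statistics import mean

def acceptance_curve(samples, max_invalid_pct=5.0):
    """Bucket invalid% by line speed and flag speeds that break the gate.
    `samples` is a list of (speed_mm_s, invalid_pct) measurements."""
    buckets = defaultdict(list)
    for speed, invalid_pct in samples:
        buckets[speed].append(invalid_pct)
    return {speed: {"mean_invalid_pct": mean(vals),
                    "passes": mean(vals) <= max_invalid_pct}
            for speed, vals in sorted(buckets.items())}
```

Locking the resulting curve under a "line-speed profile" ID, as the playbook suggests, makes silent parameter drift visible the next time the curve is regenerated.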
[Figure F11 placeholder — field debug decision tree: "Bad depth" branches into Sync/Timing (PAT_ID mismatch, trigger jitter), Exposure/Strobe (clip% high, contrast low), Pattern/Decode (unwrap errors, ambiguous match), and Calibration (plane residual, temperature correlation). First measurements: clip%, confidence, PAT_ID, timestamps.]
Figure F11. Decision tree: "Bad depth" → branch by first measurements (clip%, confidence, PAT_ID, timestamps).


H2-12. FAQs (Accordion ×12)

How to use: Each answer starts with the fastest evidence check, then a discriminator, then one or two changes that can be validated by the same artifacts (clip%, contrast, confidence/invalid mask, PAT_ID, timestamps, plane residual, temperature logs).

Tip: change one knob at a time; re-check the same artifact to confirm causality.

1) Depth has holes on shiny parts — decode issue or saturation?

Start with a clip% / histogram headroom check and overlay holes against bright pixels. If holes track highlights and clip% is high, treat it as saturation (reduce exposure or strobe pulse width). If clip% stays low but holes follow viewing angle on specular regions, treat it as multipath/ambiguity and tighten confidence/invalid-mask rules. (Current evidence hook: TI INA240.)

See H2-9 and H2-11.
2) Works indoors but fails near sunlight — what’s the quickest suppression check?

Compare contrast and confidence histograms in three captures: indoor lights-off, indoor normal, near sunlight. If contrast collapses globally and invalid% rises across most PAT_ID frames, it is ambient washout. First fix is shorter exposure plus stronger strobe (confirm contrast recovery), then add a narrowband optical filter and background-subtraction patterns if needed. Log the same metrics for acceptance.

See H2-9 and H2-10.
3) Banding appears only with rolling shutter — what timing signal proves it?

Prove rolling timing by showing exposure overlap: capture EXPOSURE/FRAME_VALID together with STROBE_GATE and PAT_ID events (or timestamp them). If band direction matches the sensor scan direction and improves when gating is tightened, it is an exposure–projection overlap issue. If banding persists in static scenes and changes with trigger source, suspect trigger jitter instead. (Isolation hook: TI ISO7721.)

See H2-6 and H2-11.
4) Gray code needs too many frames — how to cut frames without losing robustness?

Measure motion sensitivity first: bucket invalid% and “islanding” by line speed. If failures scale with speed, reduce frame count using a hybrid set (coarse Gray + fine phase) or fewer Gray bits, then shorten exposure using stronger strobe to preserve SNR. Validate by stable confidence distribution at target speed. Keep the pattern set ID in logs so field traces are comparable.

See H2-3 and H2-9.
5) Phase shifting gives wavy surfaces — linearity issue or motion?

Check whether “waves” correlate with brightness (near saturation) or with motion (changes with speed or frame-to-frame inconsistency). If waves strengthen near clipping or with higher strobe current, suspect nonlinearity/saturation—restore headroom and reduce pulse width. If waves worsen with motion and improve with fewer frames, it is motion sensitivity—use hybrid coding or shorten exposure. (Driver example: ADI LT3756.)

See H2-3 and H2-9.
6) Depth shifts after warm-up — calibration drift or mechanical movement?

Run a plane fit (or step gauge) at cold and warm states and correlate bias with temperature logs. If the shift is mostly global (offset/scale), suspect baseline/mechanical movement; if residuals grow locally near edges, suspect distortion/calibration model mismatch. First fix is temperature-binned calibration tables and a recalibration threshold; then stabilize mechanics/optics. (Temp sensor example: TI TMP117.)

See H2-8 and H2-11.
7) Projector strobe looks correct but decode fails — what must be logged per frame?

Require a per-frame logging contract: PAT_ID, exposure start/end timestamps, exposure/gain, strobe current or pulse width, clip%/contrast, confidence percentiles, and invalid%. If PAT_ID or exposure timestamps are missing, alignment cannot be proven in the field. First fix is to bind PAT_ID to timestamps and store them with the frame payload so any failure can be replayed and compared across builds.

See H2-5 and H2-10.
8) Edges look fat/thin — filtering issue or triangulation geometry?

Compare edge width before and after filtering: if raw depth already shows biased edge geometry, suspect calibration/extrinsics; if the edge becomes thicker only after post-filtering, suspect an over-aggressive smoother or temporal stabilizer. First fix is to reduce the filter radius/strength and keep an edge-preserving mode; validate by step-gauge edge fidelity while holding %valid constant.

See H2-7 and H2-8.
9) Some materials return “salt-and-pepper” depth — how to use confidence masks?

Overlay noise speckles with the confidence map. If speckles cluster where confidence is low, use confidence-gated invalid masks and speckle removal that does not inflate edges; validate by lower island count and stable %valid. If speckles occur even at high confidence, suspect decode thresholds or phase unwrap stability, then re-check headroom (clip%) and pattern contrast. Keep the same statistics per PAT_ID.

See H2-7 and H2-9.
10) Trigger jitter spec is unclear — what jitter actually matters?

What matters is relative jitter (frame-to-frame or line-to-line variation), not fixed latency. Fixed latency can be calibrated out; random jitter becomes banding and phase noise in structured-light decode. Prove it with timestamp deltas or scope measurements of TRIG→STROBE overlap stability. First fix is to define a jitter budget and enforce it; if needed, add a jitter-cleaning PLL. (Example: SiLabs Si5341.)

See H2-6.
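The distinction in this answer (relative jitter versus fixed latency) can be sketched from logged trigger timestamps; the function name is hypothetical:

```python
from statistics import mean, pstdev

def relative_jitter_us(timestamps_us):
    """Jitter that matters for decode: the spread of frame-to-frame deltas,
    not the fixed latency. Returns (mean period, sigma) in microseconds."""
    deltas = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
    return mean(deltas), pstdev(deltas)
```

A large mean with near-zero sigma is calibratable latency; a growing sigma is the random jitter that shows up as banding and phase noise.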
11) How to prove pattern index alignment is correct in the field?

Bind PAT_ID to exposure timestamps and verify group consistency: frames with the same PAT_ID should show consistent contrast/phase statistics for a static target. Any PAT_ID skips, duplicates, or timestamp overlap violations are alignment failures. First fix is to add a “PAT_ID monitor” that flags mismatch and captures a short ring buffer for forensic replay. Keep a pattern-set version ID in logs for traceability.

See H2-6 and H2-10.
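The "PAT_ID monitor" proposed in this answer can be sketched as a pure check over logged capture events; the event format and expected-cycle convention are assumptions:

```python
def patid_violations(events, expected_cycle):
    """Flag PAT_ID skips/duplicates and exposure-overlap violations.
    `events` is a list of (pat_id, exp_start_us, exp_end_us) in capture
    order; `expected_cycle` is the expected PAT_ID sequence for one group."""
    issues = []
    for i, (pid, start, end) in enumerate(events):
        if pid != expected_cycle[i % len(expected_cycle)]:
            issues.append((i, "pat_id_mismatch"))
        if i and start < events[i - 1][2]:
            issues.append((i, "exposure_overlap"))
    return issues
```

In a deployed monitor, any non-empty result would freeze a short ring buffer of frames for the forensic replay the answer describes.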
12) Latency is too high — where are the real bottlenecks?

Split latency into capture → decode/unwrap → correspondence → triangulation → filtering → output, and timestamp each boundary. If decode dominates, reduce frame count or simplify the pattern set; if filtering dominates, reduce temporal smoothing and confirm edge fidelity remains acceptable. If IO dominates, inspect buffering and copy paths. Validate improvements with the same end-to-end timestamp definition and a fixed test fixture.

See H2-7 and H2-10.
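The stage-by-stage split in this answer can be sketched from boundary timestamps; the stage names below are illustrative, and the only assumption is that boundaries are logged in pipeline order:

```python
def latency_breakdown(stamps_us):
    """Split capture-to-depth latency per pipeline stage.
    `stamps_us` maps boundary name -> timestamp (µs), in pipeline order."""
    names = list(stamps_us)
    stages = {f"{a}->{b}": stamps_us[b] - stamps_us[a]
              for a, b in zip(names, names[1:])}
    stages["total"] = stamps_us[names[-1]] - stamps_us[names[0]]
    return stages
```

Keeping the same boundary definitions before and after an optimization is what makes the end-to-end comparison in the answer valid.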