AR/VR Headset Hardware: Display, Tracking Sensors & USB-C Power
A stable AR/VR headset experience is an evidence-driven synchronization problem across four coupled chains: display frame timing, camera/IMU/ToF timestamps, power transients, and thermal drift. When any chain slips, the field symptoms show up as dizziness, drift, tracking loss, flicker, or brief black screens.
Definition & Boundary: what this page solves
Core answer (reader-facing): A stable AR/VR experience is governed by four synchronized hardware chains—display frame timing, camera/IMU timestamps, power transients, and thermal derating. Drift in any chain commonly surfaces as nausea, tracking drift, intermittent black screens, or visible jitter.
Extractable definition (45–55 words)
An AR/VR headset is a tightly coupled system that combines a near-eye display, low-latency 6DoF tracking, and portable power delivery. It relies on MIPI-DSI for panel driving, MIPI-CSI cameras plus ISP for inside-out tracking, IMU/ToF sensors for motion/depth, and a USB-C PD/PPS power tree that must remain time-aligned and stable.
Boundary statement (what is in / out)
In scope: hardware chains, interfaces, power rails & sequencing, EMC/ESD risk points, thermal constraints, validation methods, and field debug evidence (waveforms, counters, logs, thermal traces). Out of scope: app/OS/content ecosystem, rendering engine tutorials, and SLAM/VIO math derivations.
- What readers get: selection dimensions by block (display/ISP/IMU/ToF/power), a validation test matrix, and a field debug “evidence-first” playbook.
- How conclusions are justified: every claim ties back to at least one measurable artifact—timing events, timestamp alignment error, rail transient/UV flags, or thermal/throttling traces.
System Context: why the headset is a multi-domain synchronization system
An AR/VR headset behaves like a multi-domain synchronization problem rather than a set of independent parts. Display timing, camera exposure timing, IMU sampling, and power/thermal constraints continuously influence each other. The fastest route to root cause is to align time (events/timestamps) with power (rail transients) and temperature (throttling).
Key domains and the “most sensitive coupling point”
- Compute (SoC/DDR/accelerators): load steps increase rail stress and raise tail latency (p99/p999), directly impacting comfort.
- Display (MIPI-DSI/TCON/bias rails): frame events (TE/VSync) and bias stability determine flicker/black-screen risk.
- Tracking cameras + ISP (MIPI-CSI): exposure/gain transitions and CSI integrity correlate with tracking dropouts.
- IMU/ToF sensors: timestamp alignment and temperature drift dominate 6DoF stability more often than “noise” alone.
- Power (USB-C PD/PPS, battery power-path, PMIC): negotiation events, sequencing, and transient response decide resets and intermittent failures.
- Thermal/comfort constraints: throttling reshapes latency distribution; gradients and stress shift IMU bias and display bias behavior.
Typical interfaces (hardware-level only)
MIPI-DSI (SoC → display chain), MIPI-CSI (tracking cameras → ISP/SoC), I²C/SPI (sensor configuration + data-ready signaling), USB-C (PD/PPS power, cable/connector variability). These interfaces define where measurable errors (events, counters, dips) can be observed.
Symptom → domain mapping (with first evidence to grab)
- Black screen / brief flicker: display chain + power transient + ESD/EMC. First evidence: display bias rails + PD event log + reset reason.
- Drift / nausea / jitter over time: timestamp alignment + IMU drift + thermal derating. First evidence: p99 latency trace + IMU bias vs temperature.
- Tracking loss (scene-dependent): exposure/ISP + CSI integrity + interference. First evidence: exposure/gain log + CSI error counters + frame timing.
Measurement points that will be reused later
- Power probes: VBUS, main buck output, display bias rails, camera/IMU rails (look for dips, UV/OC flags).
- Timing points: TE/VSync, camera exposure timestamp/frame sync, IMU data-ready + timestamp (align error distribution).
- Logs/counters: PD negotiation events (PDO/PPS changes), reset reasons, link/CSI/DSI counters when available, throttling state.
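The "align time with power and temperature" step can be sketched in a few lines. This is a minimal illustration, assuming each log has already been exported as a time-sorted list of `(timestamp_s, source, event)` tuples on a shared clock; the log contents below are hypothetical.

```python
import heapq

def merge_timelines(*logs):
    """Merge per-source event logs (each already sorted by timestamp)
    into one time-ordered evidence timeline."""
    return list(heapq.merge(*logs, key=lambda event: event[0]))

# Hypothetical captures: (timestamp_s, source, event)
pd_log   = [(0.010, "pd",   "PPS step 9V->12V"), (1.250, "pd",   "renegotiation")]
rail_log = [(0.012, "rail", "VBUS dip 0.4 V"),   (1.251, "rail", "UV flag set")]

timeline = merge_timelines(pd_log, rail_log)
# PD and rail events now interleave on one clock, so a dip that follows a
# PPS step within milliseconds becomes visible at a glance.
```

The same merge extends to thermal snapshots, reset reasons, and counter deltas; the only requirement is the shared (or offset-corrected) time base.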
Near-Eye Display Chain: where “flicker / artifacts / black screen” most often originate
Display failures in headsets are best sorted into four testable buckets: link margin (MIPI-DSI integrity), panel bias stability (AVDD/VGH/VGL/VCOM or OLED bias), frame timing (TE/VSync & mode switches), and disturbance coupling (EMI/ESD/transient injection). Each bucket maps to a specific evidence type that can be captured and time-aligned.
Architecture layers (hardware chain only)
SoC DSI → DSI bridge / TCON → panel driver → micro-OLED / LCD. The chain is sensitive at lane rate transitions, board-to-flex interfaces, and bias-rail sequencing windows.
Key bias rails (domain naming, not optical deep-dive)
LCD commonly relies on AVDD / VGH / VGL / VCOM stability; OLED implementations rely on OLED bias and panel core rails. Bias ripple or step response issues can present as flicker, gray-level instability, or short blackouts during load changes.
How to capture each evidence type (practical minimum)
- Counters: log time-stamped snapshots around failures; correlate with mode switches and resets.
- Waveforms: probe one bias rail + one “activity proxy” (TE/VSync or display enable) to prove causality.
- Timing: record TE/VSync edges during refresh changes; identify jitter bursts or missing events.
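As an illustration of the timing check, here is a small sketch that flags jitter bursts and missing TE/VSync edges in a captured edge-timestamp series. The edge values and the 20 % tolerance are assumptions for the example, not a spec.

```python
def find_timing_anomalies(edges_s, nominal_s, tol=0.20):
    """Flag frame intervals that deviate from the nominal period by more
    than tol (as a fraction): catches jitter bursts and missing edges."""
    anomalies = []
    for i in range(1, len(edges_s)):
        dt = edges_s[i] - edges_s[i - 1]
        if abs(dt - nominal_s) > tol * nominal_s:
            anomalies.append((edges_s[i - 1], dt))  # (start of bad interval, length)
    return anomalies

# 90 Hz panel: one missing TE edge shows up as a ~2x interval
edges = [0.0, 0.0111, 0.0222, 0.0444, 0.0555]
anomalies = find_timing_anomalies(edges, nominal_s=1 / 90)
```

Running the anomaly timestamps against the bias-rail waveform (same clock) is what turns "we saw flicker" into a causal claim.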
What this section intentionally does not expand
Optical stack details, render pipeline, and OS-level behavior are out of scope here. The focus stays on measurable link/bias/timing/disturbance evidence that can be reproduced on the bench.
Motion-to-Photon (M2P): how to measure the hardware KPI behind comfort and stability
Motion-to-photon is not a single number; it is a latency distribution. Headset comfort is usually decided by the tail (p99/p999), not the average. A practical timing budget must identify where jitter accumulates—timestamp drift, mode switches (VRR/power-save), and thermal derating that stretches processing time.
Budget segments (end-to-end chain)
IMU sample → timestamp → fusion input → render/compose → DSI send → panel response. Each segment contributes both a baseline delay and a jitter component that must be reported with p99/p999.
Tail-latency generators (why p99 grows)
- Timestamp drift: IMU clock vs SoC time base causes alignment error to wander over time.
- Frame timing jumps: VRR or power-save switches shift VBlank boundaries and add burst jitter.
- Thermal derating: throttling increases compute time and widens the latency distribution.
Method A — End-to-end (photodiode / high-FPS)
Place a photodiode on a known pixel region and drive a controlled luminance toggle. Use a synchronized trigger aligned to a motion event or a deterministic input step. Report median + p95 + p99 + p999; store mode switch and reset events on the same timeline.
Method B — Alignment error (IMU ↔ VBlank)
Record IMU data-ready timestamps and display VBlank (TE/VSync) events. Compute alignment error per frame and report its distribution (p99/p999) and drift rate over time/temperature. This is a common direct path to explain “slowly worsening” comfort and tracking stability.
Acceptance template (fill-in, product-specific)
M2P (median / p99 / p999): ____ / ____ / ____ ms
Alignment error (p99): ____ ms; drift: ____ ms/hour
Thermal: Tskin ____°C; hotspot ____°C; throttling state ____
Mode switches (VRR/power-save): ____ events; worst jitter burst ____ ms
Inside-Out tracking: why stability depends on camera input consistency and CSI/ISP evidence
Tracking issues that feel like “lost / drifting / jumping” frequently map to three measurable buckets: image input quality (exposure, blur, flicker), transport reliability (MIPI-CSI drops/errors), and ISP consistency (AE/AWB/denoise/distortion changing the feature-visible content). The fastest path is to time-align exposure/gain logs, CSI counters, and power-rail waveforms with failure events.
Camera selection levers (hardware constraints only)
- Rolling vs global shutter: motion distortion and effective latency; impacts feature consistency during fast head motion.
- Low-light + indoor flicker (50/60 Hz LED): AE gain/exposure hunting can create repeated lock-loss windows.
- IR interference: IR emitters/ToF/ambient IR can collapse contrast or create banding; validate via A/B IR enable tests.
ISP blocks (engineering view: what can change the input distribution)
- AE: exposure time & gain steps change blur/noise balance and feature persistence.
- AWB: channel gain changes contrast gradients, especially in low light.
- Denoise: can suppress fine texture; feature count becomes unstable across scenes.
- Distortion correction: resampling/edge stretch must remain consistent; mismatches can look like “jump.”
Practical capture plan (minimum viable)
- Windowed logging: store ±2–5 s around each failure event; include AE state changes and scene tags (lighting/IR).
- CSI counters: record time-stamped deltas, not only cumulative totals; detect burst errors.
- Waveform + image: capture a rail transient with a synchronized image artifact (banding/tear/frame break) to prove causality.
Out-of-scope guardrail
No SLAM/VIO math or feature algorithm walkthroughs. This section stays on camera/CSI/ISP inputs and measurable evidence that explains stability.
6DoF stability: practical levers—drift sources, calibration proof, and timestamp alignment
Stable 6DoF behavior is usually gated by three engineering levers: IMU drift control (bias/scale/temp and mounting stress), ToF/depth robustness (ambient light, reflections, multipath, and IR cross-talk), and the first-principle constraint—all sensors must share a time base or a correctable mapping to keep alignment error small in the tail (p99/p999).
IMU drift sources (engineering, not math)
- Bias / scale factor: temperature-dependent drift; quantify under a temperature sweep.
- Mounting stress: assembly stress shifts offsets; verify by stress/fixture A/B experiments.
- Thermal gradient: local hotspots and uneven heating change bias; correlate drift with thermal state.
ToF/depth failure map (evidence-first)
- Strong ambient light: window/near-outdoor tests; track failure rate vs illuminance.
- Reflective surfaces: glass/white wall multipath; reproduce with controlled scenes.
- Cross-talk: IR emitters/ToF/camera interaction; validate via IR enable A/B and timing offsets.
Evidence chain (what to show to prove stability)
- Temperature sweep: drift curves before/after calibration (bias vs temperature; drift rate).
- Alignment distribution: histogram + p95/p99/p999 of timestamp misalignment; track drift over time.
- ToF reproducibility: strong-light / reflective / glass scenes with failure-rate and anomaly markers.
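For the temperature-sweep item, the drift rate can be reported as the least-squares slope of bias vs temperature. A small sketch with synthetic data; units are whatever the bias log uses per °C.

```python
def drift_slope(temps_c, bias):
    """Least-squares slope of IMU bias vs temperature (bias units per degC).
    A large slope before calibration that shrinks after is the evidence
    that temperature compensation is actually working."""
    n = len(temps_c)
    mt = sum(temps_c) / n
    mb = sum(bias) / n
    num = sum((t - mt) * (b - mb) for t, b in zip(temps_c, bias))
    den = sum((t - mt) ** 2 for t in temps_c)
    return num / den

temps = [20, 30, 40, 50, 60]
bias  = [0.20, 0.30, 0.40, 0.50, 0.60]   # synthetic: 0.01 per degC
slope = drift_slope(temps, bias)
```

Report the slope before and after calibration on the same sweep, per unit, so batch-to-batch comparisons stay meaningful.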
Acceptance template (fill-in)
IMU drift rate (before/after): ____ / ____ (units)
Alignment error (p99 / p999): ____ / ____ ms; drift: ____ ms/hour
ToF fail rate (bright / reflective / glass): ____ / ____ / ____ %
IR cross-talk delta (IR off vs on): ____ %
Power stability comes from PD contract + transient immunity + domain sequencing
Power-related failures in AR/VR headsets, such as black flashes, reboots, or tracking drops, often trace back to a single root cause: the USB-C power contract (PD/PPS), a power-tree transient (bus dips, UV/OC flags), or a broken sequencing window (display bias, sensor reset, I²C availability) during load steps.
Two input topologies (choose the right failure bucket first)
- USB-C only (PD/PPS): most sensitive to cable drop, PPS steps, and fast load transients.
- USB-C + battery (power-path): sensitive to path switchover, current limiting, and low-SOC behavior.
Key blocks (responsibility → symptom → evidence)
- PD/PPS controller: contract changes and renegotiation → correlate with reboot/black events via PD logs.
- Buck/boost + main bus: load-step dip → capture bus waveform + PG/UV flags.
- Load switch / eFuse / OVP: domain isolation and protection → read UV/OC/OT latch flags around failures.
Evidence chain (time-aligned, minimum viable)
- PD negotiation log: PDO/PPS/RDO step changes + renegotiation count aligned to reboot/black timestamps.
- Rails + flags: main bus + one sensitive rail (SoC or display bias) with UV/OC/PG flags and a triggered waveform capture.
- Cable/contact A/B: short-thick cable vs long cable; connector micro-movement; compare failure rate and VBUS dip depth.
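The time-alignment step above can be sketched as a window join between the PD event log and reboot timestamps; the log shapes and the ±100 ms window are assumptions for illustration.

```python
def correlate_events(pd_events, reboot_ts, window_s=0.1):
    """For each reboot timestamp, collect PD events within +/- window_s.
    Consistent pairing implicates the power contract; no pairing points
    elsewhere (bias rails, ESD, thermal)."""
    pairs = []
    for r in reboot_ts:
        near = [(t, ev) for t, ev in pd_events if abs(t - r) <= window_s]
        pairs.append((r, near))
    return pairs

pd_events = [(1.00, "PPS step 5V->9V"), (5.00, "renegotiation")]
reboots   = [1.05, 9.00]
pairs = correlate_events(pd_events, reboots, window_s=0.1)
# reboot at 1.05 s pairs with the PPS step; reboot at 9.00 s pairs with nothing
```

The unpaired reboot is the interesting one: it sends the investigation toward rail waveforms and protection flags instead of the PD contract.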
Acceptance template (fill-in)
Main bus dip (min): ____ V; duration: ____ µs/ms
SoC rail UV flag count per minute: ____
PD renegotiations per hour (steady load / burst): ____ / ____
Cable A vs B reboot rate: ____% vs ____%
MIPI on flex: avoid margin collapse from return-path breaks, stress, and latent ESD damage
In compact AR/VR stacks, MIPI-DSI/CSI often crosses flex cables and dense connectors. Most intermittent “sparkles / frame drops / sudden link loss” reduce to three physical mechanisms: return-path discontinuity, mechanical/thermal sensitivity, and ESD-driven latent degradation. The fastest proof comes from time-aligned counters plus correlation tests (press/bend/temp).
Failure buckets (mechanism → symptom → evidence)
- Edge-rate + crosstalk: errors spike in high-rate modes → compare counter slope across modes.
- Return-path breaks / ground discontinuity: press/hold posture changes errors → press/bend correlation is strong evidence.
- Common-mode return issues: ESD/plug events trigger later intermittents → pre/post ESD counter statistics and hotspot checks.
Connector + ESD (why “not instantly dead” is common)
ESD and plug transients frequently reduce link margin without immediate failure. The practical signature is intermittent errors that appear only under stress (flex bend, pressure, temperature, or specific link modes).
Evidence chain (correlation tests that converge fast)
- Press/bend/temperature correlation: fixed test pattern + fixed mode; record counters while applying controlled stress.
- Post-ESD intermittent mapping: compare counter growth rate before/after ESD; look for new stress-sensitive points.
- A/B validation: protection/layout change A vs B; compare failure rate under the same stress and mode set.
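The pre/post-ESD comparison reduces to a counter growth rate. A minimal sketch, assuming timestamped snapshots of a cumulative error counter taken under the same mode and stress.

```python
def counter_slope_per_min(samples):
    """Error-counter growth rate (counts/minute) from timestamped cumulative
    snapshots (t_seconds, count). A higher post-ESD slope under identical
    mode/stress is the signature of latent margin loss."""
    (t0, c0), (t1, c1) = samples[0], samples[-1]
    return 60.0 * (c1 - c0) / (t1 - t0)

pre_esd  = counter_slope_per_min([(0, 0), (600, 3)])    # 0.3/min baseline
post_esd = counter_slope_per_min([(0, 0), (600, 40)])   # 4.0/min after ESD
```

Collect both slopes per link mode (low-rate vs high-rate) as well; a slope that grows only in the high-rate mode points at margin, not at a hard fault.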
Acceptance template (fill-in)
Error rate vs stress (none / press / bend / hot): ____ / ____ / ____ / ____
Mode sensitivity (low-rate vs high-rate): ____× increase
Pre-ESD vs post-ESD counter slope: ____ vs ____ (per minute)
TVS A vs B error delta (high-rate mode): ____ %
Thermal coupling: comfort limits, tail latency, IMU drift, and display-bias drift
In AR/VR headsets, heat is not only a heatsink problem. It is a system-coupling variable that simultaneously impacts comfort constraints, performance tail latency, IMU drift, camera noise, and display-bias stability. The fastest path to root cause is a time-aligned evidence chain: IR thermals + performance counters + drift/tracking/display events.
Comfort constraints (hardware-only)
- Skin-contact temperature rise: define fixed surface points and track ΔT vs time.
- Fan noise & vibration: tach steps can couple into perceived noise and IMU noise floor.
- Condensation / sweat exposure: treat as electrical risk (leakage, connector contamination) with measurable signatures.
Three thermal failure chains (link back to earlier chapters)
- SoC throttling → worse p99/p999 frame time (ties to motion-to-photon budget in H2-4).
- IMU temp/stress drift → 6DoF drift growth (ties to IMU/ToF engineering handles in H2-6).
- Display bias / brightness drift → color shift, flicker, black flashes (ties to display chain in H2-3).
Minimum measurement set (fast convergence)
- IR map points: SoC, PMIC, battery, IMU vicinity, camera vicinity, display-bias area, face-contact zone.
- Counters: SoC frequency/throttle flags, p99 frame time proxy, fan tach state.
- Stability metrics: IMU drift vs temperature, camera AE/gain logs vs tracking drops, bias rail ripple vs events.
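The cool-to-warm tail comparison can be computed directly from labeled frame-time samples. A sketch, with synthetic values standing in for real telemetry.

```python
from statistics import quantiles

def p99_by_state(samples):
    """Group (thermal_state, frame_time_ms) samples and report the p99
    frame time per state; a widening cool->warm gap ties tail latency to
    throttling rather than to the workload itself."""
    by_state = {}
    for state, frame_ms in samples:
        by_state.setdefault(state, []).append(frame_ms)
    return {s: quantiles(v, n=100)[-1] for s, v in by_state.items()}

# Synthetic telemetry: warm state carries occasional throttling spikes
cool = [("cool", 11.0 + 0.01 * i) for i in range(50)]
warm = [("warm", 11.0 + 0.01 * i) for i in range(45)] + [("warm", 24.0)] * 5
stats = p99_by_state(cool + warm)
```

The same grouping works for any thermal label (temperature bin, throttle flag state), as long as the label and the frame time come from one aligned timeline.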
Acceptance template (fill-in)
Face-contact ΔT (10 min): ____ °C
SoC throttle onset temperature: ____ °C
p99 frame time (cool → warm): ____ ms → ____ ms
IMU drift metric (cool → warm): ____ → ____
Black-flash / tracking-drop rate (cool → warm): ____ / min → ____ / min
Checklist-based IC selection: dimensions, risks, and evidence hooks (not a parts encyclopedia)
Component selection for AR/VR headsets should be evaluated by functional blocks and graded as Must / Should / Nice-to-have. Each dimension should map to a real risk (black flashes, tracking drops, drift growth, tail latency spikes) and include an evidence hook (counter, log, flag, waveform, or IR map) so field failures remain diagnosable.
Display (DSI bridge/TCON + panel-bias PMIC)
| Tier | Selection dimension | Why it matters | Evidence hook |
|---|---|---|---|
| Must | Lane count / max rate headroom; stable low-power mode switching | Mode switching often triggers black flashes or intermittent link loss when margin is small | Mode-based error counters + event correlation (black flash / link drop) |
| Must | Visibility of link status (CRC/error counters or readable state flags) | Without observability, field issues become “software guesses” | Counter slope under stress (temp / bend / mode) |
| Should | Panel-bias ripple and transient response under load steps | Bias dips/ripple can manifest as flicker, gray-level errors, or sudden black flashes | Waveform capture on bias rail + alignment to display events |
Tracking camera / ISP (hardware constraints)
| Tier | Selection dimension | Why it matters | Evidence hook |
|---|---|---|---|
| Must | CSI bandwidth headroom and packet/error statistics availability | Hidden drops look like algorithm failure but are often transport margin issues | CSI error counters aligned to tracking drop timestamps |
| Should | Low-light behavior and flicker resilience (hardware-visible behavior) | Exposure/gain swings can degrade feature quality and increase tracking loss rates | AE/gain logs correlated with tracking drop events |
| Nice | External sync/trigger or clocking options (when system timing requires it) | Reduces cross-sensor timing uncertainty and improves repeatability | Timestamp alignment histogram before/after enabling sync |
IMU
| Tier | Selection dimension | Why it matters | Evidence hook |
|---|---|---|---|
| Must | Noise density / bias stability / temperature drift characterization | Thermal drift and long-term bias instability drive 6DoF drift growth | Temp sweep drift curve (pre/post calibration) |
| Must | Timestamping or synchronization capability (direct or via sensor hub) | Misaligned timing creates “jitter” even when raw accuracy looks fine | Timestamp alignment distribution (p99/p999) |
ToF / Depth
| Tier | Selection dimension | Why it matters | Evidence hook |
|---|---|---|---|
| Must | Ambient-light and crosstalk suppression behavior | Strong light and reflective surfaces cause depth dropouts and unstable tracking | Repro tests (sunlight / glass / white wall) with failure statistics |
| Should | Trigger/sync modes for interference management | Reduces mutual interference with camera/IR emitters in dense stacks | Before/after error-rate comparison under the same scene |
USB-C PD + Power tree (PD/PPS, buck/boost, eFuse, load switches)
| Tier | Selection dimension | Why it matters | Evidence hook |
|---|---|---|---|
| Must | PPS support (if dynamic VBUS is used) + negotiation/event visibility | Contract steps often coincide with reboots and black events under bursts | PDO/PPS/RDO log aligned to event time |
| Must | Transient response and UV/OC flag accessibility across key rails | Without flags/waveforms, brownouts appear random and unreproducible | Bus + sensitive rail waveform + UV/OC latch readout |
| Should | Load switch slew control and domain sequencing support | Sequencing breaks can cause sensor loss, display bias instability, or intermittent resets | Reset/I²C window integrity under load-step scripts |
Clock / Timing (only as it affects display/camera/IMU alignment)
| Tier | Selection dimension | Why it matters | Evidence hook |
|---|---|---|---|
| Must | Clock stability and distribution suitable for cross-domain timing | Timing drift increases alignment error and can appear as motion jitter | Alignment error distribution stability across temp/load states |
Validation Test Plan (H2-11): turn “experience” into a reproducible test matrix
Execution rule: every test must bind Stimulus → Metrics → Evidence
Symptoms like drift, nausea, black flashes, or tracking loss must map to measurable hardware signals: DSI/CSI error counters, timestamp alignment quantiles (p50/p99/p999), rail/bias transients, and temperature-to-performance coupling. A test is complete only when the raw evidence package can replay the event timeline.
1) Preflight: standardize timebase and event hooks
- Timebase alignment: IMU timestamp, camera exposure timestamp, and display VBlank/TE must be comparable (offset mapping allowed, but recorded).
- Event hooks: DSI/CSI counters, PD contract events, eFuse UV/OC/OT flags, reset reason, and SoC throttling/perf counters must be sampled with timestamps.
- File naming: include firmware build, refresh mode, brightness mode, PDO/PPS state, cable ID, charger ID, ambient lighting ID, and temperature points.
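The file-naming rule can be enforced with a tiny helper; the key set below is illustrative, not a fixed schema.

```python
def evidence_name(meta):
    """Build a standardized evidence filename from run metadata so every
    artifact carries its build/mode/power/thermal context.
    Keys and the .evd extension are illustrative only."""
    order = ["fw", "refresh", "bright", "pdo", "cable", "charger", "light", "temp"]
    return "_".join(f"{key}-{meta[key]}" for key in order if key in meta) + ".evd"

name = evidence_name({"fw": "1.2.0", "refresh": "90hz", "temp": "25c"})
```

A fixed key order keeps filenames sortable and diff-able across runs, which matters once hundreds of captures accumulate.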
2) Test matrix (by domain)
| Domain | Stimulus (repro scene) | Metrics (quantiles first) | Raw evidence (must export) |
|---|---|---|---|
| Display (DSI + bias rails) | Refresh/VRR switching; brightness/gray sweep; thermal ramp; gentle press/vibration | Artifact rate; TE/VBlank stability; DSI errors = 0; bias ripple & dip vs load steps | DSI counters export; TE/VBlank waveform; AVDD/VGH/VGL/VCOM (or OLED bias) waveforms; thermal snapshots |
| Tracking camera (CSI + ISP logs) | 50/60 Hz lighting; low-light tiers; fast head-motion script; fixed vs auto exposure | Tracking-drop rate; exposure/gain correlation; CSI drops/errors; frame-time jitter distribution | Exposure/gain logs; CSI counters; frame timestamp series; optional scene video |
| IMU / ToF (6DoF inputs) | Temperature sweep; vibration/shock; bright/reflective/glass scenes; IR on/off A/B | IMU bias drift curve; alignment error p99/p999; ToF failure rate vs scene conditions | IMU raw + calibrated logs; alignment stats dump; ToF distance output series; temperature trace |
| Power (USB-C / battery) | Cable/charger matrix; PPS step script; plug jiggle; compute/display peak load steps | Reset reason; PD contract changes vs events; rail dip amplitude/duration; protection flags | PD negotiation logs (PDO/PPS); rail waveforms; eFuse flags; current/voltage telemetry |
| EMC / ESD robustness | Contact discharge points; near-field injection; flex bend/press; humidity/sweat (hardware only) | Functional degradation (not just “alive”); intermittent error recurrence; counter drift slope | Discharge records; recurrence script; counters; optional hotspot thermal evidence |
3) Concrete example MPN anchors (for BOM review & debug hooks)
These are representative parts to anchor “function block → evidence hook → swap/A-B isolation.” Final selection must match panel, power, package, availability, and compliance constraints.
- USB-C PD controller: TI TPS65987D (PD policy + event visibility); Infineon/Cypress CYPD3177 (EZ-PD BCR, standalone sink controller).
- USB-C sink (autonomous): ST STUSB4500 (sink negotiation without host).
- Charger + power-path: TI BQ25798 (buck-boost charger with power-path behavior).
- Fuel gauge: ADI/Maxim MAX17055 (single-cell gauge for correlation to droops/events).
- eFuse / protection: TI TPS25982 (smart eFuse); TI TPS25947 (eFuse with reverse current blocking behavior).
- Power telemetry: TI INA238 (shunt monitor for rail current/voltage evidence).
- Panel bias (example): TI TPS65132 (dual outputs, often used to generate panel bias rails in display subsystems).
- MIPI bridge (only if needed): TI SN65DSI84 (DSI-to-LVDS bridge as a debug/architecture anchor).
- Global shutter tracking camera sensors (examples): onsemi AR0144CS; OMNIVISION OV9281.
- IMU (examples): TDK InvenSense ICM-42688-P; Bosch BMI270.
- ToF/Depth (examples): ST VL53L5CX; ams OSRAM TMF8801.
Field Debug Playbook (H2-12): grab two evidence types first, then isolate the domain
Triage rule: two evidence types, but the most discriminative ones
Each symptom must start with: (1) an internal hard signal (counters/rails/temp/alignment stats), and (2) a scene trigger descriptor (lighting/pose/cable/press/temperature). First bucket the problem into a domain; then change one variable per run (minimal action) for clean A/B evidence.
High-frequency symptom cards (Symptom → 2 Evidence → Domain → Minimal action)
A) Drift / nausea builds over time (warm-up dependent)
- Capture (2): temperature trace + latency p99/p999; IMU bias/scale drift + alignment error distribution.
- Likely domain: thermal throttling; IMU temp/stress drift; timebase alignment drift.
- Minimal action: lock refresh + performance mode; hold fan policy; compare cold-start vs hot-steady alignment p99.
B) Black flash then recovery (may not reboot)
- Capture (2): PD contract log (PDO/PPS changes) + reset reason; display bias rail dip/ripple at the event time.
- Likely domain: USB-C transient; eFuse trip/limit; display bias instability.
- Minimal action: swap cable/charger A/B; lock fixed PDO (disable PPS steps); scope bias rails + VBUS together.
C) Tracking drops in specific rooms / lamps
- Capture (2): exposure/gain + frame timestamp logs; CSI error counters aligned to drop events.
- Likely domain: lighting flicker drives AE instability; low-light noise; CSI bandwidth/margin drops.
- Minimal action: lock exposure/gain; change the lighting source (DC lamp vs PWM lamp); reduce resolution/FPS for a CSI-stress A/B.
D) Pressing/strap twist triggers artifacts (flex sensitive)
- Capture (2): DSI error counter delta during press; repeatable press location correlation.
- Likely domain: flex/connector SI margin; return-path discontinuity; post-ESD soft degradation.
- Minimal action: temporarily lower DSI lane rate; reinforce the bend point A/B; log counters vs mechanical action.
E) Plug/unplug or cable jiggle triggers stutter / drops / reboot
- Capture (2): PD events timeline; critical rail dips + eFuse UV/OC flags.
- Likely domain: PPS step transient; cable IR/contact resistance; power-path switching non-smoothness.
- Minimal action: disable PPS (fixed PDO); swap to low-IR cable; compare “external-only” vs “battery-parallel” modes.
F) Depth/ToF unstable in bright / reflective / glass scenes
- Capture (2): ToF output series (fail frames) + scene descriptor; alignment error stats (IMU/cam/ToF).
- Likely domain: multipath & ambient saturation; IR cross-talk; alignment error amplification.
- Minimal action: IR on/off A/B; lower ToF FPS or change ROI; replay the same scene with identical logging.
Two-evidence quick guide (what to grab first)
- First: event-chain signals — PD contract changes, eFuse flags, reset reason, DSI/CSI counter deltas, throttling markers.
- Then: analog proof — VBUS/rail dips, panel bias ripple, TE/VBlank timing, plug transient waveforms.
- One variable per run — cable swap OR mode lock OR exposure lock OR lane-rate reduction OR shielding A/B.
FAQs (H2-13): evidence-first, within this page’s hardware boundary
These FAQs follow the same structure: symptom → two evidence captures → domain isolation → minimal A/B action. Answers stay inside the hardware evidence chain (display, tracking camera/ISP, IMU/ToF, power, thermal, SI/EMC/ESD), without algorithm derivations or content-ecosystem discussion.
Q1) The image never goes black, but dizziness worsens over time. Should average latency or p99/p999 be checked first? (→ H2-4 / H2-9)
Start with p99/p999 motion-to-photon, not the average, because user discomfort is driven by tail latency and jitter. Capture (1) a p99/p999 latency trace aligned to (2) temperature and throttling markers. If tail latency grows with heat, isolate thermal policy and power limits before blaming tracking quality.
Q2) Brief flicker happens when brightness or refresh rate switches. Is it TE/VSync or a bias transient? Which two waveforms should be captured first? (→ H2-3 / H2-4)
Capture (1) TE/VBlank (or VSync) timing and (2) panel bias rails (VCOM or relevant bias outputs) during the exact switch moment. If TE/VBlank shows phase jumps without rail disturbance, prioritize timing/handshake stability. If bias rails dip/ripple spikes coincide with flicker, prioritize bias transient response and sequencing.
Q3) Tracking drops more under certain LED lighting. Is it exposure strategy or CSI packet loss? How to capture decisive evidence in one run? (→ H2-5 / H2-11)
In a single run, log (1) exposure time + gain + frame timestamps and (2) MIPI-CSI error/drop counters, both time-aligned to the tracking-drop events. Strong correlation to exposure/gain oscillation indicates lighting flicker/low-light noise sensitivity. Strong correlation to CSI counter deltas indicates bandwidth/margin or signal integrity instability.
Q4) Drift is worse during fast head turns. Is it IMU saturation/noise or timestamp alignment drift? (→ H2-6 / H2-4)
Capture (1) raw IMU ranges (look for clipping/saturation and noise inflation) and (2) alignment error distribution between IMU sampling time and display/camera time references (p99/p999). If drift spikes when IMU clips, input quality is the limiter. If drift spikes without IMU clipping but alignment error widens, timebase mapping and mode-switch timing are the limiter.
Q5) ToF distance jumps near glass or reflective walls. How to separate multipath from ambient-light interference? (→ H2-6 / H2-11)
Separate by controlled A/B: capture (1) ToF output series with fail-frame patterns and (2) scene + illumination tags (bright sunlight, reflective wall, glass). If failures correlate mainly with geometry (glass/reflectors) under stable light, multipath/reflections dominate. If failures correlate with strong light changes regardless of geometry, ambient saturation dominates.
Q6) An occasional black screen recovers by itself. Check PD negotiation events first, or capture display bias rails first? (→ H2-7 / H2-12)
Do both, but prioritize the most discriminative pair: capture (1) USB-C PD contract events + reset reason and (2) display bias rail dip/ripple at the same timestamp. PD event spikes without bias disturbance suggest source/cable negotiation or transient contract changes. Bias dips (even without reboot) suggest local power integrity or protection triggering in the display domain.
Q7) The issue disappears after swapping a USB-C cable. How can cable drop trigger brownout or retraining, and which logs matter? (→ H2-7 / H2-12)
A higher cable resistance increases VBUS droop during load steps, which can trigger brownout, protection flags, or repeated renegotiation/retraining. Capture (1) PD/PPS event timeline (contract changes, PPS steps) and (2) critical rail minima + UV flags + reset reason. If droop aligns to renegotiation, prioritize cable IR and PPS step response; if droop aligns to rail UV, prioritize local PMIC transient response.
Q8) ESD compliance tests pass, but field shows occasional artifacts. Is it protection capacitance or return-path/layout? (→ H2-8 / H2-12)
Passing ESD can still leave marginal SI or intermittent coupling. Capture (1) DSI/CSI counter deltas during touch/handling at the suspect points and (2) a repeatable trigger map (exact contact location and posture that reproduces the issue). If counters jump with specific touch points, return-path discontinuity or coupling dominates. If artifacts increase after adding protection parts, excessive capacitance or placement is likely degrading high-speed margins.
Q9) Pressing the head strap or bending to a certain angle causes artifacts. Is it flex SI margin or connector contact resistance? (→ H2-8 / H2-12)
Distinguish by evidence pairing: capture (1) DSI/CSI error counters synchronized to the press/bend action and (2) rail/bias transients (VBUS or local rails) at the same moment. If counters spike without rail dips, SI margin on the flex/connector is likely. If rail dips or UV flags appear, contact resistance or power integrity is likely.
Q10) Tracking degrades as temperature rises. Is it SoC throttling or IMU temperature drift? What evidence decides? (→ H2-9 / H2-4 / H2-6)
Capture (1) throttling/performance markers + latency p99/p999 and (2) IMU bias drift vs temperature. If latency tails expand with throttling while IMU bias remains stable, compute/thermal throttling dominates. If IMU bias drifts strongly with temperature and alignment error expands, sensor drift and timebase mapping dominate. Time-align all traces to the same event clock to avoid false correlation.
Q11) Drift is worse in low-power mode. Is it sampling-rate change, clock switching, or sparse inputs? Which counters should be captured first? (→ H2-4 / H2-6)
Capture (1) mode-switch events (sampling rate, clock source, frame cadence changes) and (2) alignment error quantiles (p99/p999) across the transition. If alignment error widens immediately after mode switch, timebase or cadence change is the trigger. If alignment stays stable but drift grows later, reduced sampling density or power-domain wake timing is likely.
Q12) Same hardware design, but different batches feel very different. Is it sensor calibration, assembly stress, or power component lot variation? (→ H2-6 / H2-9 / H2-7)
Treat it as a distribution problem. Capture (1) calibration parameters + IMU drift-vs-temp curves across batches and (2) power transient fingerprints (rail dip depth/width, UV/OC flag rates, PD event rate) under the same scripted load. If drift distributions shift by batch, calibration/assembly stress dominates. If transient fingerprints shift by batch, power component or layout variance dominates.