Programmable Digital LED Driver (I2C/PMBus, Telemetry & Logs)
A programmable digital LED driver is “worth it” when lighting behavior must be defined by data (register maps, curves, and policies) and proven by evidence (telemetry, counters, CRC, and event logs), not by fixed analog defaults. This page shows how to configure safely (shadow→apply→commit), deliver smooth dimming, and make faults diagnosable using measurable fields and a repeatable validation flow.
What it is, and when a “programmable” driver is worth it
A programmable digital LED driver is defined by a register image (what the product “is”), a dimming engine (how brightness changes over time), and read-back evidence (telemetry + fault logs) that makes field issues diagnosable.
Decision rule: “Programmable” is worth it when the product must be configured (SKU flexibility / calibration), controlled (curves & fades without artifacts), and observed (telemetry + logs for traceability) with measurable pass/fail evidence.
Three capability layers that must be delivered (not just claimed)
- Config (write): the device becomes a specific product by writing a stable image (channel mapping, rated current limits, default behavior). Evidence must include revision/id and config check (CRC/PEC/readback).
- Control (curves & fades): brightness is produced by a dimming engine (LUT/segments + fade timing). Evidence must include the effective target (what the IC is actually enforcing) to avoid “write succeeded but nothing changed”.
- Observability (telemetry + fault logs): health and failures are exportable as structured data (status bytes/words, fault flags, event logs with snapshots). Evidence must include “stale/valid” and a way to correlate events (timestamp or monotonic counter).
Typical system roles (who owns what)
- Host MCU / controller: discovers devices, writes the register image safely, issues runtime dimming commands, and records evidence (readback + error counters).
- Driver IC: stores the image (shadow/active and optional NVM), executes curves/fades, and exports telemetry/status/log data.
- Sensors (temperature / current sense): provide inputs for derating and diagnostics (used as evidence fields, not as a topology discussion).
- Factory calibration tool: programs immutable defaults (factory image), writes calibration coefficients, and records version/CRC for traceability.
Evidence fields to lock at project start (minimum set)
- Address plan: fixed strap vs soft address; collision avoidance on multi-drop buses.
- Bus speed & margin: target bitrate + pull-up/capacitance budget (rise-time criterion).
- Integrity: PEC/CRC enabled? mandatory read-after-write verification? retry policy for NACK/bus error.
- NVM commit policy: when commits happen, how atomicity is ensured, and how brownout is detected/recovered.
System architecture: separating the power stage from the digital plane
This page focuses on the digital plane (bus + registers + telemetry/logs). The power stage can be treated as a separate layer, as long as the digital plane remains measurable, recoverable, and noise-resilient.
Digital-plane signals and what they mean (practical semantics)
- SCL / SDA: configuration writes, runtime control commands, and read-back evidence. Failure signatures include NACK bursts, glitches, and SDA stuck-low.
- ALERT / INT (if present): event-driven indication that new status/log data is available, reducing blind polling and improving diagnosability.
- EN / RESET: controlled bring-up and last-resort recovery when the bus hangs or the device enters an unknown state (useful for field robustness).
- FAULT (if present): hardware-level fault indicator; it must be consistent with status flags/logs to support root-cause analysis.
Key architecture rule: every “digital plane” failure must be observable at a small set of measurement points (SCL/SDA + one event/reset line) and recoverable without power-cycling the entire luminaire.
Isolation and reference domains (interface-only impact)
- Propagation delay and edge shaping can reduce timing margin at higher bus speeds; “looks fine on paper” can still fail under EMI.
- Pull-up placement becomes non-trivial across an isolator; poor placement often shows up as slow rise time, NACK bursts, or unstable logic thresholds.
- Bidirectional behavior can interact with clock stretching and bus recovery, making “stuck bus” incidents harder to clear unless reset/recovery is designed in.
Minimum measurement set (2 + 1) for fast diagnosis
- TP_SCL: verify rise/fall time, glitches, and continuous clocking during transactions.
- TP_SDA: verify ACK/NACK behavior and ensure SDA is not held low after an error.
- TP_INT (or TP_RESET/TP_EN): confirm the device provides an observable event path and a deterministic recovery path.
I²C / SMBus / PMBus essentials that actually break products
This chapter is written as a deliverable: it defines address policy, timing margin, and integrity/recovery rules that keep bus transactions stable under real wiring, capacitance, and noise.
Failure signatures to design against: NACK bursts → retry storms, “write OK but behavior unchanged”, bus stuck-low (hung bus), device not-ready windows, and long-cable variance across batches.
Address planning: strap vs soft address (and multi-device collision rules)
- Strap address: predictable and production-safe; preferred when multiple identical devices share one bus and collisions must be structurally impossible.
- Soft address: flexible but must include a deterministic source of truth (when it is written, who is allowed to write it, and how it is restored after reset).
- Collision rule: if two devices respond to one address, the system must enter a no-write safety mode (stop configuration writes) until the conflict is resolved.
- Scan policy: scanning is for discovery and inventory (read-only); configuration writes should require an explicit match on revision ID (and optional variant ID) to prevent writing the wrong target.
Timing margin: pull-up + bus capacitance + rise time + clock stretching
- Rise-time budget is the real limiter in field wiring. Treat the bus as Rpullup + Cb: as Cb grows (cables, connectors, ESD parts), the edge slows and margin collapses.
- Deliverable requirement: specify a target bus speed together with a maximum allowed Cb (or verified tr) and a recommended Rpullup range.
- Waveform pass/fail: verify tr at TP_SCL/TP_SDA and ensure the HIGH level stays above the input threshold with margin (glitches near VIH/VIL are the common “works on bench, fails in product” cause).
- Clock stretching: treat long SCL-low periods as a first-class spec item. Define host tolerance (timeout), record the worst-case stretch, and design retry/backoff so stretching cannot trigger retry storms.
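The rise-time budget above can be sketched numerically. For a simple RC model, the I²C-defined rise time (30% to 70% of VDD) is tr = R·Cb·ln(0.7/0.3) ≈ 0.847·R·Cb, and the Fast-mode limit is 300 ns. The 4.7 kΩ / 150 pF numbers below are illustrative assumptions, not a recommendation:

```python
import math

_LN_RATIO = math.log(0.7 / 0.3)  # ≈ 0.847 for the 30%->70% thresholds

def rise_time_s(r_pullup_ohm: float, c_bus_f: float) -> float:
    """I2C rise time (30% -> 70% of VDD) under a simple RC model."""
    return r_pullup_ohm * c_bus_f * _LN_RATIO

def max_bus_capacitance_f(r_pullup_ohm: float, tr_limit_s: float) -> float:
    """Largest Cb that still meets the rise-time limit for a given pull-up."""
    return tr_limit_s / (r_pullup_ohm * _LN_RATIO)

# Example: 4.7 kOhm pull-up, 150 pF bus (cables + connectors + ESD parts)
tr = rise_time_s(4.7e3, 150e-12)               # ~597 ns: fails 300 ns Fast-mode
cb_max = max_bus_capacitance_f(4.7e3, 300e-9)  # ~75 pF budget at 4.7 kOhm
```

This is why "Cb grows, margin collapses": at a fixed pull-up, the capacitance budget is linear in the rise-time limit, so field wiring quickly eats the entire allowance.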
Integrity and recovery: PEC/CRC, repeated start, ACK/NACK semantics, retries
- PEC/CRC: enable when wiring/noise is not tightly controlled. Integrity must be measurable via counters (PEC failures, retry counters) rather than “seems stable”.
- Repeated start: define whether the target supports it and how the host behaves if a repeated-start sequence fails mid-transaction (abort + recovery path).
- ACK/NACK semantics: distinguish “not-present/not-ready” from “data rejected/busy” from “integrity failure”; each category must map to a different recovery action.
- Retry policy: specify max retries, backoff timing, and a forced escape hatch (device reset or bus recovery) to prevent infinite retry loops.
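The retry rules above can be sketched as a bounded loop. The callback names (`do_write`, `recover_bus`) and the `BusResult` categories are illustrative assumptions; real projects bind them to the actual bus driver:

```python
import time
from enum import Enum, auto

class BusResult(Enum):
    OK = auto()
    NACK = auto()          # not-present / not-ready
    PEC_FAIL = auto()      # integrity failure
    TIMEOUT = auto()       # possible hung bus / excessive clock stretch

def write_with_backoff(do_write, recover_bus, max_retries=3, base_delay_s=0.002):
    """Bounded retry with exponential backoff and a forced escape hatch.

    do_write()    -> BusResult for one transaction attempt
    recover_bus() -> last-resort recovery (clock pulses / reset line)
    Returns (success, retry_count) so the retry counter is loggable evidence.
    """
    for attempt in range(max_retries + 1):
        result = do_write()
        if result is BusResult.OK:
            return True, attempt
        if result is BusResult.TIMEOUT:
            recover_bus()                          # don't keep clocking a hung bus
        time.sleep(base_delay_s * (2 ** attempt))  # backoff prevents retry storms
    recover_bus()                                  # escape hatch: never loop forever
    return False, max_retries
```

Returning the attempt count (rather than just a boolean) is what makes the "retry counter" evidence field cheap to collect.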
Evidence checklist (loggable and testable)
- Electrical: measured tr on SCL/SDA, estimated Cb, observed glitch count (if any), and worst-case clock-stretch duration.
- Statistics: NACK rate, retry counter, PEC/CRC failure counter, and “stuck-bus incidents”.
- Policy: address table, scan mode vs write mode boundary, and a defined recovery path for each failure class.
Register map strategy: pages, atomicity, and “safe writes”
Treat configuration as a transaction. A write is not complete until the intended image is verified and the active behavior is proven to match (via effective targets and status).
Pages / banks: scalability with explicit targeting
- PAGE/BANK is an implicit state. If host and device disagree on the active page, writes silently land in the wrong place.
- Rule: every configuration transaction must begin with an explicit target page selection and must record that page in the host log.
- Group writes: when multiple channels must be consistent, use a group mechanism (page-wide apply or group commit) rather than sequential live edits.
Atomic update: Shadow → Verify → Apply → Active
- Shadow: a staging area to build a coherent image (multiple fields, multiple registers) without exposing intermediate states.
- Verify: read-back and/or checksum/PEC validation catches “half writes”, bus noise, and addressing mistakes before any change becomes live.
- Apply: a single action that transfers shadow to active, enforcing “all-or-nothing” behavior. A busy/lock flag must gate apply to prevent partial transitions.
- Active: the only truth for runtime behavior; reading effective targets and status must confirm that active equals the intended image.
Safe write transaction template (repeatable steps)
- Select target: set PAGE/BANK and confirm device identity (revision ID).
- Write payload: write fields to shadow (prefer block writes where supported) and record a transaction ID on the host.
- Read-back verify: read critical fields (or config CRC if available); count errors and enforce a retry/backoff policy.
- Apply: assert apply/group-commit; confirm completion (busy cleared) before proceeding.
- Prove active: read effective targets + status; if mismatch, record last-error code and stop further writes (prevent cascading failure).
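The template above can be sketched host-side. `FakeDevice` is a hypothetical stand-in for the register interface (its method names and the `0x2A` revision ID are assumptions); the point is the ordering: stage, verify, apply atomically, then prove active:

```python
class FakeDevice:
    """Hypothetical stand-in: shadow/active banks keyed by (page, reg)."""
    REVISION_ID = 0x2A
    def __init__(self):
        self.page = 0
        self.shadow = {}
        self.active = {}
    def select_page(self, page): self.page = page
    def write_shadow(self, reg, val): self.shadow[(self.page, reg)] = val
    def read_shadow(self, reg): return self.shadow.get((self.page, reg))
    def apply(self): self.active = dict(self.shadow)   # all-or-nothing transfer

def safe_write(dev, page, payload):
    """Shadow -> Verify -> Apply -> Prove-active, returning evidence fields."""
    dev.select_page(page)                       # explicit target, logged
    if dev.REVISION_ID != 0x2A:                 # real code reads this over the bus
        return {"ok": False, "error": "wrong-device"}
    for reg, val in payload.items():            # stage the coherent image
        dev.write_shadow(reg, val)
    for reg, val in payload.items():            # read-back verify BEFORE apply
        if dev.read_shadow(reg) != val:
            return {"ok": False, "error": "verify-mismatch", "reg": reg}
    dev.apply()                                 # single all-or-nothing step
    mismatch = any(dev.active.get((page, r)) != v for r, v in payload.items())
    return {"ok": not mismatch, "page": page, "verified": True}
```

Note that verification happens against the shadow image before apply; the final read of the active image is a separate proof, matching the "write succeeded but nothing changed" failure mode described earlier.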
Evidence fields (minimum)
- Transaction fields: target page, payload length, checksum/PEC status, apply/commit flag, and host-side retry count.
- Read-back fields: revision ID, config CRC (or equivalent), effective targets, and last-error code / busy flag.
NVM/OTP: committing defaults without bricking units
Non-volatile storage turns “configuration” into a product promise. The goal is consistent defaults across production lots, safe field updates, and a provable rollback path when writes fail.
Core rule: a commit is never “done” until the target image is validated (CRC/version/valid bit) and the boot selection logic is proven to choose a safe image after resets and brownouts.
Image model: STORE / RESTORE / DEFAULT (without vendor-specific commands)
- DEFAULT image (Factory baseline): the known-good configuration that must remain recoverable under all failure modes. Treat as read-mostly and immutable in the field.
- USER image (Field configuration): optional customer/runtime preferences that may be updated, but must never override the ability to boot safely.
- RUN image (Active/shadow): the working set used during normal operation. RUN may change frequently, but must not trigger frequent NVM writes.
- STORE persists a selected image; RESTORE loads an image into RUN; DEFAULT is the emergency fallback when validation fails.
Brownout risk: why partial writes brick units (or create “silent corruption”)
- Failure mode: power loss during commit can leave metadata updated but payload incomplete (or vice versa). The result is an image that “exists” but fails validation.
- Silent corruption is worse than a hard fail: behavior changes after reboot with no obvious error unless CRC/version/valid bits are checked.
- Minimum safeguards: commit must be gated by a “safe-to-write” condition (no brownout, stable reset cause), and must end with mandatory validation before switching the active image pointer.
- After reset: boot selection must prefer the newest valid image; if CRC fails, it must roll back deterministically to a valid prior image (or DEFAULT).
Endurance and write-rate limits: factory vs field partitions
- NVM is not a cache: frequent commits for dimming behavior, telemetry, or runtime tweaks will consume endurance and raise failure probability.
- Partition strategy: separate “factory baseline” from “field preferences”. Factory partition should be write-protected outside production; field partition must be rate-limited and validated.
- Commit policy: define allowed commit triggers (e.g., commissioning only), and enforce minimum intervals and maximum lifetime commit counts per partition.
- Rollback discipline: never overwrite the only known-good image. Always keep at least one prior valid image in reserve.
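The rollback discipline above implies an A/B slot scheme. The sketch below (field names and two-slot layout are assumptions) always commits into the older slot and selects the newest CRC-valid image at boot, so a brownout mid-commit can never destroy the only known-good image:

```python
import zlib

def make_image(payload: bytes, version: int) -> dict:
    return {"payload": payload, "version": version,
            "crc": zlib.crc32(payload), "valid": True}

def commit(slots: list, payload: bytes, version: int) -> None:
    """Write into the OLDER slot only, so one known-good image always survives."""
    target = min(range(len(slots)),
                 key=lambda i: slots[i]["version"] if slots[i] else -1)
    slots[target] = None                           # clear valid bit first
    slots[target] = make_image(payload, version)   # payload + metadata
    if zlib.crc32(slots[target]["payload"]) != slots[target]["crc"]:
        slots[target] = None                       # commit "done" only if CRC re-checks

def boot_select(slots: list):
    """Prefer the newest image that passes CRC; deterministic rollback otherwise."""
    valid = [s for s in slots if s and s["valid"]
             and zlib.crc32(s["payload"]) == s["crc"]]
    return max(valid, key=lambda s: s["version"], default=None)
```

If the newest image fails validation (simulating a partial write), `boot_select` deterministically falls back to the prior version rather than booting corrupt defaults.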
Evidence checklist (must be readable via host or diagnostics)
- Commit trace: commit counter, last commit status (OK/FAIL/INCOMPLETE), and a timestamp or sequence number if available.
- Validation: image CRC per image (DEFAULT/USER A/USER B) plus version and valid bit.
- Power event proof: brownout flag and reset cause around commits.
Dimming engine: curves, fades, and what “linear” really means
Curves and fades are a pipeline problem: commands are interpreted by an engine, converted into a current target, then constrained by clamps and derating. “Linear” must be defined in the domain that matters.
Curve representations: LUT vs segmented linear vs polynomial (concept-level tradeoffs)
- LUT: predictable and calibratable; resolution can be made dense at the low end, where the eye is most sensitive. The cost is point count and storage.
- Segmented linear: compact and stable; good when register space is limited while still allowing “denser low end”.
- Polynomial: few parameters but sensitive to edge behavior and numeric stability. Requires careful bounding to avoid overshoot near endpoints.
- Key engineering point: the curve defines low-light resolution and step visibility, not just a mapping from “percent” to “current”.
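A minimal LUT-generation sketch makes the "low-light resolution" point concrete. Gamma 2.2 and a 12-bit full scale are illustrative assumptions; the monotonic clamp and non-zero floor implement the dropout rules discussed below:

```python
def build_gamma_lut(points: int, full_scale: int, gamma: float = 2.2,
                    min_code: int = 1) -> list:
    """Gamma-shaped LUT: dense steps near the low end, coarse near the top.

    Assumed parameters: `points` entries, current codes 0..full_scale,
    gamma 2.2 as a common perceptual approximation, min_code as a floor.
    """
    lut = []
    for i in range(points):
        level = (i / (points - 1)) ** gamma        # perceptual index -> current
        code = round(level * full_scale)
        if i > 0:
            code = max(code, min_code, lut[-1])    # monotonic, no zero dropout
        lut.append(code)
    return lut

lut = build_gamma_lut(points=64, full_scale=4095)
# Adjacent low-end entries differ by a few codes; top-end steps are much larger.
```

The same table doubles as evidence: exporting the generated points lets a validation script check monotonicity and low-end step size against the "step visibility" criterion.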
Fades: timebase, step strategy, and avoiding visible stair-steps
- Timebase: an internal engine tick is deterministic; host-timed updates can jitter with scheduling and bus latency.
- Step strategy: a constant step size often looks “steppy” at low light. Better strategies densify steps near low levels or use time-normalized interpolation.
- Interpolation: define how intermediate points are computed (nearest / linear between points). Even a LUT needs an interpolation policy for smooth fades.
- Deep dimming stability: use a minimum current clamp to avoid dropouts and a controlled transition through the lowest region where quantization dominates.
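The interpolation policy above can be sketched as a generator: linear interpolation between LUT points, with a fixed number of substeps per segment standing in for an engine tick (the substep count is an assumption):

```python
def fade_steps(lut, start_idx, end_idx, substeps_per_segment=4):
    """Linearly interpolate between LUT points for a smooth fade.

    Yields intermediate current codes so a fade never jumps a whole LUT
    step at once; a real engine would emit one value per internal tick.
    """
    step = 1 if end_idx >= start_idx else -1
    for idx in range(start_idx, end_idx, step):
        a, b = lut[idx], lut[idx + step]
        for s in range(substeps_per_segment):
            yield round(a + (b - a) * s / substeps_per_segment)
    yield lut[end_idx]
```

Because each segment is subdivided the same number of times, segments with larger code spans (the top of a gamma LUT) produce larger visible increments per tick, which is exactly why "densify steps near low levels" matters.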
Perceptual consistency: current-linear is not visually-linear
- Gamma/log curves are practical tools for “equal perceived steps”. The point is not the math, but the measurable outcome: fewer visible jumps at low levels.
- Define “linear” explicitly: linear in current, linear in perceived brightness, or linear in command scale. The chosen definition must match product expectations.
- Stability knobs: clamp, optional dither, and derating must be placed in the pipeline to avoid unexpected jumps when constraints engage.
Evidence checklist (configuration + verification)
- Config: curve ID, LUT points (or segment params), fade time, step rate, min current clamp, and dither enable (if supported).
- Output: ripple vs dim level (trend), plus a low-level stability metric such as jitter/jump count per time window.
Runtime control vs protection overrides: who wins
A dimming command is not the output. The output is the result of a priority resolver that combines runtime intent with protection and derating constraints. When “writes do nothing” in the field, the missing piece is usually visibility into which layer is winning and what its recovery rules are.
Practical model: the resolver produces an effective current target. If any hard shutdown condition is active (UVLO/OTP/critical latch), the effective target becomes zero regardless of the runtime command.
Priority ladder (from highest to lowest)
- Hard shutdown: UVLO / OTP / critical fault → effective target forced to 0 (safe state).
- Fault latch: latched short/open/overcurrent → blocks output until clear rules are satisfied.
- Thermal derating: scales down the target (derating factor) to avoid reaching a shutdown threshold.
- Soft constraints: min clamp, slew limit, fade step-rate caps → reshape the target without declaring a fault.
- Runtime intent: manual dim target / fade engine output (the “requested” target).
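The ladder above can be sketched as a single resolver function. The `state` field names (`hard_shutdown`, `fault_latched`, `derating_factor`, `min_clamp`, `max_slew`, `last_target`) are illustrative assumptions mirroring the evidence fields discussed on this page:

```python
def resolve_effective_target(requested: int, state: dict) -> int:
    """Priority resolver: protection always outranks runtime intent."""
    if state.get("hard_shutdown"):                 # UVLO / OTP / critical fault
        return 0
    if state.get("fault_latched"):                 # blocked until clear rules met
        return 0
    target = round(requested * state.get("derating_factor", 1.0))
    if target > 0:
        target = max(target, state.get("min_clamp", 0))    # deep-dim floor
    last = state.get("last_target", target)
    max_slew = state.get("max_slew")
    if max_slew is not None:                       # soft constraint: slew limit
        target = max(min(target, last + max_slew), last - max_slew)
    state["last_target"] = target                  # evidence: effective target
    return target
```

Because the function writes the effective target back into `state`, the four debug fields below fall out naturally: a host can always read what was applied, not just what was requested.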
Soft derate vs hard off: recovery rules decide field behavior
- Soft derate keeps output on but reduces brightness. It must include hysteresis to prevent oscillation near thresholds.
- Hard off forces output to zero. It must define clear conditions (cool-down timer, retry budget) to avoid “never recovers” failures.
- Recovery triggers typically combine: temperature below a release threshold, a minimum time window, and a stable input condition (no UVLO/reset churn).
- Command consistency: after recovery, the resolver should return to the latest valid runtime intent, not an undefined intermediate value.
Debug method: prove the winning layer using four fields
- effective current target: what is actually applied after all overrides.
- derating factor: the scaling applied by thermal or other derate logic (explains “why dimmer”).
- fault latch bit: indicates “blocked until cleared” conditions (explains “why stuck off”).
- retry timer: remaining cool-down / retry delay (explains “when it may recover”).
A runtime write that “does not work” is diagnosable when the effective target is visible alongside derating and latch state.
Telemetry: what to measure, how to trust it, how to use it
Telemetry is a signal chain, not a list of numbers. Trust depends on sampling path, calibration, filtering, update timing, and explicit freshness/range flags. Use depends on thresholds and trends that can be executed locally by the host.
What to measure: categories that explain behavior
- Supply health: VIN/VOUT indicates margin to UVLO and identifies sag events that change effective output.
- Output proof: ILED (and duty/target indicators if available) proves whether the effective target is being met or constrained.
- Thermal context: temperature explains derating engagement and proximity to shutdown thresholds.
- Energy estimate: power estimation supports trend-based warnings and detects abnormal load or thermal drift patterns.
How to trust it: raw vs scaled, calibration, filtering, and freshness
- Raw vs scaled: raw codes (ADC counts) expose clipping and offset; scaled values apply units and calibration coefficients.
- Calibration coefficients: define absolute accuracy; coefficients should have an identifier or revision for traceability.
- Filter window: reduces noise but adds latency; the host must interpret values in the context of the filter and update period.
- Update period and stale flag: a fresh-but-late value is different from a stale value; both must be detectable.
- Range/clip flags: if a channel is saturated or out-of-range, scaled values may be misleading even if they look stable.
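The trust rules above can be sketched as a decode step that always pairs the raw code, the scaled value, and the validity flags. The dictionary keys and the linear gain/offset calibration model are assumptions for illustration:

```python
def decode_telemetry(sample: dict, cal: dict, max_age_ticks: int = 3) -> dict:
    """Raw -> scaled conversion with explicit freshness/range flags.

    sample: {"raw": adc_code, "age_ticks": ticks_since_update, "clip": bool}
    cal:    {"gain": units_per_count, "offset": units, "cal_id": revision}
    A value is usable as evidence only if it is fresh AND not clipped.
    """
    scaled = sample["raw"] * cal["gain"] + cal["offset"]
    stale = sample["age_ticks"] > max_age_ticks
    return {
        "raw": sample["raw"],              # exposes clipping/offset at the source
        "scaled": scaled,
        "cal_id": cal["cal_id"],           # traceability of coefficients
        "stale": stale,
        "clip": sample["clip"],
        "usable": not stale and not sample["clip"],
    }
```

Carrying `cal_id` in every decoded record is what lets two field units with diverging readings be separated into "different calibration" vs "different hardware behavior."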
How to use it locally: threshold alerts and trend warnings (no cloud required)
- Threshold alerts: VIN near UVLO, temperature near derate/shutdown, ILED deviation from effective target beyond tolerance.
- Trend warnings: rising temperature slope, repeated VIN dips, or increasing power estimate over time (indicates cooling degradation or load change).
- Reliability checks: ignore updates marked stale; down-rank channels with clip flags; require persistence across multiple fresh samples.
Evidence checklist (minimum telemetry packet for field reproducibility)
- telemetry raw and telemetry scaled captured together (same sample time).
- cal coefficients (or coefficient set ID / revision).
- update period and latency context (filter window implied by configuration).
- stale flag to prevent using old values as proof.
- range/clip flags to identify saturation and out-of-range behavior.
Fault flags & event logs: making failures diagnosable
A fault that cannot be explained becomes a product failure. The goal is traceability: a compact status view for “what is wrong now,” plus a durable history of “what happened first” and “what the system looked like at that moment.”
Traceability chain: Flags (state) + Time (counter) + Snapshot (context) + Log (history) → host decode.
Flags: transient vs latched (and why both matter)
- Transient (“seen”) indicates an event occurred at least once. It is essential for intermittent issues that self-recover before anyone reads status.
- Latched (“blocked”) indicates recovery is intentionally prevented until clear conditions are met. It protects safety and prevents rapid re-trigger cycles.
- Interpretation rule: a clean “current state” without history cannot explain field reports. A durable “seen” bit without a current latch cannot explain whether the device is still in a faulted condition.
Multi-fault concurrency: avoid losing the root cause
- Status word/byte as a full bitfield: preserves the “many things can be true” reality.
- Fault code as a quick classifier: points to the dominant category for fast triage.
- First-fault pointer: captures the earliest trigger so later secondary effects do not overwrite the root cause.
- Event log (ring): records fault ordering so concurrency can be replayed rather than guessed.
Event logs: ring buffer + snapshot fields that actually diagnose
An event log should not only store “what fault happened,” but also “what the system looked like when it happened.” A minimal snapshot typically includes input margin (VIN/VOUT), thermal context (temperature), and control context (requested target vs effective target). These three are usually enough to distinguish override-driven behavior from genuine electrical faults.
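The ring-plus-snapshot idea can be sketched host-side. The class and field names are assumptions; the two properties that matter are that the ring drops the oldest entry, while the first-fault pointer is never overwritten and survives an explicit clear:

```python
class FaultLog:
    """Ring-buffer event log with snapshots and a first-fault pointer."""
    def __init__(self, depth: int = 8):
        self.depth = depth
        self.entries = []          # ring keeps only the last `depth` events
        self.seq = 0               # monotonic counter: ordering without an RTC
        self.first_fault = None    # root-cause anchor, never overwritten

    def record(self, fault_code: int, vin: float, temp: float,
               requested: int, effective: int) -> None:
        entry = {"seq": self.seq, "code": fault_code,
                 "snapshot": {"vin": vin, "temp": temp,
                              "requested": requested, "effective": effective}}
        self.seq += 1
        self.entries.append(entry)
        if len(self.entries) > self.depth:
            self.entries.pop(0)                 # ring: drop oldest, keep order
        if self.first_fault is None:
            self.first_fault = entry            # earliest trigger survives

    def clear_explicit(self) -> None:
        """Deliberate clear: empties the ring but keeps first-fault evidence."""
        self.entries.clear()
```

Capturing `requested` and `effective` together in each snapshot is what distinguishes override-driven dimming from genuine electrical faults, per the paragraph above.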
Clear strategy: preserve evidence without blocking recovery
- Clear-on-read: simple, but risky for diagnostics because evidence disappears when polled. Best limited to counters or non-critical transient summaries.
- Explicit clear: preserves evidence until a deliberate action clears it. Best for latched faults and first-fault capture.
- Power-cycle clear: can mask repeating issues if evidence vanishes on every restart. Use carefully, and prefer retaining first-fault/log history across resets when possible.
Evidence fields (minimum set for reproducible fault analysis)
- status word/byte (full bitfield)
- fault code (primary classifier)
- first-fault pointer (root-cause anchor)
- log index (ring buffer position)
- timestamp/counter (ordering without requiring real time)
Robustness: bus integrity under EMI, isolation, and hot-plug
A robust control bus must remain recoverable in noisy environments. The engineering goal is not “never errors,” but “errors are detectable, counted, and recoverable without human intervention,” including isolation boundaries and hot-plug disturbances.
EMI failure signatures that break products
- SCL glitches: narrow pulses or spikes that look like extra clocks to state machines.
- SDA bit flips: unintended data transitions causing corrupted bytes or false start/stop interpretation.
- Hung bus: SCL or SDA held low (often SDA) so no new transaction can begin.
Recovery ladder: detect → free the bus → re-sync → escalate if needed
A practical recovery sequence starts with stuck-low detection (line low beyond a safe time budget), then applies clock-pulse recovery to release a device stuck mid-bit, and finally issues a STOP to force the bus back to idle. If the bus remains hung, escalation uses a device reset line or watchdog strategy.
- Step 1 — detect: SDA (or SCL) low longer than a defined threshold → declare “hung.”
- Step 2 — recover: drive 9 clock pulses to advance a stuck receiver through remaining bits.
- Step 3 — re-sync: generate a STOP condition (SCL high while SDA rises) to return to idle.
- Step 4 — escalate: if still hung, apply device reset or rely on watchdog to prevent permanent deadlock.
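The four steps above can be sketched against a simulated bus, since real recovery is GPIO bit-banging. `StuckSlave` is a hypothetical model of a receiver holding SDA low mid-byte; the 9-pulse limit follows the standard bus-clear procedure:

```python
class StuckSlave:
    """Hypothetical model: a receiver stuck mid-byte, holding SDA low."""
    def __init__(self, bits_remaining: int):
        self.bits_remaining = bits_remaining    # bits it still "expects"
    @property
    def sda(self) -> int:
        return 0 if self.bits_remaining > 0 else 1
    def clock_pulse(self) -> None:
        if self.bits_remaining > 0:
            self.bits_remaining -= 1            # one SCL pulse shifts one bit

def recover_bus(slave: StuckSlave, max_pulses: int = 9) -> bool:
    """Recovery ladder: detect stuck-low, clock out remaining bits, STOP."""
    if slave.sda == 1:
        return True                             # step 1: not hung, nothing to do
    for _ in range(max_pulses):                 # step 2: up to 9 SCL pulses
        slave.clock_pulse()
        if slave.sda == 1:
            break
    if slave.sda == 0:
        return False                            # step 4: escalate to reset/watchdog
    # step 3: issue a STOP (SDA rising while SCL high) to return the bus to idle
    return True
```

Returning `False` rather than retrying forever is the escalation boundary: beyond this point only a reset line or watchdog can clear the bus, which is why those paths must exist in hardware.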
Isolation boundary effects (interface-level only)
- Propagation delay reduces timing margin and can reshape edges seen by the bus.
- Pull-up placement matters across the boundary; the “bus” can behave like two segments with different rise behavior.
- Bidirectional limits can affect edge cases (including recovery pulses and any stretching-like behavior), so recovery must be validated across the isolation path.
Evidence fields (make robustness measurable)
- stuck-low detection (count and/or duration)
- bus error counters (NACK/timeouts/corruption proxy counters)
- reset cause (bus-driven vs other sources)
- watchdog reset count (deadlock prevention indicator)
Validation plan: bring-up → programming → dimming quality → telemetry/log verification
This gate-based plan validates a programmable digital LED driver from first contact to diagnosable failures. Each gate defines what must be proven, the two waveform groups to capture, and pass/fail criteria that can be reused in R&D bring-up and production.
Waveform rule (per gate): always capture Bus (SCL/SDA) + One proof signal (ILED or INT/ALERT). If channels are limited, prefer SCL + ILED for dimming gates and SCL + INT for fault/log gates.
Gate 0 — Bench sanity (avoid false failures)
Prove the test setup is not generating bus faults: stable idle levels, no stuck-low, and predictable reset/INT behavior before any configuration writes.
- Bus: SCL/SDA idle level + first transaction edge quality
- Proof: INT/ALERT at power-up (or ILED if INT not available)
- PASS: no sustained stuck-low; clean edges without repeated unintended pulses
- FAIL: SDA (or SCL) held low beyond a defined window; recurring glitches at idle
Gate 1 — Interface connectivity (scan + read ID)
Confirm the device is discoverable at the intended address plan and returns stable identity/capability fields across repeated reads.
- Bus: one full scan burst + ID read transaction
- Proof: INT/ALERT (if present) for comm-error signaling
- PASS: address is stable; ID/revision reads match every time; NACK/retry remains near-zero in the test window
- FAIL: address intermittently disappears; ID varies; repeated NACK bursts or bus lockups
Gate 2 — Programming safety (shadow → apply → readback)
Prove that configuration updates are atomic and auditable: write staging registers, apply in a single step, then read-back to verify.
- Bus: full write sequence (page/select + payload + apply)
- Proof: INT/ALERT pulse timing around apply or error
- PASS: readback matches written values; config CRC/PEC passes; last-error remains clear
- FAIL: partial writes, mismatched readback, repeated retries, or apply produces inconsistent active behavior
Gate 3 — NVM commit robustness (including brownout drill)
Prove non-volatile defaults can be stored without bricking units. Validate both “clean commit” and “power-loss during commit” behaviors.
- Normal commit → power-cycle → re-scan + readback
- Brownout drill: interrupt power within the commit window → power-cycle → verify recovery path (safe image / invalid flag)
- Bus: commit transaction + post-reset recovery reads
- Proof: INT/ALERT (commit status / failure indication) or ILED (if commit affects output mode)
- PASS: after any drill, the device is discoverable and identity is readable; image CRC indicates valid/invalid deterministically; rollback behavior is predictable
- FAIL: address disappears permanently; persistent stuck bus; CRC/state becomes non-deterministic across retries
Gate 4 — Dimming quality (curve consistency + fade smoothness + deep-dim stability)
Validate the dimming engine output as a measurable signal: consistent mapping from dim command to ILED, smooth fades without visible steps, and stable behavior at very low targets.
- Curve: select curve ID → sweep a defined set of dim levels (low/mid/high)
- Fade: execute up/down fades with fixed fade time and step policy
- Deep dim: hold at minimum clamp for a dwell window and observe stability
- Bus: dim command + fade programming sequence
- Proof: ILED waveform (ripple, steps, dropouts, monotonicity)
- PASS: ILED is monotonic with dim code; fade has bounded step amplitude; deep-dim dwell shows no periodic drop-to-zero or uncontrolled jumps
- FAIL: non-monotonic points, repeated step discontinuities, or deep-dim instability events above an allowed count
Gate 5 — Telemetry consistency (accuracy + latency + filter behavior)
Prove telemetry is trustworthy: raw-to-scaled mapping is consistent, update period behaves as specified, and stale/clip flags correctly describe data validity.
- Read raw + scaled pairs repeatedly at a fixed rate
- Apply a controlled change (e.g., dim step) and measure telemetry latency and settling
- Validate stale/clip behavior by pausing reads or pushing ranges intentionally
- Bus: telemetry polling burst (to correlate with update period)
- Proof: INT/ALERT (threshold/abnormal indication) or ILED (to correlate telemetry vs output)
- PASS: update period stays within a bounded tolerance; scaled values track reference trends; stale flag asserts only when appropriate; clip flags align with forced range conditions
- FAIL: update period jitter beyond tolerance, inconsistent scaling vs calibration, stale/clip flags unreliable
Gate 6 — Fault injection (flags → snapshot → log → controlled clear)
Make failures diagnosable on purpose. Inject a controlled fault, verify a log entry is created with the right context, then confirm clearing behavior is deliberate and does not erase evidence unintentionally.
- Bus disturbance: create a short hung-bus condition (stuck-low) and verify recovery ladder
- Threshold fault: trigger a defined alarm/limit crossing (without redesigning the power stage)
- Then: read status → read snapshot fields → read log index/entry → apply explicit clear and confirm post-clear state
- Bus: injection moment + recovery pulses + STOP re-sync
- Proof: INT/ALERT timing (fault set/clear), plus ILED if output behavior is relevant
- PASS: flags set with correct transient/latched semantics; log index advances; snapshot contains VIN/temp/targets; explicit clear is controllable and does not erase first-fault/history unexpectedly
- FAIL: no log entry, missing snapshot context, uncontrolled auto-clear, or recovery enters a reset storm
FAQs (Programmable Digital LED Driver)
Each answer stays inside this page boundary and points back to measurable evidence fields (readback, counters, flags, CRC, effective targets, and log indices).
Write succeeds but behavior doesn’t change — shadow/apply issue or priority override?
Units drift apart after production — did NVM commit differ or calibration scaling differ?
Dimming looks stepped at low levels — LUT resolution, step rate, or minimum clamp?
Fade sometimes stutters — bus retries or engine tick starvation?
Telemetry numbers “freeze” — stale flag, update period, or bus hung?
Fault clears but returns instantly — latch vs retry policy vs unresolved root cause?
Random NACK bursts in field — pull-up sizing, capacitance growth, or EMI spikes?
Bus stuck low after hot-plug — recovery sequence or reset cause chain?
Two drivers respond to same address — strap conflict or soft-address mis-write?
Config corruption after brownout — commit atomicity or missing image CRC?
Logs exist but are not useful — missing snapshot fields or wrong trigger design?
How to structure factory vs field updates safely — what must be immutable?
I²C isolation boundary: ADuM1250, ADuM1251, ISO1540-Q1
Logic analyzer (transaction evidence): Saleae Logic Pro 8 (SAL-00113)