Automotive ASIL ADCs: BIST, Redundancy, and Diagnostics
An ASIL-relevant ADC chain is not “a high-spec ADC” — it is an evidence-closed loop: each chain-level fault is mapped to a diagnostic mechanism, verified by reproducible tests, and backed by logged artifacts with context, margins, and traceability.
This page provides the fault→mechanism→evidence tables, redundancy and compare-window rules, scheduling and nuisance-trip controls, plus an RFQ and evaluation plan that can be copied into real projects.
Definition: what “ASIL-ready ADC chain” means in practice
An ASIL-ready ADC chain is a measurement path that turns hardware faults into testable evidence and safe reactions. Performance specs describe how accurately signals are converted; safety evidence proves the chain can detect faults before actuation becomes unsafe.
What is safety-relevant in an ADC chain
In automotive safety systems, the ADC is only one part of the safety-relevant measurement chain. Safety relevance spans the full path from sensing and analog conditioning to digital transport, software decisions, and the final actuation gate that enforces a safe state.
- Sensor: failures must be observable (open/short, implausible ranges).
- AFE / protection: saturation and clamp events must not hide faults.
- ADC core: conversion health must be checkable (self-test and validity flags).
- Transport: corrupted frames must be detectable (CRC, counters, timeout).
- MCU decision: faults must map to defined reactions (degrade/limit/shutdown).
- Actuation gate: unsafe output must be blocked when measurement trust is lost.
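The fault-to-reaction mapping above can be sketched as a small decision function. This is a minimal illustration, not a normative mapping: the reaction levels and the rule that transport or conversion faults force shutdown are assumptions a real safety concept would define per hazard.

```c
#include <stdbool.h>

/* Hypothetical reaction levels; real projects define these in the
 * safety concept, not in driver code. */
typedef enum { REACT_NONE, REACT_LIMIT, REACT_SHUTDOWN } reaction_t;

typedef struct {
    bool sensor_implausible;   /* open/short or out-of-range input */
    bool conversion_invalid;   /* ADC core or validity-flag fault */
    bool transport_fault;      /* CRC / sequence / timeout fault */
} chain_status_t;

/* Map chain status to a defined reaction; the actuation gate blocks
 * output whenever measurement trust is lost. */
reaction_t decide_reaction(const chain_status_t *s)
{
    if (s->transport_fault || s->conversion_invalid)
        return REACT_SHUTDOWN;      /* measurement trust lost */
    if (s->sensor_implausible)
        return REACT_LIMIT;         /* degrade, keep a safe output */
    return REACT_NONE;
}

bool actuation_allowed(reaction_t r)
{
    return r != REACT_SHUTDOWN;
}
```

The point is structural: every fault flag reaches a defined reaction, and the actuation gate consumes the reaction, not the raw flags.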
Safety terms to operationalize (only what is needed)
Random hardware fault
An unpredictable hardware failure (for example: stuck codes, missing clock edges, reference collapse) that must be detected by defined mechanisms.
Latent fault
A hidden fault that may not show up during normal operation unless periodic tests are scheduled (for example: a broken monitor path).
Diagnostic mechanism
A mechanism that converts faults into observable evidence: BIST signatures, monitors and thresholds, redundant comparisons, or link integrity checks.
Evidence artifact
A recordable, traceable result (flags, counters, signatures, compare deltas) captured with context (temperature, voltage, configuration, version).
What this page delivers
- Fault → Mechanism → Evidence mapping that keeps safety claims testable.
- Redundancy selection for primary/monitor paths and compare windows.
- V&V checklist from requirements to tests to evidence artifacts.
- RFQ template with parameter fields and safety questions for suppliers.
Principle: turn “safety” into testable, table-based evidence
A safety argument becomes strong when each fault category maps to a specific diagnostic mechanism and produces recordable evidence. Use a consistent Fault → Mechanism → Evidence structure to avoid assumed coverage and to keep claims verifiable.
ADC-chain fault taxonomy (8 chain-level categories)
List faults at the measurement-chain level (input, conversion, transport, and operating conditions). Each category should be observable in software through flags, counters, signatures, or comparison margins.
Input integrity
Open/short, saturation, or implausible motion/range that can be detected with limits and plausibility rules.
ADC core integrity
Stuck/missing codes or gain-offset drift that must be detectable by BIST, code checks, or redundant comparisons.
Reference / supply integrity
Reference collapse or drift, brownout events, and “conversion invalid” windows during rail instability.
Clock & timing integrity
Missing edges, timeouts, or timing anomalies that can freeze sampling or break compare windows.
Digital transport integrity
CRC errors, frame loss, out-of-sequence data, and timeout events on SPI/LVDS/JESD-style links.
Thermal & aging drift
Drift beyond budget across temperature and life that erodes margins unless monitored and logged with context.
Configuration faults
Wrong mode/range/reference selection that can create stable but incorrect measurements without explicit configuration evidence.
Common-cause candidates
Shared clock/reference/layout risks that can defeat redundancy if both paths fail in the same way.
Mechanism types (classified by evidence shape)
Mechanisms are most useful when classified by the kind of evidence they output. Evidence shapes should be stable across designs and easy to validate.
- Signature (BIST): pass/fail plus a signature value that can be logged and traced.
- Threshold flag (monitor): ref/supply/clock/input monitors that raise deterministic flags at boundaries.
- Counter (frame/timeout): counters and watchdogs that reveal loss of data freshness or missing frames.
- Delta compare (redundancy window): compare margin (Δ) against a defined window to detect path divergence.
- Plausibility (physics window): application-independent limits that detect impossible values or transitions.
Evidence rules (minimum fields for audit and test)
Evidence is only useful when it is reproducible and traceable. Record the operating context, report pass/fail with margin, and attach identifiers that allow the exact scenario to be replayed.
| Evidence group | Must include | Why it matters |
|---|---|---|
| Context | Voltage, temperature, clock, configuration, firmware version | Enables reproduction and prevents “pass in one condition, fail in another” ambiguity |
| Result | Pass/fail + margin (Δ window, threshold distance), counters where applicable | Margins show robustness; counters quantify rates and persistence |
| Traceability | Timestamp, sequence/frame ID, test-mode ID | Supports audit trails and links evidence to specific runs and data frames |
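The three evidence groups translate naturally into a record layout. The struct below is a minimal sketch of such a schema; the field names, widths, and the completeness rule are illustrative assumptions, not a standard format.

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal evidence record following the Context / Result / Traceability
 * rule above. Field names and widths are illustrative. */
typedef struct {
    /* Context: enables reproduction */
    int16_t  temp_dC;        /* temperature in 0.1 degC steps */
    uint16_t vdd_mV;         /* supply voltage */
    uint32_t clock_hz;
    uint32_t config_hash;    /* configuration + firmware version hash */
    /* Result: pass/fail plus margin */
    bool     pass;
    int32_t  margin_lsb;     /* distance to threshold or delta window */
    uint32_t event_count;
    /* Traceability: replay the exact scenario */
    uint32_t timestamp_ms;
    uint16_t frame_id;
    uint8_t  test_mode_id;
} evidence_t;

/* An evidence entry is auditable only if context and traceability
 * fields are populated; a bare pass/fail is not. */
bool evidence_complete(const evidence_t *e)
{
    return e->config_hash != 0 && e->timestamp_ms != 0;
}
```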
Design: build a repeatable chain with redundancy, diagnostics, and evidence
A robust ASIL measurement chain is designed as a closed loop: select a redundancy topology, define a compare window with margin, attach BIST and monitoring hooks, schedule diagnostics to cover latent faults, and control nuisance trips without masking real faults.
Redundancy topology decision tree
Redundancy works only when the second path is meaningfully independent and when the comparison produces evidence (Δ and margin) that can be validated in tests. Choose the topology based on bandwidth, latency, independence strength, cost, and false-trip risk.
| Topology | Best for | Independence strength | False-trip risk | Common-cause exposure | Typical evidence |
|---|---|---|---|---|---|
| Primary + Monitor | High BW primary path with an independent check | Strong if ref/clock/routing/config are separated | Medium (compare tuning required) | Medium (shared resources must be audited) | Δ margin + monitor flags + counters |
| 1oo2 Compare | Aligned sampling and direct value comparison | Medium–strong (depends on alignment + diversity) | High if window too tight | Medium–high if shared ref/clock/layout | Δ window pass/fail + Δ margin |
| 2oo2 / Voting | Avoiding nuisance trips in noisy environments | Strong with independent paths and independent monitors | Lower (vote reduces single-sample trips) | Medium (complexity can hide latent faults) | Vote outcome + per-path evidence |
| Diversity overlay | Reducing shared-cause failures | Improves independence when applied to ref/clock/routing/vendor | Neutral (affects architecture, not thresholds) | Lower (shared exposure reduced) | Independence checklist + config evidence |
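A 1oo2 compare reduces to one testable primitive: both paths must agree within the Δ window, and the margin itself must be logged, not just the pass/fail bit. A minimal sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* 1oo2 compare: primary and monitor must agree within the delta
 * window. The compare margin (window minus observed delta) is
 * returned so it can be logged as evidence; a negative margin
 * means path divergence. Units are LSB of the primary path. */
bool compare_1oo2(int32_t primary, int32_t monitor,
                  int32_t delta_window, int32_t *margin)
{
    int32_t delta = primary - monitor;
    if (delta < 0) delta = -delta;
    *margin = delta_window - delta;
    return *margin >= 0;
}
```

Note that this comparison only produces valid evidence when the two samples are aligned in time; the timing term in the window budget below accounts for residual misalignment.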
Compare window design (Δ window)
A compare window should be budgeted rather than guessed. Δ is shaped by random noise, thermal drift, quantization effects, and timing misalignment. A too-tight window creates nuisance trips; a too-loose window hides real divergence.
| Window term | Cause | Estimate method | Temp / aging note | Evidence |
|---|---|---|---|---|
| Noise term | Random noise and interference | Measure Δ distribution; set n-sigma margin | Repeat across temperature points | Δ histogram + chosen limit |
| Drift term | Thermal and long-term drift | Corner tests; track margin vs T/time | Model aging reserve where needed | Temp-tagged Δ margin logs |
| Quantization term | Code step and rounding differences | Worst-case code boundary analysis | Usually stable; include guard band | Limit + rationale note |
| Timing term | Sampling misalignment / latency mismatch | Alignment tests; compare at controlled edges | Review under temperature/clock drift | Δ margin under alignment corners |
BIST strategy (power-up and in-field)
BIST is most effective when its signature, stimulus method, and schedule are defined up front. Power-up BIST focuses on structural health. In-field BIST focuses on latent faults and must be isolated from actuation to avoid unintended behavior.
| BIST mode | Stimulus | Coverage focus | Pass criteria | Evidence fields |
|---|---|---|---|---|
| Power-up BIST | Internal pattern / test mux | Code path, digital path, mux path | Signature match + margin | V/T/clock/config + signature + timestamp |
| In-field BIST | Internal DAC / safe external stimulus | Latent faults and drift-sensitive paths | Signature match + allowed Δ margin | Mode ID + evidence counters + sequence ID |
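A BIST pass check typically has two parts: an exact signature match for the digital path and a margin check for any analog loopback delta. The register layout and field names below are hypothetical; real devices define their own readout sequence.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical BIST readout: a digital signature plus an analog
 * loopback delta (measured minus expected, in LSB). */
typedef struct {
    uint32_t signature;
    int32_t  loopback_delta;
} bist_result_t;

/* Pass requires exact signature match AND positive margin inside
 * the allowed delta. Log the margin, not just the pass/fail bit. */
bool bist_pass(const bist_result_t *r, uint32_t expected_sig,
               int32_t allowed_delta, int32_t *margin_out)
{
    int32_t d = r->loopback_delta < 0 ? -r->loopback_delta
                                      : r->loopback_delta;
    *margin_out = allowed_delta - d;
    return (r->signature == expected_sig) && (*margin_out >= 0);
}
```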
Monitoring hooks (Ref / Supply / Clock / Input / Transport)
Monitoring hooks should be chosen for observability and testability. Each hook should be triggerable in validation and should produce evidence that can be recorded with context and traceability.
| Hook | Detects what | How to test | Evidence |
|---|---|---|---|
| Ref threshold | Reference collapse / out-of-range | Sweep ref / inject offset in validation mode | Flag + threshold margin + timestamp |
| Conversion-valid gating | Unstable startup / invalid conversions | Power sequencing tests; cold start corners | Validity flag + stable-window ID |
| Brownout / PS flags | Supply drop and reset-risk windows | Rail dip tests and restart behavior checks | Flag + voltage tag + counter |
| Clock missing / timeout | Sampling freeze or time-base loss | Clock stop tests; timeout threshold sweep | Timeout events + counter + timestamp |
| Input saturation / plausibility | Open/short, rails, impossible transitions | Boundary stimuli + open/short injection rigs | Flags + margin + event rate |
| CRC / frame / sequence / timeout | Corrupted, lost, or stale data frames | Link fault injection; noise and disconnection tests | CRC fail rate + counters + IDs |
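The transport hooks in the last row can be sketched as one receive-side check that feeds the evidence counters. The frame layout, the CRC-8 polynomial (0x07), and the counter names are illustrative assumptions; real links define their own framing and CRC.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Bitwise CRC-8, polynomial 0x07, init 0 (illustrative choice). */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80u) ? (uint8_t)((crc << 1) ^ 0x07u)
                                : (uint8_t)(crc << 1);
    }
    return crc;
}

typedef struct {
    uint8_t  payload[4];
    uint8_t  crc;
    uint8_t  seq;          /* rolling sequence counter */
    uint32_t rx_time_ms;   /* timestamp at reception */
} frame_t;

typedef struct {
    uint32_t crc_fail;
    uint32_t seq_fail;
    uint32_t stale;
} link_counters_t;

/* Validate CRC, sequence continuity, and data freshness; every
 * failure mode increments its own counter so rates stay loggable. */
bool frame_ok(const frame_t *f, uint8_t expected_seq,
              uint32_t now_ms, uint32_t timeout_ms, link_counters_t *c)
{
    bool ok = true;
    if (crc8(f->payload, sizeof(f->payload)) != f->crc) { c->crc_fail++; ok = false; }
    if (f->seq != expected_seq)                          { c->seq_fail++; ok = false; }
    if (now_ms - f->rx_time_ms > timeout_ms)             { c->stale++;    ok = false; }
    return ok;
}
```

Per-cause counters matter here: the acceptance rule later in this page requires every integrity claim to map to a loggable counter or rate, and a single merged "link error" flag cannot do that.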
Diagnostic scheduling (power-up / periodic / on-demand)
Scheduling should cover both immediate structural issues and latent faults. Power-up diagnostics establish trust. Periodic diagnostics prevent silent degradation. On-demand diagnostics can be used before critical actuation transitions.
| Diagnostic item | Layer | Trigger / frequency | Evidence | Action on fail |
|---|---|---|---|---|
| Startup integrity checks | Power-up | Every boot / wake | Flags + stable-window ID + timestamp | Hold actuation until trusted |
| BIST signature | Power-up / Periodic | Boot + defined interval | Signature + mode ID + context | Degrade or safe state |
| Link integrity | Periodic | Continuous counters | CRC rate + counters + IDs | Limit output or safe state |
| Compare window check | On-demand / Periodic | Before critical transitions | Δ margin + timestamp + config | Degrade or inhibit actuation |
When diagnostics run in special modes, actuation should be gated. Evidence should remain traceable: faults should lead to a defined degraded mode or safe state, and evidence logs should retain timestamps and identifiers.
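The scheduling rules above reduce to a small per-task state machine: run a check when its interval elapses, and report whether actuation must be gated because the check uses a special test mode. This is a minimal sketch; intervals come from the latent-fault budget, not from this code.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t interval_ms;     /* from the latent-fault budget */
    uint32_t last_run_ms;
    bool     needs_test_mode; /* true => gate actuation while running */
} diag_task_t;

bool diag_due(const diag_task_t *t, uint32_t now_ms)
{
    return (now_ms - t->last_run_ms) >= t->interval_ms;
}

/* Returns true when the caller should execute the check now (and
 * log its evidence); *gate_actuation tells the actuation gate
 * whether output must be held during the check. */
bool diag_run(diag_task_t *t, uint32_t now_ms, bool *gate_actuation)
{
    if (!diag_due(t, now_ms))
        return false;
    *gate_actuation = t->needs_test_mode;
    t->last_run_ms = now_ms;
    return true;
}
```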
False positive & nuisance trip control
Nuisance trips commonly come from noise, temperature drift, transients, EMI, and startup behavior. Suppression methods should be mechanism-specific and should declare their maximum delay to avoid masking real faults.
| Mechanism | Typical false-positive source | Suppression strategy | Safety caution |
|---|---|---|---|
| Threshold monitor | Startup transients and rail settling | Stable-window gating + hysteresis | Avoid excessive delay that hides real brownouts |
| CRC / counters | Burst EMI on links | Rate threshold + short debounce window | Do not ignore persistent CRC trends |
| Δ compare window | Noise and alignment corners | Window budgeting + multi-sample confirm | Confirm delay must not exceed safe reaction time |
| Plausibility rules | Legitimate transients that look impossible | State-machine gating + context-based limits | Over-filtering can mask real open/short events |
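The multi-sample confirm strategy from the Δ compare row can be sketched as a consecutive-hit filter. The reset-on-good-sample behavior is one common choice (an assumption here); the essential constraint is that N times the sample period stays below the safe reaction time budget, which the caller must verify.

```c
#include <stdint.h>
#include <stdbool.h>

/* Declare a fault only after N consecutive out-of-window samples.
 * N * sample_period must remain below the safe reaction time
 * budget; checking that is the caller's obligation. */
typedef struct {
    uint8_t confirm_count;   /* N */
    uint8_t hits;            /* consecutive violations so far */
} debounce_t;

bool debounce_update(debounce_t *d, bool violation)
{
    if (!violation) {
        d->hits = 0;         /* any good sample resets the filter */
        return false;
    }
    if (d->hits < d->confirm_count)
        d->hits++;
    return d->hits >= d->confirm_count;
}
```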
Engineering checklist: turn the safety case into tasks, tests, and evidence
A practical checklist converts safety requirements into a fault worksheet, a reproducible V&V plan, and an evidence log that remains traceable from production to field operation.
Requirements intake (inputs to collect)
Evidence quality depends on input completeness. Missing requirements typically lead to untestable diagnostic claims or compare windows that cannot be justified.
| Field | Why it matters | Example values | Evidence impact |
|---|---|---|---|
| ASIL target | Sets diagnostic depth and required robustness | ASIL A–D | Affects scheduling, logging, and pass margins |
| Signal type | Defines plausibility rules and input fault patterns | Current / voltage / bridge / position | Affects fault worksheet and test stimuli |
| Bandwidth & latency | Constrains topology and compare timing alignment | BW (Hz) + latency budget (ms/µs) | Affects Δ window budget and scheduling |
| Environment | Defines temperature points and EMI stress | Temp range + EMI severity + vibration | Affects pass margins and nuisance-trip tuning |
| Allowed degraded modes | Defines acceptable behavior when faults occur | Limit torque / limp-home / safe stop | Drives action-on-fail in V&V tables |
| Logging constraints | Limits how much evidence can be stored in field | Ring buffer / event-only / upload policy | Defines minimum evidence schema |
Fault model worksheet (chain-level mapping)
A worksheet row is a testable object: a defined fault, a detectable manifestation, a diagnostic mechanism, a reproducible test method, and an evidence artifact with traceable fields.
| Fault (category + example) | Effect on system | Detectability | Mechanism | Test method | Pass criteria (margin) | Evidence artifact | Owner |
|---|---|---|---|---|---|---|---|
| Input open/short | Implausible measurement leads to unsafe decision | Detected via saturation + plausibility window | Plausibility + threshold flags | Stimulus injection + open/short rig | Trip within time budget; margin logged | Event + context (V/T/clock/config) + timestamp | HW + FW |
| ADC core stuck code | Measurement freezes, hides real changes | Detected via BIST signature or compare divergence | BIST signature + Δ compare | Software injection / test mode | Signature match + Δ margin threshold | Mode ID + signature + sequence ID | FW + Test |
| Transport corruption | MCU consumes wrong data without awareness | Detected via CRC + frame/sequence counters | CRC + counters + timeout | Hardware injection / link noise / disconnect | CRC event rate within defined limits | CRC rate + counters + timestamp | HW + FW |
| Ref drift beyond budget | Systematic bias erodes safe margins | Detected via ref monitor + temp-tagged trend | Threshold + trend evidence | Stimulus injection + temperature corners | Margin preserved across T corners | Temp-tagged margin + config version | HW + Test |
V&V plan (reproducible tests and defensible coverage)
Coverage claims should map to reproducible test cases. Each test must define context (V/T/clock/config), expected detection, and pass criteria with margin at temperature and aging corners as applicable.
| Injection type | What it emulates | Best for mechanisms | Evidence expected |
|---|---|---|---|
| Software injection | Stuck values, timeouts, bad config paths | Counters, scheduling, config evidence | Events + mode IDs + timestamps |
| Hardware injection | Link noise, rail dips, clock loss | Threshold monitors, CRC/counters, timeouts | Flag rate + counters + context tags |
| Stimulus injection | Known inputs, boundary ramps, step edges | Δ compare windows, plausibility, drift margins | Δ margins + pass criteria logs |
Example test cases mapped to worksheet rows:

| Test case ID | Mapped worksheet rows | Setup context | Expected detection | Pass criteria (margin) | Evidence fields to log |
|---|---|---|---|---|---|
| VV-001 | Input open/short | Cold / hot corners; nominal clock; released FW | Plausibility + saturation flags | Trip time < budget; margin captured | V/T/clock/config + timestamp + event counter |
| VV-002 | Transport corruption | EMI injection; cable stress; nominal temperature | CRC fail events + frame/sequence counters | Rate thresholds + persistence rules | CRC rate + counters + timestamp + IDs |
Production & field strategy (close the loop)
Production screens structural defects. Field diagnostics detect random faults and drift. Logging must preserve context, margins, and traceability so events can be replayed and correlated with versions and configurations.
| Stage | Goal | Test item | Pass criteria (margin) | Evidence stored |
|---|---|---|---|---|
| Production | Screen structural defects | Power-up BIST + basic monitors | Signature match + threshold margins | Context + signature + counters |
| Field | Detect random faults and drift | Periodic monitors + counters + on-demand checks | Event rates + Δ margins within budget | Event log with timestamps and IDs |
Evidence logging minimum schema:

| Evidence group | Must include |
|---|---|
| Context | Voltage, temperature, clock, configuration, firmware version |
| Result | Pass/fail + margin, counters and rates where applicable |
| Traceability | Timestamp, sequence/frame ID, test-mode ID |
Applications: translate safety signals into ADC-chain constraints
This section maps automotive safety-relevant signals to measurable constraints on the ADC chain. It focuses on signal risk, diagnostic observability, and hard constraints such as latency, synchronization, plausibility windows, drift budgets, and transport integrity.
Domain → Signal → Constraint matrix (deliverable)
| Domain | Signal | ADC-chain constraints (short) |
|---|---|---|
| EV / Drivetrain | Phase current | Simultaneous sampling; low latency; open/short & saturation detect; Δ window with drift/noise budget. |
| EV / Drivetrain | DC link | Input transient robustness; ref/supply integrity monitors; startup stable-window gating; threshold evidence with margin. |
| EV / Drivetrain | Position feedback | Plausibility windows; drift budget with temp tags; transport integrity (CRC/sequence/timeout); config traceability. |
| Chassis | Torque | Range coverage; drift evidence across temperature; plausibility + debounce rules; compare margin tied to safe reaction time. |
| Chassis | Pressure | Open/short & saturation detect; stable-window gating on startup; periodic diagnostic scheduling; evidence counters for events. |
| Chassis | Travel sensor | Plausibility + boundary checks; open/short detection; transport IDs for stale-data prevention; configuration evidence (mode/range). |
| Body / Thermal / Battery | Voltage sense | Drift budget; ref integrity monitoring; periodic self-check with logged margin; brownout/stability gating. |
| Body / Thermal / Battery | Temperature sense | Long-term drift evidence; periodic diagnostics; timestamped logs with version/config; plausibility limits for rate-of-change. |
EV / drivetrain signals (risk → constraints)
Phase current
- Risk: a wrong phase-current measurement can drive incorrect safety-relevant control decisions.
- Constraints: simultaneous sampling, low latency, and robust input fault detection (open/short/saturation).
- Evidence focus: Δ margin logs, alignment checks, and time-bounded fault reactions.
DC link
- Risk: incorrect thresholds can hide over/under-voltage events.
- Constraints: ref/supply integrity monitors, startup stable-window gating, and transient-robust front end.
- Evidence focus: threshold events with margin and context tags (V/T/clock/config).
Position feedback
- Risk: drift or stale data can cause safety decisions to act on an incorrect position.
- Constraints: plausibility windows, drift budgets across temperature, and transport IDs to prevent stale frames.
- Evidence focus: CRC/sequence/timeout logs and configuration traceability for mode/range changes.
Chassis signals (plausibility, open/short, drift)
Torque
- Risk: bias or discontinuities reduce available safety margins.
- Constraints: drift evidence across temperature corners and plausibility rules with bounded confirm delays.
- Evidence focus: margin tracking and nuisance-trip controls that declare maximum delay.
Pressure
- Risk: open/short or saturation can mimic valid values without explicit detection.
- Constraints: boundary detection, stable-window gating, and periodic diagnostics for latent faults.
- Evidence focus: event counters and timestamped logs for trend analysis.
Travel sensor
- Risk: implausible transitions or stale frames can corrupt safety decisions.
- Constraints: plausibility limits, open/short detection, and transport traceability (sequence ID + timeout).
- Evidence focus: configuration evidence when mode/range changes occur.
Body / thermal / battery sensing (drift and periodic tests)
Voltage sense
- Risk: drift shifts thresholds and erodes protection margins.
- Constraints: ref integrity monitoring and periodic self-checks with logged margins.
- Evidence focus: brownout/stability events with context tags.
Temperature sense
- Risk: long-term bias breaks assumptions used by diagnostics and margins.
- Constraints: periodic diagnostics and timestamped logs with version/config identifiers.
- Evidence focus: drift trends with temperature tags and bounded plausibility limits.
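For slow physical signals like temperature, a rate-of-change plausibility rule is the cheapest effective check: a jump faster than physics allows indicates open/short or transport corruption, not real temperature. A minimal sketch, with an illustrative limit value:

```c
#include <stdint.h>
#include <stdbool.h>

/* Rate-of-change plausibility for a slow physical signal.
 * Values in 0.1 degC steps; max_step_dC is the largest change
 * physically possible between two samples (illustrative, must be
 * derived from thermal mass and sample rate). */
bool temp_plausible(int32_t prev_dC, int32_t now_dC, int32_t max_step_dC)
{
    int32_t step = now_dC - prev_dC;
    if (step < 0) step = -step;
    return step <= max_step_dC;
}
```

A failed plausibility check here is application-independent evidence: it flags the measurement path, regardless of which downstream function consumes the temperature.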
IC selection logic: fields → risk → verification → RFQ template
Selection should not start from a part number. Start from must-ask fields, map each field to a chain-level fault, define a reproducible verification step, and require evidence artifacts that include context, margin, and traceability.
Example shortlist part numbers (for RFQ and evaluation planning)
| Category | Example parts | What to validate (chain view) |
|---|---|---|
| Automotive ADC | TI ADS7138-Q1, TI ADS7038-Q1, ADI AD7124-8W, TI ADS131B26-Q1 | CRC/transport integrity, watchdog & event reporting, drift budgets, conversion-valid gating, configuration traceability. |
| Isolated ΔΣ modulator | TI AMC1306M25-Q1, ADI ADuM7703, ADI AD7403 | Isolation integrity requirements, modulator clocking assumptions, bitstream transport robustness, decimation chain evidence. |
| Safety-support peripherals | TI TPS653850A-Q1, TI TPS653852A-Q1 | Watchdog and clock monitoring behavior, reset causes, error pin handling, logging hooks and traceability. |
The list is intentionally short and chain-focused. Add or replace candidates based on required bandwidth, latency, resolution, channel count, and interface.
Parameter fields (must-ask questions)
Diagnostics / BIST
- Power-up BIST vs in-field BIST support and scheduling constraints.
- BIST coverage statements: ADC core, reference path, digital path, MUX/config path.
- Signature form: status bits, registers, event codes, and the required readout sequence.
- Pass criteria: pass/fail plus a measurable margin (not pass/fail only).
- Isolation of test modes: whether test activity can perturb normal conversions.
Require vendor artifacts: diagnostic description, register map, signature definition, and evidence field recommendations.
Transport integrity
- CRC support: data frames, register reads/writes, configuration integrity (if available).
- Frame/sequence counters and stale-data prevention strategy.
- Timeout definition: conversion timeout, bus timeout, and error escalation behavior.
- Failure output behavior: hold-last, clamp-to-code, invalid flag, dedicated alert pin.
Acceptance rule: every integrity claim must map to a loggable counter/rate and a reproducible link-stress test.
Clock / timeout
- DRDY / conversion-valid timing specification and corner conditions.
- Missing-clock detection options (internal or external monitor requirements).
- Startup stability window: clock/ref/supply settling requirements before conversions are trusted.
- Cross-domain timing: how timestamps/sequence IDs remain coherent across the chain.
Reference / supply monitoring
- Brownout and supply monitor events: thresholds, hysteresis, and reporting method.
- Reference integrity: detection of collapse, out-of-range, and drift beyond budget.
- Conversion gating behavior during unstable power/reference conditions.
Evidence requirement: events must include context tags (V/T/config/version) and a margin to threshold where applicable.
Temperature / aging hooks
- Temperature readback or temperature tagging method for evidence logs.
- Drift budgets: how gain/offset drift is specified across temperature and time.
- Periodic verification plan: what can be checked in-field without impacting conversions.
AEC-Q / PCN / lifecycle
- AEC-Q qualification scope and temperature grade of the ordered variant.
- Functional safety collateral availability (safety manual, diagnostic details, assumptions).
- PCN/PDN policy, lead time of change notifications, and traceability to lot/date codes.
Risk mapping table (Field → mitigates which fault → how to verify → evidence expected)
Each row must produce an evidence artifact with: context (V/T/clock/config/version), result (pass/fail + margin or counters/rates), and traceability (timestamp + sequence/frame IDs).
| Field (must ask) | Mitigates which fault | How to verify (reproducible) | Evidence expected |
|---|---|---|---|
| BIST signature + margin | ADC core faults (stuck/missing code), config path faults, latent faults | Run power-up and in-field BIST across temperature corners; repeat with forced failure modes where supported | Mode ID, signature, pass margin, timestamp, version/config hash |
| CRC on data/config | Transport corruption, silent data errors, misconfiguration | Stress the link (EMI/noise, disconnect/reconnect, clock perturbation); verify CRC triggers and recovery actions | CRC event rate, frame/sequence ID, timeout counters, timestamped context |
| Sequence ID + timeout policy | Stale data consumption, missing conversions, missing edges | Force conversion delays and bus stalls; validate stale-frame rejection and fault escalation within time budget | Sequence mismatch logs, timeout counters, fault-to-action latency measurement |
| Ref/supply monitor events | Supply collapse, reference drift, brownout-induced invalid conversions | Inject rail steps/dips and reference perturbations; verify gating behavior and event capture across temperature | Threshold crossings with margin-to-threshold, timestamps, V/T tags, reset cause where applicable |
| Plausibility / window rules | Input open/short, saturation, implausible transitions, drift beyond budget | Stimulus injection (steps/ramps/boundaries); open/short fixtures; temperature corners for drift windows | Δ margins to window, debounce state, event counters, corner-condition tags |
| PCN/traceability fields | Untracked changes, undocumented behavior drift over lifetime | Require lot/date tracking; verify evidence logs include version/config; run regression tests on new lots | Lot/date code, firmware/config hash, regression report links, change-notice records |
RFQ template (copy/paste to distributor or vendor)
Subject: RFQ – Automotive safety-relevant ADC chain (ASIL) – evidence-based evaluation
Project context
- Automotive domain: EV/Drivetrain / Chassis / Body-Thermal (select applicable)
- Safety target: ASIL [A/B/C/D]
- Signals to measure: [phase current / DC link / position / torque / pressure / travel / voltage / temperature]
- Bandwidth + latency constraints: [BW] / [latency budget]
- Environment: temperature range [ ] ; EMI severity [ ] ; supply range [ ]
- Allowed degraded modes: [limit / limp-home / safe stop] ; safe reaction time budget: [ ]
Required mechanisms (must-have)
- Transport integrity: CRC on data frames and/or configuration; frame/sequence ID; timeout policy
- Diagnostics: power-up BIST and/or in-field BIST; signature definition; pass criteria includes margin
- Monitoring hooks: ref/supply events; conversion-valid gating during unstable conditions
- Plausibility/windows: saturation/open/short detection; plausibility limits; debounced fault declaration
- Traceability: version/config identifiers in logs; lot/date code traceability; PCN policy
Evidence artifact requirements (minimum)
- Context tags: V/T/clock/config + firmware version
- Results: pass/fail + margin where applicable; counters/rates for CRC/timeouts/events
- Traceability: timestamp + sequence/frame IDs; test-mode IDs for any diagnostics
Candidate parts (examples for quotation and collateral)
- Automotive ADC candidates: TI ADS7138-Q1; TI ADS7038-Q1; ADI AD7124-8W; TI ADS131B26-Q1
- Isolated ΔΣ modulator candidates (if used): TI AMC1306M25-Q1; ADI ADuM7703; ADI AD7403
- Safety-support peripherals (if used): TI TPS653850A-Q1; TI TPS653852A-Q1
Requested deliverables with quotation
1) Pricing and availability for the selected candidates and relevant package/grade variants
2) Documentation: datasheets + register maps + diagnostic descriptions + safety collateral availability
3) PCN/PDN policy and long-term supply statement
4) Recommended evaluation guidance for CRC/event reporting and diagnostic scheduling
Sample evaluation plan (to be aligned)
- Link integrity: EMI/noise stress; CRC event rate; timeout behavior; recovery action timing
- BIST/diagnostics: run across temperature corners; verify signature and margin reporting
- Power/reference: rail steps/dips; ref perturbation; conversion-valid gating evidence
- Plausibility/windows: boundary/step/ramp stimuli; open/short fixtures; Δ margin logging
- Evidence logging: verify required fields present; validate traceability and repeatability
Please respond with: part variant recommendations, collateral availability, and any assumptions needed for diagnostic claims.
FAQ: safety-capable ADC chains (data-driven answers)
These answers are structured as reproducible verification steps and evidence fields, so each claim can be tested, logged, and traced.
- Mechanisms & evidence
- Redundancy & compare windows
- Scheduling & latent faults
- False positives & EMI
- RFQ & evaluation