
Automotive ASIL ADCs: BIST, Redundancy, and Diagnostics


An ASIL-relevant ADC chain is not “a high-spec ADC” — it is an evidence-closed loop: each chain-level fault is mapped to a diagnostic mechanism, verified by reproducible tests, and backed by logged artifacts with context, margins, and traceability.

This page provides the fault→mechanism→evidence tables, redundancy and compare-window rules, scheduling and nuisance-trip controls, plus an RFQ and evaluation plan that can be copied into real projects.

Definition: what “ASIL-ready ADC chain” means in practice

An ASIL-ready ADC chain is a measurement path that turns hardware faults into testable evidence and safe reactions. Performance specs describe how accurately signals are converted; safety evidence proves the chain can detect faults before actuation becomes unsafe.

What is safety-relevant in an ADC chain

In automotive safety systems, the ADC is only one part of the safety-relevant measurement chain. Safety relevance spans the full path from sensing and analog conditioning to digital transport, software decisions, and the final actuation gate that enforces a safe state.

  • Sensor: failures must be observable (open/short, implausible ranges).
  • AFE / protection: saturation and clamp events must not hide faults.
  • ADC core: conversion health must be checkable (self-test and validity flags).
  • Transport: corrupted frames must be detectable (CRC, counters, timeout).
  • MCU decision: faults must map to defined reactions (degrade/limit/shutdown).
  • Actuation gate: unsafe output must be blocked when measurement trust is lost.

Safety terms to operationalize (only what is needed)

Random hardware fault

An unpredictable hardware failure (for example: stuck codes, missing clock edges, reference collapse) that must be detected by defined mechanisms.

Latent fault

A hidden fault that may not show up during normal operation unless periodic tests are scheduled (for example: a broken monitor path).

Diagnostic mechanism

A mechanism that converts faults into observable evidence: BIST signatures, monitors and thresholds, redundant comparisons, or link integrity checks.

Evidence artifact

A recordable, traceable result (flags, counters, signatures, compare deltas) captured with context (temperature, voltage, configuration, version).

What this page delivers

  1. Fault → Mechanism → Evidence mapping that keeps safety claims testable.
  2. Redundancy selection for primary/monitor paths and compare windows.
  3. V&V checklist from requirements to tests to evidence artifacts.
  4. RFQ template with parameter fields and safety questions for suppliers.
[Figure: Safety measurement chain with evidence loop. Sensor → AFE → ADC → transport → MCU safety logic → actuation safe gate, with a safety-mechanisms block (BIST signature; ref/supply/clock monitors; redundancy Δ compare) feeding fault status and evidence. ASIL-ready ADC chain = diagnostic evidence + safe reaction across the whole measurement path.]

Principle: turn “safety” into testable, table-based evidence

A safety argument becomes strong when each fault category maps to a specific diagnostic mechanism and produces recordable evidence. Use a consistent Fault → Mechanism → Evidence structure to avoid assumed coverage and to keep claims verifiable.

ADC-chain fault taxonomy (8 chain-level categories)

List faults at the measurement-chain level (input, conversion, transport, and operating conditions). Each category should be observable in software through flags, counters, signatures, or comparison margins.

Input integrity

Open/short, saturation, or implausible motion/range that can be detected with limits and plausibility rules.

ADC core integrity

Stuck/missing codes or gain-offset drift that must be detectable by BIST, code checks, or redundant comparisons.

Reference / supply integrity

Reference collapse or drift, brownout events, and “conversion invalid” windows during rail instability.

Clock & timing integrity

Missing edges, timeouts, or timing anomalies that can freeze sampling or break compare windows.

Digital transport integrity

CRC errors, frame loss, out-of-sequence data, and timeout events on SPI/LVDS/JESD-style links.

Thermal & aging drift

Drift beyond budget across temperature and life that erodes margins unless monitored and logged with context.

Configuration faults

Wrong mode/range/reference selection that can create stable but incorrect measurements without explicit configuration evidence.

Common-cause candidates

Shared clock/reference/layout risks that can defeat redundancy if both paths fail in the same way.

Mechanism types (classified by evidence shape)

Mechanisms are most useful when classified by the kind of evidence they output. Evidence shapes should be stable across designs and easy to validate.

  • Signature (BIST): pass/fail plus a signature value that can be logged and traced.
  • Threshold flag (monitor): ref/supply/clock/input monitors that raise deterministic flags at boundaries.
  • Counter (frame/timeout): counters and watchdogs that reveal loss of data freshness or missing frames.
  • Delta compare (redundancy window): compare margin (Δ) against a defined window to detect path divergence.
  • Plausibility (physics window): application-independent limits that detect impossible values or transitions.
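The last mechanism type can be made concrete with a few lines of code. The sketch below is illustrative only: the limit values and names (`plaus_limits_t`, `max_step`) are assumptions, not taken from any device or standard; real limits come from the signal's physics and the fault worksheet.

```c
#include <stdbool.h>
#include <math.h>

/* Illustrative plausibility check: a sample must lie inside a physics
 * window and must not jump faster than a believable rate of change.
 * All limits here are placeholder values. */
typedef struct {
    float min_value;   /* lower physical bound */
    float max_value;   /* upper physical bound */
    float max_step;    /* largest believable change per sample */
} plaus_limits_t;

bool plausible(const plaus_limits_t *lim, float prev, float curr)
{
    if (curr < lim->min_value || curr > lim->max_value)
        return false;                      /* impossible absolute value */
    if (fabsf(curr - prev) > lim->max_step)
        return false;                      /* impossible transition */
    return true;
}
```

A failed check should raise a flag that feeds the evidence log, not silently clamp the value.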

Evidence rules (minimum fields for audit and test)

Evidence is only useful when it is reproducible and traceable. Record the operating context, report pass/fail with margin, and attach identifiers that allow the exact scenario to be replayed.

Evidence group | Must include | Why it matters
Context | Voltage, temperature, clock, configuration, firmware version | Enables reproduction and prevents “pass in one condition, fail in another” ambiguity
Result | Pass/fail + margin (Δ window, threshold distance), counters where applicable | Margins show robustness; counters quantify rates and persistence
Traceability | Timestamp, sequence/frame ID, test-mode ID | Supports audit trails and links evidence to specific runs and data frames
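The three evidence groups map naturally onto a fixed record layout. The struct below is a minimal sketch; every field name, width, and unit is an assumption chosen for illustration, not a required schema.

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal evidence record sketch following the three groups above:
 * context, result, traceability. Field names and units are illustrative. */
typedef struct {
    /* context: enables reproduction */
    int16_t  temp_dC;       /* temperature, deci-degrees C */
    uint16_t supply_mV;     /* supply voltage, millivolts */
    uint32_t config_hash;   /* configuration + firmware version hash */
    /* result: pass/fail plus margin, counters where applicable */
    uint8_t  pass;          /* 1 = pass */
    int32_t  margin_lsb;    /* distance to threshold or Δ window, in LSB */
    uint32_t event_count;   /* rate / persistence counter */
    /* traceability: links evidence to a specific run */
    uint32_t timestamp_us;
    uint16_t seq_id;        /* sequence / frame ID */
    uint8_t  test_mode_id;  /* which diagnostic produced this record */
} evidence_t;

/* A record without context or traceability cannot be replayed. */
bool evidence_traceable(const evidence_t *e)
{
    return e->timestamp_us != 0 && e->config_hash != 0;
}
```

Rejecting incomplete records at write time is cheaper than discovering unusable logs during an audit.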
[Figure: Fault → mechanism → evidence map for automotive ASIL ADC chains. Three columns list common fault categories, diagnostic mechanism types, and evidence artifacts to record; the point is to keep safety claims verifiable by logging evidence with context and margins.]

Design: build a repeatable chain with redundancy, diagnostics, and evidence

A robust ASIL measurement chain is designed as a closed loop: select a redundancy topology, define a compare window with margin, attach BIST and monitoring hooks, schedule diagnostics to cover latent faults, and control nuisance trips without masking real faults.

Redundancy topology decision tree

Redundancy works only when the second path is meaningfully independent and when the comparison produces evidence (Δ and margin) that can be validated in tests. Choose the topology based on bandwidth, latency, independence strength, cost, and false-trip risk.

Topology | Best for | Independence strength | False-trip risk | Common-cause exposure | Typical evidence
Primary + Monitor | High-BW primary path with an independent check | Strong if ref/clock/routing/config are separated | Medium (compare tuning required) | Medium (shared resources must be audited) | Δ margin + monitor flags + counters
1oo2 Compare | Aligned sampling and direct value comparison | Medium–strong (depends on alignment + diversity) | High if window too tight | Medium–high if shared ref/clock/layout | Δ window pass/fail + Δ margin
2oo2 / Voting | Avoiding nuisance trips in noisy environments | Strong with independent paths and independent monitors | Lower (vote reduces single-sample trips) | Medium (complexity can hide latent faults) | Vote outcome + per-path evidence
Diversity overlay | Reducing shared-cause failures | Improves independence when applied to ref/clock/routing/vendor | Neutral (affects architecture, not thresholds) | Lower (shared exposure reduced) | Independence checklist + config evidence
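A 1oo2 compare can be reduced to a few lines, and the key design point is that the comparison should return the margin itself as evidence, not just a pass/fail bit. The function below is a sketch with illustrative names; the window value is assumed to come from the budgeting step described next.

```c
#include <stdlib.h>
#include <stdbool.h>
#include <stdint.h>

/* 1oo2 compare sketch: two time-aligned samples agree when |Δ| stays
 * inside the budgeted window. The margin is logged as evidence. */
typedef struct {
    bool    within_window;
    int32_t delta;    /* signed divergence, LSB */
    int32_t margin;   /* window minus |Δ|; negative on violation */
} compare_result_t;

compare_result_t compare_1oo2(int32_t primary, int32_t monitor, int32_t window)
{
    compare_result_t r;
    r.delta  = primary - monitor;
    r.margin = window - abs(r.delta);
    r.within_window = (r.margin >= 0);
    return r;
}
```

Logging `margin` over temperature corners is what turns the compare window from a tuned constant into defensible evidence.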

Compare window design (Δ window)

A compare window should be budgeted rather than guessed. Δ is shaped by random noise, thermal drift, quantization effects, and timing misalignment. A too-tight window creates nuisance trips; a too-loose window hides real divergence.

Window term | Cause | Estimate method | Temp / aging note | Evidence
Noise term | Random noise and interference | Measure Δ distribution; set n-sigma margin | Repeat across temperature points | Δ histogram + chosen limit
Drift term | Thermal and long-term drift | Corner tests; track margin vs T/time | Model aging reserve where needed | Temp-tagged Δ margin logs
Quantization term | Code step and rounding differences | Worst-case code boundary analysis | Usually stable; include guard band | Limit + rationale note
Timing term | Sampling misalignment / latency mismatch | Alignment tests; compare at controlled edges | Review under temperature/clock drift | Δ margin under alignment corners

BIST strategy (power-up and in-field)

BIST is most effective when its signature, stimulus method, and schedule are defined up front. Power-up BIST focuses on structural health. In-field BIST focuses on latent faults and must be isolated from actuation to avoid unintended behavior.

BIST mode | Stimulus | Coverage focus | Pass criteria | Evidence fields
Power-up BIST | Internal pattern / test mux | Code path, digital path, mux path | Signature match + margin | V/T/clock/config + signature + timestamp
In-field BIST | Internal DAC / safe external stimulus | Latent faults and drift-sensitive paths | Signature match + allowed Δ margin | Mode ID + evidence counters + sequence ID
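The signature mechanics can be sketched generically: fold conversion results from a known stimulus into a single value and compare against a golden reference. The fold function below is illustrative only; real devices define their own signature algorithm and golden values in the datasheet or safety manual.

```c
#include <stdint.h>
#include <stdbool.h>

/* BIST signature sketch: fold conversion codes from a known stimulus into
 * a 32-bit signature. The fold constant and golden value are illustrative,
 * not taken from any device. */
uint32_t bist_signature(const int32_t *codes, int n)
{
    uint32_t sig = 0xFFFFFFFFu;
    for (int i = 0; i < n; i++)
        sig = (sig * 31u) ^ (uint32_t)codes[i];  /* simple polynomial fold */
    return sig;
}

bool bist_pass(const int32_t *codes, int n, uint32_t golden)
{
    return bist_signature(codes, n) == golden;
}
```

A pass/fail bit alone is weak evidence; log the signature value itself plus the mode ID and context so the run can be replayed.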

Monitoring hooks (Ref / Supply / Clock / Input / Transport)

Monitoring hooks should be chosen for observability and testability. Each hook should be triggerable in validation and should produce evidence that can be recorded with context and traceability.

Hook | Detects what | How to test | Evidence
Ref threshold | Reference collapse / out-of-range | Sweep ref / inject offset in validation mode | Flag + threshold margin + timestamp
Conversion-valid gating | Unstable startup / invalid conversions | Power sequencing tests; cold-start corners | Validity flag + stable-window ID
Brownout / PS flags | Supply drop and reset-risk windows | Rail dip tests and restart behavior checks | Flag + voltage tag + counter
Clock missing / timeout | Sampling freeze or time-base loss | Clock stop tests; timeout threshold sweep | Timeout events + counter + timestamp
Input saturation / plausibility | Open/short, rails, impossible transitions | Boundary stimuli + open/short injection rigs | Flags + margin + event rate
CRC / frame / sequence / timeout | Corrupted, lost, or stale data frames | Link fault injection; noise and disconnection tests | CRC fail rate + counters + IDs
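The last row combines three independent checks on every received frame. The sketch below shows one way to wire them together; the CRC-8 polynomial (0x07) is a common choice but is an assumption here, as are the frame layout and field names.

```c
#include <stdint.h>
#include <stdbool.h>

/* Transport integrity sketch: a frame is trusted only if its CRC matches,
 * the sequence counter advances by exactly one, and the data is fresh.
 * Frame layout and polynomial (CRC-8, 0x07) are illustrative. */
uint8_t crc8(const uint8_t *d, int n)
{
    uint8_t crc = 0;
    for (int i = 0; i < n; i++) {
        crc ^= d[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

typedef struct { uint8_t payload[4]; uint8_t seq; uint8_t crc; } frame_t;

bool frame_ok(const frame_t *f, uint8_t prev_seq,
              uint32_t now_us, uint32_t rx_us, uint32_t timeout_us)
{
    uint8_t buf[5];
    for (int i = 0; i < 4; i++) buf[i] = f->payload[i];
    buf[4] = f->seq;
    if (crc8(buf, 5) != f->crc)            return false; /* corrupted */
    if (f->seq != (uint8_t)(prev_seq + 1)) return false; /* lost / stale */
    if (now_us - rx_us > timeout_us)       return false; /* not fresh */
    return true;
}
```

Each rejection path should increment a distinct counter (CRC failures, sequence gaps, timeouts) so persistent trends stay visible even when individual frames are debounced.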

Diagnostic scheduling (power-up / periodic / on-demand)

Scheduling should cover both immediate structural issues and latent faults. Power-up diagnostics establish trust. Periodic diagnostics prevent silent degradation. On-demand diagnostics can be used before critical actuation transitions.

Diagnostic item | Layer | Trigger / frequency | Evidence | Action on fail
Startup integrity checks | Power-up | Every boot / wake | Flags + stable-window ID + timestamp | Hold actuation until trusted
BIST signature | Power-up / Periodic | Boot + defined interval | Signature + mode ID + context | Degrade or safe state
Link integrity | Periodic | Continuous counters | CRC rate + counters + IDs | Limit output or safe state
Compare window check | On-demand / Periodic | Before critical transitions | Δ margin + timestamp + config | Degrade or inhibit actuation

When diagnostics run in special modes, actuation should be gated. Evidence should remain traceable: faults should lead to a defined degraded mode or safe state, and evidence logs should retain timestamps and identifiers.
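The two rules above (gate actuation until startup checks pass; run periodic checks on interval) can be captured in a tiny scheduler skeleton. This is a sketch with assumed names and a single periodic item; a real scheduler tracks one interval per diagnostic and the safe reaction budget for each.

```c
#include <stdint.h>
#include <stdbool.h>

/* Scheduling sketch: actuation is gated until power-up diagnostics pass,
 * and periodic in-field BIST runs on interval expiry. Names and the
 * single-interval structure are illustrative. */
typedef struct {
    bool     startup_trusted;    /* set once power-up diagnostics pass */
    uint32_t last_bist_ms;       /* time of last completed BIST */
    uint32_t bist_interval_ms;   /* periodic in-field BIST period */
} diag_sched_t;

bool actuation_allowed(const diag_sched_t *s)
{
    return s->startup_trusted;
}

bool bist_due(const diag_sched_t *s, uint32_t now_ms)
{
    /* unsigned subtraction handles timer wrap-around */
    return (now_ms - s->last_bist_ms) >= s->bist_interval_ms;
}
```

The interval itself is a safety parameter: it bounds how long a latent fault can remain undetected, so it belongs in the evidence log alongside the results.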

False positive & nuisance trip control

Nuisance trips commonly come from noise, temperature drift, transients, EMI, and startup behavior. Suppression methods should be mechanism-specific and should declare their maximum delay to avoid masking real faults.

Mechanism | Typical false-positive source | Suppression strategy | Safety caution
Threshold monitor | Startup transients and rail settling | Stable-window gating + hysteresis | Avoid excessive delay that hides real brownouts
CRC / counters | Burst EMI on links | Rate threshold + short debounce window | Do not ignore persistent CRC trends
Δ compare window | Noise and alignment corners | Window budgeting + multi-sample confirm | Confirm delay must not exceed safe reaction time
Plausibility rules | Legitimate transients that look impossible | State-machine gating + context-based limits | Over-filtering can mask real open/short events
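The multi-sample confirm strategy, together with its declared maximum delay, can be sketched as a counter filter plus a budget check. Names and values are illustrative; the essential property is that the confirm delay is computed and checked against the safe reaction time rather than tuned by feel.

```c
#include <stdint.h>
#include <stdbool.h>

/* Debounce sketch: a fault is declared only after confirm_n consecutive
 * bad samples. confirm_n * sample period must stay within the safe
 * reaction budget so suppression cannot mask a real fault. */
typedef struct {
    uint8_t count;       /* consecutive bad samples seen so far */
    uint8_t confirm_n;   /* consecutive samples needed to declare */
} debounce_t;

bool debounce_update(debounce_t *d, bool raw_fault)
{
    if (raw_fault) {
        if (d->count < d->confirm_n) d->count++;
    } else {
        d->count = 0;    /* any clean sample resets the filter */
    }
    return d->count >= d->confirm_n;   /* confirmed fault */
}

/* Design-time check: declared maximum confirm delay vs reaction budget. */
bool confirm_delay_ok(uint8_t confirm_n, uint32_t sample_us, uint32_t budget_us)
{
    return (uint32_t)confirm_n * sample_us <= budget_us;
}
```

If `confirm_delay_ok` fails, either shorten the confirm count or shorten the sample period; loosening the budget is a safety decision, not a tuning step.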
[Figure: Redundant ADC topology with compare window, diagnostics, and safe reaction. Sensor and AFE feed a low-latency primary ADC and an independent monitor ADC; a compare block checks Δ ≤ window and reports margin, a diagnostics block (BIST signature; ref/supply/clock monitors; CRC/counters; plausibility) produces evidence, and MCU safety logic drives a safe reaction gate from fault status.]

Engineering checklist: turn the safety case into tasks, tests, and evidence

A practical checklist converts safety requirements into a fault worksheet, a reproducible V&V plan, and an evidence log that remains traceable from production to field operation.

Requirements intake (inputs to collect)

Evidence quality depends on input completeness. Missing requirements typically lead to untestable diagnostic claims or compare windows that cannot be justified.

Field | Why it matters | Example values | Evidence impact
ASIL target | Sets diagnostic depth and required robustness | ASIL A–D | Affects scheduling, logging, and pass margins
Signal type | Defines plausibility rules and input fault patterns | Current / voltage / bridge / position | Affects fault worksheet and test stimuli
Bandwidth & latency | Constrains topology and compare timing alignment | BW (Hz) + latency budget (ms/µs) | Affects Δ window budget and scheduling
Environment | Defines temperature points and EMI stress | Temp range + EMI severity + vibration | Affects pass margins and nuisance-trip tuning
Allowed degraded modes | Defines acceptable behavior when faults occur | Limit torque / limp-home / safe stop | Drives action-on-fail in V&V tables
Logging constraints | Limits how much evidence can be stored in field | Ring buffer / event-only / upload policy | Defines minimum evidence schema

Fault model worksheet (chain-level mapping)

A worksheet row is a testable object: a defined fault, a detectable manifestation, a diagnostic mechanism, a reproducible test method, and an evidence artifact with traceable fields.

Fault (category + example) | Effect on system | Detectability | Mechanism | Test method | Pass criteria (margin) | Evidence artifact | Owner
Input open/short | Implausible measurement leads to unsafe decision | Detected via saturation + plausibility window | Plausibility + threshold flags | Stimulus injection + open/short rig | Trip within time budget; margin logged | Event + context (V/T/clock/config) + timestamp | HW + FW
ADC core stuck code | Measurement freezes, hides real changes | Detected via BIST signature or compare divergence | BIST signature + Δ compare | Software injection / test mode | Signature match + Δ margin threshold | Mode ID + signature + sequence ID | FW + Test
Transport corruption | MCU consumes wrong data without awareness | Detected via CRC + frame/sequence counters | CRC + counters + timeout | Hardware injection / link noise / disconnect | CRC event rate within defined limits | CRC rate + counters + timestamp | HW + FW
Ref drift beyond budget | Systematic bias erodes safe margins | Detected via ref monitor + temp-tagged trend | Threshold + trend evidence | Stimulus injection + temperature corners | Margin preserved across T corners | Temp-tagged margin + config version | HW + Test

V&V plan (reproducible tests and defensible coverage)

Coverage claims should map to reproducible test cases. Each test must define context (V/T/clock/config), expected detection, and pass criteria with margin at temperature and aging corners as applicable.

Injection type | What it emulates | Best for mechanisms | Evidence expected
Software injection | Stuck values, timeouts, bad config paths | Counters, scheduling, config evidence | Events + mode IDs + timestamps
Hardware injection | Link noise, rail dips, clock loss | Threshold monitors, CRC/counters, timeouts | Flag rate + counters + context tags
Stimulus injection | Known inputs, boundary ramps, step edges | Δ compare windows, plausibility, drift margins | Δ margins + pass criteria logs
Test case ID | Mapped worksheet rows | Setup context | Expected detection | Pass criteria (margin) | Evidence fields to log
VV-001 | Input open/short | Cold / hot corners; nominal clock; released FW | Plausibility + saturation flags | Trip time < budget; margin captured | V/T/clock/config + timestamp + event counter
VV-002 | Transport corruption | EMI injection; cable stress; nominal temperature | CRC fail events + frame/sequence counters | Rate thresholds + persistence rules | CRC rate + counters + timestamp + IDs

Production & field strategy (close the loop)

Production screens structural defects. Field diagnostics detect random faults and drift. Logging must preserve context, margins, and traceability so events can be replayed and correlated with versions and configurations.

Stage | Goal | Test item | Pass criteria (margin) | Evidence stored
Production | Screen structural defects | Power-up BIST + basic monitors | Signature match + threshold margins | Context + signature + counters
Field | Detect random faults and drift | Periodic monitors + counters + on-demand checks | Event rates + Δ margins within budget | Event log with timestamps and IDs
Evidence logging minimum schema | Must include
Context | Voltage, temperature, clock, configuration, firmware version
Result | Pass/fail + margin, counters and rates where applicable
Traceability | Timestamp, sequence/frame ID, test-mode ID
[Figure: Evidence flow from requirements to faults, mechanisms, tests, and evidence artifacts. Requirements intake feeds a fault worksheet, which maps to mechanisms, V&V tests (injection, corners, pass margins), and evidence artifacts (context, margin, timestamp + ID), closed by a production-and-field feedback loop.]

Applications: translate safety signals into ADC-chain constraints

This section maps automotive safety-relevant signals to measurable constraints on the ADC chain. It focuses on signal risk, diagnostic observability, and hard constraints such as latency, synchronization, plausibility windows, drift budgets, and transport integrity.

Domain → Signal → Constraint matrix (deliverable)

Domain | Signal | ADC-chain constraints (short)
EV / Drivetrain | Phase current | Simultaneous sampling; low latency; open/short & saturation detect; Δ window with drift/noise budget.
EV / Drivetrain | DC link | Input transient robustness; ref/supply integrity monitors; startup stable-window gating; threshold evidence with margin.
EV / Drivetrain | Position feedback | Plausibility windows; drift budget with temp tags; transport integrity (CRC/sequence/timeout); config traceability.
Chassis | Torque | Range coverage; drift evidence across temperature; plausibility + debounce rules; compare margin tied to safe reaction time.
Chassis | Pressure | Open/short & saturation detect; stable-window gating on startup; periodic diagnostic scheduling; evidence counters for events.
Chassis | Travel sensor | Plausibility + boundary checks; open/short detection; transport IDs for stale-data prevention; configuration evidence (mode/range).
Body / Thermal / Battery | Voltage sense | Drift budget; ref integrity monitoring; periodic self-check with logged margin; brownout/stability gating.
Body / Thermal / Battery | Temperature sense | Long-term drift evidence; periodic diagnostics; timestamped logs with version/config; plausibility limits for rate-of-change.

EV / drivetrain signals (risk → constraints)

Phase current

  • Risk: wrong measurement can drive incorrect safety-relevant decisions based on current.
  • Constraints: simultaneous sampling, low latency, and robust input fault detection (open/short/saturation).
  • Evidence focus: Δ margin logs, alignment checks, and time-bounded fault reactions.

DC link

  • Risk: incorrect thresholds can hide over/under-voltage events.
  • Constraints: ref/supply integrity monitors, startup stable-window gating, and transient-robust front end.
  • Evidence focus: threshold events with margin and context tags (V/T/clock/config).

Position feedback

  • Risk: drift or stale data can cause decisions based on incorrect position signals.
  • Constraints: plausibility windows, drift budgets across temperature, and transport IDs to prevent stale frames.
  • Evidence focus: CRC/sequence/timeout logs and configuration traceability for mode/range changes.

Chassis signals (plausibility, open/short, drift)

Torque

  • Risk: bias or discontinuities reduce available safety margins.
  • Constraints: drift evidence across temperature corners and plausibility rules with bounded confirm delays.
  • Evidence focus: margin tracking and nuisance-trip controls that declare maximum delay.

Pressure

  • Risk: open/short or saturation can mimic valid values without explicit detection.
  • Constraints: boundary detection, stable-window gating, and periodic diagnostics for latent faults.
  • Evidence focus: event counters and timestamped logs for trend analysis.

Travel sensor

  • Risk: implausible transitions or stale frames can corrupt safety decisions.
  • Constraints: plausibility limits, open/short detection, and transport traceability (sequence ID + timeout).
  • Evidence focus: configuration evidence when mode/range changes occur.

Body / thermal / battery sensing (drift and periodic tests)

Voltage sense

  • Risk: drift shifts thresholds and erodes protection margins.
  • Constraints: ref integrity monitoring and periodic self-checks with logged margins.
  • Evidence focus: brownout/stability events with context tags.

Temperature sense

  • Risk: long-term bias breaks assumptions used by diagnostics and margins.
  • Constraints: periodic diagnostics and timestamped logs with version/config identifiers.
  • Evidence focus: drift trends with temperature tags and bounded plausibility limits.
[Figure: Application matrix from domain to signals to ADC-chain constraints. Domains (EV/drivetrain, chassis, body/thermal/battery) map to signals (phase current, DC link, position feedback, torque, pressure, travel sensor) and to constraints (sync + low latency; open/short + saturation; plausibility windows; drift budget + temp tags; ref/supply integrity; CRC + sequence + timeout). Mappings stay short; constraints feed diagnostics and evidence logging.]

IC selection logic: fields → risk → verification → RFQ template

Selection should not start from a part number. Start from must-ask fields, map each field to a chain-level fault, define a reproducible verification step, and require evidence artifacts that include context, margin, and traceability.

Example shortlist part numbers (for RFQ and evaluation planning)

Category | Example parts | What to validate (chain view)
Automotive ADC | TI ADS7138-Q1, TI ADS7038-Q1, ADI AD7124-8W, TI ADS131B26-Q1 | CRC/transport integrity, watchdog & event reporting, drift budgets, conversion-valid gating, configuration traceability.
Isolated ΔΣ modulator | TI AMC1306M25-Q1, ADI ADuM7703, ADI AD7403 | Isolation integrity requirements, modulator clocking assumptions, bitstream transport robustness, decimation chain evidence.
Safety-support peripherals | TI TPS653850A-Q1, TI TPS653852A-Q1 | Watchdog and clock monitoring behavior, reset causes, error pin handling, logging hooks and traceability.

The list is intentionally short and chain-focused. Add or replace candidates based on required bandwidth, latency, resolution, channel count, and interface.

Parameter fields (must-ask questions)

Diagnostics / BIST

  • Power-up BIST vs in-field BIST support and scheduling constraints.
  • BIST coverage statements: ADC core, reference path, digital path, MUX/config path.
  • Signature form: status bits, registers, event codes, and the required readout sequence.
  • Pass criteria: pass/fail plus a measurable margin (not pass/fail only).
  • Isolation of test modes: whether test activity can perturb normal conversions.

Require vendor artifacts: diagnostic description, register map, signature definition, and evidence field recommendations.

Transport integrity

  • CRC support: data frames, register reads/writes, configuration integrity (if available).
  • Frame/sequence counters and stale-data prevention strategy.
  • Timeout definition: conversion timeout, bus timeout, and error escalation behavior.
  • Failure output behavior: hold-last, clamp-to-code, invalid flag, dedicated alert pin.

Acceptance rule: every integrity claim must map to a loggable counter/rate and a reproducible link-stress test.

Clock / timeout

  • DRDY / conversion-valid timing specification and corner conditions.
  • Missing-clock detection options (internal or external monitor requirements).
  • Startup stability window: clock/ref/supply settling requirements before conversions are trusted.
  • Cross-domain timing: how timestamps/sequence IDs remain coherent across the chain.

Reference / supply monitoring

  • Brownout and supply monitor events: thresholds, hysteresis, and reporting method.
  • Reference integrity: detection of collapse, out-of-range, and drift beyond budget.
  • Conversion gating behavior during unstable power/reference conditions.

Evidence requirement: events must include context tags (V/T/config/version) and a margin to threshold where applicable.

Temperature / aging hooks

  • Temperature readback or temperature tagging method for evidence logs.
  • Drift budgets: how gain/offset drift is specified across temperature and time.
  • Periodic verification plan: what can be checked in-field without impacting conversions.

AEC-Q / PCN / lifecycle

  • AEC-Q qualification scope and temperature grade of the ordered variant.
  • Functional safety collateral availability (safety manual, diagnostic details, assumptions).
  • PCN/PDN policy, lead time of change notifications, and traceability to lot/date codes.

Risk mapping table (Field → mitigates which fault → how to verify → evidence expected)

Each row must produce an evidence artifact with: context (V/T/clock/config/version), result (pass/fail + margin or counters/rates), and traceability (timestamp + sequence/frame IDs).

Field (must ask) | Mitigates which fault | How to verify (reproducible) | Evidence expected
BIST signature + margin | ADC core faults (stuck/missing code), config path faults, latent faults | Run power-up and in-field BIST across temperature corners; repeat with forced failure modes where supported | Mode ID, signature, pass margin, timestamp, version/config hash
CRC on data/config | Transport corruption, silent data errors, misconfiguration | Stress the link (EMI/noise, disconnect/reconnect, clock perturbation); verify CRC triggers and recovery actions | CRC event rate, frame/sequence ID, timeout counters, timestamped context
Sequence ID + timeout policy | Stale data consumption, missing conversions, missing edges | Force conversion delays and bus stalls; validate stale-frame rejection and fault escalation within time budget | Sequence mismatch logs, timeout counters, fault-to-action latency measurement
Ref/supply monitor events | Supply collapse, reference drift, brownout-induced invalid conversions | Inject rail steps/dips and reference perturbations; verify gating behavior and event capture across temperature | Threshold crossings with margin-to-threshold, timestamps, V/T tags, reset cause where applicable
Plausibility / window rules | Input open/short, saturation, implausible transitions, drift beyond budget | Stimulus injection (steps/ramps/boundaries); open/short fixtures; temperature corners for drift windows | Δ margins to window, debounce state, event counters, corner-condition tags
PCN/traceability fields | Untracked changes, undocumented behavior drift over lifetime | Require lot/date tracking; verify evidence logs include version/config; run regression tests on new lots | Lot/date code, firmware/config hash, regression report links, change-notice records

RFQ template (copy/paste to distributor or vendor)

Subject: RFQ – Automotive safety-relevant ADC chain (ASIL) – evidence-based evaluation

Project context
- Automotive domain: EV/Drivetrain / Chassis / Body-Thermal (select applicable)
- Safety target: ASIL [A/B/C/D]
- Signals to measure: [phase current / DC link / position / torque / pressure / travel / voltage / temperature]
- Bandwidth + latency constraints: [BW] / [latency budget]
- Environment: temperature range [ ] ; EMI severity [ ] ; supply range [ ]
- Allowed degraded modes: [limit / limp-home / safe stop] ; safe reaction time budget: [ ]

Required mechanisms (must-have)
- Transport integrity: CRC on data frames and/or configuration; frame/sequence ID; timeout policy
- Diagnostics: power-up BIST and/or in-field BIST; signature definition; pass criteria includes margin
- Monitoring hooks: ref/supply events; conversion-valid gating during unstable conditions
- Plausibility/windows: saturation/open/short detection; plausibility limits; debounced fault declaration
- Traceability: version/config identifiers in logs; lot/date code traceability; PCN policy

Evidence artifact requirements (minimum)
- Context tags: V/T/clock/config + firmware version
- Results: pass/fail + margin where applicable; counters/rates for CRC/timeouts/events
- Traceability: timestamp + sequence/frame IDs; test-mode IDs for any diagnostics

Candidate parts (examples for quotation and collateral)
- Automotive ADC candidates: TI ADS7138-Q1; TI ADS7038-Q1; ADI AD7124-8W; TI ADS131B26-Q1
- Isolated ΔΣ modulator candidates (if used): TI AMC1306M25-Q1; ADI ADuM7703; ADI AD7403
- Safety-support peripherals (if used): TI TPS653850A-Q1; TI TPS653852A-Q1

Requested deliverables with quotation
1) Pricing and availability for the selected candidates and relevant package/grade variants
2) Documentation: datasheets + register maps + diagnostic descriptions + safety collateral availability
3) PCN/PDN policy and long-term supply statement
4) Recommended evaluation guidance for CRC/event reporting and diagnostic scheduling

Sample evaluation plan (to be aligned)
- Link integrity: EMI/noise stress; CRC event rate; timeout behavior; recovery action timing
- BIST/diagnostics: run across temperature corners; verify signature and margin reporting
- Power/reference: rail steps/dips; ref perturbation; conversion-valid gating evidence
- Plausibility/windows: boundary/step/ramp stimuli; open/short fixtures; Δ margin logging
- Evidence logging: verify required fields present; validate traceability and repeatability

Please respond with: part variant recommendations, collateral availability, and any assumptions needed for diagnostic claims.
      
Selection flow for safety-capable ADC chains (flow diagram): Requirement (ASIL, BW/latency, environment/modes) → Risk (fault taxonomy, common cause, time budget) → Questions (CRC/timeout, BIST/monitors, drift hooks) → Shortlist (candidates, collateral availability) → Evaluation plan (tests, evidence, margins). Evidence artifact minimum: context (V/T/clock/config/version); result (pass/fail + margin or counters/rates); traceability (timestamp + frame/sequence IDs); test-mode IDs for any BIST/diagnostics.


FAQ: safety-capable ADC chains (data-driven answers)

These answers are structured as reproducible verification steps and evidence fields, so each claim can be tested, logged, and traced.

Mechanisms & evidence

1) What can BIST realistically cover in an ADC chain?

One-line answer: BIST can validate structured paths (code paths, digital logic, reference selection, mux/configuration), but it cannot prove noise/jitter floors or long-term drift.

Affected chain segments

ADC core, configuration path, reference path, digital transport (if included), test-mode control.

Verification steps

  • Run power-up BIST at cold/room/hot corners; repeat N times to confirm repeatability.
  • Run in-field BIST while conversions continue (or during a gated window) and log conversion impact.
  • Force at least one controlled failure (invalid config or known stimulus) to confirm the reporting path.
Evidence fields (minimum)

  • Context: V, T, clock state, config hash, firmware version, test-mode ID
  • Result: pass/fail + margin (or signature distance), error code
  • Traceability: timestamp, sequence ID, run index

Pass criteria

Pass = correct signature + positive margin at all required corners, and no unbounded disturbance to conversions during (or after) the test window.
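
The pass rule above can be sketched as a corner-aggregation check (Python; the run-record fields and signature format are illustrative):

```python
def bist_pass(runs, expected_sig, min_margin):
    """Pass only if every required corner run reports the correct
    signature AND a margin at or above the declared minimum.
    Each run dict carries its context tags alongside the result."""
    return all(r["signature"] == expected_sig and r["margin"] >= min_margin
               for r in runs)
```

Note that the aggregation keeps the per-corner margins available for logging; collapsing to a single pass/fail before logging is one of the mistakes listed below.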

Common mistakes

  • Treating BIST as proof of noise, jitter, or drift performance.
  • Logging only pass/fail without margin and context tags.
  • Running BIST without demonstrating controlled-failure reporting.
2) What evidence must be recorded for safety diagnostic claims?

One-line answer: Evidence must include context, margin-based results, and traceability, so another lab can reproduce the same claim.

Verification steps

  • Define a per-mechanism logging schema before testing (fields + units + timing).
  • Confirm logs remain valid across resets, power events, and configuration changes.
  • Verify that every diagnostic event can be traced to a test case ID and a time window.
Required fields (minimum)

  • Context: V, T, clock source/state, operating mode, range/gain, reference selection, firmware version, config hash
  • Result: pass/fail, margin (to threshold/window/signature), counters/rates, error codes, recovery action taken
  • Traceability: timestamp, sequence/frame ID, test-case ID, run index, lot/date code (when applicable)
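
A minimal sketch of such a per-mechanism logging schema (Python dataclass; field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceRecord:
    """One diagnostic result carrying all three required field groups."""
    # Context
    supply_v: float
    temp_c: float
    clock_state: str
    config_hash: str
    firmware: str
    # Result
    passed: bool
    margin: float          # distance to threshold/window/signature
    error_code: int
    recovery_action: str
    # Traceability
    timestamp_us: int
    sequence_id: int
    test_case_id: str
    run_index: int
```

Freezing the record makes accidental post-hoc edits impossible, which helps when logs must survive audits unchanged.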

Common mistakes

  • Recording pass/fail without margins or conditions.
  • Missing config versioning (results cannot be reproduced after a firmware update).
  • Not logging sequence IDs (stale-data faults remain invisible).
3) How to prove a diagnostic mechanism is repeatable (not a one-off)?

One-line answer: Repeatability requires multiple runs across corners and controlled-failure injection, with stable margins and consistent evidence fields.

Verification steps

  • Run N times per corner: cold/room/hot, min/nom/max supply, each with a fixed configuration hash.
  • Inject at least one failure per fault class (transport, ref/supply, plausibility, config) and confirm detection path.
  • Re-run after power cycling and after a firmware/config change; confirm evidence schema is unchanged.

Evidence fields

run index, corner ID, config hash, margin statistics (min/mean/max), event counters, timestamps, and recovery actions.

Pass criteria

Margins remain above the declared minimum at all corners, with stable distributions and no unexplained corner-dependent failures.
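
The pass criterion above can be checked mechanically; a sketch (Python, assuming margins are collected per corner over repeated runs):

```python
from statistics import mean

def margin_stats(margins_by_corner, declared_min):
    """Summarize repeated-run margins per corner and flag any corner
    whose worst-case (minimum) margin dips below the declared minimum."""
    out = {}
    for corner, ms in margins_by_corner.items():
        out[corner] = {"min": min(ms), "mean": mean(ms), "max": max(ms),
                       "ok": min(ms) >= declared_min}
    return out
```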

Redundancy & compare windows

4) Primary+monitor vs 1oo2 compare: how to choose?

One-line answer: Use 1oo2 compare when fast detection and symmetric confidence are needed; use primary+monitor when cost/latency constraints favor a lighter independent check.

Decision inputs

  • Latency budget: prefer 1oo2 compare when very tight and deterministic; primary+monitor when moderate (the monitor can be slower).
  • Independence needs: prefer 1oo2 compare when both paths must deliver symmetric, comparable confidence; primary+monitor for an independent “reasonableness” check.
  • Cost/complexity: 1oo2 compare is higher (two equivalent channels); primary+monitor is lower (a lighter monitor path).

Evidence fields

aligned timestamps, sequence IDs, Δ value, Δ margin to window, per-path status, and declared fault-to-action time.
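
A minimal 1oo2 compare sketch (Python; the (value, timestamp_us, sequence) tuple layout, skew limit, and window are assumptions to be replaced by the project's own frame format):

```python
def compare_1oo2(a, b, max_skew_us, delta_window):
    """Compare two redundant samples, each a (value, timestamp_us,
    sequence) tuple. Declares a fault when the samples cannot be
    time-aligned or when |delta| leaves the declared window; otherwise
    reports the delta margin to the window for logging."""
    va, ta, _seq_a = a
    vb, tb, _seq_b = b
    if abs(ta - tb) > max_skew_us:
        return {"fault": "ALIGNMENT", "delta": None, "margin": None}
    delta = abs(va - vb)
    return {"fault": "DELTA_WINDOW" if delta > delta_window else None,
            "delta": delta,
            "margin": delta_window - delta}   # log this delta margin
```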

5) What goes into a compare window (Δwindow) budget?

One-line answer: Δwindow must cover noise, drift, quantization, timing skew, and reference/supply effects under the declared conditions.

Budget components

  • Noise (RMS and peak/percentile)
  • Drift vs temperature and time (gain/offset)
  • Quantization and code uncertainty
  • Timing skew / sampling misalignment (effective error)
  • Reference and supply sensitivity (PSRR/Ref drift impacts)

Verification steps

  • Measure each component separately where possible; fill a window worksheet with measured values.
  • Validate the combined window on real hardware across temperature corners and supply extremes.
  • Record Δ distribution statistics (min/mean/max or percentiles) and confirm the selected window keeps false trips bounded.
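
One way to combine the worksheet into a window (a sketch: it assumes the noise, quantization, and ref/supply terms are independent zero-mean random errors combined by RSS under a coverage factor k, while drift and timing-skew error are treated as systematic and added linearly — that split is a project decision, not a rule):

```python
from math import sqrt

def delta_window(noise_rms, drift, quant, ref_supply, skew_err, k=3.0):
    """Combine compare-window components: the (assumed independent,
    zero-mean) noise, quantization, and ref/supply terms are joined
    by RSS and scaled by coverage factor k; drift and timing-skew
    error are treated as systematic and added linearly."""
    random_part = sqrt(noise_rms**2 + quant**2 + ref_supply**2)
    return drift + skew_err + k * random_part
```

The coverage factor k directly trades false-trip rate against detection sensitivity, so the chosen value should be justified by the measured delta distribution.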
6) How to reduce common-cause failures in redundant measurement paths?

One-line answer: Reduce shared dependencies (clock, reference, routing, firmware assumptions) and prove independence with evidence tags and stress tests.

Independence checklist

  • Separate or monitored clocks; define a missing-clock/timeout policy per path.
  • Separate references or monitored reference selection; avoid silent shared ref drift.
  • Physical separation of sensitive routing; avoid shared single-point connectors where possible.
  • Diversity options: different vendor/architecture, or independent conversion modes.

Evidence fields

per-path clock/ref IDs, per-path config hash, per-path error counters, and stress-test identifiers that show faults do not synchronize across paths.

Scheduling & latent faults

7) How to set periodic diagnostics to cover latent faults?

One-line answer: Use a 3-layer schedule (power-up, periodic, on-demand) driven by fault exposure time and allowable detection delay.

Verification steps

  • Define maximum detection delay per fault class (input, ref/supply, clock, transport, config).
  • Assign each mechanism to one layer: power-up (startup), periodic (latent), on-demand (triggered by monitors).
  • Validate scheduling does not violate conversion availability or safety reaction timing.

Evidence fields

schedule ID, trigger reason, run timestamps, pass margins, skipped-run counters, and fault-to-action latency measurements.
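
A sketch of the coverage check for the periodic layer (Python; the field names and the worst-case-delay model — one full period plus the test's own runtime — are assumptions to be aligned with the actual fault model):

```python
def check_schedule(mechanisms):
    """Flag periodic mechanisms whose worst-case detection delay
    (one full period plus the test's own runtime, for a fault that
    appears just after a run) exceeds the allowed delay for their
    fault class. Power-up tests only cover faults present at start."""
    violations = []
    for m in mechanisms:
        if m["layer"] == "periodic":
            worst_ms = m["period_ms"] + m["runtime_ms"]
            if worst_ms > m["max_detection_delay_ms"]:
                violations.append(m["name"])
    return violations
```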

8) How to run diagnostics without disturbing real-time operation?

One-line answer: Use explicit gating windows, state machines, and bounded confirm delays, and measure the maximum disturbance to conversion timing.

Verification steps

  • Define when tests may run (allowed windows) and when tests are forbidden (control-critical windows).
  • Implement a diagnostic state machine with bounded confirm delay and explicit exit criteria.
  • Measure conversion latency jitter and data availability during test runs; enforce a maximum allowed impact.

Pass criteria

Diagnostic actions remain inside declared windows, and the worst-case disturbance stays within the declared time budget.

False positives & EMI

9) What typically causes nuisance trips in ADC-chain diagnostics?

One-line answer: Nuisance trips usually come from noise, transients, startup instability, EMI coupling, or overly tight thresholds/windows.

Verification steps

  • Characterize noise and transient behavior during normal operation (not only in the lab’s quiet setup).
  • Exercise power-up sequences and mode transitions; measure false-event rates vs time since startup.
  • Run link and input stress (EMI injection or worst-case cable routing) while logging event counters.

Evidence fields

event counters per mechanism, time-since-startup, operating mode, and a “cause tag” field for triage (noise/transient/EMI/startup).

10) How to suppress false positives without masking real faults?

One-line answer: Use bounded debounce/hysteresis and multi-sample confirmation, and declare the maximum added detection delay.

Verification steps

  • Set debounce and hysteresis based on measured noise/transient statistics, not guesswork.
  • Validate detection delay does not exceed the safety reaction time budget.
  • Run controlled fault injections to confirm true faults still trigger within the declared maximum delay.
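
A counter-based confirm/clear sketch of the steps above (Python; the confirm and clear thresholds are placeholders to be derived from measured noise/transient statistics):

```python
class Debounce:
    """Counter-based fault confirmation with hysteresis: declare a
    fault only after `confirm` consecutive raw detections; clear it
    only after `clear` consecutive good samples. The added detection
    delay is bounded by confirm * sample_period, which must fit the
    safety reaction-time budget."""

    def __init__(self, confirm, clear):
        self.confirm, self.clear = confirm, clear
        self.hits = 0      # consecutive raw detections
        self.clean = 0     # consecutive good samples
        self.fault = False

    def update(self, raw_fault):
        if raw_fault:
            self.hits += 1
            self.clean = 0
            if self.hits >= self.confirm:
                self.fault = True
        else:
            self.clean += 1
            self.hits = 0
            if self.clean >= self.clear:
                self.fault = False
        return self.fault
```

Because the confirm count is fixed, the maximum added delay is declared up front and can be verified by fault injection rather than estimated.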

Pass criteria

False-event rate is bounded under stress, and true faults remain detectable within a declared (and measured) maximum delay.

RFQ & evaluation

11) What are the must-ask RFQ fields for an ASIL ADC chain?

One-line answer: RFQ must demand mechanisms (CRC/timeout/BIST/monitors), how to verify them, and what evidence artifacts will be provided.

Must-ask fields by RFQ topic

  • Integrity: CRC support, sequence ID, timeout policy, error-reporting behavior
  • Diagnostics: BIST modes, signature definition, pass margin, test-mode isolation
  • Monitoring: ref/supply events, conversion-valid gating, clock/edge-monitoring assumptions
  • Lifecycle: AEC-Q variant, safety collateral, PCN/traceability policy

Evidence required from vendors

datasheet + register map, diagnostic description, timing/error behavior notes, and any safety collateral availability statement.

12) What is a minimal evaluation plan to validate safety-capable claims?

One-line answer: A minimal plan stresses transport, power/reference, timing, and plausibility, and produces logs with context, margins, and traceability.

Evaluation matrix (minimum)

  • Transport integrity: EMI/link stress, disconnect/reconnect, CRC event rate, timeout recovery timing.
  • Power/reference: rail dips/steps, reference perturbation, conversion-valid gating behavior and event evidence.
  • Timing: missing-edge/timeout scenarios, stale-frame rejection using sequence IDs.
  • Plausibility/windows: boundary/step/ramp stimuli, open/short fixtures, Δ margin distributions.
  • Corners: cold/room/hot and min/nom/max supply, with fixed configuration hashes.

Pass criteria

Each mechanism triggers as specified, evidence fields are complete, and the measured fault-to-action timing stays within the declared safety budget.