Air Data Computer: Pressure AFE, 24-bit ADC & BIT
An Air Data Computer turns pitot/static pressures into usable air data (airspeed, altitude, vertical speed, Mach) with proven accuracy, controlled latency, and continuous health status. It is designed to detect and isolate pneumatic faults (leaks, blockage, icing) and electronics drift through calibration, monitoring, and traceable event logs.
What the Air Data Computer really does (scope & boundary)
The core job is to turn pitot/static pressures into airspeed and altitude signals that are trustworthy in flight: condition the pressure-sensor outputs, digitize them with high resolution, compensate temperature drift, and continuously check plausibility so leaks, blockages, icing, and sensor drift can be detected and isolated.
This page focuses on the pressure measurement chain and its integrity. It does not cover flight-control laws or navigation fusion.
Inputs, outputs, and the 3 must-have metrics
- Inputs: total pressure (Pt), static pressure (Ps), differential pressure (q = Pt−Ps), plus temperature.
- Outputs: IAS/CAS, baro altitude, vertical speed, and Mach (as required), plus health/status flags and event records.
- Accuracy: bias + scale errors are controlled across pressure and temperature.
- Dynamic response: latency + bandwidth are managed so air-data signals remain responsive (not “filtered away”).
- Integrity: faults are detectable and isolatable (pneumatic faults vs sensor/AFE/ADC faults).
Boundary reminder: air data is treated as a measured signal with confidence, not as a control-law or navigation topic.
Air data signals: from pressure to IAS/altitude (without flight control)
Air data starts as pressures—total pressure (Pt) and static pressure (Ps). Their difference, dynamic pressure q = Pt − Ps, is the primary driver for indicated airspeed, while static pressure is the primary input for barometric altitude.
Why “pressure → air data” amplifies some errors
- Low-q region is fragile: when dynamic pressure is small (low speed or low density), a Pa-level offset or drift becomes a large airspeed error.
- Bias dominates earlier than noise: improving “bits” helps only if offset and temperature drift are already controlled.
- Dynamics matter: pneumatic volumes, restrictors, and digital filters introduce delay and phase lag—critical for vertical speed and transients.
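The low-q fragility can be made concrete with the incompressible relation V = √(2q/ρ₀). A minimal sketch (all numbers are illustrative, not from any specific sensor):

```python
import math

RHO0 = 1.225  # sea-level standard density, kg/m^3

def ias_mps(q_pa: float) -> float:
    """Indicated airspeed from dynamic pressure (incompressible approximation)."""
    return math.sqrt(2.0 * max(q_pa, 0.0) / RHO0)

def ias_error_for_offset(q_pa: float, offset_pa: float) -> float:
    """Airspeed error caused by a constant pressure offset at a given q."""
    return ias_mps(q_pa + offset_pa) - ias_mps(q_pa)

# The same 10 Pa offset at low q vs high q:
low_q  = ias_error_for_offset(q_pa=200.0,  offset_pa=10.0)   # slow flight
high_q = ias_error_for_offset(q_pa=5000.0, offset_pa=10.0)   # cruise-like
```

At the 200 Pa point the same 10 Pa offset costs roughly five times more airspeed than at 5000 Pa, which is why offset and drift dominate the low-q budget.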
Input → output mapping (engineering view)
| Signal | Represents | Most sensitive error | Design lever |
|---|---|---|---|
| Pt | Total pressure input (port + line dynamics) | Leak/blockage symptoms + delay | Pneumatic-aware plausibility + rate checks |
| Ps | Static pressure for altitude reference | Offset/drift (temperature, aging) | Temp compensation + calibration versioning |
| q = Pt−Ps | Dynamic pressure for IAS/CAS | Pa-level bias amplified at low q | Low-drift AFE + ratiometric strategy |
| T | Compensation axis for sensor + electronics | Gain/offset drift across envelope | 2D tables / segmentation, verified in production |
Scope boundary: this section covers sensitivity and measurement dynamics only; how flight control uses the signals is out of scope.
Pneumatic interface & failure modes (the part most pages ignore)
Many “air data errors” are not electronic at all—they are pneumatic. Leaks, blockages, water ingress, icing, and line dynamics can shift offsets, distort transients, and create false noise signatures. A strong Air Data Computer treats the pneumatic path as part of the measurement system and makes these faults observable through consistency and dynamic checks.
Typical pneumatic topology (what changes the signal)
- Pitot & static lines: length, fittings, and junctions add volume and potential leak points (affects delay and drift-like behavior).
- Restrictor / orifice: adds damping (reduces high-frequency pressure chatter) but increases phase lag and settling time.
- Drain / water trap: prevents water accumulation near the sensor cavity (water can raise noise and slow response).
- Sensor cavity: trapped volume + compliance can create hysteresis-like effects in dynamics and recovery.
Failure mode → symptom → detection → mitigation (engineering checklist)
| Failure mode | Typical symptom (what is seen) | Detection idea (observable) | Mitigation |
|---|---|---|---|
| Leak | Slow drift, reduced dynamic gain, inconsistent Pt/Ps/q relationships | Correlation drops between channels; step response residual increases | Fitting inspection; plausibility flags; maintenance-trigger thresholds |
| Blockage | Severe lag, “stuck” pressure during transients, wrong vertical-speed dynamics | Abnormal phase lag; rate-of-change limits violated vs expected envelope | Port protection; restrictor placement review; fault isolation to pneumatic path |
| Water ingress | Noise floor rises + response slows; strong temperature dependence | Noise-shape change; temperature-correlated anomalies; recovery hysteresis | Water trap + drain; routing to avoid low points; service procedure |
| Icing | Step-like blockage behavior; intermittent changes; temperature-linked onset | Abrupt dynamic change; cold-soak correlation; inconsistent Pt vs Ps evolution | Environmental controls; conservative plausibility; event logging for traceability |
| Port mismatch / static source error | Channel bias that changes with flight regime; altitude/IAS disagreement patterns | Cross-check trend vs regime; long-term bias signature differs from electronics drift | Installation verification; calibration offsets with traceable configuration control |
Practical rule: when a problem looks like “random drift,” check for pneumatic causes first—then use dynamics and consistency tests to isolate it.
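One way to make a leak observable is a hold test on a logged pressure segment: estimate the decay slope and compare it to a maintenance threshold. A minimal sketch (the threshold and data are illustrative):

```python
def leak_decay_rate(samples_pa, dt_s):
    """Estimate pressure decay rate (Pa/s) over a hold-test segment via a
    least-squares slope. A healthy sealed line holds near-zero slope; a leak
    shows a sustained negative slope."""
    n = len(samples_pa)
    t = [i * dt_s for i in range(n)]
    t_mean = sum(t) / n
    p_mean = sum(samples_pa) / n
    num = sum((ti - t_mean) * (pi - p_mean) for ti, pi in zip(t, samples_pa))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

# Illustrative hold test: pressure decays ~0.5 Pa/s on a leaking line
leaking = [1000.0 - 0.5 * i * 0.1 for i in range(100)]   # 10 s at 10 Hz
slope = leak_decay_rate(leaking, dt_s=0.1)
LEAK_LIMIT_PA_S = 0.1  # hypothetical maintenance-trigger threshold
leak_suspected = slope < -LEAK_LIMIT_PA_S
```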
Pressure sensor choices & excitation (bridge vs capacitive, ratiometric strategy)
Air Data Computers commonly use bridge-type or capacitive pressure sensors because their outputs can be conditioned reliably and calibrated over temperature. A key design decision is how the sensor is excited and how the ADC reference is generated. A ratiometric approach ties excitation and ADC reference together so supply changes have far less impact on the final reading.
Sensor types (readout-relevant differences only)
- Bridge sensors: small differential output (often proportional to excitation), strong dependence on low-drift INA/PGA and reference stability.
- Capacitive sensors: output is capacitance change; readout is typically switched/charge-based and can be robust, but needs careful linearization.
- Common reality: temperature drives offset and gain changes—so calibration tables and stable excitation/reference matter as much as raw ADC bits.
Excitation strategies (what they fight)
- Constant voltage: simple, but excitation drift can appear as scale drift unless referenced out.
- Constant current: reduces sensitivity to lead and wiring resistance, but may complicate protection and stability for certain sensor structures.
- Ratiometric (recommended concept): use the same reference for excitation and ADC full-scale so supply/rail changes cancel in the ratio.
Ratiometric operation does not eliminate all error; it mainly keeps excitation/reference drift from dominating the budget, shifting attention to sensor physics and temperature compensation.
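The cancellation can be seen in a toy readout model, assuming an idealized bridge whose output scales with excitation and an ADC whose code scales inversely with its reference (sensitivity and voltages are illustrative):

```python
def bridge_counts(p_norm, vexc, vref, fullscale=1 << 23):
    """Idealized readout: bridge output scales with excitation; the ADC code
    scales inversely with its reference. p_norm is pressure as a 0..1 fraction."""
    sensitivity = 0.002              # V/V of excitation at full-scale pressure
    v_out = p_norm * sensitivity * vexc
    return v_out / vref * fullscale

# Non-ratiometric: fixed 2.5 V reference, excitation drifts +1% -> ~1% scale error
code_nom = bridge_counts(0.5, vexc=5.00, vref=2.5)
code_bad = bridge_counts(0.5, vexc=5.05, vref=2.5)

# Ratiometric: reference derived from excitation (vref = vexc / 2) -> drift cancels
code_r0 = bridge_counts(0.5, vexc=5.00, vref=2.500)
code_r1 = bridge_counts(0.5, vexc=5.05, vref=2.525)
```

In hardware the tie is typically a divider from the excitation source; it must be explicit in the schematic and verifiable in test.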
Selection criteria (usable for engineering & procurement)
- Range & overload: ensure Pt/Ps/q extremes do not force long recovery from saturation.
- Accuracy target: separate offset and gain vs temperature—avoid judging only by a single headline number.
- Temperature envelope: wider envelope usually means more compensation complexity (table size and validation effort).
- Dynamics: match sensor response to pneumatic damping and digital filtering so latency stays acceptable.
- Availability & packaging: stable sourcing and mechanical integration reduce hidden variability.
Analog Front-End design for differential/absolute pressure (low noise + stability)
The AFE determines whether pressure signals remain measurable at low dynamic pressure. Input bias, 1/f noise, drift, and CMRR errors can look like “real” air-data changes when q is small. A robust AFE keeps the differential path balanced, filters interference without destroying dynamics, and stays stable with a ΣΔ ADC input network.
INA/PGA: the four error sources that dominate low-q performance
- Input bias & leakage paths: bias current through source impedance and protection networks becomes an effective offset.
- 1/f noise: low-frequency noise is easily mistaken for slow pressure drift; it sets the floor for long-window air data.
- Offset/gain drift: temperature and aging shape the error budget more than raw ADC bit depth in many regimes.
- CMRR under imbalance: common-mode interference turns into differential error if the two input paths are not symmetric.
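The bias and drift bullets can be combined into a quick input-referred offset budget; the numbers below are hypothetical, not vendor data:

```python
def effective_offset_uv(bias_na, source_mismatch_kohm, vos_uv, drift_uv_per_c, delta_t_c):
    """Input-referred offset budget at the INA inputs (illustrative terms only):
    bias current through source-impedance mismatch + initial offset + temperature drift."""
    bias_term = bias_na * source_mismatch_kohm   # nA * kOhm = uV
    return bias_term + vos_uv + drift_uv_per_c * delta_t_c

# Hypothetical: 2 nA bias, 5 kOhm mismatch in protection resistors,
# 25 uV initial offset, 0.3 uV/degC over a 70 degC swing
total_uv = effective_offset_uv(2.0, 5.0, 25.0, 0.3, 70.0)   # 10 + 25 + 21 = 56 uV
```

Divide the result by the bridge sensitivity to express it in Pa, then compare against the allowed bias at minimum q.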
RC / EMI / anti-alias: suppress interference without “filtering away” dynamics
- Differential filtering: sets the measurement bandwidth and anti-alias corner; too aggressive increases phase lag.
- Common-mode filtering: targets injected interference; it must not create input imbalance that reduces effective CMRR.
- Placement logic: protection and series resistance should keep the two input impedances matched; anti-alias should be predictable for stability.
The goal is not maximum filtering; the goal is controlled bandwidth with stable phase and repeatable recovery.
Differential vs absolute pressure (same AFE skills, different priorities)
| Chain | Most sensitive to | What to protect | Common pitfall |
|---|---|---|---|
| Differential (q) | offset, 1/f noise, CMRR under imbalance | balanced input impedances and low-drift INA/PGA | protection/RC mismatch converting CM to DM error |
| Absolute (Pt/Ps) | overrange events and recovery, reference stability | input clamps + headroom so saturation does not linger | long recovery mistaken as “slow pressure” |
Reference & return (local measurement view only)
- Ratiometric reference: tie excitation and ADC reference to reduce sensitivity to supply drift in the final ratio.
- Reference noise: low-frequency reference/return noise often becomes low-frequency reading noise.
- Return integrity: keep high-current and noisy returns away from the AFE/reference sense path (local board-level rule).
AFE design checklist (10–14 executable items)
- Verify protection + series elements keep both input impedances matched (avoid CMRR collapse).
- Confirm clamp/protection does not create long saturation recovery that looks like slow pressure change.
- Compute input bias current × source impedance to bound effective offset at the INA inputs.
- Check 1/f noise contribution in the intended low-frequency window (avoid “drift-like” noise).
- Budget offset/gain drift across the full temperature envelope (drift usually beats extra ADC bits).
- Choose anti-alias corner aligned to sample/OSR targets; avoid undefined corners from parasitics.
- Separate common-mode suppression from differential bandwidth setting (filter the right thing).
- Keep differential routing symmetric; avoid asymmetry that converts CM interference to DM error.
- Maintain adequate headroom for absolute-pressure transients; reduce overrange exposure.
- Validate AFE stability with the ADC input network and chosen RC values (no marginal phase).
- Implement ratiometric tie if excitation drift is a dominant risk (tie must be explicit and verifiable).
- Reserve a self-test hook (input short / reference point / small excitation perturb) for later BIT.
24-bit ADC & digital filtering (resolution vs accuracy vs latency)
“24-bit” describes the converter output format and potential resolution, not guaranteed system accuracy. In air-data pressure chains, accuracy is usually limited by offset, drift, reference quality, and AFE interference paths. ADC choice and digital filtering mainly decide noise, bandwidth, and group delay—which directly affect the dynamic behavior of air data signals.
Why ΣΔ ADCs are common here (and what they cost)
- Pros: strong noise performance at low bandwidth, robust digital filtering, and practical rejection of periodic interference.
- Costs: digital filter group delay and longer settling after configuration changes; latency must be managed explicitly.
SAR vs ΣΔ (pressure-chain view only)
- SAR: low latency and strong transient response, but relies more on analog anti-alias and tighter noise budgeting in the AFE.
- ΣΔ: excellent resolution after filtering, but adds group delay; the system must accept and account for that lag.
The decision is rarely “which is better”; it is “which meets the required dynamics while keeping noise and drift under control.”
OSR and digital filters: the noise–latency–bandwidth triangle
- Higher OSR: lower noise, but more group delay and slower output update dynamics.
- Stronger filtering: better interference suppression, but slower response and longer settling.
- Wider bandwidth: faster dynamics, but higher noise unless the AFE and reference are strong enough.
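For the common sinc-type decimation filter, the triangle can be quantified. This sketch assumes an idealized sinc^N FIR whose full settling takes N conversion periods; real parts differ, so treat it as a planning estimate only:

```python
def sinc_filter_timing(f_mod_hz, osr, order=3):
    """Approximate timing of a sinc^N decimation filter (common in delta-sigma ADCs).
    Returns (output_rate_hz, group_delay_s, settle_s), assuming full settling takes
    `order` conversion periods after a step or channel switch."""
    f_out = f_mod_hz / osr
    group_delay = order / (2.0 * f_out)   # linear-phase FIR: half the impulse length
    settle = order / f_out                # full settling after a step
    return f_out, group_delay, settle

# Doubling OSR halves the output rate and doubles delay (illustrative 1 MHz modulator)
lo = sinc_filter_timing(1_000_000, osr=256)   # ~3906 Hz out, ~0.38 ms group delay
hi = sinc_filter_timing(1_000_000, osr=512)   # ~1953 Hz out, ~0.77 ms group delay
```

Doubling OSR here doubles both group delay and settling, which is exactly the latency cost the triangle trades against noise.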
Self-test hooks (preparing for BIT without going off-scope)
- Reference sanity point: verify the ADC/reference chain is not drifting beyond expected limits.
- Input short / known input: check AFE + ADC offset behavior using a controlled internal condition.
- Small excitation perturb: apply a tiny, known excitation change and confirm proportional response (helps isolate sensor vs electronics).
Temperature compensation & calibration workflow (factory + in-field)
Temperature compensation turns pressure-chain drift and nonlinearity into a controlled, traceable process: capture data, fit a model, store a versioned calibration, verify it on every power-up, and track in-field drift to decide when re-calibration is required.
Error sources that must be separated (so they can be corrected)
- Offset (zero): dominates low-q performance; appears as a constant bias that grows into large IAS error when q is small.
- Gain (scale): shows as proportional error over the range; typically corrected with multi-point pressure steps.
- Nonlinearity: bends the transfer curve; often corrected with piecewise segments or a 2D table.
- Temperature drift: offset/gain change with temperature; requires data across the full operating envelope.
- Hysteresis: different readings for rising vs falling pressure/temperature; must be checked during verification.
- Aging: slow long-term drift; handled by trend tracking and maintenance thresholds (not by “more ADC bits”).
Compensation strategies (choose by controllability and data size)
- Piecewise linear: small parameter set and easy verification; best when the curve is mostly linear with mild bends.
- 2D LUT (pressure × temperature): practical balance of accuracy and validation; common when temperature coupling is strong.
- Polynomial (limited use): compact for smooth surfaces but harder to bound at edges; use only with strong verification coverage.
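A 2D LUT correction is typically applied with bilinear interpolation and edge clamping; a minimal sketch (grid and values are toy data):

```python
from bisect import bisect_right

def lut2d(p_axis, t_axis, table, p, t):
    """Bilinear interpolation in a pressure x temperature correction table.
    p_axis/t_axis are ascending grid points; table[i][j] is the correction
    at (p_axis[i], t_axis[j]). Queries are clamped to the grid edges."""
    def bracket(axis, x):
        i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
        frac = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, min(max(frac, 0.0), 1.0)
    i, fp = bracket(p_axis, p)
    j, ft = bracket(t_axis, t)
    c00, c01 = table[i][j], table[i][j + 1]
    c10, c11 = table[i + 1][j], table[i + 1][j + 1]
    return (c00 * (1 - fp) * (1 - ft) + c10 * fp * (1 - ft)
            + c01 * (1 - fp) * ft + c11 * fp * ft)

# Toy 2x2 grid: correction grows with both pressure and temperature
corr = lut2d([0.0, 100.0], [-40.0, 85.0],
             [[0.0, 1.0], [2.0, 3.0]], p=50.0, t=22.5)
```

Edge clamping keeps out-of-envelope queries bounded instead of silently extrapolating.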
Calibration data management (traceability is part of accuracy)
- CalVersionID: a unique identifier tied to parameters, test coverage, and build revision.
- Integrity: CRC/Hash over parameter blocks; invalid blocks must be detected on boot.
- Validity checks: boot-time verification of calibration state, bounds, and reference consistency.
- Locking rules: parameters are locked after verification; updates require an explicit maintenance path.
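The integrity and boot-check items can be sketched as a CRC32-trailed parameter block; the layout here is illustrative, not a defined format:

```python
import struct
import zlib

def pack_cal_block(cal_version_id, params):
    """Serialize a calibration block: header (version, count) + float params + CRC32 trailer."""
    body = struct.pack("<II", cal_version_id, len(params))
    body += b"".join(struct.pack("<f", p) for p in params)
    return body + struct.pack("<I", zlib.crc32(body))

def verify_cal_block(blob):
    """Boot-time integrity check: recompute CRC32 over everything except the trailer."""
    if len(blob) < 12:
        return False
    (stored,) = struct.unpack("<I", blob[-4:])
    return zlib.crc32(blob[:-4]) == stored

blob = pack_cal_block(cal_version_id=7, params=[1.001, -0.25, 3.2e-4])
ok = verify_cal_block(blob)                                # valid block passes
tampered = blob[:8] + bytes([blob[8] ^ 0xFF]) + blob[9:]   # flip one parameter byte
detected = not verify_cal_block(tampered)                  # corruption is caught
```

Locking rules and CalVersionID policy sit on top of this; only the integrity layer is sketched.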
In-field re-calibration triggers (when to stop trusting yesterday’s fit)
- Drift threshold exceeded: trend of offset/gain indicators crosses a configured limit.
- Maintenance interval: scheduled service event forces a calibration validity review.
- BIT evidence: repeated plausibility failures suggest sensor/AFE changes that compensation can no longer absorb.
Scope note: this section describes triggers and data handling only; it does not expand into system safety standards.
Calibration pipeline (factory steps with “what to measure”)
- Setup & stabilize: record ambient temperature, excitation/reference levels, and sensor warm-up state.
- Raw capture grid: measure raw counts at multiple pressure points across multiple temperature points.
- Model build: fit offset/gain + nonlinearity + temperature terms (choose piecewise or 2D LUT).
- Write parameters: store the parameter block to NVM with CRC and boundary metadata.
- Checkpoint verify: re-measure at verification points (including rising/falling pressure) and compute residuals.
- Lock & tag: assign CalVersionID, lock parameters, and store a calibration timestamp/counter.
- Power-cycle self-check: verify the boot-time validity checks and confirm parameters are applied correctly.
- Enable drift tracking: initialize trend counters and thresholds used to request maintenance or re-calibration.
Health monitoring & BIT/BITE (detect leaks, blockage, sensor drift)
Health monitoring proves the air data output is trustworthy by building an evidence chain: raw pressures → derived air data → consistency checks and electrical self-tests → fault classification → status words and logs that separate pneumatic faults from sensor/electronics issues.
Plausibility checks (physics, dynamics, and cross-channel consistency)
- Physics bounds: clamp impossible Pt/Ps/q combinations and prevent out-of-range values from silently propagating.
- Rate-of-change: detect steps or ramps that exceed feasible dynamics; separate real transients from measurement artifacts.
- Consistency: compare correlated channels (Pt vs Ps trends, redundant sensors if present) to detect incoherent behavior.
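A minimal shape for these checks, with purely illustrative thresholds (real limits come from the flight envelope and sensor specs):

```python
def plausibility_flags(pt_pa, ps_pa, prev_ps_pa, dt_s,
                       ps_bounds=(15_000.0, 110_000.0),
                       max_ps_rate_pa_s=3_000.0, q_floor_pa=-50.0):
    """Physics-bounds, rate-of-change, and Pt/Ps consistency checks.
    Returns named flags (True = check failed). Thresholds are illustrative."""
    q = pt_pa - ps_pa
    return {
        "ps_out_of_bounds": not (ps_bounds[0] <= ps_pa <= ps_bounds[1]),
        "negative_q": q < q_floor_pa,  # Pt below Ps beyond tolerance is non-physical
        "ps_rate_exceeded": abs(ps_pa - prev_ps_pa) / dt_s > max_ps_rate_pa_s,
    }

flags = plausibility_flags(pt_pa=101_600.0, ps_pa=101_300.0,
                           prev_ps_pa=101_310.0, dt_s=0.02)
healthy = not any(flags.values())
```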
Sensor/AFE self-test evidence (electrical layer)
- Excitation monitor: detect excitation droop, overrange, or mismatch that changes sensor sensitivity.
- Reference monitor: validate ADC/reference sanity so “pressure changes” are not reference drift.
- Open/short detect: detect wiring or sensor failures with explicit fault flags.
- Noise-floor anomaly: rising noise or spectral shape changes often indicate water ingress, loose connections, or interference.
Pneumatic fault signatures (use dynamics to distinguish faults)
- Blockage: response becomes sluggish and phase-lag increases; changes look like “extra filtering” that was never configured.
- Leak: inability to sustain pressure difference; steady-state bias and time-dependent decay become visible.
- Water ingress: noise + drift + sudden steps with slow recovery; behavior often correlates with temperature changes.
- Icing: abrupt dynamics changes tied to temperature conditions; may appear as intermittent blockage-like patterns.
Alert grading (principles only)
- Advisory: suspicious evidence, but data may remain usable; prioritize logging and trend tracking.
- Caution: reduced confidence or degraded dynamics; recommend maintenance or degraded use modes.
- Warning: strong evidence of invalid air data; require immediate system handling (details are out of scope here).
Scope note: this section explains evidence and grading logic only; system-level actions and standards are not expanded here.
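The grading principles can be sketched as an evidence-strength mapping; the policy below is illustrative only and not tied to any standard:

```python
ADVISORY, CAUTION, WARNING = "advisory", "caution", "warning"

def grade_alert(fail_windows, channels_affected, self_test_failed):
    """Map evidence strength to a grade. Illustrative policy:
    hard self-test failure or persistent multi-channel evidence -> warning;
    persistent single-channel evidence -> caution; transient evidence -> advisory."""
    if self_test_failed or (channels_affected > 1 and fail_windows >= 3):
        return WARNING
    if fail_windows >= 3:
        return CAUTION
    if fail_windows >= 1:
        return ADVISORY
    return None  # no evidence -> no alert
```

Grading on windowed counters rather than single samples keeps transient artifacts at the advisory level until they persist.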
Fault → Observable → Test method → Action (field-usable checklist)
| Fault | Observable | Test method | Action |
|---|---|---|---|
| Blockage | slow response, increased phase lag | step/impulse response signature; compare to baseline | raise confidence flag; maintenance recommendation |
| Leak | cannot sustain q; time-dependent decay | hold test / decay metric from logged segments | log event; caution if persistent |
| Water ingress | noise jumps, drift, step-like glitches | noise-floor monitor + recovery time tracking | caution; maintenance; increase sampling/logging |
| Icing pattern | intermittent blockage-like dynamics vs temperature | condition correlation (temp window) + dynamics signature | caution/warning by evidence strength |
| Sensor drift | slow bias growth; residuals increase | trend counters + checkpoint residual tests | request re-cal; advisory→caution if persistent |
| Excitation anomaly | sensitivity change; correlated channel shifts | excitation monitor + plausibility mismatch | caution; isolate electrical root cause |
| Reference drift | multiple channels shift together | reference sanity point + internal checks | warning if unbounded; force maintenance |
| Open/short | hard rail/zero, stuck behavior | explicit open/short detection flags | warning; invalidate affected channel |
| Noise-floor jump | SNR drop; unstable derived outputs | noise metric + consistency failure count | advisory/caution; log evidence for root cause |
EMC/Lightning/DO-160 considerations for the pressure measurement chain
The goal is not only “no damage,” but stable air data integrity: transients must not create false spikes, false BIT alarms, or long recovery tails. Protection, filtering, and digital guards must be designed together with the latency budget and evidence logging.
How disturbances enter the chain (entry → target → symptom)
- Harness coupling: transient couples into sensor leads → AFE input saturates / slow recovery → short spikes, step errors, or “stuck” segments.
- Power rail disturbance: excitation/reference droops or spikes → ratiometric assumption breaks → multiple channels shift together.
- Ground bounce: reference point moves during high di/dt → equivalent input bias appears → low-q regions show amplified IAS error.
- PCB loop pickup: high-impedance nodes collect RF → noise floor rises → BIT noise and consistency checks trip more often.
Protection and filtering side effects (common failure mode: “over-protect”)
- TVS parasitic capacitance: can load sensitive differential nodes → bandwidth loss and phase lag → slower air data response.
- RC too large: improves immunity but “filters away” valid dynamics → real changes look like outliers or implausible behavior.
- Series resistance too large: helps with surges but increases thermal noise and bias error → hurts low-q accuracy.
- Wrong CM/DM placement: can convert common-mode disturbance into differential error → larger readout jitter.
Engineering rule: define a response/latency budget first, then choose protection values that meet both immunity and dynamics.
Readout resilience (prevent false spikes and false alarms)
- Use defined update windows to avoid publishing during known transient recovery intervals.
- Require stability timers before “healthy” status is reasserted after a disturbance.
- Clamp impossible values at the physics boundary so spikes cannot propagate into derived air data.
- Apply “rate limits” to reject non-physical jumps while preserving evidence for logs.
- Reject isolated points only when cross-check evidence supports “measurement artifact.”
- Never “silently clean” data; attach confidence and fault flags.
- Freeze outputs on severe transients and explicitly publish degraded confidence.
- Recover using a defined re-entry condition (stable time + plausibility + self-test OK).
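The freeze, stability-timer, and re-entry rules above can be sketched as a small output gate (window count and status names are assumptions):

```python
class OutputGate:
    """Freeze/recover gate for published air data. Severe evidence freezes the
    output at the last good value with degraded status; 'healthy' returns only
    after `stable_windows` consecutive clean updates."""
    def __init__(self, stable_windows=10):
        self.stable_windows = stable_windows
        self._clean = 0
        self._last_good = None
        self._frozen = False

    def update(self, value, disturbance, plausible, self_test_ok):
        """Return (published_value, status) for one update cycle."""
        clean = (not disturbance) and plausible and self_test_ok
        if not clean:
            self._clean = 0
            self._frozen = True
            return self._last_good, "degraded"   # hold last good value
        self._clean += 1
        self._last_good = value
        if self._frozen and self._clean < self.stable_windows:
            return value, "recovering"           # publishing again, not yet healthy
        self._frozen = False
        return value, "healthy"

gate = OutputGate(stable_windows=10)
value, status = gate.update(101_325.0, disturbance=False, plausible=True, self_test_ok=True)
```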
Evidence logging (turn EMI into a maintainable asset)
- Event count: how often disturbances occur, optionally by severity band.
- Max deviation: largest pressure/air-data offset observed during an event window.
- Recovery time: time to return to stable, plausible behavior after the event.
Do / Don’t (short, field-usable)
- Place protection by node function (input, reference, excitation), not “wherever there is space.”
- Monitor excitation and reference so multi-channel shifts are detectable.
- Choose RC with a defined dynamics/latency budget.
- Freeze outputs on severe evidence and publish degraded confidence.
- Log count, max deviation, and recovery time for maintenance.
- Attach a high-capacitance TVS directly across a sensitive differential node without checking bandwidth.
- Increase RC “until it stops failing” while ignoring response delay.
- Swallow transient artifacts silently; this breaks fault isolation and trend evidence.
- Declare “healthy” immediately after a disturbance without stability timers.
- Assume redundancy alone fixes EMI; shared references and rails can fail together.
Redundancy architecture & fault isolation (single/dual/triple channels)
Redundancy only adds safety when it includes independent evidence, cross-check rules, and isolation logic. The architecture must prevent common-cause failures (shared rail, shared reference, shared pneumatic path) from making “consistent but wrong” outputs.
Recommended architecture summary (readable for engineering + procurement)
- Single-channel: lowest SWaP and complexity; requires strong BIT and evidence logging to maintain trust.
- Dual-channel: cross-compare and isolate; excellent for drift and soft faults; requires a clear “primary/backup” strategy.
- Triple-channel: voting enables robust single-fault tolerance; must still mitigate common-cause and shared-resource failures.
What is redundant (and which faults it actually covers)
- Port/pneumatic redundancy: protects against port mismatch patterns, local blockage/leak signatures, and pneumatic anomalies.
- Sensor redundancy: protects against sensor drift, saturation, and sensor-side failures.
- Electronics redundancy: protects against AFE/ADC/reference/excitation anomalies and electrical self-test failures.
Rule: redundancy must be placed upstream of the targeted failure mode; otherwise it cannot isolate root cause.
Cross-check rules and isolation (evidence → trigger → isolate → output)
- Evidence: channel deltas, rate-of-change mismatch, delay/response mismatch, and noise-floor mismatch.
- Trigger: time-window counters (not a single sample) to prevent chatter and false isolations.
- Isolation: mark a channel suspect, remove it from voting, and freeze confidence until re-verified.
- Output: publish primary/backup or voter result with fault flags and a confidence level (degraded mode when required).
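For a triple channel, the evidence → trigger → isolate → output flow can be sketched as mid-value selection with persistence-based isolation (delta and trip thresholds are illustrative):

```python
def vote_mid(channels, isolated, fail_counts=None, window_delta=50.0, trip=5):
    """Mid-value select with persistence-based isolation.
    channels: dict name -> reading; isolated: set of names already removed.
    A channel disagreeing with the voted value by > window_delta for `trip`
    consecutive calls is isolated (counter state carried in fail_counts)."""
    fail_counts = fail_counts if fail_counts is not None else {}
    active = {k: v for k, v in channels.items() if k not in isolated}
    vals = sorted(active.values())
    n = len(vals)
    mid = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    for name, v in active.items():
        if abs(v - mid) > window_delta:
            fail_counts[name] = fail_counts.get(name, 0) + 1
            if fail_counts[name] >= trip:
                isolated.add(name)      # remove from voting until re-verified
        else:
            fail_counts[name] = 0       # counter resets on agreement
    return mid, isolated, fail_counts
```

A single bad sample never isolates a channel; only `trip` consecutive disagreements do, which implements the time-window trigger rather than single-sample chatter.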
Common-cause failure (the hidden redundancy killer)
- Shared PSU: all channels drift or reset together → voting cannot detect “everyone wrong.”
- Shared reference/excitation: channels remain consistent while the entire scale moves → the most dangerous failure mode.
- Shared pneumatic layout: same blockage/leak path affects all sensors → consistent but invalid dynamics.
Mitigation principle: monitor shared resources explicitly and design independence where it matters (separate sensing/monitoring paths).
Validation & production checklist (what proves it’s done)
“Done” means the pressure-to-air-data chain is verified across pressure/temperature states, survives injected pneumatic faults and electrical disturbances without false outputs, and remains traceable by calibration versions, serial identity, BIT evidence, and lifetime trend records.
R&D verification gate (coverage-first)
Each item below is written as: Test → Method → Pass criteria → Record. This gate proves the design intent (accuracy, dynamics, fault detection, immunity, and evidence completeness).
- Accuracy matrix (pressure × temperature): Method: sweep multi-point pressures at multiple temperature setpoints. Pass criteria: max error and temperature drift remain within allocated budget per region (low-q and high-q). Record: pressure point IDs, temperature IDs, raw samples, compensated outputs, error summary.
- Repeatability and noise floor: Method: hold constant pressure/temperature and collect repeated windows. Pass criteria: repeatability and RMS noise stay under budget; no periodic interference peaks dominate. Record: window stats (mean/RMS/peak-to-peak), spectral flag summary, filter configuration.
- Hysteresis and settling: Method: approach the same pressure from up-sweep and down-sweep with defined dwell times. Pass criteria: hysteresis error and settling tail do not exceed the spec limit. Record: sweep direction tags, settle time, delta between approach directions.
- Step response (dynamic behavior): Method: apply controlled step changes (or pneumatic equivalent) and measure rise/settle. Pass criteria: response time and overshoot are within the latency/damping budget for intended air data dynamics. Record: step timestamps, rise/settle metrics, group delay estimate.
- Latency vs digital filter settings: Method: validate multiple ADC OSR/filter profiles used by the product configuration. Pass criteria: each profile meets the noise/latency/bandwidth trade target; no profile violates the “minimum dynamics” requirement. Record: filter profile ID, measured group delay, noise, bandwidth proxy metrics.
- Pneumatic leak injection: Method: introduce controlled leak paths (or calibrated bleed) and observe plausibility + classifier outputs. Pass criteria: leak signature is detected, classified, and surfaced as a consistent status (no silent bias). Record: leak condition ID, detection time, classification result, confidence trajectory.
- Blockage/slow-response injection: Method: add restriction to produce delayed response and reduced slew. Pass criteria: abnormal dynamics are flagged (slow response), not misinterpreted as normal flight changes. Record: restriction ID, delay metrics, channel-to-channel response mismatch evidence.
- Water ingress / intermittency simulation: Method: emulate noise bursts, micro-discontinuities, or abrupt bias changes consistent with moisture effects. Pass criteria: outliers are handled via evidence-driven rejection; status/flags are asserted; recovery is well-defined. Record: event window, max deviation, recovery time, “freeze/recover” transitions.
- Electrical disturbance: rail perturbations: Method: inject controlled droop/spike profiles on excitation/reference/rails (within safe test bounds). Pass criteria: disturbances do not produce unbounded output; confidence is degraded when needed; recovery is bounded. Record: rail waveform ID, output deviation, recovery time, associated event counter increments.
- ESD/transient robustness at the readout boundary: Method: apply transient stress representative of interface-level events and monitor output stability. Pass criteria: no persistent latch-up behavior; no long “stuck” offsets; BIT evidence captures the event. Record: event count, max deviation, duration to stable/healthy, fault flag timeline.
- Traceability completeness audit: Method: verify every R&D run produces a complete evidence bundle. Pass criteria: each unit/run has serial identity, calibration version, configuration hash, and BIT logs. Record: SN, CalVer, ConfigHash, test report ID, retention policy tag.
Production test gate (time-bounded subset)
Production tests are a reduced, repeatable subset that catches assembly defects, calibration mistakes, and gross drift without full environmental sweeps.
- Assembly electrical sanity: Method: open/short checks on sensor leads and front-end input paths. Pass criteria: no shorts/opens; impedance within expected bounds. Record: continuity results, lead ID mapping, timestamp.
- Excitation & reference monitor check: Method: measure excitation/reference rails via internal monitors. Pass criteria: values within tolerance; drift is stable over a short window. Record: Vexc/Vref readings, min/max over window, monitor status flags.
- Quick-point calibration (reduced points): Method: calibrate at a minimal set of pressure points that anchor offset and scale. Pass criteria: post-calibration error at anchor points meets threshold. Record: CalVer, coefficients/table ID, anchor errors.
- Noise-floor spot check: Method: collect a short stationary window at a defined condition. Pass criteria: RMS noise below threshold; no abnormal periodic components detected by a simple signature. Record: RMS, peak-to-peak, signature status, filter profile ID.
- Fast dynamics spot check: Method: apply a short step-like pressure perturbation (fixture-based). Pass criteria: response is not excessively slowed by RC/filter mis-build; delay stays inside a production threshold. Record: response time metric, pass/fail, fixture ID.
- Fault-detection spot check (restricted): Method: apply a repeatable “small anomaly” fixture mode (restriction/leak surrogate). Pass criteria: classifier enters expected warning/advisory state; no silent bias. Record: fault code, detection time, confidence output.
- Disturbance behavior sanity: Method: inject a mild, repeatable electrical perturbation (safe bounds). Pass criteria: outputs remain bounded; recovery is prompt; event counter increments. Record: event counter delta, max deviation, recovery time.
- Serialization and lock: Method: write serial identity and calibration metadata; lock using the product policy. Pass criteria: readback matches; unauthorized overwrites are blocked. Record: SN, CalVer, ConfigHash, lock state.
- Report export (unit-level): Method: generate a compact unit record for traceability. Pass criteria: required fields are present. Record: unit report ID, checksum, retention tag.
- Golden-unit drift control (process): Method: validate fixture stability by periodically testing a golden unit. Pass criteria: golden unit stays within control limits; out-of-control triggers fixture review. Record: golden trend chart ID (stored externally), control limits, last in-control timestamp.
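The quick-point calibration step above reduces to a small linear solve: two anchor pressures pin down scale and offset, and a third, independent point verifies the result. A minimal sketch, assuming a linear sensor model and illustrative units (counts in, pressure out):

```python
# Quick-point calibration sketch: two anchor points -> scale and offset,
# one independent point -> residual check. The linear model and thresholds
# are illustrative assumptions for a production spot check.

def fit_two_point(p_ref, counts):
    """Linear fit p = scale*counts + offset from two (reference, raw) anchor pairs."""
    (p0, p1), (c0, c1) = p_ref, counts
    scale = (p1 - p0) / (c1 - c0)
    offset = p0 - scale * c0
    return scale, offset

def check_point_error(p_check, c_check, scale, offset):
    """Residual at an independent verification point; pass if within threshold."""
    return scale * c_check + offset - p_check
```

Because two points fit a line exactly, the anchor residuals are zero by construction; the pass/fail evidence comes from the independent check point, whose residual is recorded alongside CalVer and the coefficient table ID.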
Field / maintenance gate (diagnose + decide + document)
- Status-first triage: Method: check the status word and confidence before trusting air data outputs. Pass criteria: "healthy" requires that stability timers have been satisfied; "degraded" requires a defined mitigation path. Record: status timeline, confidence level, last recovery timestamp.
- Event evidence review: Method: read event counters and last-N event summaries. Pass criteria: abnormal increases trigger inspection, not silent acceptance. Record: event count, max deviation, recovery time, severity band.
- Pneumatic anomaly indicators: Method: interpret slow response, inability to hold pressure difference, or abnormal dynamics mismatch. Pass criteria: pneumatic fault signatures map to consistent advisory/caution/warning policy. Record: dynamics mismatch metrics, suspected fault class, confidence.
- Electronics drift indicators: Method: look for multi-channel coherent shifts tied to reference/excitation monitors. Pass criteria: coherent drift triggers “shared resource” suspicion rather than per-sensor replacement. Record: Vexc/Vref monitor snapshots, correlated offset magnitude.
- Recovery behavior audit: Method: check whether freeze/recover occurs as designed during disturbances. Pass criteria: bounded recovery time; no repeated chatter. Record: freeze count, recovery time stats, re-entry stability timer value.
- Calibration version continuity: Method: verify CalVer and ConfigHash match approved baseline. Pass criteria: mismatch is treated as actionable discrepancy. Record: CalVer chain, ConfigHash, approval ID reference.
- Recalibration triggers: Method: apply rule-based triggers (drift threshold, maintenance interval, BIT suggestions). Pass criteria: recalibration is performed with versioning and validation steps. Record: recalibration trigger code, new CalVer, validation outcome.
- Fault isolation decision record: Method: document replace/repair actions based on evidence, not raw value alone. Pass criteria: each action references a minimum evidence bundle. Record: action code, evidence IDs, before/after status summary.
- Lifetime trend summary: Method: track slow changes (noise floor, drift, event frequency). Pass criteria: trend thresholds trigger preventative maintenance. Record: trend metrics, slope estimate, observation window tag.
- Exportable maintenance package: Method: generate a compact field report for system-level review. Pass criteria: report includes SN, CalVer, key evidence stats, and last event summary. Record: field report ID, checksum, retention tag.
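The electronics-drift indicator above is essentially a coherence test: if offset shifts across channels move together, the evidence points at a shared resource (reference or excitation) rather than individual sensors. A minimal sketch of that heuristic, with illustrative thresholds:

```python
# "Shared resource" heuristic for electronics drift: a coherent multi-channel
# offset shift suggests a common reference/excitation problem; a scattered
# shift suggests a channel-specific sensor issue. Thresholds are illustrative
# assumptions, not program values.
from statistics import mean, pstdev

def classify_drift(channel_offsets, coherence_ratio=0.5, min_shift=0.1):
    """channel_offsets: per-channel offset change since the last accepted baseline."""
    m = mean(channel_offsets)
    if abs(m) < min_shift:
        return "no_action"                  # shift too small to act on
    spread = pstdev(channel_offsets)
    if spread < coherence_ratio * abs(m):
        return "suspect_shared_reference"   # coherent shift across channels
    return "suspect_individual_sensor"      # scattered, channel-specific drift
```

A "suspect_shared_reference" outcome would be cross-checked against the Vexc/Vref monitor snapshots before any replacement decision, consistent with the evidence-based decision record above.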
Representative BOM examples (part numbers as references)
These are example parts frequently used for pressure readout chains. Equivalent-class parts are acceptable if they meet the same drift/noise/latency/self-test needs and pass the same R&D and production gates (qualification and temperature range must be verified per program).
- Differential pressure sensors (examples): TE Connectivity MS4525DO; Honeywell TruStability HSC/SSC series; NXP MPXV7002DP.
- Absolute pressure sensors (examples): Honeywell TruStability HSC/SSC series; NXP MPXH6115A (family example); TE Connectivity MS5611 (barometric/altimeter-class example).
- Instrumentation amplifier / PGA (examples): TI INA188, INA333; Analog Devices AD8421, AD8237.
- 24-bit ADC (ΣΔ) options (examples): TI ADS124S08; Analog Devices AD7124-4/AD7124-8, AD7172-2.
- Reference / monitor (examples): Precision references such as TI REF50xx series; Analog Devices ADR45xx series; rail monitors such as TI TPS37xx family (function-class example).
- Input protection (examples): Littelfuse SMF/SMBJ TVS families; Nexperia PESD ESD diodes (choose low-capacitance for sensitive nodes).
- Nonvolatile memory for calibration/versioning (examples): Serial FRAM such as Fujitsu/Infineon MB85RS family; SPI EEPROM families (program policy dependent).
Note: part selection for certified avionics may require specific qualification flows; the checklist above remains valid regardless of vendor.
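For the nonvolatile calibration storage above, the record format matters as much as the part: a fixed layout with a CRC lets readback be verified before coefficients are trusted. A minimal sketch, assuming an illustrative record layout (not a vendor format):

```python
# Versioned calibration record for FRAM/EEPROM storage: fixed little-endian
# layout (serial, CalVer, ConfigHash, scale, offset) plus CRC32 so readback
# can be verified before use. The layout is an illustrative assumption.
import struct
import zlib

def pack_cal_record(serial, cal_version, config_hash, scale, offset):
    body = struct.pack("<IIIff", serial, cal_version, config_hash, scale, offset)
    return body + struct.pack("<I", zlib.crc32(body))

def unpack_cal_record(blob):
    body, (crc,) = blob[:-4], struct.unpack("<I", blob[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("calibration record CRC mismatch")
    return struct.unpack("<IIIff", body)
```

Rejecting a CRC-mismatched record at readback turns memory corruption into a detectable BIT fault rather than a silent bias, which is the same philosophy the serialization-and-lock gate applies at production time.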
FAQs (Air Data Computer)
These FAQs focus on the practical failure patterns and verification evidence that make air-data outputs trustworthy: separating pneumatic faults from electronics drift, controlling latency vs noise, and reporting health status with traceable logs.