Air Data Computer: Pressure AFE, 24-bit ADC & BIT

An Air Data Computer turns pitot/static pressures into usable air data (airspeed, altitude, vertical speed, Mach) with proven accuracy, controlled latency, and continuous health status. It is designed to detect and isolate pneumatic faults (leaks, blockage, icing) and electronics drift through calibration, monitoring, and traceable event logs.

What the Air Data Computer really does (scope & boundary)

An Air Data Computer turns pitot/static pressures into airspeed and altitude signals that are trustworthy in flight. It does this by conditioning pressure-sensor outputs, digitizing them with high resolution, compensating temperature drift, and continuously checking plausibility so leaks, blockages, icing, and sensor drift can be detected and isolated.

This page focuses on the pressure measurement chain and its integrity. It does not cover flight-control laws or navigation fusion.

Inputs, outputs, and the 3 must-have metrics

  • Inputs: total pressure (Pt), static pressure (Ps), differential pressure (q = Pt−Ps), plus temperature.
  • Outputs: IAS/CAS, baro altitude, vertical speed, and Mach (as required), plus health/status flags and event records.
  • Accuracy: bias + scale errors are controlled across pressure and temperature.
  • Dynamic response: latency + bandwidth are managed so air-data signals remain responsive (not “filtered away”).
  • Integrity: faults are detectable and isolatable (pneumatic faults vs sensor/AFE/ADC faults).

Boundary reminder: air data is treated as a measured signal with confidence, not as a control-law or navigation topic.

Air Data Computer signal-chain overview
Figure F1 — End-to-end air-data chain: pneumatic inputs are converted to airspeed/altitude/VS with compensation and continuous health checks.

Air data signals: from pressure to IAS/altitude (without flight control)

Air data starts as pressures—total pressure (Pt) and static pressure (Ps). Their difference, dynamic pressure q = Pt − Ps, is the primary driver for indicated airspeed, while static pressure is the primary input for barometric altitude.

Why “pressure → air data” amplifies some errors

  • Low-q region is fragile: when dynamic pressure is small (low speed or low density), a Pa-level offset or drift becomes a large airspeed error (quantified just after this list).
  • Bias dominates earlier than noise: improving “bits” helps only if offset and temperature drift are already controlled.
  • Dynamics matter: pneumatic volumes, restrictors, and digital filters introduce delay and phase lag—critical for vertical speed and transients.
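Under the standard incompressible approximation (a textbook relation, used here only to quantify the effect), indicated airspeed follows from dynamic pressure as

  V = sqrt(2q / ρ)  ⇒  δV = δq / (ρ · V)

so a fixed pressure bias δq produces an airspeed error that grows as 1/V. At sea level (ρ ≈ 1.225 kg/m³), a 10 Pa bias is roughly 0.16 m/s of IAS error at 50 m/s, but roughly 0.8 m/s at 10 m/s: the same electronics look five times worse at low q.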

Input → output mapping (engineering view)

Signal | Represents | Most sensitive error | Design lever
Pt | Total pressure input (port + line dynamics) | Leak/blockage symptoms + delay | Pneumatic-aware plausibility + rate checks
Ps | Static pressure for altitude reference | Offset/drift (temperature, aging) | Temp compensation + calibration versioning
q = Pt−Ps | Dynamic pressure for IAS/CAS | Pa-level bias amplified at low q | Low-drift AFE + ratiometric strategy
T | Compensation axis for sensor + electronics | Gain/offset drift across envelope | 2D tables / segmentation, verified in production

Scope boundary: this section covers sensitivity and measurement dynamics only; how flight control uses the signals is out of scope.

Error amplification at low dynamic pressure
Figure F2 — The same Pa-level bias can look harmless at high q, but becomes a large indicated-airspeed error at low q.

Pneumatic interface & failure modes (the part most pages ignore)

Many “air data errors” are not electronic at all—they are pneumatic. Leaks, blockages, water ingress, icing, and line dynamics can shift offsets, distort transients, and create false noise signatures. A strong Air Data Computer treats the pneumatic path as part of the measurement system and makes these faults observable through consistency and dynamic checks.

Typical pneumatic topology (what changes the signal)

  • Pitot & static lines: length, fittings, and junctions add volume and potential leak points (affects delay and drift-like behavior).
  • Restrictor / orifice: adds damping (reduces high-frequency pressure chatter) but increases phase lag and settling time.
  • Drain / water trap: prevents water accumulation near the sensor cavity (water can raise noise and slow response).
  • Sensor cavity: trapped volume + compliance can create hysteresis-like effects in dynamics and recovery.

Failure mode → symptom → detection → mitigation (engineering checklist)

Failure mode | Typical symptom (what is seen) | Detection idea (observable) | Mitigation
Leak | Slow drift, reduced dynamic gain, inconsistent Pt/Ps/q relationships | Correlation drops between channels; step-response residual increases | Fitting inspection; plausibility flags; maintenance-trigger thresholds
Blockage | Severe lag, "stuck" pressure during transients, wrong vertical-speed dynamics | Abnormal phase lag; rate-of-change limits violated vs expected envelope | Port protection; restrictor placement review; fault isolation to pneumatic path
Water ingress | Noise floor rises and response slows; strong temperature dependence | Noise-shape change; temperature-correlated anomalies; recovery hysteresis | Water trap + drain; routing to avoid low points; service procedure
Icing | Step-like blockage behavior; intermittent changes; temperature-linked onset | Abrupt dynamic change; cold-soak correlation; inconsistent Pt vs Ps evolution | Environmental controls; conservative plausibility; event logging for traceability
Port mismatch / static source error | Channel bias that changes with flight regime; altitude/IAS disagreement patterns | Cross-check trend vs regime; long-term bias signature differs from electronics drift | Installation verification; calibration offsets with traceable configuration control

Practical rule: when a problem looks like “random drift,” check for pneumatic causes first—then use dynamics and consistency tests to isolate it.
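As a concrete instance of the "hold test / decay metric" detection idea in the table above, a minimal sketch (in C; window length and the Pa/s limit are illustrative, not requirements) estimates the decay slope of a logged hold segment by least squares:

```c
/* Leak-evidence sketch: estimate the pressure decay rate over a hold window
 * by a least-squares slope. Window length and the Pa/s limit are illustrative. */
#include <stdbool.h>
#include <stddef.h>

/* Slope [Pa/s] of n samples taken at a fixed period dt_s. */
float hold_decay_slope(const float *p_pa, size_t n, float dt_s) {
    float tm = 0.5f * (float)(n - 1) * dt_s;  /* mean of sample times */
    float pm = 0.0f;
    for (size_t i = 0; i < n; i++) pm += p_pa[i];
    pm /= (float)n;                           /* mean pressure */
    float num = 0.0f, den = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float dt = (float)i * dt_s - tm;
        num += dt * (p_pa[i] - pm);
        den += dt * dt;
    }
    return (den > 0.0f) ? num / den : 0.0f;
}

/* Flag a leak-suspect condition when held pressure decays too fast. */
bool leak_suspect(const float *p_pa, size_t n, float dt_s) {
    return hold_decay_slope(p_pa, n, dt_s) < -2.0f;  /* assumed Pa/s limit */
}
```

A slope-based metric is preferable to a single threshold because it accumulates evidence over the whole window and is less sensitive to sample noise.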

Pneumatic path and failure points
Figure F3 — Pneumatic topology and fault points: leaks, blockages, water, and icing often appear as drift, noise, or lag in electronics.

Pressure sensor choices & excitation (bridge vs capacitive, ratiometric strategy)

Air Data Computers commonly use bridge-type or capacitive pressure sensors because their outputs can be conditioned reliably and calibrated over temperature. A key design decision is how the sensor is excited and how the ADC reference is generated. A ratiometric approach ties excitation and ADC reference together so supply changes have far less impact on the final reading.

Sensor types (readout-relevant differences only)

  • Bridge sensors: small differential output (often proportional to excitation), strong dependence on low-drift INA/PGA and reference stability.
  • Capacitive sensors: output is capacitance change; readout is typically switched/charge-based and can be robust, but needs careful linearization.
  • Common reality: temperature drives offset and gain changes—so calibration tables and stable excitation/reference matter as much as raw ADC bits.

Excitation strategies (what they fight)

  • Constant voltage: simple, but excitation drift can appear as scale drift unless referenced out.
  • Constant current: can reduce some sensitivity, but may complicate protection and stability for certain sensor structures.
  • Ratiometric (recommended concept): use the same reference for excitation and ADC full-scale so supply/rail changes cancel in the ratio.

Ratiometric does not eliminate all error; it mainly removes excitation/reference drift from dominating the budget, shifting attention to sensor physics and temperature compensation.
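In symbols (a generic bridge model, not vendor data): the bridge output scales with excitation, V_o = s · V_exc · P, and the ADC reports the ratio against its reference:

  D = (V_o / V_ref) · 2^(N−1) = (s · V_exc · P / V_ref) · 2^(N−1)

If V_ref is derived from the excitation (V_ref = V_exc / k), the excitation terms cancel and D = k · s · P · 2^(N−1): supply and excitation drift drop out of the ratio, while sensitivity drift s(T) remains and must be handled by temperature compensation.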

Selection criteria (usable for engineering & procurement)

  • Range & overload: ensure Pt/Ps/q extremes do not force long recovery from saturation.
  • Accuracy target: separate offset and gain vs temperature—avoid judging only by a single headline number.
  • Temperature envelope: wider envelope usually means more compensation complexity (table size and validation effort).
  • Dynamics: match sensor response to pneumatic damping and digital filtering so latency stays acceptable.
  • Availability & packaging: stable sourcing and mechanical integration reduce hidden variability.
Bridge sensor readout with ratiometric reference tie
Figure F4 — Ratiometric concept: tying excitation and ADC reference reduces sensitivity to supply drift and shifts the budget to sensor physics and temperature compensation.

Analog Front-End design for differential/absolute pressure (low noise + stability)

The AFE determines whether pressure signals remain measurable at low dynamic pressure. Input bias, 1/f noise, drift, and CMRR errors can look like “real” air-data changes when q is small. A robust AFE keeps the differential path balanced, filters interference without destroying dynamics, and stays stable with a ΣΔ ADC input network.

INA/PGA: the four error sources that dominate low-q performance

  • Input bias & leakage paths: bias current through source impedance and protection networks becomes an effective offset.
  • 1/f noise: low-frequency noise is easily mistaken for slow pressure drift; it sets the floor for long-window air data.
  • Offset/gain drift: temperature and aging shape the error budget more than raw ADC bit depth in many regimes.
  • CMRR under imbalance: common-mode interference turns into differential error if the two input paths are not symmetric.

RC / EMI / anti-alias: suppress interference without “filtering away” dynamics

  • Differential filtering: sets the measurement bandwidth and anti-alias corner; too aggressive increases phase lag.
  • Common-mode filtering: targets injected interference; it must not create input imbalance that reduces effective CMRR.
  • Placement logic: protection and series resistance should keep the two input impedances matched; anti-alias should be predictable for stability.

The goal is not maximum filtering; the goal is controlled bandwidth with stable phase and repeatable recovery.

Differential vs absolute pressure (same AFE skills, different priorities)

Chain | Most sensitive to | What to protect | Common pitfall
Differential (q) | Offset, 1/f noise, CMRR under imbalance | Balanced input impedances and low-drift INA/PGA | Protection/RC mismatch converting CM to DM error
Absolute (Pt/Ps) | Overrange events and recovery, reference stability | Input clamps + headroom so saturation does not linger | Long recovery mistaken for "slow pressure"

Reference & return (local measurement view only)

  • Ratiometric reference: tie excitation and ADC reference to reduce sensitivity to supply drift in the final ratio.
  • Reference noise: low-frequency reference/return noise often becomes low-frequency reading noise.
  • Return integrity: keep high-current and noisy returns away from the AFE/reference sense path (local board-level rule).

AFE design checklist (10–14 executable items)

  1. Verify protection + series elements keep both input impedances matched (avoid CMRR collapse).
  2. Confirm clamp/protection does not create long saturation recovery that looks like slow pressure change.
  3. Compute input bias current × source impedance to bound effective offset at the INA inputs (a worked budget follows this checklist).
  4. Check 1/f noise contribution in the intended low-frequency window (avoid “drift-like” noise).
  5. Budget offset/gain drift across the full temperature envelope (drift usually beats extra ADC bits).
  6. Choose anti-alias corner aligned to sample/OSR targets; avoid undefined corners from parasitics.
  7. Separate common-mode suppression from differential bandwidth setting (filter the right thing).
  8. Keep differential routing symmetric; avoid asymmetry that converts CM interference to DM error.
  9. Maintain adequate headroom for absolute-pressure transients; reduce overrange exposure.
  10. Validate AFE stability with the ADC input network and chosen RC values (no marginal phase).
  11. Implement ratiometric tie if excitation drift is a dominant risk (tie must be explicit and verifiable).
  12. Reserve a self-test hook (input short / reference point / small excitation perturb) for later BIT.
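To put numbers on checklist items 3-5, a minimal budget sketch (all parameter values are illustrative placeholders, not datasheet figures) converts front-end electrical errors into an equivalent pressure error:

```c
/* AFE error-budget sketch: convert electrical errors into equivalent pressure
 * error. All values are illustrative placeholders, not datasheet figures. */
#include <stdio.h>

int main(void) {
    double ib_a      = 1e-9;    /* INA input bias current [A] (assumed) */
    double rs_ohm    = 10e3;    /* per-input source impedance incl. protection [ohm] */
    double vos_drift = 0.2e-6;  /* INA offset drift [V/degC] (assumed) */
    double dtemp_c   = 70.0;    /* excursion from the calibration temperature [degC] */
    double sens_v_pa = 30e-6;   /* chain sensitivity at the INA input [V/Pa] (assumed) */

    double v_bias  = ib_a * rs_ohm;        /* checklist item 3: Ib x Rs offset */
    double v_drift = vos_drift * dtemp_c;  /* checklist item 5: drift over envelope */

    printf("bias-current offset: %.1f uV -> %.2f Pa equivalent\n",
           v_bias * 1e6, v_bias / sens_v_pa);
    printf("offset drift       : %.1f uV -> %.2f Pa equivalent\n",
           v_drift * 1e6, v_drift / sens_v_pa);
    return 0;
}
```

Even these modest example values land in the sub-Pa to Pa range, which is exactly the regime where low-q IAS accuracy is decided.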
AFE detail block diagram with risk tags
Figure F5 — AFE risk map: manage noise, drift, and stability by balancing inputs, separating CM suppression from diff bandwidth, and controlling the ADC interface.

24-bit ADC & digital filtering (resolution vs accuracy vs latency)

“24-bit” describes the converter output format and potential resolution, not guaranteed system accuracy. In air-data pressure chains, accuracy is usually limited by offset, drift, reference quality, and AFE interference paths. ADC choice and digital filtering mainly decide noise, bandwidth, and group delay—which directly affects the dynamic behavior of air data signals.

Why ΣΔ ADCs are common here (and what they cost)

  • Pros: strong noise performance at low bandwidth, robust digital filtering, and practical rejection of periodic interference.
  • Costs: digital filter group delay and longer settling after configuration changes; latency must be managed explicitly.

SAR vs ΣΔ (pressure-chain view only)

  • SAR: low latency and strong transient response, but relies more on analog anti-alias and tighter noise budgeting in the AFE.
  • ΣΔ: excellent resolution after filtering, but adds group delay; the system must accept and account for that lag.

The decision is rarely “which is better”; it is “which meets the required dynamics while keeping noise and drift under control.”

OSR and digital filters: the noise–latency–bandwidth triangle

  • Higher OSR: lower noise, but more group delay and slower output update dynamics.
  • Stronger filtering: better interference suppression, but slower response and longer settling.
  • Wider bandwidth: faster dynamics, but higher noise unless the AFE and reference are strong enough.
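A first-order sketch of this triangle for a sinc-type decimation filter, assuming the common rule of thumb that a sinc^N filter has a group delay of roughly N/2 output periods and settles in roughly N output periods (the modulator rate below is an assumed example):

```c
/* Noise-latency sketch for a sinc^N decimation filter. First-order model:
 * group delay ~ N/2 output periods, settling ~ N output periods. */
#include <stdio.h>

int main(void) {
    const double f_mod = 256000.0;           /* assumed modulator rate [Hz] */
    const int    order = 3;                  /* sinc^3 example */
    const int    osr[] = { 64, 256, 1024 };  /* candidate oversampling ratios */

    for (int i = 0; i < 3; i++) {
        double f_out = f_mod / osr[i];        /* output data rate [Hz] */
        double t_gd  = order / (2.0 * f_out); /* approx. group delay [s] */
        double t_set = order / f_out;         /* approx. full settling [s] */
        printf("OSR %4d: ODR %7.1f Hz, group delay %6.3f ms, settle %6.3f ms\n",
               osr[i], f_out, 1e3 * t_gd, 1e3 * t_set);
    }
    return 0;
}
```

Raising OSR from 64 to 1024 in this example cuts noise but moves the group delay from well under a millisecond to several milliseconds, which is why each filter profile must be validated against the latency budget.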

Self-test hooks (preparing for BIT without going off-scope)

  • Reference sanity point: verify the ADC/reference chain is not drifting beyond expected limits.
  • Input short / known input: check AFE + ADC offset behavior using a controlled internal condition.
  • Small excitation perturb: apply a tiny, known excitation change and confirm proportional response (helps isolate sensor vs electronics).
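Two of these hooks, sketched with illustrative pass limits (how the shorted input or the known reference tap is routed, e.g. via an input mux, is device-specific and assumed here):

```c
/* BIT hook sketch for two electrical self-tests. Routing and limits are
 * illustrative assumptions, not a specific device's API. */
#include <stdbool.h>
#include <math.h>

/* (a) Input short: with the AFE inputs shorted, the reading bounds the
 * combined AFE+ADC offset as a fraction of full-scale. */
bool bit_input_short_ok(float code_shorted, float code_fullscale) {
    return fabsf(code_shorted / code_fullscale) < 1.0e-3f;  /* assumed limit */
}

/* (b) Reference sanity: a known internal tap must read near its expected
 * ratio; drift beyond the limit implicates the reference/ADC chain. */
bool bit_ref_sanity_ok(float ratio_measured, float ratio_expected) {
    return fabsf(ratio_measured - ratio_expected) < 2.0e-3f; /* assumed limit */
}
```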
Digital filtering and group delay concept
Figure F6 — Digital filtering reduces noise but introduces group delay; tuning OSR and filters is a noise–latency–bandwidth tradeoff.

Temperature compensation & calibration workflow (factory + in-field)

Temperature compensation turns pressure-chain drift and nonlinearity into a controlled, traceable process: capture data, fit a model, store a versioned calibration, verify it on every power-up, and track in-field drift to decide when re-calibration is required.

Error sources that must be separated (so they can be corrected)

  • Offset (zero): dominates low-q performance; appears as a constant bias that grows into large IAS error when q is small.
  • Gain (scale): shows as proportional error over the range; typically corrected with multi-point pressure steps.
  • Nonlinearity: bends the transfer curve; often corrected with piecewise segments or a 2D table.
  • Temperature drift: offset/gain change with temperature; requires data across the full operating envelope.
  • Hysteresis: different readings for rising vs falling pressure/temperature; must be checked during verification.
  • Aging: slow long-term drift; handled by trend tracking and maintenance thresholds (not by “more ADC bits”).

Compensation strategies (choose by controllability and data size)

  • Piecewise linear: small parameter set and easy verification; best when the curve is mostly linear with mild bends.
  • 2D LUT (pressure × temperature): practical balance of accuracy and validation; common when temperature coupling is strong (see the sketch after this list).
  • Polynomial (limited use): compact for smooth surfaces but harder to bound at edges; use only with strong verification coverage.
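A minimal bilinear 2D-LUT sketch (grid axes and correction values are placeholders; a real table comes from the factory calibration grid and is stored, versioned, in NVM), with edge clamping so out-of-grid inputs hold the boundary correction:

```c
/* 2D LUT compensation sketch: bilinear interpolation over (pressure, temperature).
 * Grid axes and correction values are illustrative placeholders. */
#include <stddef.h>

#define NP 4  /* pressure grid points */
#define NT 3  /* temperature grid points */

static const float p_axis[NP]   = { 0.0f, 2000.0f, 6000.0f, 12000.0f }; /* Pa */
static const float t_axis[NT]   = { -40.0f, 25.0f, 85.0f };             /* degC */
static const float corr[NT][NP] = {                                     /* Pa */
    { -3.1f, -2.4f, -1.0f,  0.6f },
    { -0.2f,  0.0f,  0.3f,  0.9f },
    {  2.0f,  2.6f,  3.4f,  4.5f },
};

/* Find segment i such that axis[i] <= x < axis[i+1], clamped to the edges. */
static size_t seg(const float *axis, size_t n, float x) {
    size_t i = 0;
    while (i + 2 < n && x >= axis[i + 1]) i++;
    return i;
}

float lut2d_correct(float p_raw, float t) {
    size_t i = seg(t_axis, NT, t), j = seg(p_axis, NP, p_raw);
    float ft = (t - t_axis[i]) / (t_axis[i + 1] - t_axis[i]);
    float fp = (p_raw - p_axis[j]) / (p_axis[j + 1] - p_axis[j]);
    if (ft < 0.0f) ft = 0.0f; if (ft > 1.0f) ft = 1.0f;  /* hold table edges */
    if (fp < 0.0f) fp = 0.0f; if (fp > 1.0f) fp = 1.0f;
    float lo = corr[i][j]     + fp * (corr[i][j + 1]     - corr[i][j]);
    float hi = corr[i + 1][j] + fp * (corr[i + 1][j + 1] - corr[i + 1][j]);
    return p_raw - (lo + ft * (hi - lo));  /* subtract interpolated error */
}
```

Edge clamping is a deliberate choice here: extrapolating beyond the calibrated grid is exactly where polynomial fits become hard to bound.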

Calibration data management (traceability is part of accuracy)

  • CalVersionID: a unique identifier tied to parameters, test coverage, and build revision.
  • Integrity: CRC/Hash over parameter blocks; invalid blocks must be detected on boot (see the boot-check sketch after this list).
  • Validity checks: boot-time verification of calibration state, bounds, and reference consistency.
  • Locking rules: parameters are locked after verification; updates require an explicit maintenance path.
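A boot-time validity sketch tying these rules together (the field layout, bounds, and choice of CRC-32 are illustrative assumptions):

```c
/* Boot-time calibration validity sketch: CRC over the parameter block plus
 * bounds checks. Field layout, bounds, and CRC-32 choice are illustrative. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t cal_version_id;  /* CalVersionID: ties params to coverage + build */
    uint32_t timestamp;       /* calibration time tag */
    float    offset_pa;       /* example parameter: zero offset */
    float    gain;            /* example parameter: scale */
    uint32_t crc32;           /* CRC over all preceding bytes */
} cal_block_t;

/* Reflected CRC-32 (poly 0xEDB88320), written bitwise for clarity. */
static uint32_t crc32_calc(const uint8_t *d, size_t n) {
    uint32_t c = 0xFFFFFFFFu;
    while (n--) {
        c ^= *d++;
        for (int k = 0; k < 8; k++)
            c = (c >> 1) ^ (0xEDB88320u & (0u - (c & 1u)));
    }
    return ~c;
}

/* Returns 0 if the block may be applied; nonzero selects a fault path. */
int cal_block_validate(const cal_block_t *b) {
    uint32_t crc = crc32_calc((const uint8_t *)b, offsetof(cal_block_t, crc32));
    if (crc != b->crc32)                               return 1; /* corrupt */
    if (b->gain < 0.5f || b->gain > 2.0f)              return 2; /* out of bounds */
    if (b->offset_pa < -50.0f || b->offset_pa > 50.0f) return 3; /* out of bounds */
    return 0;
}
```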

In-field re-calibration triggers (when to stop trusting yesterday’s fit)

  • Drift threshold exceeded: trend of offset/gain indicators crosses a configured limit.
  • Maintenance interval: scheduled service event forces a calibration validity review.
  • BIT evidence: repeated plausibility failures suggest sensor/AFE changes that compensation can no longer absorb.

Scope note: this section describes triggers and data handling only; it does not expand into system safety standards.

Calibration pipeline (factory steps with “what to measure”)

  1. Setup & stabilize: record ambient temperature, excitation/reference levels, and sensor warm-up state.
  2. Raw capture grid: measure raw counts at multiple pressure points across multiple temperature points.
  3. Model build: fit offset/gain + nonlinearity + temperature terms (choose piecewise or 2D LUT).
  4. Write parameters: store the parameter block to NVM with CRC and boundary metadata.
  5. Checkpoint verify: re-measure at verification points (including rising/falling pressure) and compute residuals.
  6. Lock & tag: assign CalVersionID, lock parameters, and store a calibration timestamp/counter.
  7. Power-cycle self-check: verify the boot-time validity checks and confirm parameters are applied correctly.
  8. Enable drift tracking: initialize trend counters and thresholds used to request maintenance or re-calibration.
Calibration workflow state machine
Figure F7 — Calibration pipeline state machine: acquire data, fit a model, write + verify, lock and version, then track drift in the field to trigger re-calibration.

Health monitoring & BIT/BITE (detect leaks, blockage, sensor drift)

Health monitoring proves the air data output is trustworthy by building an evidence chain: raw pressures → derived air data → consistency checks and electrical self-tests → fault classification → status words and logs that separate pneumatic faults from sensor/electronics issues.

Plausibility checks (physics, dynamics, and cross-channel consistency)

  • Physics bounds: clamp impossible Pt/Ps/q combinations and prevent out-of-range values from silently propagating.
  • Rate-of-change: detect steps or ramps that exceed feasible dynamics; separate real transients from measurement artifacts.
  • Consistency: compare correlated channels (Pt vs Ps trends, redundant sensors if present) to detect incoherent behavior.
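A minimal sketch of the three check families on q (envelope limits are placeholders; real limits come from the flight envelope and sensor spec):

```c
/* Plausibility sketch: physics bounds + rate-of-change + cross-channel
 * consistency on q. All limits are placeholders for flight-envelope values. */
#include <stdbool.h>
#include <math.h>

#define Q_MAX_PA       30000.0f  /* assumed envelope maximum for q */
#define Q_RATE_MAX_PAS  5000.0f  /* assumed max feasible dq/dt [Pa/s] */
#define Q_CONSIST_PA      25.0f  /* assumed Pt-Ps vs q agreement band */

typedef struct { bool bounds_ok, rate_ok, consist_ok; } plaus_flags_t;

plaus_flags_t plausibility_check(float pt, float ps, float q_meas,
                                 float q_prev, float dt_s) {
    plaus_flags_t f;
    /* Physics bounds: q must sit inside the envelope (small negative slack
     * tolerates noise around zero). */
    f.bounds_ok  = (q_meas >= -50.0f) && (q_meas <= Q_MAX_PA);
    /* Rate-of-change: steps faster than any feasible maneuver are artifacts. */
    f.rate_ok    = fabsf(q_meas - q_prev) <= Q_RATE_MAX_PAS * dt_s;
    /* Consistency: an independent q channel must agree with Pt - Ps. */
    f.consist_ok = fabsf(q_meas - (pt - ps)) <= Q_CONSIST_PA;
    return f;
}
```

Each flag feeds the evidence chain separately; the classifier, not the check itself, decides whether the pattern points at the pneumatic path or the electronics.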

Sensor/AFE self-test evidence (electrical layer)

  • Excitation monitor: detect excitation droop, overrange, or mismatch that changes sensor sensitivity.
  • Reference monitor: validate ADC/reference sanity so “pressure changes” are not reference drift.
  • Open/short detect: detect wiring or sensor failures with explicit fault flags.
  • Noise-floor anomaly: rising noise or spectral shape changes often indicate water ingress, loose connections, or interference.

Pneumatic fault signatures (use dynamics to distinguish faults)

  • Blockage: response becomes sluggish and phase-lag increases; changes look like “extra filtering” that was never configured.
  • Leak: inability to sustain pressure difference; steady-state bias and time-dependent decay become visible.
  • Water ingress: noise + drift + sudden steps with slow recovery; behavior often correlates with temperature changes.
  • Icing: abrupt dynamics changes tied to temperature conditions; may appear as intermittent blockage-like patterns.

Alert grading (principles only)

  • Advisory: suspicious evidence, but data may remain usable; prioritize logging and trend tracking.
  • Caution: reduced confidence or degraded dynamics; recommend maintenance or degraded use modes.
  • Warning: strong evidence of invalid air data; require immediate system handling (details are out of scope here).

Scope note: this section explains evidence and grading logic only; system-level actions and standards are not expanded here.

Fault → Observable → Test method → Action (field-usable checklist)

Fault | Observable | Test method | Action
Blockage | Slow response, increased phase lag | Step/impulse response signature; compare to baseline | Raise confidence flag; maintenance recommendation
Leak | Cannot sustain q; time-dependent decay | Hold test / decay metric from logged segments | Log event; caution if persistent
Water ingress | Noise jumps, drift, step-like glitches | Noise-floor monitor + recovery-time tracking | Caution; maintenance; increase sampling/logging
Icing pattern | Intermittent blockage-like dynamics vs temperature | Condition correlation (temp window) + dynamics signature | Caution/warning by evidence strength
Sensor drift | Slow bias growth; residuals increase | Trend counters + checkpoint residual tests | Request re-cal; advisory → caution if persistent
Excitation anomaly | Sensitivity change; correlated channel shifts | Excitation monitor + plausibility mismatch | Caution; isolate electrical root cause
Reference drift | Multiple channels shift together | Reference sanity point + internal checks | Warning if unbounded; force maintenance
Open/short | Hard rail/zero, stuck behavior | Explicit open/short detection flags | Warning; invalidate affected channel
Noise-floor jump | SNR drop; unstable derived outputs | Noise metric + consistency failure count | Advisory/caution; log evidence for root cause
Health monitoring evidence chain
Figure F8 — Health monitoring evidence chain: raw data and derived air data feed consistency checks and self-tests, which produce evidence for fault classification and status words.

EMC/Lightning/DO-160 considerations for the pressure measurement chain

The goal is not only “no damage,” but stable air data integrity: transients must not create false spikes, false BIT alarms, or long recovery tails. Protection, filtering, and digital guards must be designed together with the latency budget and evidence logging.

How disturbances enter the chain (entry → target → symptom)

  • Harness coupling: transient couples into sensor leads → AFE input saturates / slow recovery → short spikes, step errors, or “stuck” segments.
  • Power rail disturbance: excitation/reference droops or spikes → ratiometric assumption breaks → multiple channels shift together.
  • Ground bounce: reference point moves during high di/dt → equivalent input bias appears → low-q regions show amplified IAS error.
  • PCB loop pickup: high-impedance nodes collect RF → noise floor rises → BIT noise and consistency checks trip more often.

Protection and filtering side effects (common failure mode: “over-protect”)

  • TVS parasitic capacitance: can load sensitive differential nodes → bandwidth loss and phase lag → slower air data response.
  • RC too large: improves immunity but “filters away” valid dynamics → real changes look like outliers or implausible behavior.
  • Series resistance too large: helps with surges but increases thermal noise and bias error → hurts low-q accuracy.
  • Wrong CM/DM placement: can convert common-mode disturbance into differential error → larger readout jitter.

Engineering rule: define a response/latency budget first, then choose protection values that meet both immunity and dynamics.

Readout resilience (prevent false spikes and false alarms)

1) Sampling window and timing guards
  • Use defined update windows to avoid publishing during known transient recovery intervals.
  • Require stability timers before “healthy” status is reasserted after a disturbance.
2) Limiting and soft clamping (digital)
  • Clamp impossible values at the physics boundary so spikes cannot propagate into derived air data.
  • Apply “rate limits” to reject non-physical jumps while preserving evidence for logs.
3) Outlier rejection aligned with consistency checks
  • Reject isolated points only when cross-check evidence supports “measurement artifact.”
  • Never “silently clean” data; attach confidence and fault flags.
4) Freeze → flag → recover
  • Freeze outputs on severe transients and explicitly publish degraded confidence.
  • Recover using a defined re-entry condition (stable time + plausibility + self-test OK).
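A compact sketch of the freeze → flag → recover guard (state names, the stability window, and publishing the last good value while frozen are illustrative choices):

```c
/* Freeze -> flag -> recover sketch. Outputs are held and confidence degraded
 * on severe evidence; "healthy" returns only through a defined re-entry
 * condition. State names and the stability window are illustrative. */
#include <stdbool.h>

typedef enum { ST_HEALTHY, ST_FROZEN, ST_RECOVERING } guard_state_t;

typedef struct {
    guard_state_t state;
    float         held;          /* last good value, published while frozen */
    unsigned      stable_ticks;  /* consecutive good samples during re-entry */
} guard_t;

#define STABLE_TICKS_REQ 50u     /* assumed re-entry stability window */

float guard_step(guard_t *g, float fresh, bool severe_event,
                 bool plausible, bool self_test_ok, bool *degraded) {
    switch (g->state) {
    case ST_HEALTHY:
        if (severe_event) g->state = ST_FROZEN;  /* freeze on strong evidence */
        else              g->held  = fresh;      /* track last good value */
        break;
    case ST_FROZEN:
        if (!severe_event) { g->state = ST_RECOVERING; g->stable_ticks = 0; }
        break;
    case ST_RECOVERING:  /* stable time + plausibility + self-test OK */
        if (severe_event || !plausible || !self_test_ok) g->stable_ticks = 0;
        else if (++g->stable_ticks >= STABLE_TICKS_REQ)  g->state = ST_HEALTHY;
        break;
    }
    *degraded = (g->state != ST_HEALTHY);        /* publish confidence */
    return (g->state == ST_HEALTHY) ? fresh : g->held;
}
```

The key property is that "healthy" is re-entered only through the defined condition, never immediately after the disturbance ends.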

Evidence logging (turn EMI into a maintainable asset)

  • Event count: how often disturbances occur, optionally by severity band.
  • Max deviation: largest pressure/air-data offset observed during an event window.
  • Recovery time: time to return to stable, plausible behavior after the event.

Do / Don’t (short, field-usable)

Do
  • Place protection by node function (input, reference, excitation), not “wherever there is space.”
  • Monitor excitation and reference so multi-channel shifts are detectable.
  • Choose RC with a defined dynamics/latency budget.
  • Freeze outputs on severe evidence and publish degraded confidence.
  • Log count, max deviation, and recovery time for maintenance.
Don’t
  • Attach a high-capacitance TVS directly across a sensitive differential node without checking bandwidth.
  • Increase RC “until it stops failing” while ignoring response delay.
  • Swallow transient artifacts silently; this breaks fault isolation and trend evidence.
  • Declare “healthy” immediately after a disturbance without stability timers.
  • Assume redundancy alone fixes EMI; shared references and rails can fail together.
EMC entry paths and protection node map
Figure F9 — EMC entry paths and protection node map: disturbances enter via harness, rails, and ground; protection plus digital guards prevent false spikes and enable traceable recovery.

Redundancy architecture & fault isolation (single/dual/triple channels)

Redundancy only adds safety when it includes independent evidence, cross-check rules, and isolation logic. The architecture must prevent common-cause failures (shared rail, shared reference, shared pneumatic path) from making “consistent but wrong” outputs.

Recommended architecture summary (readable for engineering + procurement)

  • Single-channel: lowest SWaP and complexity; requires strong BIT and evidence logging to maintain trust.
  • Dual-channel: cross-compare and isolate; excellent for drift and soft faults; requires a clear “primary/backup” strategy.
  • Triple-channel: voting enables robust single-fault tolerance; must still mitigate common-cause and shared-resource failures.

What is redundant (and which faults it actually covers)

  • Port/pneumatic redundancy: protects against port mismatch patterns, local blockage/leak signatures, and pneumatic anomalies.
  • Sensor redundancy: protects against sensor drift, saturation, and sensor-side failures.
  • Electronics redundancy: protects against AFE/ADC/reference/excitation anomalies and electrical self-test failures.

Rule: redundancy must be placed upstream of the targeted failure mode; otherwise it cannot isolate root cause.

Cross-check rules and isolation (evidence → trigger → isolate → output)

  • Evidence: channel deltas, rate-of-change mismatch, delay/response mismatch, and noise-floor mismatch.
  • Trigger: time-window counters (not a single sample) to prevent chatter and false isolations.
  • Isolation: mark a channel suspect, remove it from voting, and freeze confidence until re-verified.
  • Output: publish primary/backup or voter result with fault flags and a confidence level (degraded mode when required).
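A minimal 2oo3 sketch showing time-windowed miscompare counters instead of single-sample isolation (the disagreement limit and isolation window are illustrative):

```c
/* 2oo3 voting sketch: median select plus time-windowed isolation counters.
 * A channel is isolated only after persistent disagreement, never on a
 * single sample. Limits are illustrative. */
#include <stdbool.h>
#include <math.h>

#define DELTA_MAX_PA  50.0f  /* assumed channel disagreement limit */
#define ISOLATE_TICKS 100u   /* assumed persistence window (samples) */

typedef struct { unsigned miscompare[3]; bool isolated[3]; } voter_t;

static float median3(float a, float b, float c) {
    if (a > b) { float t = a; a = b; b = t; }
    if (b > c) { float t = b; b = c; c = t; }
    if (a > b) { float t = a; a = b; b = t; }
    return b;
}

float vote_2oo3(voter_t *v, const float ch[3], bool *degraded) {
    float m = median3(ch[0], ch[1], ch[2]);
    for (int i = 0; i < 3; i++) {
        if (v->isolated[i]) continue;
        /* Accumulate evidence on disagreement; decay it on agreement. */
        if (fabsf(ch[i] - m) > DELTA_MAX_PA) v->miscompare[i]++;
        else if (v->miscompare[i] > 0)       v->miscompare[i]--;
        if (v->miscompare[i] >= ISOLATE_TICKS) v->isolated[i] = true;
    }
    float sum = 0.0f; int live = 0;               /* output from healthy set */
    for (int i = 0; i < 3; i++)
        if (!v->isolated[i]) { sum += ch[i]; live++; }
    *degraded = (live < 3);
    if (live == 3) return m;          /* full voter: median */
    if (live > 0)  return sum / live; /* degraded: mean of survivors */
    return m;                         /* all suspect: flag-only fallback */
}
```

Note what this sketch cannot do: if a shared reference moves all three channels together, the median moves with them, which is why shared-resource monitoring (below) is not optional.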

Common-cause failure (the hidden redundancy killer)

  • Shared PSU: all channels drift or reset together → voting cannot detect “everyone wrong.”
  • Shared reference/excitation: channels remain consistent while the entire scale moves → the most dangerous failure mode.
  • Shared pneumatic layout: same blockage/leak path affects all sensors → consistent but invalid dynamics.

Mitigation principle: monitor shared resources explicitly and design independence where it matters (separate sensing/monitoring paths).

Redundancy voting and fault isolation block diagram
Figure F10 — Redundancy voting and fault isolation: channels feed evidence checks and a voter; fault flags isolate bad channels while common-cause points are explicitly highlighted.

Validation & production checklist (what proves it’s done)

“Done” means the pressure-to-air-data chain is verified across pressure/temperature states, survives injected pneumatic faults and electrical disturbances without false outputs, and remains traceable by calibration versions, serial identity, BIT evidence, and lifetime trend records.

R&D verification gate (coverage-first)

Each item below is written as: Method → Pass criteria → Record. This gate proves the design intent (accuracy, dynamics, fault detection, immunity, and evidence completeness).

  1. Accuracy matrix (pressure × temperature): Method: sweep multi-point pressures at multiple temperature setpoints. Pass criteria: max error and temperature drift remain within allocated budget per region (low-q and high-q). Record: pressure point IDs, temperature IDs, raw samples, compensated outputs, error summary.
  2. Repeatability and noise floor: Method: hold constant pressure/temperature and collect repeated windows. Pass criteria: repeatability and RMS noise stay under budget; no periodic interference peaks dominate. Record: window stats (mean/RMS/peak-to-peak), spectral flag summary, filter configuration.
  3. Hysteresis and settling: Method: approach the same pressure from up-sweep and down-sweep with defined dwell times. Pass criteria: hysteresis error and settling tail do not exceed the spec limit. Record: sweep direction tags, settle time, delta between approach directions.
  4. Step response (dynamic behavior): Method: apply controlled step changes (or pneumatic equivalent) and measure rise/settle. Pass criteria: response time and overshoot are within the latency/damping budget for intended air data dynamics. Record: step timestamps, rise/settle metrics, group delay estimate.
  5. Latency vs digital filter settings: Method: validate multiple ADC OSR/filter profiles used by the product configuration. Pass criteria: each profile meets the noise/latency/bandwidth trade target; no profile violates the “minimum dynamics” requirement. Record: filter profile ID, measured group delay, noise, bandwidth proxy metrics.
  6. Pneumatic leak injection: Method: introduce controlled leak paths (or calibrated bleed) and observe plausibility + classifier outputs. Pass criteria: leak signature is detected, classified, and surfaced as a consistent status (no silent bias). Record: leak condition ID, detection time, classification result, confidence trajectory.
  7. Blockage/slow-response injection: Method: add restriction to produce delayed response and reduced slew. Pass criteria: abnormal dynamics are flagged (slow response), not misinterpreted as normal flight changes. Record: restriction ID, delay metrics, channel-to-channel response mismatch evidence.
  8. Water ingress / intermittency simulation: Method: emulate noise bursts, micro-discontinuities, or abrupt bias changes consistent with moisture effects. Pass criteria: outliers are handled via evidence-driven rejection; status/flags are asserted; recovery is well-defined. Record: event window, max deviation, recovery time, “freeze/recover” transitions.
  9. Electrical disturbance: rail perturbations: Method: inject controlled droop/spike profiles on excitation/reference/rails (within safe test bounds). Pass criteria: disturbances do not produce unbounded output; confidence is degraded when needed; recovery is bounded. Record: rail waveform ID, output deviation, recovery time, associated event counter increments.
  10. ESD/transient robustness at the readout boundary: Method: apply transient stress representative of interface-level events and monitor output stability. Pass criteria: no persistent latch-up behavior; no long “stuck” offsets; BIT evidence captures the event. Record: event count, max deviation, duration to stable/healthy, fault flag timeline.
  11. Traceability completeness audit: Method: verify every R&D run produces a complete evidence bundle. Pass criteria: each unit/run has serial identity, calibration version, configuration hash, and BIT logs. Record: SN, CalVer, ConfigHash, test report ID, retention policy tag.

Production test gate (time-bounded subset)

Production tests are a reduced, repeatable subset that catches assembly defects, calibration mistakes, and gross drift without full environmental sweeps.

  1. Assembly electrical sanity: Method: open/short checks on sensor leads and front-end input paths. Pass criteria: no shorts/opens; impedance within expected bounds. Record: continuity results, lead ID mapping, timestamp.
  2. Excitation & reference monitor check: Method: measure excitation/reference rails via internal monitors. Pass criteria: values within tolerance; drift is stable over a short window. Record: Vexc/Vref readings, min/max over window, monitor status flags.
  3. Quick-point calibration (reduced points): Method: calibrate at a minimal set of pressure points that anchor offset and scale. Pass criteria: post-calibration error at anchor points meets threshold. Record: CalVer, coefficients/table ID, anchor errors.
  4. Noise-floor spot check: Method: collect a short stationary window at a defined condition. Pass criteria: RMS noise below threshold; no abnormal periodic components detected by a simple signature. Record: RMS, peak-to-peak, signature status, filter profile ID.
  5. Fast dynamics spot check: Method: apply a short step-like pressure perturbation (fixture-based). Pass criteria: response is not excessively slowed by RC/filter mis-build; delay stays inside a production threshold. Record: response time metric, pass/fail, fixture ID.
  6. Fault-detection spot check (restricted): Method: apply a repeatable “small anomaly” fixture mode (restriction/leak surrogate). Pass criteria: classifier enters expected warning/advisory state; no silent bias. Record: fault code, detection time, confidence output.
  7. Disturbance behavior sanity: Method: inject a mild, repeatable electrical perturbation (safe bounds). Pass criteria: outputs remain bounded; recovery is prompt; event counter increments. Record: event counter delta, max deviation, recovery time.
  8. Serialization and lock: Method: write serial identity and calibration metadata; lock using the product policy. Pass criteria: readback matches; unauthorized overwrites are blocked. Record: SN, CalVer, ConfigHash, lock state.
  9. Report export (unit-level): Method: generate a compact unit record for traceability. Pass criteria: required fields are present. Record: unit report ID, checksum, retention tag.
  10. Golden-unit drift control (process): Method: validate fixture stability by periodically testing a golden unit. Pass criteria: golden unit stays within control limits; out-of-control triggers fixture review. Record: golden trend chart ID (stored externally), control limits, last in-control timestamp.

Field / maintenance gate (diagnose + decide + document)

  1. Status-first triage: Method: check status word and confidence before trusting air data outputs. Pass criteria: “healthy” requires stable timers satisfied; “degraded” requires mitigation path. Record: status timeline, confidence level, last recovery timestamp.
  2. Event evidence review: Method: read event counters and last-N event summaries. Pass criteria: abnormal increases trigger inspection, not silent acceptance. Record: event count, max deviation, recovery time, severity band.
  3. Pneumatic anomaly indicators: Method: interpret slow response, inability to hold pressure difference, or abnormal dynamics mismatch. Pass criteria: pneumatic fault signatures map to consistent advisory/caution/warning policy. Record: dynamics mismatch metrics, suspected fault class, confidence.
  4. Electronics drift indicators: Method: look for multi-channel coherent shifts tied to reference/excitation monitors. Pass criteria: coherent drift triggers “shared resource” suspicion rather than per-sensor replacement. Record: Vexc/Vref monitor snapshots, correlated offset magnitude.
  5. Recovery behavior audit: Method: check whether freeze/recover occurs as designed during disturbances. Pass criteria: bounded recovery time; no repeated chatter. Record: freeze count, recovery time stats, re-entry stability timer value.
  6. Calibration version continuity: Method: verify CalVer and ConfigHash match approved baseline. Pass criteria: mismatch is treated as actionable discrepancy. Record: CalVer chain, ConfigHash, approval ID reference.
  7. Recalibration triggers: Method: apply rule-based triggers (drift threshold, maintenance interval, BIT suggestions). Pass criteria: recalibration is performed with versioning and validation steps. Record: recalibration trigger code, new CalVer, validation outcome.
  8. Fault isolation decision record: Method: document replace/repair actions based on evidence, not raw value alone. Pass criteria: each action references a minimum evidence bundle. Record: action code, evidence IDs, before/after status summary.
  9. Lifetime trend summary: Method: track slow changes (noise floor, drift, event frequency). Pass criteria: trend thresholds trigger preventative maintenance. Record: trend metrics, slope estimate, observation window tag.
  10. Exportable maintenance package: Method: generate a compact field report for system-level review. Pass criteria: report includes SN, CalVer, key evidence stats, and last event summary. Record: field report ID, checksum, retention tag.

Representative BOM examples (part numbers as references)

These are example parts frequently used for pressure readout chains. Equivalent-class parts are acceptable if they meet the same drift/noise/latency/self-test needs and pass the same R&D and production gates (qualification and temperature range must be verified per program).

  • Differential pressure sensors (examples): TE Connectivity MS4525DO; Honeywell TruStability HSC/SSC series; NXP MPXV7002DP.
  • Absolute pressure sensors (examples): Honeywell TruStability HSC/SSC series; NXP MPXH6115A (family example); TE Connectivity MS5611 (barometric/altimeter-class example).
  • Instrumentation amplifier / PGA (examples): TI INA188, INA333; Analog Devices AD8421, AD8237.
  • 24-bit ADC (ΣΔ) options (examples): TI ADS124S08; Analog Devices AD7124-4/AD7124-8, AD7172-2.
  • Reference / monitor (examples): Precision references such as TI REF50xx series; Analog Devices ADR45xx series; rail monitors such as TI TPS37xx family (function-class example).
  • Input protection (examples): Littelfuse SMF/SMBJ TVS families; Nexperia PESD ESD diodes (choose low-capacitance for sensitive nodes).
  • Nonvolatile memory for calibration/versioning (examples): Serial FRAM such as Fujitsu/Infineon MB85RS family; SPI EEPROM families (program policy dependent).

Note: part selection for certified avionics may require specific qualification flows; the checklist above remains valid regardless of vendor.

Validation matrix: pressure × temperature × state
Figure F11 — Test matrix: pressure points × temperature points × state (Normal / Fault injection / Disturbance) mapped to pass criteria and required traceability records.


FAQs (Air Data Computer)

These FAQs focus on the practical failure patterns and verification evidence that make air-data outputs trustworthy: separating pneumatic faults from electronics drift, controlling latency vs noise, and reporting health status with traceable logs.

1) What’s the practical difference between airspeed error from sensor drift vs pneumatic blockage?
Sensor drift typically shows a slow bias change that correlates with temperature, time, or shared references and may appear across many operating points. Pneumatic blockage more often shows abnormal dynamics: delayed response, reduced slew rate, and inconsistency during rapid changes. Confirm by comparing step-response metrics, channel-to-channel dynamics, and health flags (slow-response vs bias drift), not by raw airspeed alone.
2) Why does “24-bit ADC” not guarantee accurate air data?
“24-bit” describes code resolution, not end-to-end accuracy. Accuracy is limited by sensor offset/gain drift, reference stability, analog front-end noise and 1/f behavior, linearity, PCB leakage, and calibration quality. A high-resolution ΣΔ ADC can still produce biased air data if the excitation/reference path drifts or if compensation is weak. Use a total error budget and validation matrix to prove accuracy across pressure and temperature.
3) How should digital filter settings trade off noise vs latency for airspeed/vertical-speed response?
Higher OSR and heavier filtering reduce noise, but increase group delay and slow response. The correct setting starts from the maximum allowable latency for airspeed/VS dynamics, then spends the remaining margin on noise reduction. A common approach is a “dual-profile” strategy: a faster, noisier path for transient response and a slower, cleaner path for steady-state accuracy. Validate with step-response and delay measurements, not only RMS noise.
4) When should ratiometric measurement be used for bridge pressure sensors?
Ratiometric measurement is preferred when a bridge sensor output scales with its excitation. By referencing the ADC to the same excitation (or tightly tracked reference), supply variation largely cancels in the ratio. This helps when excitation droop or regulator tolerance would otherwise look like pressure change. Ratiometric design does not remove front-end offset drift, sensor hysteresis, or thermal nonlinearity—those still require calibration and monitoring of Vexc/Vref health.
5) How can the ADC detect a slow leak in pitot/static lines?
A slow leak often appears as an inability to sustain a pressure difference, a gradual bias shift under steady conditions, or a mismatch between expected and observed dynamics. Detection is typically evidence-driven: plausibility checks, residual trends, and confidence accumulation over time rather than a single threshold. Useful signals include decay-rate estimates, channel-to-channel consistency, and “hold” tests during controlled conditions. The output should report a leak-suspect status with supporting event evidence.
6) What factory calibration steps are essential to control temp drift across the flight envelope?
Essential steps include establishing offset and span anchors, characterizing temperature dependence at multiple setpoints, and validating nonlinearity with independent check points. A robust workflow writes coefficients/tables, verifies results, then locks and versions the calibration data (CalVer) with a configuration hash. Production should include a reduced-point calibration plus a quick stability/noise sanity check. Each unit must retain traceable records: serial identity, CalVer, and a pass/fail evidence summary.
7) How do you validate compensation tables without overfitting and causing field drift surprises?
Avoid validating only on the same points used to fit the model. Reserve hold-out points, validate across temperatures and pressures not used in fitting, and check sensitivity to noise and repeatability scatter. Keep the model complexity appropriate: overly flexible tables can “learn” fixture quirks and inflate field surprises. Cross-unit testing and re-verification after thermal cycling helps expose brittle fits. Finally, enforce an acceptance matrix and record the model version so field anomalies can be traced to a specific calibration build.
8) Why can adding TVS/RC protection worsen measurement dynamics or stability?
Protection parts change the analog network. TVS diodes add parasitic capacitance that can reduce bandwidth and distort step response, while large RC values can slow the signal and increase group delay. In differential paths, asymmetric components can degrade CMRR and inject apparent noise. High source impedance may also impact ADC settling or front-end stability, especially near the anti-alias corner. Validate protection by measuring step response, recovery time after disturbances, and stability across the full temperature range.
9) What health-monitoring outputs should be reported alongside air data (confidence, flags, BIT)?
In addition to air data values, report a status word with channel validity, confidence level, and fault class (e.g., slow-response, bias drift, disturbance event). Include BIT summary, event counters, and “last-N event” windows capturing max deviation and recovery time. Operationally useful metadata includes filter profile ID, calibration version (CalVer), and configuration hash. Status should be interpretable without decoding long logs: it must tell whether the output is healthy, degraded, or invalid and why.
10) How do dual/triple redundant channels avoid common-cause failures?
Redundancy works best when failure sources are not shared. Avoid common-cause failures by separating excitation/reference resources, limiting shared rails, and routing pneumatic paths to prevent a single blockage or leak from affecting all channels. Diversity can be physical (separation), electrical (independent monitors), and sometimes sensing (different ports or sensor technology). Voting should be paired with consistency checks that detect “shared drift” signatures; when common-cause is suspected, the system should enter a defined degraded mode rather than silently output averaged bias.
11) What are the must-run production tests to ensure repeatability unit-to-unit?
Minimum production coverage typically includes: sensor/lead open-short checks, excitation and reference monitor verification, reduced-point zero/span calibration, a short noise-floor window, and a fast dynamics spot-check to catch mis-built filtering. Add one repeatable fault-detection spot-check (restriction/leak surrogate) and a mild disturbance sanity check with bounded recovery. Finally, serialize and lock calibration metadata (SN, CalVer, ConfigHash) and export a compact unit report. A golden-unit control run helps detect fixture drift before it becomes a yield issue.
12) What event logs are most useful for maintenance when an air-data anomaly is reported?
The most useful logs are time-stamped and evidence-rich: event type, channel status and confidence, snapshots of raw pressures and derived air data, max deviation, and recovery time. Include filter profile ID, temperature, excitation/reference monitor readings, channel voting outcome, and the calibration/configuration identifiers (CalVer and ConfigHash). Storing last-N event windows (plus counters) enables fast triage: whether the anomaly matches pneumatic dynamics (slow response), electronics drift (coherent bias), or disturbance behavior (freeze/recover with short duration).