
Integrated Navigation: GNSS + INS Fusion & Integrity


Integrated Navigation (GNSS + INS) combines GNSS measurements with high-rate inertial propagation to deliver continuous position, velocity, attitude, and timing—even through brief GNSS outages. A well-engineered fusion stack depends on correct initialization, strict time alignment, robust gating/mitigation, and integrity flags so outputs remain traceable and safe to use.

H2-1 · What Integrated Navigation Owns (GNSS+INS in practice)

This section defines the owned scope of GNSS+INS integration: what the navigation engine must output, what it must guarantee, and how success is measured—without drifting into GNSS anti-jam RF design or IMU analog front-end circuitry.

Extractable Definition

Integrated navigation fuses GNSS measurements with inertial propagation to produce position, velocity, attitude, and time with quantified uncertainty. It owns time alignment, consistency checks, and integrity flags so the system degrades gracefully during outages, interference, or suspect GNSS updates. The output is a state product—not a single coordinate.

What the system must output (engineering deliverables)

A GNSS+INS solution is not “a latitude/longitude.” A usable avionics navigation product is a state package with confidence, health, and traceability.

  • P / V / A: Position, velocity, attitude (with a defined reference frame and units).
  • Time state: A consistent time tag for the solution (for sensor correlation and logging).
  • Uncertainty: Covariance (or equivalent uncertainty metrics) for each output component.
  • Integrity / protection: Alert limits, protection levels, and validity flags (navigation is safe only when it can say “do not use”).
  • Mode & health: Normal / degraded / recovery mode, sensor health, gating outcomes, event counters.
Key acceptance metrics: drift rate under GNSS loss · re-acquire time · re-convergence time · innovation consistency · false alarm rate · availability vs integrity.
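The deliverables above can be captured as a single state product; a minimal Python sketch with illustrative field names (a real interface would carry full covariance matrices and richer health telemetry):

```python
from dataclasses import dataclass, field
from enum import Enum

class NavMode(Enum):
    NORMAL = "normal"
    DEGRADED = "degraded"
    RECOVERY = "recovery"
    REINIT = "reinit"

@dataclass
class NavState:
    """One fused navigation output: a state product, not a single coordinate."""
    t: float            # solution time tag [s]
    position: tuple     # e.g. (lat, lon, alt) in a declared frame
    velocity: tuple     # [m/s], navigation frame
    attitude: tuple     # (roll, pitch, yaw) [rad]
    covariance: list    # uncertainty for each output component
    mode: NavMode = NavMode.NORMAL
    valid: bool = True  # "do not use" must be expressible
    event_counters: dict = field(default_factory=dict)

    def usable(self) -> bool:
        # Navigation is safe only when it can also say "do not use".
        return self.valid and self.mode in (NavMode.NORMAL, NavMode.DEGRADED)
```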

Why GNSS+INS is used (the practical ownership boundary)

GNSS provides absolute measurement updates but can fail in high-dynamics, obstructed, or interference-prone environments. INS provides continuous propagation but drifts over time. Integrated navigation owns the “glue” that makes the pair operational:

  • Continuity: Maintain navigation through short GNSS gaps (with controlled drift).
  • Consistency: Prevent bad GNSS updates from corrupting the inertial solution (gating / weighting).
  • Integrity: Produce usable flags and protection metrics, not just estimates.
  • Time coherence: Align multi-rate sensors so residuals represent physics, not timestamp error.
Out of scope on this page: detailed anti-jam RF/array methods, GNSS receiver front-end circuit design, precision clock phase-noise theory, and IMU analog front-end circuits. This page uses only the minimum GNSS/IMU concepts needed to design fusion, time alignment, integrity, and validation.

How “done” is measured (acceptance metrics that cannot be faked)

Metrics should reflect operational behavior rather than lab-only accuracy. The minimum set below supports both engineering closure and flight-test reporting.

  • GNSS-outage drift: position/velocity/heading drift rate during defined outage scripts (e.g., 10–60 s).
  • Recovery: time to regain valid navigation after GNSS returns (re-acquire + re-converge).
  • Consistency: innovation/residual statistics stay within gates during normal operation (no hidden divergence).
  • Integrity behavior: alert flags trigger when inputs are inconsistent; availability vs integrity is explicitly traded.
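These metrics can be reduced from logged outage scripts with straightforward estimators; a minimal sketch (function names illustrative), where `times` and `errors` would come from a truth-referenced replay:

```python
def drift_rate(times, errors):
    """Least-squares slope of position error vs time during a scripted outage [m/s]."""
    n = len(times)
    t_mean = sum(times) / n
    e_mean = sum(errors) / n
    num = sum((t - t_mean) * (e - e_mean) for t, e in zip(times, errors))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

def recovery_time(times, valid_flags, gnss_return_t):
    """Time from GNSS return until the solution is flagged valid again [s]."""
    for t, ok in zip(times, valid_flags):
        if t >= gnss_return_t and ok:
            return t - gnss_return_t
    return float("inf")  # never recovered within the log
```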
Figure F1 — Integrated navigation information flow (owned scope)
GNSS measurements + IMU propagation + fusion + integrity checks → state outputs with uncertainty and health flags.
[Diagram: GNSS observables (with C/N0 and AGC quality flags) and high-rate IMU samples feed time-tag/latency alignment, INS propagation, and the fusion filter (EKF/UKF); an integrity monitor produces protection levels and validity flags alongside the navigation outputs with uncertainty and health.]

H2-2 · Coupling Architectures: Loose vs Tight vs Deep (and when each wins)

Coupling is not a buzzword; it defines where the measurement model lives. This section turns “loose/tight/deep” into a decision method based on coverage, dynamics, compute budget, and certification/debug needs.

The core difference (what enters the fusion filter)

The three architectures differ by the measurement layer used during fusion. That layer determines robustness under weak coverage—and also determines engineering cost and debug visibility.

  • Loose coupling: GNSS produces PVT first; the fusion filter consumes PVT as measurements.
  • Tight coupling: the fusion filter consumes GNSS raw observables (e.g., pseudorange/doppler) directly with the INS state.
  • Deep coupling: coupling extends closer to tracking/estimation loops (concept only here); highest potential robustness with highest integration risk.
Deep coupling is mentioned only as a system choice. Implementation details belong to the GNSS receiver / anti-jam pages, not here.
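The difference in measurement layer can be made concrete with two hypothetical measurement functions: loose coupling predicts PVT, while tight coupling predicts raw pseudoranges from the INS state plus a clock-bias state. This is a sketch only; real observation models add atmospheric, ephemeris, and lever-arm terms.

```python
import math

def h_loose(state):
    """Loose coupling: the filter consumes receiver-computed PVT directly."""
    return {"position": state["position"], "velocity": state["velocity"]}

def h_tight(state, sat_positions, c=299_792_458.0):
    """Tight coupling: the filter predicts raw observables (pseudoranges).

    Predicted pseudorange = geometric range + receiver clock bias (as range).
    """
    clock_range = c * state["clock_bias"]  # clock bias is an estimated state
    return [math.dist(state["position"], sat) + clock_range
            for sat in sat_positions]
```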

When each wins (practical boundaries)

Selection should be driven by environment and verification constraints rather than ambition. The boundaries below reflect common aerospace/mission integration realities.

  • Loose coupling wins when certification, modularity, and fast debug matter most—while coverage is generally healthy.
  • Tight coupling wins when satellites drop below “comfortable” levels (urban canyon, masking, high dynamics) and measurement consistency must be exploited.
  • Deep coupling becomes relevant only when extreme weak-signal/interference conditions dominate and the program accepts higher integration and validation cost.
Decision factors: coverage stress (masking) · dynamics (high accel/turn) · compute budget · debug visibility · certification burden.

Why tight coupling can be harder (failure modes to expect)

Tight coupling usually fails for engineering reasons—not for “math reasons.” The following are the most common causes of painful bring-up and unstable behavior.

  • Observation model fragility: lever-arm, coordinate transforms, clock states, and measurement assumptions become explicit; small mistakes look like “random residuals.”
  • Time alignment sensitivity: milliseconds of latency or wrong time tags can create systematic residual bias that the filter interprets as motion or sensor bias.
  • Tuning & observability traps: too many states or incorrect process noise can cause divergence or overconfidence (appearing stable until it fails abruptly).
  • Reduced debug visibility: the GNSS PVT is no longer the primary debug object; innovation statistics and gating become the primary truth.

A robust program treats coupling choice as a verification plan choice: loose coupling supports clearer black-box tests; tight coupling requires residual-driven diagnostics from day one.

A decision recipe (use this to avoid “deep for prestige”)

  • Step 1 — Coverage reality: if extended masking/weak coverage is expected, evaluate tight coupling; otherwise start loose.
  • Step 2 — Verification constraints: if certification/debug speed dominates, loose coupling is the default.
  • Step 3 — Time-tag discipline: if accurate time tagging/latency control is not guaranteed, tight coupling risk increases sharply.
  • Step 4 — Budget & schedule: deep coupling is justified only with explicit weak-signal/interference requirements and a heavier validation budget.
This page will later map these choices into filter design (H2-4), alignment (H2-5), and time-tagging/latency control (H2-6). Tight coupling without time discipline is a common root cause of “mysterious drift.”
Figure F2 — Loose vs Tight vs Deep coupling (measurement layer comparison)
The key difference is what enters fusion: PVT vs raw observables vs deeper tracking-level coupling (concept only).
[Diagram: loose, tight, and deep coupling compared side by side — loose fuses GNSS PVT (easiest debug visibility), tight fuses raw observables (key sensitivity: time-tag and latency alignment), deep extends to tracking-level coupling (highest integration burden and validation cost); time-alignment sensitivity increases from loose to deep.]

H2-3 · Strapdown Mechanization + Error Budget (only what fusion needs)

Strapdown mechanization is used here only as the prediction model inside GNSS+INS fusion: propagate state and covariance at IMU rate, then correct with GNSS updates. The goal is to understand how IMU error parameters become states—and why drift follows predictable shapes during GNSS loss.

Engineering takeaway

During GNSS outages, integrated navigation quality is dominated by how IMU error states are modeled and propagated. Small biases are repeatedly integrated: bias → attitude/acceleration error → velocity error → position drift. A fusion filter works only when those error sources are represented as states and the drift behavior matches the predicted covariance growth.

The minimal mechanization loop (no textbook detours)

The mechanization used by fusion can be reduced to a practical loop that runs at IMU rate:

  • Gyro integration updates attitude (body → navigation).
  • Specific force (accelerometers) is rotated into the navigation frame.
  • Velocity is updated by integrating the rotated specific force (plus a gravity model, in concept).
  • Position is updated by integrating velocity.
  • Covariance grows according to process noise and error-state dynamics.

This page does not cover IMU analog circuits or sensor physics. Only the minimum needed to design and debug fusion is included.
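The loop above can be sketched in a simplified planar form (one yaw axis, one rotation), which is enough to see how a bias integrates into drift. This is a conceptual sketch: a real mechanization uses full 3-D attitude, a gravity model, and covariance propagation.

```python
import math

def propagate_planar(state, gyro_z, accel_body, dt):
    """One strapdown step (planar sketch): gyro -> heading, rotate specific
    force into the nav frame, integrate velocity, then position."""
    yaw = state["yaw"] + gyro_z * dt          # attitude update
    c, s = math.cos(yaw), math.sin(yaw)
    ax_b, ay_b = accel_body
    ax_n = c * ax_b - s * ay_b                # body -> nav rotation
    ay_n = s * ax_b + c * ay_b
    vx = state["vx"] + ax_n * dt              # velocity update
    vy = state["vy"] + ay_n * dt
    px = state["px"] + vx * dt                # position update
    py = state["py"] + vy * dt
    return {"yaw": yaw, "vx": vx, "vy": vy, "px": px, "py": py}
```

Running this loop with a small constant accelerometer bias demonstrates the drift chain: bias → velocity error → accelerating position drift.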

Error sources → error states (what must be estimated)

In fusion, “IMU quality” becomes a set of parameters that must be estimated or bounded. The most important ones are:

  • Gyro bias → attitude error grows, then projects into acceleration and causes velocity/position drift.
  • Accel bias → direct velocity error growth, then faster position drift through integration.
  • Scale factor → errors grow with dynamic excitation (hard turns, acceleration profiles).
  • Misalignment → cross-axis coupling; a maneuver on one axis leaks into others as systematic residuals.
A practical rule: if an error source produces a repeatable drift signature and cannot be removed by calibration alone, it typically needs an error-state representation (or an explicit bound) inside the filter.

Drift signatures during GNSS loss (what “normal” looks like)

GNSS loss does not produce random behavior. Drift tends to follow recognizable patterns that help distinguish modeling issues from measurement issues:

  • Bias-driven drift: attitude/velocity errors grow steadily; position drift accelerates over time because it integrates velocity error.
  • Misalignment-driven drift: drift correlates strongly with maneuvers (turns, pitch changes) and repeats with the same motion profile.
  • Overconfidence mismatch: covariance reports “tight” uncertainty while the solution visibly drifts—often caused by too-small process noise.
Healthy-outage checklist: drift rate matches covariance growth · recovery is stable after GNSS returns · no systematic bias in residuals · mode flags reflect confidence.
Figure F3 — Error propagation and error-state view (bias → drift)
A minimal propagation chain plus the error-state blocks that the fusion filter must estimate or bound.
[Diagram: propagation chain — IMU error sources (gyro bias, accel bias, scale, misalignment) cause attitude/specific-force error, which integrates into velocity error and then accelerating position drift; the error-state view shows core nav states (P, V, attitude) with covariance plus IMU error states, and a conceptual drift curve that grows once GNSS is lost.]

H2-4 · Fusion Filter Design: State Vector, Updates, and Tuning Knobs

This section turns fusion into an implementable engineering object: how to pick EKF/UKF, how to build the state vector without observability traps, how to run multi-rate propagation and asynchronous measurement updates, and how to tune Q/R and gating using innovation statistics.

Engineering takeaway

The algorithm label matters less than disciplined modeling. A stable fusion design needs: (1) a layered state vector with only observable terms, (2) strict time-tag discipline for asynchronous updates, and (3) tuning based on innovation/residual statistics so the covariance matches real drift. Overconfidence is the most dangerous failure mode.

EKF vs UKF (practical boundary, not theory)

  • EKF: common default in GNSS+INS because it is compute-efficient, widely validated, and easier to certify and debug.
  • UKF: can help when nonlinearities are strong and linearization errors dominate, but adds compute and tuning burden.
  • Most failures come from wrong models or time alignment—not from choosing EKF vs UKF.
A program that cannot produce clean innovation statistics and time alignment evidence will not succeed with a “more advanced” filter.

State vector by layers (a safe default blueprint)

Use a layered structure to control complexity and avoid unobservable states. Expand only when evidence proves it is needed.

  • Layer 1 — Core nav: position, velocity, attitude.
  • Layer 2 — IMU errors: gyro bias, accel bias (optionally scale/misalignment when observable and supported by calibration evidence).
  • Layer 3 — Consistency states: clock bias/drift; lever arm (when platform geometry and maneuvers support observability).

Rule of thumb: an unobservable state will “float,” contaminating other states. Lock such terms by calibration or bound them explicitly instead of estimating them.
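One way to keep the layered structure disciplined is an explicit name-to-index map, so covariance blocks and measurement-Jacobian columns stay traceable; the layer contents below are illustrative, not a mandated state set:

```python
# Layered state-vector layout (error-state sketch): indices are explicit
# so covariance blocks and Jacobian columns remain traceable in debug.
LAYER1 = ["pos_x", "pos_y", "pos_z", "vel_x", "vel_y", "vel_z",
          "att_roll", "att_pitch", "att_yaw"]              # core nav
LAYER2 = ["gyro_bias_x", "gyro_bias_y", "gyro_bias_z",
          "accel_bias_x", "accel_bias_y", "accel_bias_z"]  # IMU errors
LAYER3 = ["clock_bias", "clock_drift"]                     # consistency states

def build_state_index(include_layer3=True):
    """Map state names to vector indices; expand layers only with evidence."""
    names = LAYER1 + LAYER2 + (LAYER3 if include_layer3 else [])
    return {name: i for i, name in enumerate(names)}
```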

Multi-rate and asynchronous updates (what actually runs)

Integrated navigation runs a high-rate prediction loop and inserts measurement updates when data arrives—based on time tags.

  • Prediction (IMU rate): propagate state + covariance using mechanization and error-state dynamics.
  • Update (GNSS rate): apply measurement updates when GNSS observables or PVT arrive.
  • Asynchronous arrivals: different GNSS observables may arrive with different latencies; update at the correct measurement time or compensate delay.
When time tags are wrong, the filter sees systematic residuals that look like motion or bias. This is often misdiagnosed as “tuning issues.”

Tuning knobs (Q / R / gating) with observable symptoms

Tuning is not guesswork when it is tied to innovation and covariance behavior.

  • Process noise Q too small → overconfidence; innovations grow; outages drift faster than covariance predicts.
  • Process noise Q too large → noisy outputs; weak smoothing; availability drops due to aggressive uncertainty growth.
  • Measurement noise R too small → GNSS is trusted too much; multipath/interference can pull the solution.
  • Gating strategy → prefer “down-weight” for marginal data and “reject” for inconsistent data; record decisions for traceability.
Tuning health checklist: innovation mean ≈ 0 · innovation variance matches model · NIS stays within gates · covariance matches drift · gating decisions logged.
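The gating policy above can be sketched as a normalized-innovation-squared (NIS) test against staged thresholds. The gate values in the test are illustrative chi-square-style numbers, not recommendations:

```python
def nis(innovation, innovation_cov):
    """Normalized innovation squared for a 1-D measurement: nu^2 / S.

    For vector measurements this generalizes to nu' * S^-1 * nu.
    """
    return innovation * innovation / innovation_cov

def gate_decision(nis_value, soft_gate, hard_gate):
    """Staged policy: accept, down-weight marginal data, reject inconsistent data."""
    if nis_value <= soft_gate:
        return "accept"
    if nis_value <= hard_gate:
        return "down-weight"   # e.g. inflate R for this update
    return "reject"            # and log the decision for traceability
```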

Checklist: observability traps & overconfidence (common root causes)

  • Too many states: unobservable terms drift and leak into position/attitude estimates.
  • Lever arm ignored: maneuver-correlated residual spikes and biased updates during turns.
  • Time alignment ignored: residual bias persists even with “good” GNSS; tuning cannot fix this.
  • Overconfidence: filter reports tight covariance while truth drifts—dangerous because it disables safety logic.
  • Hard reject only: availability collapses in difficult environments; adaptive weighting is often required.
Fast triage: model issues produce systematic, maneuver-correlated residual patterns; tuning issues produce distribution mismatches without strong physical correlation.
Figure F4 — Fusion dataflow: propagation loop + multi-rate updates + gating
IMU drives high-rate prediction; GNSS updates arrive asynchronously; innovations feed gating and integrity flags.
[Diagram: high-rate IMU samples and latency-compensated time tags drive the prediction loop (state + covariance via mechanization and the error-state model); low-rate GNSS observables or PVT insert asynchronous measurement updates; innovations feed gating (reject / down-weight, decisions recorded) and Q/R tuning affects consistency; outputs are P/V/attitude/time with covariance, mode, and integrity flags.]

H2-5 · Initialization & Alignment: Getting the Filter to Start Correctly

A navigation filter does not “start itself.” Initialization must produce a credible state and a credible uncertainty. This section defines a practical alignment workflow (coarse → fine → quality gates), with failure branches that prevent silent bad starts.

Engineering takeaway

Correct alignment is a controlled transition from uncertain to trusted states. A valid start requires: (1) a heading/attitude source that matches the motion regime, (2) bias/uncertainty initialization that avoids overconfidence, and (3) explicit quality gates (innovation and maneuver checks) with re-init or degraded-mode fallbacks.

What “alignment” must output (not just a position)

Initialization should deliver a complete starting package for fusion, otherwise early updates can lock in wrong states.

  • Attitude & heading: initial roll/pitch/yaw (heading is the hardest).
  • Velocity: stationary constraint (≈0) or motion-derived estimate (from GNSS when valid).
  • IMU error seeds: initial gyro/accel bias estimates or bounds (coarse is acceptable, but must be consistent).
  • Covariance: initial uncertainty large enough to allow convergence, small enough to avoid instability.
  • Mode flags: coarse/fine/aligned states are explicit and logged.
A common failure pattern is a “good-looking” output with incorrect uncertainty. Overconfidence disables safety logic and hides drift until late.
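For the stationary entry path, coarse roll and pitch can be derived from the averaged specific-force vector. A minimal NED-convention sketch (sign conventions vary with frame definitions, and heading is not observable from leveling alone):

```python
import math

def coarse_level(f_body):
    """Coarse roll/pitch from an averaged stationary specific-force vector.

    Assumes a verified stationary window (accelerometers see only -gravity)
    and an NED-style body frame; yaw/heading must come from another source.
    """
    fx, fy, fz = f_body
    roll = math.atan2(-fy, -fz)
    pitch = math.atan2(fx, math.hypot(fy, fz))
    return roll, pitch
```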

Stationary vs in-motion alignment (choose the right entry path)

The heading source and gating logic should depend on whether the platform is stationary or moving with sufficient speed.

  • Stationary alignment: best for controlled starts. Use stationary constraints and allow time for coarse bias settling.
  • In-motion alignment: used when start occurs during taxi/launch/flight. Heading relies on motion observability and stronger quality gates.
  • COG heading boundary: GNSS course-over-ground becomes reliable only above a speed/turn-rate regime; below that, heading can be noisy or misleading.
  • Gyrocompassing boundary: feasible only with sufficiently low-noise inertial sensors and enough time in a low-vibration regime (conceptual boundary only).

Magnetic sensors can be mentioned as a possible aid, but detailed magnetics modeling is out of scope for this page.

Lever arm (GNSS antenna ↔ IMU) compensation and why it matters

The GNSS antenna observes motion at its own location. The INS propagates at the IMU reference point. The vector between them (lever arm) creates systematic errors during turns and accelerations if not modeled.

  • Symptom: maneuver-correlated residual bias (especially during yaw turns) that looks like “bad GNSS” or “mysterious tuning.”
  • Minimal handling: include lever-arm compensation in the measurement model (geometry is a first-class input).
  • Calibration concept: run a repeatable maneuver script (left/right turns, figure-eight, accel/decel) and verify residual correlation drops.
Lever arm terms can be observable only under sufficient excitation. When observability is weak, hold the lever arm fixed from calibration rather than letting it float.
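The lever-arm correction in the position measurement model is a rotation of the body-frame offset into the navigation frame. A planar sketch using yaw only (a full model applies the complete body-to-nav attitude matrix and also corrects velocity for the angular-rate cross product):

```python
import math

def antenna_position(p_imu, yaw, lever_arm_body):
    """Predicted GNSS antenna position: IMU position plus the rotated lever arm.

    Without this term, turns produce maneuver-correlated residual bias that
    looks like "bad GNSS" or "mysterious tuning".
    """
    lx, ly = lever_arm_body
    c, s = math.cos(yaw), math.sin(yaw)
    return (p_imu[0] + c * lx - s * ly,
            p_imu[1] + s * lx + c * ly)
```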

Quality gates + failure branches (prevent silent bad starts)

Alignment must end with explicit pass/fail gates. If a gate fails, do not “push forward.” Use re-init or degraded mode.

  • Innovation sanity: residual mean near zero; NIS or equivalent stays within expected bounds.
  • Uncertainty realism: covariance growth/shrink matches observed drift and correction speed.
  • Heading stability: heading does not jump in stationary mode; responds plausibly in motion.
  • Maneuver check: turn-induced residual bias is not systematic (lever arm / timing issues ruled out).
  • Mode transition rules: coarse → fine → aligned conditions are deterministic and logged.
Alignment pass criteria: residual mean ≈ 0 · NIS within gates · no maneuver-correlated bias · covariance not overconfident · aligned mode is explicit.
Figure F5 — Alignment workflow: coarse → fine → quality gates (with failure branches)
A practical SOP to start the filter correctly and avoid silent bad initialization.
[Diagram: power-up branches on a motion check into stationary coarse alignment (zero-velocity constraint, bias settling) or in-motion coarse alignment (COG usable only above a speed threshold); fine alignment uses GNSS updates and innovation tracking; quality gates (residual / NIS / maneuver checks) lead to NAV VALID on pass, or to re-init (reset states, widen covariance) or degraded mode (limit outputs, raise alerts) on fail. Common fail signals: residual bias, turn-correlated error, overconfident covariance.]

H2-6 · Time Tagging, Latency, and Sample Alignment (the silent killer)

Time errors can masquerade as bias, lever-arm errors, or “bad tuning.” This section defines measurement time vs arrival time, explains fixed vs variable latency, and shows how buffering/interpolation (or delay-state modeling) prevents systematic residual bias. Network synchronization topics (PTP/SyncE) are intentionally out of scope.

Engineering takeaway

Measurement time (t_meas) must drive fusion, not message arrival time (t_arrive). Fixed delay can be compensated; variable delay (jitter) requires timestamp discipline, buffering, and alignment logic. If residuals show persistent bias correlated with speed or turns, prioritize time-tag validation before changing Q/R.

Time objects that must be distinguished

Integrated navigation contains multiple “times.” Confusing them creates systematic residual errors that the filter interprets as physics.

  • IMU sample time: high-rate sampling clock for inertial data.
  • GNSS measurement time (t_meas): when the observation is valid (often tied to GNSS time-of-week).
  • Arrival time (t_arrive): when the CPU/driver receives the message (can lag and jitter).
  • Solution time: timestamp of the fused output state.
  • 1PPS + TOW: local alignment aids for mapping sensor times onto a common timeline (local only).
Key rule: t_arrive is not a measurement timestamp. Use it only to manage buffers and latency statistics.

Latency types and how they enter fusion

Latency appears as a time shift between when a measurement happened and when it is processed. Treatment depends on whether latency is stable.

  • Fixed delay: stable pipeline delay. Apply a constant compensation (update at t_meas, not at t_arrive).
  • Variable delay (jitter): queueing/CPU load/bus contention cause changing delay. Use robust timestamping plus buffering/interpolation.
  • Multi-rate reality: IMU propagation runs continuously; GNSS updates insert corrections at the correct measurement time.
Acceptance evidence: fixed delay compensated · jitter quantified · updates applied at t_meas · residual bias removed.

Alignment strategies: buffer + interpolation (default) and delay-as-state (advanced)

Two practical approaches are common. The default is buffer-based alignment; delay-as-state is reserved for cases where delay varies slowly and is observable.

  • Buffer + interpolation: store recent IMU increments; when GNSS arrives, apply the update at t_meas by interpolating/retrodicting to the correct time.
  • Delay-as-state: include a small time-offset parameter as a slow state when jitter is structured and the platform motion provides observability.
  • Verification focus: confirm that innovation statistics stop showing speed/turn-correlated bias after alignment is enabled.
Delay-as-state increases tuning and observability risk. Apply only when buffer-based compensation cannot meet residual-bias targets.
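The default buffer-and-interpolate strategy can be sketched as a time-keyed buffer of recent states, queried at t_meas when a GNSS message arrives. Linear interpolation here stands in for proper retrodiction; a production design would buffer IMU increments and re-propagate:

```python
import bisect

class StateBuffer:
    """Recent propagated states keyed by time; GNSS updates are applied at
    t_meas by interpolating between the bracketing IMU-rate states."""

    def __init__(self):
        self.times, self.states = [], []

    def push(self, t, state):
        # assumes monotonic timestamps (worth asserting in real code)
        self.times.append(t)
        self.states.append(state)

    def state_at(self, t_meas):
        """Linear interpolation to the measurement time (retrodiction sketch)."""
        i = bisect.bisect_left(self.times, t_meas)
        if i == 0:
            return self.states[0]
        if i == len(self.times):
            return self.states[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        w = (t_meas - t0) / (t1 - t0)
        s0, s1 = self.states[i - 1], self.states[i]
        return tuple(a + w * (b - a) for a, b in zip(s0, s1))
```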

Diagnostics: how time errors look in residuals (fast triage)

Time issues have recognizable signatures. Use these checks before changing filter tuning.

  • Persistent residual bias that correlates with speed or turn rate → likely time shift rather than random noise.
  • “Tuning does nothing” pattern → Q/R changes do not remove a systematic offset caused by mis-timestamped updates.
  • Tight coupling sensitivity → raw-observable fusion amplifies time-tag mistakes into large innovations.
  • Load-dependent behavior → performance degrades when CPU/bus load increases (jitter grows).
Acceptance evidence should include: monotonic timestamps, bounded delay statistics, and innovation/NIS distributions consistent with model assumptions across motion scripts.
Figure F6 — Time axis: IMU samples, GNSS t_meas vs t_arrive, buffering and aligned update
The fusion update must be applied at t_meas. Buffering/interpolation bridges the IMU timeline to the GNSS measurement timeline.
[Diagram: two timelines — high-rate IMU samples on top, low-rate GNSS measurements below, each GNSS message showing t_meas, a delayed t_arrive, and the delay between them; a buffer-and-interpolation block applies updates at t_meas and quantifies jitter. Side panel: fixed delay → constant compensation; jitter → timestamping plus buffering; 1PPS + TOW provide local timeline mapping only, not network sync.]

H2-7 · GNSS Outage Handling: Dead-Reckoning, Aiding, and Graceful Degradation

GNSS outages must be treated as a managed operating mode—not a surprise failure. This section defines what the system should output during dead-reckoning, how uncertainty should grow (covariance inflation), how optional aiding enters as external measurements, and how recovery is admitted safely.

Engineering takeaway

During GNSS loss, the navigation solution is a prediction with growing uncertainty. The correct behavior is to (1) inflate covariance realistically, (2) apply only well-gated aiding measurements, and (3) transition through Degraded → Recovery with explicit admission windows. The most dangerous failure mode is “locking in” wrong aiding as truth, which hides drift until it is too late.

What must be reported during outage (output contract)

A dead-reckoning output is only useful when users can see its confidence and mode. Output should include:

  • Navigation state: position, velocity, attitude, and time estimate.
  • Uncertainty: covariance (or equivalent bounds) that expands with outage duration.
  • Mode flag: Normal / Degraded / Recovery / Re-init.
  • Evidence counters: reject/down-weight counts, gating failures, and aiding usage.
“Looks stable” is not an acceptance criterion. The solution must be traceable: why it is trusted, and what evidence supports the trust level.

Covariance inflation: preventing silent overconfidence

When GNSS updates stop, the filter relies on propagation. Uncertainty must grow to match expected drift, otherwise recovery and safety logic fail.

  • Too little inflation: covariance stays tight while the estimate drifts—dangerous overconfidence.
  • Too much inflation: outputs become noisy and may trigger unnecessary mode changes.
  • Best practice: tune inflation to match observed drift in repeatable outage scripts, then lock it with evidence.
Inflation acceptance: covariance expands with time · drift matches uncertainty · mode flags are explicit · recovery remains stable.
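The expected growth shape can be expressed as a scalar sketch: initial variance plus a t² term from velocity uncertainty and a t³ term from accelerometer noise. Parameter names are illustrative; a real filter grows the full covariance through the error-state model rather than a closed-form scalar:

```python
def inflate_position_variance(var0, outage_s, vel_var, accel_psd):
    """Scalar position-variance growth during dead-reckoning (sketch).

    var(t) ~ var0 + vel_var * t^2 + (accel_psd / 3) * t^3:
    velocity uncertainty dominates early, integrated accelerometer
    noise produces the accelerating t^3 term later.
    """
    t = outage_s
    return var0 + vel_var * t * t + (accel_psd / 3.0) * t ** 3
```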

ZUPT / ZARU (principle + trigger criteria only)

Zero-updates are optional “pseudo-measurements” that reduce drift when the platform provides a valid stationary window.

  • ZUPT: inject a zero-velocity measurement during verified stationary/very-low-speed windows.
  • ZARU: inject a near-zero angular-rate measurement during verified stationary windows to help bias stability.
  • Trigger criteria: low accel variation + low gyro variation + dwell-time window + confidence gate.
  • Risk: a false stationary detection can lock wrong states (real motion is absorbed as “error”).
A safe design records every ZUPT/ZARU activation and requires a hysteresis window to avoid rapid toggling.
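The trigger criteria translate into a windowed variance test; a sketch with illustrative threshold parameters (a production detector would add hysteresis and per-axis checks):

```python
def stationary(accel_window, gyro_window, accel_var_max, gyro_var_max, min_samples):
    """ZUPT/ZARU trigger sketch: low accel variance + low gyro variance + dwell.

    Thresholds are platform-specific. A false trigger absorbs real motion
    as "error", so gates should be conservative and activations logged.
    """
    if len(accel_window) < min_samples or len(gyro_window) < min_samples:
        return False  # dwell-time requirement not met

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(accel_window) < accel_var_max and var(gyro_window) < gyro_var_max
```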

External aiding as measurements (do not turn this page into other pages)

Aiding sources are useful only when treated as gated measurements with known failure modes. This page only defines how they enter fusion at a high level.

  • Baro altitude: vertical aiding measurement; must be gated against slow bias and environmental offsets.
  • Wheel speed: speed/odometry constraint; must be gated against slip and surface changes.
  • Vision / map constraints: relative motion or constraint measurements; only the fusion interface is in scope.
The checklist for any aiding: (1) time-tag correctness, (2) residual statistics, (3) failure detection, and (4) safe down-weight / reject policy.

Recovery admission: when GNSS returns, when is it allowed back in?

“GNSS is back” is not the same as “GNSS is safe.” Recovery should be staged and evidence-driven.

  • Stage 1: admit GNSS with conservative weighting and strict gating (Recovery mode).
  • Stage 2: require an admission window (stable innovations + consistency checks) before Normal mode.
  • Fallback: if bias persists or gating fails repeatedly, trigger Re-init rather than forcing convergence.
Recovery evidence: innovation stabilizes · consistency checks pass over the window · no systematic bias · safe return to Normal.
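The admission-window rule can be sketched as a consecutive-pass counter that any failed gate resets; the required pass count is a program-specific choice:

```python
class AdmissionWindow:
    """Staged recovery: GNSS returns to Normal only after N consecutive
    consistent updates; any failed gate resets the window."""

    def __init__(self, required_passes):
        self.required = required_passes
        self.passes = 0

    def observe(self, gate_ok):
        # one call per GNSS update during Recovery mode
        self.passes = self.passes + 1 if gate_ok else 0
        return self.admitted()

    def admitted(self):
        return self.passes >= self.required
```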
Figure F7 — Outage state machine: Normal → Degraded → Recovery → Re-init
A managed mode machine with explicit gates and fallback paths that prevent “silent drift.”
[Diagram: state machine — Normal (GNSS + INS fused, standard weights) → Degraded (dead-reckoning, covariance inflation) when GNSS is invalid; Degraded → Recovery (admit GNSS with strict gates) when GNSS returns; Recovery → Normal after the admission window passes; persistent bias or repeated gate failures route to Re-init (restart alignment, widen covariance), which returns to Normal when alignment completes. Outage checklist: covariance expands, aiding gated, mode flag logged, safe recovery window.]

H2-8 · Jamming & Spoofing Resilience at the Fusion Layer (not RF design)

This section covers fusion-layer detection and mitigation only. It uses quality indicators and innovation statistics to detect inconsistent GNSS measurements, then applies down-weighting, rejection, mode switching, and recovery admission windows. RF/antenna/beamforming designs are out of scope.

Engineering takeaway

Fusion-layer resilience is evidence-driven: detect anomalies using quality indicators plus innovation/residual statistics and multi-constellation consistency, mitigate by down-weighting before rejecting, and recover only after a stable admission window. If anomalies persist, switch modes and protect the solution rather than forcing convergence.

What the fusion layer can observe (no RF deep dive)

The fusion layer does not need RF internals to detect abnormal GNSS behavior. It can use:

  • Quality indicators: C/N0 drops, abnormal AGC flags, sudden satellite/geometry changes (as inputs).
  • Innovation statistics: residual bias, variance inflation, repeated NIS gate breaks.
  • Consistency checks: multi-constellation disagreement and time-consistent contradictions vs inertial propagation.
Persistent innovation bias is often more informative than a single “bad-looking” measurement. Use windows and hysteresis.
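The windowed-statistics idea above can be sketched for a scalar measurement channel. This is an illustrative sketch only: the class name is hypothetical, the per-sample gate is a 1-DOF 95% chi-square bound (3.84), and the bias test compares the windowed mean innovation against the uncertainty the filter itself predicts for that mean.

```python
from collections import deque
import math

class InnovationMonitor:
    """Windowed innovation statistics for one scalar channel (sketch).
    Thresholds and window length are illustrative, not flight values."""
    def __init__(self, window=20, nis_gate=3.84, bias_sigma=3.0):
        self.window = deque(maxlen=window)
        self.nis_gate = nis_gate      # 95% chi-square gate, 1 degree of freedom
        self.bias_sigma = bias_sigma  # threshold on the windowed-mean bias test

    def update(self, innovation, innovation_var):
        """Per-sample NIS gate decision; also records the sample for the window test."""
        nis = innovation * innovation / innovation_var
        self.window.append((innovation, innovation_var))
        return nis <= self.nis_gate

    def persistent_bias(self):
        """Mean innovation should be near zero; a persistent offset is suspicious."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet: prefer hysteresis over alarms
        n = len(self.window)
        mean = sum(v for v, _ in self.window) / n
        mean_var = sum(s for _, s in self.window) / (n * n)  # variance of the mean
        return abs(mean) > self.bias_sigma * math.sqrt(mean_var)
```

A single sample near the gate is tolerated; only a windowed mean that is statistically inconsistent with zero raises the bias flag, matching the "windows and hysteresis" guidance above.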

Detection patterns: quality drop vs consistency break

Separate “weak signal” from “inconsistent signal”: the two call for different mitigation policies.

  • Jamming-like: quality indicators worsen (C/N0 down, tracking quality down) and innovations become noisier.
  • Spoofing-like: quality may look normal, but consistency breaks (innovation bias, constellation disagreement, inertial mismatch).
  • Key test: does the residual show systematic direction and persistence? If yes, it is not random noise.

Mitigation ladder: down-weight → reject → mode switch → vote

A resilient system preserves availability while protecting correctness. Use a staged policy:

  • Down-weight: increase measurement noise (or reduce weight) when evidence is mild or ambiguous.
  • Reject: remove measurements only when gates fail strongly and persistently.
  • Mode switch: switch between tight and loose coupling depending on observability and data quality, choosing the mode that keeps what enters the filter controllable.
  • Consistency voting: use multi-constellation and external aiding consistency to avoid single-source lock-in.
A common failure mode is “all-or-nothing reject.” It collapses availability and can trigger unstable recovery behavior.
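The first two rungs of the ladder can be sketched as a single decision function. This is a sketch under assumed conventions: the function name and thresholds are hypothetical, and the returned scale is a multiplier on the measurement noise R (down-weighting inflates R rather than discarding the measurement).

```python
def mitigation_action(nis, gate=3.84, consecutive_fails=0, reject_after=5):
    """Staged policy sketch: inflate R (down-weight) before rejecting.
    Returns (action, r_scale). Thresholds are illustrative."""
    if nis <= gate:
        return "accept", 1.0
    if consecutive_fails < reject_after:
        # mild or ambiguous evidence: scale measurement noise with the gate excess,
        # capped so a single wild sample cannot blow up the update
        return "down_weight", min(10.0, nis / gate)
    # persistent, strong gate failure: remove the measurement entirely
    return "reject", float("inf")
```

Because rejection requires both a strong gate break and persistence, a brief noisy interval degrades the weight rather than collapsing availability, avoiding the "all-or-nothing reject" failure mode noted above.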

Recovery admission window: when to trust GNSS again

After mitigation, GNSS should re-enter gradually with strict evidence. A safe recovery sequence is:

  • Step 1: require quality indicators and consistency checks to pass for a continuous window.
  • Step 2: admit with conservative weights in Recovery mode; monitor innovation statistics.
  • Step 3: restore default weights only after stable innovations and cross-source agreement.
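The staged re-weighting in Steps 2 and 3 can be sketched as a simple schedule: a conservative measurement-noise scale that ramps back to the default only as consistent epochs accumulate. The function name, ramp shape, and values are illustrative assumptions, not a prescribed schedule.

```python
def recovery_r_scale(epochs_consistent, admission_epochs=10, start_scale=5.0):
    """Measurement-noise scale during Recovery (sketch).
    Starts conservative (inflated R) and ramps linearly to 1.0 once the
    admission window of consistent epochs has been satisfied."""
    if epochs_consistent >= admission_epochs:
        return 1.0  # Normal mode: default weights restored
    frac = epochs_consistent / admission_epochs
    return start_scale + (1.0 - start_scale) * frac
```

Any inconsistent epoch would reset `epochs_consistent` in the caller, so the ramp restarts rather than letting a suspect stream regain full weight.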
Figure F8 — Fusion-layer workflow: Detection → Mitigation → Recovery (evidence logged)
A compact process that uses innovation statistics and consistency votes, without relying on RF design details.
[Diagram panels: Detection (quality indicators: C/N0, AGC flags; consistency: innovation, NIS, votes) → Mitigation (down-weight: increase R / reduce weight; reject: persistent gate failure) → Recovery (admission window, stable innovations, restore weights, return to Normal); side paths: mode switch (tight ↔ loose) and consistency vote (multi-constellation, cross-source agreement); evidence logged: gates, rejects, weights, votes, mode transitions.]

H2-9 · Integrity Monitoring: RAIM-style checks, Protection Levels, and Fault Isolation

For mission and aviation systems, integrity matters more than raw accuracy. Integrity answers: “How confident is the error bound, and will the system alert or limit outputs before a hazardous error is used?” This section introduces RAIM-style consistency checks, protection levels (concept), and practical fault isolation across GNSS, INS, and optional aiding.

Engineering takeaway

Accuracy can be good while integrity is bad. Integrity requires redundancy and evidence: residual consistency tests, fault hypotheses, and explicit actions (alerts, output limiting, exclusion). Protection Levels (PL) are meaningful only when they drive decisions against an Alert Limit (AL).

Integrity vs accuracy (why both are needed)

  • Accuracy: how close the estimate is to truth (average error). It can look excellent even when a rare fault occurs.
  • Integrity: how confident the system is that error stays within a safe bound, and whether it will alert/limit outputs when it cannot guarantee that bound. It is designed to prevent silent hazardous errors through detection, isolation, and controlled degradation.
A navigation solution should not be used purely because it “looks smooth.” Integrity provides the safety contract: what is safe to use, when, and why.

RAIM-style consistency checks (concept only, engineering behavior)

RAIM-style checks rely on redundancy: when multiple measurements inform the same state, residuals/innovations should be statistically consistent. When they are not, the system assumes a fault hypothesis and tests whether consistency can be restored.

  • Residual consistency: innovations remain within gates (e.g., NIS-style bounds) over a window.
  • Fault hypothesis: temporarily assume one source is faulty and evaluate whether consistency improves.
  • Exclusion: down-weight or exclude the suspect source if evidence persists.
A single outlier is not always a fault. Integrity logic uses windows, hysteresis, and evidence accumulation to avoid oscillation.
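The single-fault hypothesis test above can be sketched as follows: if the full residual set fails a chi-square consistency check, try excluding each source in turn and keep the exclusion that restores consistency. The function name is hypothetical and the gate values are a rough illustrative 95% chi-square lookup for small degrees of freedom, not certified thresholds.

```python
def best_exclusion(residuals, variances):
    """RAIM-style single-fault hypothesis sketch.
    Returns the index of the suspect source whose exclusion restores
    consistency, or None if the full set is already consistent (or no
    single exclusion helps). Illustrative gates, small-DOF only."""
    def chi2(rs, vs):
        return sum(r * r / v for r, v in zip(rs, vs))

    # rough 95% chi-square gates for 1..5 DOF (illustrative lookup)
    gate = {1: 3.84, 2: 5.99, 3: 7.81, 4: 9.49, 5: 11.07}
    n = len(residuals)
    if chi2(residuals, variances) <= gate[n]:
        return None  # consistent: no fault hypothesis needed
    best, best_stat = None, float("inf")
    for i in range(n):
        rs = residuals[:i] + residuals[i + 1:]
        vs = variances[:i] + variances[i + 1:]
        stat = chi2(rs, vs)
        if stat <= gate[n - 1] and stat < best_stat:
            best, best_stat = i, stat
    return best
```

In a real monitor this decision would be windowed with hysteresis, per the note above, rather than acting on a single epoch.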

Protection Levels (PL) vs Alert Limit (AL): output limiting must be explicit

Protection Levels are conservative bounds on position/velocity error (concept). They become operational only when compared to an Alert Limit that represents the maximum safe error for a given task phase.

  • PL: “how large the error could be” under current evidence (concept; often split into horizontal/vertical).
  • AL: “how large the error is allowed to be” for safe use.
  • Rule: if PL > AL, the system must limit outputs and raise status flags (not just log a warning).
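The PL-vs-AL rule is a hard decision, not a log line, so it is worth writing down as one. A minimal sketch, assuming separate horizontal and vertical limits; the type and flag names are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntegrityDecision:
    use_output: bool      # False means outputs must be limited, not just warned
    flags: List[str]

def check_integrity(hpl, vpl, hal, val):
    """PL vs AL rule sketch: if any protection level exceeds its alert limit,
    limit outputs and raise explicit status flags."""
    flags = []
    if hpl > hal:
        flags.append("HPL_EXCEEDS_HAL")
    if vpl > val:
        flags.append("VPL_EXCEEDS_VAL")
    return IntegrityDecision(use_output=not flags, flags=flags)
```

Downstream consumers key off `use_output` and `flags`, so the limiting action is enforced by contract rather than left to whoever reads the warning log.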

Fault detection & isolation across sources (GNSS / INS / aiding)

Fault isolation should be source-aware: different sources fail differently and need different mitigation ladders.

GNSS anomaly

  • Signals: constellation disagreement, persistent innovation bias, repeated gate breaks.
  • Actions: down-weight → exclude → switch mode (tight↔loose) → Recovery admission window.

INS anomaly

  • Signals: drift growth inconsistent with model, systematic mismatch vs multiple independent measurements.
  • Actions: increase process uncertainty, limit outputs, transition to Degraded or Re-init if persistent.

External aiding anomaly

  • Signals: residuals correlate with maneuvers or conditions (e.g., slip-like behavior).
  • Actions: gate and down-weight first; exclude when evidence persists; record the reason.

Consistency voting (concept)

  • Goal: avoid lock-in to one wrong source when others disagree.
  • Policy: require cross-source agreement for Normal mode; degrade when votes split.
The integrity monitor should be able to explain its decision: which test failed, which hypothesis was applied, and why an output was limited or a source excluded.

Acceptance evidence: proving integrity is “working”

  • Fault injection: introduce a controlled bias/jump in one source and verify detection + isolation behavior.
  • Stable PL behavior: PL responds plausibly to conditions; no random jumping without evidence.
  • Output limiting: when PL>AL, outputs are explicitly limited and flagged—not silently passed through.
  • Traceability: logs record test failures, actions taken, and recovery admission results.
Figure F9 — Integrity monitor block: inputs → tests → alert / limit outputs
A RAIM-style integrity layer that drives decisions (PL vs AL) and explains actions through flags and logs.
[Diagram: inputs (GNSS measurements + quality indicators; INS propagated state + covariance; optional aiding + health flags) → integrity tests (residual consistency via innovation/NIS gates; single-source fault hypothesis; Protection Level computation) → PL > AL limit decision → status flags (mode, excluded source, integrity alert), output limiting (degraded outputs, bounds / do-not-use), and event log (test fail → action, recovery evidence).]

H2-10 · Implementation Partitioning: Compute, Interfaces, and Data Products

Integrated navigation is a system product: compute partitioning, time-tag discipline, and output contracts matter as much as filter design. This section maps typical roles across MCU/SoC/FPGA (concept), defines interface expectations (examples only), and specifies the data products that downstream mission functions need: solution + covariance + flags + integrity outputs + logs.

Engineering takeaway

A robust navigation engine has a clear partition: deterministic capture and health management at the edge, fusion and integrity in the compute core, and explicit data products for downstream consumers. Covariance is not decoration—it enables gating, integrity, and safe system-level decisions.

Partitioning roles (MCU vs SoC vs FPGA, concept)

  • MCU: sensor capture, timestamping discipline, health flags, state machine control, watchdog, event logging.
  • SoC: fusion loop (propagation + updates), integrity monitor, output packaging, configuration management.
  • FPGA (optional): deterministic timestamp pipelines, high-rate buffering/alignment, parallel pre-processing (concept only).
The partition should optimize determinism and traceability. “Fast” is not enough; the system must explain decisions after the fact.

Interface expectations (examples only, no protocol deep dive)

Interfaces are described by constraints, not by a protocol list. Examples such as SPI/LVDS/UART may be used, but the design requirement is the same: correct timestamps, bounded latency, and explicit quality flags.

  • IMU link: high rate, low jitter, monotonic timestamps.
  • GNSS link: includes measurement time (t_meas) and quality indicators; arrival time is not a timestamp.
  • Aiding links: include time tags and health flags; must support down-weight/exclude actions.
  • Output link: carries solution + covariance + flags + integrity outputs + logs.

Data products (output contract for mission functions)

A navigation engine should publish a complete product, not just a PVT number:

  • Solution: position, velocity, attitude, time.
  • Uncertainty: covariance or bounds; used for gating, fusion credibility, and downstream decision logic.
  • Status flags: mode, excluded sources, recovery state, quality summary.
  • Integrity outputs: PL (concept), alerts, output-limit state.
  • Event logs: evidence trail for every mode change and mitigation action.
Covariance provides context: a “small error estimate” without uncertainty can be operationally unsafe because it cannot support integrity decisions.
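The output contract above can be written down as a concrete record. This is a sketch of what such a product might carry, not a standard interface: all field names are illustrative, and the conservative defaults (infinite protection level, alert raised) encode "unusable until proven otherwise."

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NavProduct:
    """Output contract sketch: the full state package, not just a PVT number."""
    t: float                                  # solution time tag (consistent time base)
    position: Tuple[float, float, float]      # reference frame declared by the interface
    velocity: Tuple[float, float, float]
    attitude: Tuple[float, float, float]
    covariance_diag: Tuple[float, ...]        # full covariance in a real system
    mode: str                                 # Normal / Degraded / Recovery / Re-init
    excluded_sources: List[str] = field(default_factory=list)
    protection_level_h: float = float("inf")  # conservative until actually computed
    integrity_alert: bool = True              # alert by default; cleared by evidence
```

Defaulting to "alert raised" means a consumer that receives a half-initialized product cannot mistake it for a healthy solution.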

Traceability checklist (sets up the validation chapter)

  • Every mode transition: timestamp, reason, evidence metrics, and action taken.
  • Every exclude/down-weight: which measurement set, which gate, duration, and recovery condition.
  • Every recovery admission: window length, pass/fail, and restored weight policy.
  • Configuration fingerprint: versioned thresholds and mode rules so flight logs are reproducible.
Figure F10 — Partitioning + dataflow: capture → align → fuse → integrity → data products
A system-level view that emphasizes determinism, timestamps, and a complete output contract.
[Diagram: MCU (edge capture, timestamps, health flags, state machine), SoC (fusion loop, integrity, output packaging), optional FPGA (deterministic pipelines, buffering/alignment); dataflow: sensor capture (IMU · GNSS · aiding) → time align (buffer, t_meas) → fuse (propagate + updates) → integrity (tests, PL/AL) → data products (solution, uncertainty, flags + logs, traceability: evidence, config, replay).]


H2-11 · Validation & Flight Test Plan: Proof, Not Promises

A GNSS+INS navigation engine is complete only when evidence proves performance, integrity behavior, and traceability across Simulation → HIL → Field. This section defines a validation pyramid, concrete pass/fail criteria, and data retention rules that make every mode transition explainable.

Engineering takeaway

Validation must demonstrate: (1) convergence time, (2) drift rate during GNSS loss, (3) safe reacquisition behavior, (4) controlled alerts/output limiting (no silent hazardous errors), and (5) replayable evidence (raw + solution + flags + logs + config fingerprint).


Done definition (what “validated” means)

Validation is not a statement; it is a set of repeatable checks with explicit thresholds and evidence artifacts. A minimal “done” definition typically includes:

  • Convergence time: time-to-useful after start and after re-init.
  • GNSS outage behavior: drift rate and uncertainty growth remain consistent with bounds.
  • Reacquisition: time to return from Degraded/Recovery to Normal under admission-window rules.
  • Integrity actions: alerts and output limiting trigger when evidence requires (e.g., PL>AL concept).
  • False alert rate: alert frequency remains bounded under nominal conditions.
  • Traceability: every exclusion/down-weight/mode transition is explainable from logs and flags.
Use “threshold + window” wording (e.g., within a 30 s window, no more than N consecutive gate failures) to avoid ambiguous success claims.
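The "threshold + window" wording can be made executable so pass/fail is unambiguous in scripts. A minimal sketch, assuming gate results arrive as a boolean stream; the function name and default values are illustrative:

```python
def window_pass(gate_fail_flags, window_len=30, max_consecutive_fails=3):
    """'Threshold + window' acceptance sketch: within the last window_len
    samples, no run of more than max_consecutive_fails consecutive gate
    failures. True means the criterion holds."""
    run = 0
    for failed in gate_fail_flags[-window_len:]:
        run = run + 1 if failed else 0
        if run > max_consecutive_fails:
            return False
    return True
```

Because the rule is a function of logged flags, the same criterion can be re-evaluated offline during replay, which keeps success claims auditable.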

Simulation: trajectories, error injection, Monte Carlo

Simulation should stress only what fusion and integrity need: observability, realistic error growth, and controlled corner cases. Focus on scenario coverage rather than a single “pretty” trajectory.

  • Trajectories: straight segments, accelerations, S-turns, and sustained turns to excite state observability.
  • Error injection: INS bias/scale drift (concept), GNSS measurement noise, and time-tag offsets (concept).
  • Monte Carlo: randomized seeds to quantify distribution (P95/P99) of convergence and drift outcomes.
  • Artifacts: scenario matrix, metric summaries, and representative failure archetypes.
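The Monte Carlo bullet can be sketched as a small harness that reports the distribution (P50/P95/P99) of a per-run metric instead of one trajectory. The harness and percentile helper are illustrative assumptions; a real campaign would use a proper statistics library and a scenario matrix.

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile (simple, dependency-free)."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, int(round(p / 100.0 * len(s))) - 1))
    return s[k]

def monte_carlo_convergence(run_once, n_runs=200, seed=1):
    """Randomized-seed sketch: run_once(rng) returns one run's convergence
    time; the summary quantifies the distribution of outcomes."""
    rng = random.Random(seed)  # fixed seed so the campaign itself is replayable
    times = [run_once(rng) for _ in range(n_runs)]
    return {"p50": percentile(times, 50),
            "p95": percentile(times, 95),
            "p99": percentile(times, 99)}
```

Reporting P95/P99 rather than the mean is what exposes the occasional slow-converging seed that a single demo run hides.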
Check → pass criteria format:
  • Convergence: reach a “usable” state within a defined window, and remain stable for a continuous dwell time.
  • Outage drift: during a T-second GNSS loss, drift stays within declared bounds; covariance expands monotonically.
  • Integrity gating: under nominal conditions, gate breaks remain under an allowed count per window; under injected faults, detection occurs and actions are logged.

HIL: record-replay, latency/jitter injection, outage scripts

Hardware-in-the-loop (HIL) is where timing discipline and mode logic become measurable. Use controlled scripts that can be replayed and compared across firmware/config revisions.

  • Record-replay: replay recorded IMU/GNSS streams with original timestamps preserved.
  • Latency/jitter injection (concept): apply controlled delay and jitter patterns to arrival timing and verify sample alignment behavior.
  • Outage scripts: introduce GNSS measurement gaps and verify Degraded → Recovery → Normal transitions.
  • Quality-anomaly scripts: toggle quality indicators and verify down-weight/exclusion + recovery admission windows.
Avoid “all-or-nothing” logic. A robust HIL result shows staged mitigation (down-weight → exclude → limit outputs) and stable recovery admission.

Field / flight test: metrics, scripts, and evidence retention

Field validation should be a minimal repeatable experiment: scripted maneuvers, consistent reference methodology, and complete data retention. Success is defined by measurable outputs, not subjective “looks stable.”

  • Metrics: convergence time, outage drift rate, reacquisition time, availability, and false alert rate.
  • Mode behavior: verify explicit flags, output limiting, and logged rationale for each transition.
  • Data retention: store raw streams + navigation solution + covariance/bounds + status flags + event logs.

Example validation hardware (specific part numbers)

The list below provides concrete, commonly used evaluation/building-block items for a repeatable validation stack. Selection should match required performance and interface constraints.

  • GNSS evaluation / baseline: u-blox ZED-F9P module or EVK-F9P kit (multi-constellation, dual-frequency class).
  • Timing-oriented GNSS (optional): u-blox EVK-F9T kit (1PPS-capable evaluation class).
  • IMU evaluation: Analog Devices ADIS16470 series IMU with evaluation carrier (e.g., ADIS16470 family eval board).
  • Navigation baseline (optional): VectorNav VN-200 (GNSS/INS class module) as a comparative reference stream.
  • GNSS antenna (dual-frequency class): Tallysman TW3870 (example dual-band active antenna class).
  • GNSS simulation for lab repeatability (optional): Spirent GSS7000-class simulator (scenario repeatability tool class).
  • Data capture: high-rate logger capable of deterministic timestamps and large storage (select to match sensor rates).
Part numbers above are provided as concrete examples for validation planning. Final selection should be driven by required dynamics, latency budget, and required evidence quality (timestamps, flags, and replay fidelity).

Test matrix: turn validation into a checklist

Build a small matrix that covers both nominal and stress conditions across all three layers. Each cell should contain: Inputs → Injections → Observables → Pass criteria.

  • Sim — Nominal: convergence + steady metrics · Outage: drift growth + bounds consistency · Timing stress: time-tag offset (concept) · Quality anomaly: innovation bias injection (concept)
  • HIL — Nominal: replay fidelity + stable flags · Outage: GNSS gaps + safe mode transitions · Timing stress: latency/jitter scripts · Quality anomaly: down-weight/exclude + admission window
  • Field — Nominal: availability + false alert rate · Outage: real obstruction segments · Timing stress: realistic latency budget validation · Quality anomaly: quality indicators vs actions consistency
Figure F11 — Validation pyramid: Simulation → HIL → Field (each with pass criteria)
A three-layer proof stack that turns performance and integrity claims into replayable evidence.
[Diagram: validation pyramid with SIM, HIL, and FIELD layers. SIM pass criteria: convergence window met, drift matches bounds, gates behave as designed. HIL pass criteria: replay fidelity preserved, latency/jitter tolerated, outage scripts safe, stable recovery admission. FIELD pass criteria: metrics meet targets, false alerts bounded, raw + flags + logs retained, config fingerprint recorded. Evidence artifacts: inputs (raw IMU · raw GNSS), injections (error · timing · outage · quality anomaly scripts), outputs (solution + covariance, status flags, event logs, config fingerprint).]


H2-12 · FAQs (Integrated Navigation: GNSS + INS)

These FAQs target practical design, tuning, and validation questions for GNSS+INS integrated navigation. Answers stay at the fusion/system layer (no RF implementation details) and emphasize observable symptoms, decision thresholds, and evidence that can be logged and replayed.

How to use this FAQ

Each answer provides: a clear boundary/decision, observable signals (innovation, gating, mode flags), and a concrete engineering action (tuning knob, logging requirement, or test acceptance rule). This structure supports both troubleshooting and system verification.

For safety-grade deployments, treat “sounds plausible” as insufficient. Prefer thresholds + time windows + replayable evidence.
Figure F12 — FAQ coverage map: what questions validate which parts of the stack
Numbers (Q1–Q12) cluster by architecture, filter/tuning, timing, resilience, integrity, and validation evidence.
[Diagram: FAQ clusters by topic: Architecture (Q1 · Q2: loose vs tight, tuning cost); Filter & Modeling (Q3 · Q4: state vector, Q tuning); Alignment (Q5 · Q6: init failure, lever arm); Timing Discipline (Q7: timestamps, latency alignment); Resilience (Q8 · Q9 · Q10: outage, anomaly handling); Safety & Evidence (Q11 · Q12: integrity, logging for replay). Goal: answers map to observable signals (innovation, gates, flags) and replayable artifacts (raw, solution, logs).]

FAQs × 12 (with answers)

Q1 · What is the practical engineering boundary between loose and tight coupling?

The boundary is the measurement depth that enters the estimator. Loose coupling updates the filter with receiver-level navigation outputs, while tight coupling updates using lower-level GNSS observables so partial satellite information still contributes under weak geometry or dropouts. The decision hinges on interface access, latency discipline, observability, and verification cost—not on “accuracy” alone.

Q2 · Why is tight coupling stronger under weak signal, yet harder to tune?

Tight coupling can exploit partial observability and preserve more information when satellite tracking is marginal, improving continuity. It is harder to tune because it is more sensitive to timing alignment, measurement modeling, and unobservable states. Innovation gates, multi-rate updates, and delay compensation must be consistent; otherwise the filter can oscillate, over-trust bad data, or diverge.

Q3 · Which states are essential, and which are “over-modeling”?

Essential states explain dominant error growth and are supported by observability: position/velocity/attitude plus inertial bias terms are common. Additional states (lever arm, clock, delays) are justified only when measurements can actually constrain them. “Over-modeling” shows up as weakly observable states that drift freely, create false confidence, or force aggressive tuning to mask instability.

Q4 · How should process noise (Q) be tuned to avoid both divergence and sluggishness?

Q sets how quickly uncertainty grows between updates; it is the balance between trusting the motion model and trusting measurements. Overly small Q can cause overconfidence, gate breaks, and brittle behavior; overly large Q yields slow convergence and noisy solutions. Use innovation statistics and gate-hit rates as feedback: aim for stable gates under nominal motion and prompt correction under real disturbances.

Q5 · What are the most common causes of initialization and alignment failure?

Most failures come from violated assumptions rather than a “bad filter.” Common causes include incorrect time tags, unreliable initial heading sources, false stationary detection, and missing lever-arm compensation. Symptoms often appear as large initial innovations, repeated gating failures, or immediate mode fallback. A robust startup sequence validates prerequisites (time alignment, motion state, heading quality) before enabling full-rate updates.

Q6 · What symptoms appear when lever arm or installation misalignment is ignored?

Unmodeled lever arm and misalignment typically create motion-correlated errors: biases that grow during turns, acceleration-dependent position offsets, and systematic innovation patterns that repeat with direction or maneuver type. The filter may “fight” these errors by distorting bias estimates, which can degrade outage performance. When such signatures appear, either calibrate the lever arm or represent it conservatively and enforce clear validity flags.

Q7 · How do timestamp or latency mismatches show up as “fake drift”?

Misaligned time tags can mimic sensor bias, scale-like errors, or step-like jumps at update times. Typical signs include phase-lagged corrections, innovations that peak at consistent offsets, and solution degradation during high dynamics. The fix is rarely “more filtering.” Enforce deterministic timestamps, align measurement time (t_meas) rather than arrival time, and log alignment diagnostics for replay.
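Aligning an update to t_meas rather than arrival time can be sketched as interpolation over a buffer of propagated states. A minimal one-dimensional illustration under assumed conventions (the buffer holds `(time, state)` pairs and linear interpolation suffices); a real filter would instead propagate or use stored transition matrices:

```python
def state_at(t_meas, buffer):
    """Align the update to measurement time, not arrival time: linearly
    interpolate buffered propagated states (t, value) to t_meas. Sketch
    only; assumes buffer is sorted by time and covers t_meas."""
    assert buffer and buffer[0][0] <= t_meas <= buffer[-1][0], "t_meas outside buffer"
    for (t0, x0), (t1, x1) in zip(buffer, buffer[1:]):
        if t0 <= t_meas <= t1:
            a = (t_meas - t0) / (t1 - t0)
            return x0 + a * (x1 - x0)
    return buffer[-1][1]
```

The explicit out-of-buffer assertion is the point: a measurement whose t_meas cannot be covered by the buffer should be rejected or flagged, not silently matched to the nearest arrival.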

Q8 · During GNSS outage, how to decide “reset” vs “keep coasting”?

Reset should be a controlled last step, not the default. Continue coasting when uncertainty expands plausibly, constraints remain consistent, and recovery admission can be satisfied once GNSS returns. Reset becomes appropriate when innovations show persistent structural inconsistency, states become effectively unobservable, or recovery repeatedly fails under defined windows. Use explicit mode states (Normal → Degraded → Recovery → Re-init) with logged evidence for each transition.

Q9 · How can innovation statistics help distinguish interference, spoofing-like anomalies, and normal dynamics?

Use patterns, not single samples. Normal dynamics may increase innovation magnitude but should remain statistically consistent across sources and time windows. Interference-like conditions often coincide with degraded measurement quality indicators and broadened innovation variance. Spoofing-like anomalies at the fusion layer tend to break cross-source consistency (e.g., constellation disagreement) or create structured residual biases that the motion model cannot explain. Decisions should be windowed and traceable.

Q10 · What mitigation actions can the fusion layer take without relying on RF design?

A practical mitigation ladder is: down-weight suspicious measurements, exclude them when evidence persists, switch coupling mode when appropriate, and require multi-source consistency before returning to Normal. Recovery should use admission windows (continuous pass time, bounded gate failures) to prevent oscillation. Every action needs explicit entry/exit criteria and logs: which tests triggered, which sources were affected, and what evidence allowed recovery.

Q11 · Why are integrity and accuracy not interchangeable?

Accuracy describes typical error; integrity describes the confidence that error stays within safe bounds and that hazardous errors are not silently used. A solution can be “accurate” most of the time yet fail dangerously under rare faults. Integrity requires evidence-driven actions: consistency tests, alerting, and output limiting (e.g., PL vs AL concept) with clear status flags and traceable logs. Without those actions, accuracy metrics alone do not protect the mission.

Q12 · In HIL and field validation, what signals must be recorded for reliable replay and root-cause analysis?

Record a minimal evidence set that makes decisions reproducible: raw IMU and raw GNSS streams with measurement time tags (t_meas), the navigation solution with covariance/bounds, status flags (mode, excluded sources, recovery state), and event logs that include reason codes and thresholds. Add summaries of innovation/gating counts over windows and a configuration fingerprint (versions and parameter hashes) so reruns compare apples-to-apples.