
Radiation (TID/SEE) Monitor for Spacecraft Health Monitoring


A Radiation (TID/SEE) Monitor turns the space radiation environment into actionable engineering data—cumulative dose and time-stamped event statistics—so missions can trend lifetime risk, detect bursts, and make mode decisions with confidence. It focuses on a trustworthy sensing chain (sensor → AFE → discrimination/counting → telemetry) with calibration and redundancy, helping distinguish real radiation effects from false triggers.

H2-1 · What a Radiation Monitor really does (TID + SEE, and why both matter)

Extractable answer: A spacecraft Radiation Monitor turns the local radiation environment into two engineering signals: TID dose (a slow, cumulative trend) and SEE events (fast, discrete upsets). Sensors (RADFET/PIN/Si) feed an AFE that integrates dose and shapes pulses for event discrimination, then produces counters, timestamps, and compact telemetry for health and mission decisions.

How it works (end-to-end chain)
  1. Particles / LET / dose interact with a sensing element (RADFET threshold shift, PIN/Si current or pulses).
  2. AFE conditioning converts tiny signals into measurable quantities: TIA/charge integration for dose, pulse shaping for SEE.
  3. Quantization & decisions separate slow vs fast: ADC tracks dose-related trends; a discriminator classifies valid events.
  4. Data products are formed: dose accumulator (krad(Si) trend) plus event counters and timestamps (rate/fluence evidence).
  5. Redundant power domains keep monitoring and reporting continuous across single-domain disturbances and transient conditions.
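The two paths above can be sketched as a small model, with the slow path integrating dose and the fast path counting discriminated events. All class/field names and scale factors here are illustrative, not from any flight design:

```python
# Minimal sketch of the monitor chain's two data paths (illustrative
# names and scale factors only, not a flight design).

class RadiationMonitor:
    def __init__(self, dose_per_lsb_krad=1e-4, event_threshold=0.5):
        self.dose_krad = 0.0              # slow path: cumulative TID estimate
        self.event_count = 0              # fast path: discrete SEE statistics
        self.event_times = []             # timestamps for correlation
        self.dose_per_lsb_krad = dose_per_lsb_krad
        self.event_threshold = event_threshold

    def sample_dose(self, adc_delta_lsb):
        """Slow path: integrate ADC-tracked dose increments."""
        self.dose_krad += adc_delta_lsb * self.dose_per_lsb_krad

    def process_pulse(self, amplitude, t):
        """Fast path: discriminate and count a shaped pulse."""
        if amplitude > self.event_threshold:
            self.event_count += 1
            self.event_times.append(t)

    def telemetry(self):
        """Compact data products: dose trend + event evidence."""
        return {"dose_krad": round(self.dose_krad, 6),
                "event_count": self.event_count,
                "last_event_t": self.event_times[-1] if self.event_times else None}

m = RadiationMonitor()
m.sample_dose(120)              # slow trend ticks up
m.process_pulse(0.8, t=10.0)    # valid event
m.process_pulse(0.2, t=10.1)    # below threshold: rejected
```

The point of the sketch is the separation: dose is a state variable that only accumulates, while events are discrete records with their own timestamps.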

Why TID and SEE must be handled differently: TID behaves like a slow variable (cumulative drift and long-term margin), so accuracy depends on leakage/temperature separation and stable integration. SEE behaves like a fast event stream (SEU/SEL/SET/SEFI), so correctness depends on thresholding, debounce/dead-time, and trustworthy timestamps.

Engineering value (what decisions it enables)
  • Lifetime margin: track dose accumulation to support derating, scheduling, and end-of-life forecasts.
  • Actionable alarms: detect event-rate excursions that justify mode changes or operational constraints (interface-only to protection actions).
  • Correlation evidence: time-align upset bursts with subsystem anomalies to separate real radiation from false triggers.
  • Mission context: compare measurements across attitude/shielding changes using consistent metrics and reporting fields.

Scope boundary: This page focuses on sensing → AFE → counting/timestamping → reporting and the resulting telemetry products. Power protection, bus architectures, and downlink systems are referenced only as interfaces, not explained in depth.

Figure F1 — Radiation monitor signal chain: environment → sensor/AFE → dose & event telemetry

H2-2 · Radiation metrics engineers actually use (krad(Si), LET, fluence, rate)

Radiation monitoring becomes actionable only when measurements are expressed in metrics that map directly to AFE dynamic range, threshold/debounce policy, and reporting granularity. This section defines the minimum set of terms that keep dose trends and SEE event streams comparable across test and flight.

TID (Total Ionizing Dose) — slow, cumulative health variable
  • krad(Si) (dose): Cumulative deposited dose referenced to silicon. Design mapping: sets the sensor output span and integration headroom, plus when range switching or controlled discharge is required to avoid saturation and preserve long-term resolution.
  • Dose rate: Dose per unit time. Design mapping: sets integration/update cadence and determines how strongly leakage and temperature drift can masquerade as “dose change” if the AFE is not stabilized and compensated.
  • Reference basis: The metric is meaningful only when the reference is stated (e.g., “Si” in krad(Si)). Design mapping: reporting must include the reference label so trends remain interpretable across missions and test campaigns.
SEE (Single-Event Effects) — fast, discrete event stream
  • LET (Linear Energy Transfer): A proxy for how “strong” a particle strike is in the sensitive volume. Design mapping: motivates multi-threshold discrimination (event “bins”) because event amplitude/shape can vary across LET, affecting how thresholds and hysteresis should be chosen.
  • Fluence: Total particle exposure (integrated flux). Design mapping: requires reporting an exposure window alongside counts; otherwise event totals cannot be compared between orbits, attitudes, or test runs.
  • Cross-section (σ): Event probability per particle exposure for a given mechanism (e.g., SEU). Design mapping: reinforces that counts alone are insufficient; telemetry should pair counts + time + exposure tag to preserve engineering meaning.
  • Event rate: Events per unit time. Design mapping: sets alarm thresholds and drives debounce/dead-time choices to keep false triggers low without hiding real bursts. Timestamp health must be maintained to support correlation.

Metric → design constraints (fast reference):
Dose range → sensor span + integrator headroom + drift separation
Dose rate → update cadence + leakage/temperature compensation strategy
LET/fluence → discriminator thresholds + hysteresis + debounce/dead-time
Event rate → alarm policy + reporting granularity (summary vs histogram vs event queue)
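These relationships reduce to simple arithmetic; a minimal sketch (hypothetical function names) shows why counts are meaningless without their exposure and window tags:

```python
def cross_section_cm2(event_count, fluence_per_cm2):
    """sigma = N / Phi: event probability per unit particle exposure."""
    return event_count / fluence_per_cm2

def event_rate_hz(event_count, window_s):
    """Events per unit time over a defined reporting window."""
    return event_count / window_s

# Two runs with identical counts are only comparable once fluence is
# attached: same N, very different cross-section.
run_a = cross_section_cm2(50, 1e7)   # low exposure
run_b = cross_section_cm2(50, 1e9)   # 100x more exposure, 100x smaller sigma
```

This is why the telemetry guidance later in the page insists on pairing counts with time and exposure tags.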

Common traps that distort interpretation:
• Focusing on TID only can miss bursty SEE conditions where dose remains low but SEU/SET activity is high.
• Focusing on SEE only can miss slow drift that changes thresholds and biases, causing the monitor itself to lose fidelity over time.

Figure F2 — Slow TID trend vs fast SEE event stream on the same mission timeline

H2-3 · Sensing elements: RADFET vs PIN diode/Si detector vs “victim-based” monitors

A Radiation Monitor is only as good as its sensing element. The correct choice depends on the primary intent: TID lifetime trending, dose-rate/fluence tracking, or SEE proxy evidence. The key engineering rule is simple: the sensor output type dictates the AFE architecture (voltage drift, continuous current, charge pulses, or event/error counts).

Selection boundaries (choose by what must be proven)
  • Need: long-term TID margin → prefer RADFET-style sensing (threshold/voltage drift). It directly represents cumulative ionizing dose, but demands low-drift readout and careful temperature interpretation.
  • Need: dose-rate / fluence changes → prefer PIN diode / Si detector sensing (current or pulse rate). It reacts quickly to environment changes, but must handle leakage, noise, and saturation across a wide dynamic range.
  • Need: “did the platform get upset?” → use victim-based proxies such as SRAM/FPGA scrub statistics or error counters as SEE evidence. This is closest to mission impact, but it is not a pure environment measurement and must be tagged with exposure time and operating mode.

How temperature and aging show up in data: RADFET drift can change slope with temperature and annealing, so dose trending requires a temperature context tag. PIN/Si leakage typically rises with temperature, which can look like “dose-rate” unless leakage is bounded or compensated. Victim-based counts can change with workload and scrub policy, so they must be interpreted as platform-upset rate under a defined mode.

Output type → AFE implication (fast reference)
  • Voltage drift (RADFET): precision readout, low offset drift, stable sampling cadence (slow variable).
  • Continuous current (PIN): TIA/integration, range headroom, leakage control and guarding.
  • Charge pulses (Si detector): pulse shaping + discriminator + timestamped counting (fast events).
  • Error counts (proxy): requires exposure window, timebase health, and operating mode tag to be comparable.
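As a sketch, the output-type rule can be written down as a lookup so a design review can check the AFE against the sensor choice; the keys and descriptions paraphrase this section and are illustrative:

```python
# Sensor output type -> implied AFE architecture (paraphrasing the
# selection rules above; illustrative only, not a parts recommendation).
SENSOR_AFE_MAP = {
    "radfet_voltage_drift": "precision low-drift readout, stable slow sampling",
    "pin_continuous_current": "TIA/integration with range headroom and guarding",
    "si_charge_pulses": "pulse shaping + discriminator + timestamped counting",
    "victim_error_counts": "time-windowed counters + exposure/mode tags",
}

def afe_for(output_type):
    """Return the implied AFE approach, or raise for an unknown type."""
    try:
        return SENSOR_AFE_MAP[output_type]
    except KeyError:
        raise ValueError(f"no AFE mapping for sensor output type: {output_type}")
```

Making the rule a table rather than prose keeps sensor and AFE decisions from being made independently.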
Figure F3 — Sensor options compared by output type, usable range, and main error sources

H2-4 · AFE architecture for dose: charge integration, TIA, and drift control

Dose monitoring is a slow-variable measurement: it succeeds when the AFE makes pA–µA-level sensor signals measurable without confusing leakage, bias drift, and temperature effects as “dose change”. For most implementations, the practical architecture is a guarded front-end plus TIA or charge integration, a stable sampling/ADC stage, and a digital accumulator that records both the dose estimate and confidence flags.

Dose AFE signal chain (what each block must guarantee)
  • Input domain (pA–µA / charge): The signal can be comparable to board leakage and amplifier bias. Front-end guarding, clean routing, and bounded leakage paths keep the measurement observable rather than drift-dominated.
  • TIA vs integrator: A TIA converts current to voltage for a continuous dose-rate proxy; integration converts tiny current/charge into a measurable ramp for better low-level resolution. The choice is driven by whether the data product emphasizes dose-rate tracking or dose accumulation.
  • Dynamic range controls: Windowing, controlled discharge, and range switching prevent saturation across quiet conditions and storm bursts, while preserving resolution for long-term trending. Each control action should be recorded as a health flag to protect interpretation.
  • Drift control: Auto-zero/chopper techniques reduce offset drift so that multi-hour or multi-day dose trends remain meaningful. The intent is not high speed, but stable long-duration fidelity.
  • ADC + digital accumulation: ADC selection prioritizes low noise and stable gain over bandwidth. Digital accumulation produces dose, dose-rate (optional), and quality indicators (saturation, excessive leakage, temperature out-of-range).
Error budget (what most often turns into “fake dose”)
  • Input bias & offset drift: appears as a slow ramp even when environment is quiet.
  • Leakage (sensor/package/PCB): temperature-dependent current that can dominate pA signals.
  • Temperature coefficient: changes the apparent gain/offset and alters long-term slope.
  • Feedback resistor (Rf) sizing (TIA): Rf sets the practical low-level resolution; very large values increase sensitivity to leakage and drift and limit bandwidth, so sizing is a trade rather than “bigger is better”.
  • Integration saturation: produces clipped ramps; without flags, clipped data can be misread as a plateau.

How “dynamic range” becomes real design: A monitor that must survive both quiet periods and burst conditions typically needs at least one of: (1) integration window control, (2) controlled discharge/reset, or (3) range switching. The telemetry should expose which mechanism is active so trend analysis remains trustworthy.
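A minimal sketch of mechanism (2), controlled discharge/reset with health flags, assuming an ideal integrator (dV = I·dt/Cf) and illustrative constants:

```python
def integrate_dose_window(i_sensor_a, dt_s, cf_f, v_sat=4.5, krad_per_c=1.0e9):
    """Integrate sensor current samples; discharge before clipping, with flags.

    i_sensor_a: per-sample sensor current (A); dt_s: sample spacing (s);
    cf_f: integrator feedback capacitance (F). The dose scale factor and
    thresholds are illustrative. Every reset is recorded so trend analysis
    can see that the ramp was banked, not clipped.
    """
    v = 0.0
    dose_krad = 0.0
    flags = []
    for i in i_sensor_a:
        v += (i * dt_s) / cf_f                    # integrator ramp: dV = I*dt/Cf
        if v >= v_sat:                            # controlled discharge, flagged
            dose_krad += v * cf_f * krad_per_c    # bank the accumulated charge
            v = 0.0
            flags.append("discharge_reset")
    dose_krad += v * cf_f * krad_per_c            # remaining partial ramp
    return dose_krad, flags
```

With the flags in telemetry, a plateau in the raw ramp is distinguishable from true saturation, which is exactly the interpretation risk named in the error budget above.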

Figure F4 — Dose AFE architecture: where drift enters, and which blocks control it

H2-5 · AFE for SEE events: pulse shaping, discrimination, and timestamping

The SEE event path turns fast sensor spikes into trustworthy event evidence. A robust chain can be expressed as a four-step flow: (1) sensor pulses/spikes, (2) pulse shaping, (3) discrimination with threshold policy and dead-time, and (4) counting plus timestamping. The engineering goal is to reject noise-triggered hits without hiding real events, and to preserve enough timing information for correlation.

4-step SEE event chain
  1. Sensor pulse / spike: a short transient current or charge packet appears at the detector output.
  2. Pulse shaping: bandwidth and noise are traded to produce a stable pulse width and amplitude suitable for a comparator.
  3. Discrimination: thresholds + hysteresis + debounce/dead-time produce a single clean trigger per physical event.
  4. Counting + timestamping: counters (or bins) provide statistics; timestamps provide correlation and burst analysis.
Threshold strategy (how “event validity” is enforced)
  • Fixed threshold: Simple and comparable across runs. Best when baseline noise is stable. Needs hysteresis to prevent chatter.
  • Adaptive threshold: Tracks the noise floor to stabilize the false-trigger rate. Must be bounded and flagged, or it can silently raise the bar and miss real events.
  • Multi-threshold bins: Produce event “levels” (low/med/high) that are ideal for telemetry histograms. Require clear dead-time and bin definitions to avoid double counting.

False triggers in event chains: Noise or coupled spikes can exceed a comparator threshold and look like an event. The countermeasures are implemented inside the discrimination path: hysteresis (prevents edge chatter), debounce (requires persistence), and dead-time (suppresses re-triggering from pulse tails and ringing).

Counting formats: A timestamp queue supports correlation but costs bandwidth and can overflow during bursts. A histogram (counts per bin per window) is bandwidth-friendly and preserves distribution information. Many systems combine both: summary counters always, and a short timestamp queue only when rates spike.
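The discrimination policy (hysteresis plus dead-time) can be sketched as a small state machine over shaped-pulse samples; the thresholds, timing, and sample format are illustrative:

```python
def discriminate(samples, dt_s, v_high, v_low, dead_time_s):
    """One clean trigger per physical event: hysteresis + dead-time.

    samples: shaped-pulse voltage samples at fixed spacing dt_s.
    A rising crossing of v_high fires a trigger; the channel re-arms only
    after falling below v_low (hysteresis), and re-triggering within
    dead_time_s (e.g., from pulse tails or ringing) is suppressed.
    """
    timestamps = []
    armed = True
    dead_until = -1.0
    for n, v in enumerate(samples):
        t = n * dt_s
        if armed and v >= v_high and t >= dead_until:
            timestamps.append(t)                  # count + timestamp the event
            armed = False
            dead_until = t + dead_time_s
        elif not armed and v <= v_low:
            armed = True                          # hysteresis: re-arm below low
    return timestamps
```

In the test below, a ringing excursion shortly after the first event crosses the threshold again but falls inside the dead-time window, so only the two physical pulses are counted.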

Figure F5 — Event timing: noise vs real pulse, shaping width, discriminator threshold, dead-time, timestamp

H2-6 · Data products: what to report (dose, rate, histograms, SEE taxonomy)

Telemetry is the product of a Radiation Monitor. The right reporting set must support engineering decisions (lifetime margin, alarms, correlation) without exhausting bandwidth or storage. A practical approach is to publish two levels: a high-frequency summary for operations and a low-frequency detail set for analysis.

Recommended two-level reporting
Level A (real-time summary):
  • TID: dose_total, dose_rate (windowed), dose_quality_flags.
  • SEE: counts_by_bin (or type), see_rate (windowed), max_burst, timestamp_health.
  • Purpose: alarms, trending, and quick correlation with subsystem anomalies.
Level B (low-rate detail):
  • Histograms: counts per threshold/bin over longer windows (bandwidth-friendly).
  • Timestamp queue (limited): short event list during spikes or when triggered (correlation evidence).
  • Purpose: root-cause support without continuous high-rate data.
Minimum field set (must-have for correct interpretation)
  • Dose fields: dose_total, dose_rate (windowed), dose_flags (saturation, reset/discharge, range switch).
  • SEE fields: counts_by_bin/type, see_rate, max_burst (short-window peak), queue_depth/overflow flag.
  • Correlation tags: temperature, operating mode, exposure/shielding tag (metadata only).
  • Integrity & redundancy: domain_id (A/B), schema_version, seq_counter, timebase_state, CRC.

Granularity trade-off: Reporting too much detail can break bandwidth and storage budgets, especially during bursts. Reporting only coarse totals can hide whether the monitor saturated, switched range, or lost timestamp fidelity. The two-level approach keeps operations stable while preserving enough evidence for post-analysis.
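A Level A summary might be packed as a fixed layout with a trailing CRC; the field list, widths, and byte order below are hypothetical, chosen only to show how integrity fields travel with the data:

```python
import struct
import zlib

# Hypothetical fixed layout: header (domain_id, schema_version, seq),
# dose fields, SEE fields, then CRC32 over everything before it.
_FMT = "<BBHIfIHH"   # little-endian, 20-byte body (illustrative)

def pack_summary(domain_id, schema_version, seq, dose_mkrad, see_rate_hz,
                 counts_total, max_burst, timebase_state):
    body = struct.pack(_FMT, domain_id, schema_version, seq, dose_mkrad,
                       see_rate_hz, counts_total, max_burst, timebase_state)
    return body + struct.pack("<I", zlib.crc32(body))

def unpack_summary(packet):
    body, (crc,) = packet[:-4], struct.unpack("<I", packet[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch: discard or flag packet")
    return struct.unpack(_FMT, body)
```

Because domain_id, schema_version, and seq ride in the same CRC-protected body as the measurements, ground software can stitch and validate records without out-of-band context.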

Figure F6 — Telemetry packet layout: header + dose fields + SEE counters + flags + CRC (with A/B consistency)

H2-7 · Calibration & compensation: temperature, annealing, sensor aging

Calibration is what keeps long-duration radiation data meaningful. A monitor that trends dose for days to years must separate true environment change from temperature effects, AFE drift, and sensor aging. The most reliable strategy is a two-part loop: ground calibration produces traceable coefficients, and on-orbit health checks prevent compensation from hiding real events or fabricating “fake dose”.

What must be compensated (and what should only be flagged)
  • Temperature effects: Sensor leakage and sensitivity can change with temperature; AFE offset and bias drift can mimic a slow dose slope. Temperature context tags and bounded compensation prevent false trending.
  • RADFET annealing: Apparent “dose rollback” or slope change can occur as trapped charge partially relaxes. Treat it as an expected behavior that must be modeled/flagged, not as an environment improvement.
  • Sensor aging: Long-term sensitivity changes can shift gain and baseline. Trending requires a calibration version and an uncertainty grade to keep lifetime estimates conservative.
Sensor drift vs AFE drift (how to tell them apart)
  • Cross-channel consistency: if multiple channels shift in the same direction and scale, the AFE is a likely contributor.
  • Reference/anchor behavior: a stable internal reference path (or a known “quiet” window) helps identify offset/bias drift.
  • Temperature correlation: leakage-driven shifts often track temperature; true dose accumulation should not instantly follow temperature swings.
Ground/production outputs (minimum coefficient set)
  • Core coefficients: gain, offset, tempco (bounded model), plus valid_range or range_id.
  • Traceability: cal_version, cal_date, and an uncertainty grade (e.g., low/med/high or a numeric bound).
  • Health linkage: Flags that protect interpretation: saturation, range switch, discharge/reset activity, temperature out-of-range, and “compensation bounded” indicators.
Error sources: calibratable vs not calibratable
  • Calibratable: Repeatable gain/offset errors, temperature coefficients in defined ranges, and predictable baseline drift that can be verified by health checks.
  • Not calibratable (flag instead): Burst transients, saturation/recovery behavior outside the valid range, unexpected coupling that creates rare spikes, and timebase loss during an upset. These require quality flags, not aggressive correction.

Compensation safety rule: If a correction cannot be validated on-orbit by a self-consistency check, it should be bounded, versioned, and accompanied by a data-quality flag. This prevents “fixing the data” at the cost of hiding real environment change.
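The safety rule can be enforced mechanically: clamp the correction, and freeze-plus-flag when outside validity. The coefficients and bounds below are illustrative:

```python
def compensate_dose_rate(raw, temp_c, gain=1.0, offset=0.0, tempco=0.002,
                         t_ref_c=25.0, t_valid=(-20.0, 60.0), max_corr_frac=0.1):
    """Bounded, flagged temperature compensation (illustrative coefficients).

    Corrections are clamped to max_corr_frac of the raw value, and an
    out-of-validity temperature freezes correction and sets a flag
    instead of extrapolating the model.
    """
    flags = []
    if not (t_valid[0] <= temp_c <= t_valid[1]):
        flags.append("temp_out_of_range")
        return raw, flags                      # freeze: report uncorrected + flag
    corr = raw * tempco * (temp_c - t_ref_c)   # linear tempco model
    limit = abs(raw) * max_corr_frac
    if abs(corr) > limit:
        corr = limit if corr > 0 else -limit   # bounded correction
        flags.append("compensation_bounded")
    return (raw - corr) * gain - offset, flags
```

The key property is that the correction can never silently dominate the measurement: it is either small, clamped-and-flagged, or frozen-and-flagged.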

Figure F7 — Calibration closed loop: coefficients, on-orbit health checks, and update policy

H2-8 · Redundant power domains & fault tolerance (monitor must survive the event)

Redundancy exists to keep the monitoring chain alive and trustworthy during SEL-like upsets and transients. The purpose is not to describe a full aircraft/spacecraft power system, but to ensure the radiation monitor can continue measuring, preserve traceability, and avoid fabricating events during recovery.

Redundancy as engineering: goal → mechanism → acceptance
  • Goal: Monitoring continuity through upsets; no silent data loss; consistent timebase and counter meaning across failover.
  • Mechanisms (monitor-side): Dual domains A/B (each with sensor + AFE + counter), independent health flags, a watchdog/state machine for safe reset, and a single output path with controlled failover.
  • Acceptance checks: Failover preserves domain_id, seq_counter continuity, and timebase_state visibility; A/B summaries remain comparable within defined bounds; overflow/saturation states are always flagged.
Data consistency (concept level)
  • Compare: cross-check A/B summary counters in the same window; if divergence exceeds a bound, raise a quality flag.
  • Vote/failover: select the domain with valid timebase and health flags; switch only with explicit state and logging.
  • Align timing: preserve timebase_state and domain_id in every packet so ground analysis can stitch records reliably.
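A concept-level sketch of the compare/vote step (field names hypothetical):

```python
def select_domain(a, b, divergence_bound=0.1):
    """Cross-check A/B summaries and pick the reporting domain.

    a, b: dicts with 'counts', 'timebase_ok', 'health_ok' (hypothetical
    fields). Returns (chosen_id, flags); divergence beyond the bound is
    flagged, never silently hidden, and failover is explicit.
    """
    flags = []
    if a["timebase_ok"] and a["health_ok"]:
        chosen = "A"
    elif b["timebase_ok"] and b["health_ok"]:
        chosen = "B"
        flags.append("failover_to_B")          # switch only with explicit state
    else:
        chosen = "A"
        flags.append("no_healthy_domain")
    ref = max(a["counts"], b["counts"], 1)
    if abs(a["counts"] - b["counts"]) / ref > divergence_bound:
        flags.append("ab_divergence")          # quality flag, not a correction
    return chosen, flags
```

Note that divergence and failover are reported as flags alongside the data; the vote never rewrites either domain's counters.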

Boundary reminder: Protection actions and power topology belong to the system power page. Here, redundancy is treated only as it impacts the monitoring chain: measurement continuity, event integrity, and traceable telemetry.

Figure F8 — Dual-domain redundancy: A/B measurement chains, cross-check, and single telemetry output with failover

H2-9 · Placement & shielding for monitors (measurement integrity, not full shielding theory)

Placement determines what the radiation data actually represents. A monitor can be deployed to measure a representative cabin/box environment, to capture a worst-case exposure near a sensitive item, or to provide a correlation point that helps explain anomalies. Local structure and partial shielding can introduce bias (a systematic offset versus the true environment) and lag (delayed response to environment changes), so deployment should be treated as part of the measurement system.

Placement intent (what the reading should represent)
  • Representative: Measures typical exposure for trending and long-window comparisons. Best when the goal is a stable dose/rate history rather than hotspot detection.
  • Worst-case: Placed near a sensitive component or module to track local peaks and stress. Best for explaining failures and setting conservative margins.
  • Correlation: Positioned near a structural feature, opening, or known gradient path to help interpret changes driven by configuration or shielding variation (metadata-driven interpretation).
How local structure distorts readings (bias and lag)
  • Bias: partial shielding or nearby structure can make a point read consistently lower/higher than the intended reference environment.
  • Lag: if the point is behind structure, step changes in exposure can appear delayed in the recorded dose-rate response.
  • Comparability: any mechanical or configuration change should be recorded as a metadata tag, or historical trends become non-comparable.
Recommended deployment topologies (2–3 practical options)
  • Single-point (representative): Lowest cost. Good for long-term trending. Limited ability to explain localized faults or shielding changes.
  • Two-point gradient: One point near a structure/opening, one near the core zone. The difference/ratio tracks shielding variation and improves interpretability without needing full materials modeling.
  • Hybrid (representative + sensitive): A baseline point plus a hotspot point near a critical module. Best for correlation: “environment changed” vs “local hotspot changed”.
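The two-point gradient idea is just a ratio that should stay flat under global environment changes; a sketch (heuristic only, not a materials model):

```python
def shielding_indicator(m_near_opening, m_core, eps=1e-12):
    """Two-point gradient: exposed-point reading over core-point reading.

    A global environment change moves both points and leaves the ratio
    roughly flat; a drifting ratio with a stable core trend suggests
    shielding/configuration variation (illustrative heuristic only).
    """
    return m_near_opening / max(m_core, eps)

baseline = shielding_indicator(2.0, 1.0)   # nominal configuration
after = shielding_indicator(3.0, 1.0)      # local rise near the opening only
```

Any mechanical change that could move this ratio is exactly what the metadata tags above are meant to record.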

Scope boundary: This section focuses on measurement integrity (what the data means and how it can be compared over time). It does not attempt to provide a complete shielding design guide.

Figure F9 — In-box placement: representative vs worst-case vs correlation points (with sensitive zones)

H2-10 · Verification & qualification: heavy-ion/proton tests and acceptance criteria

Verification proves the monitor is “done” by producing deliverable evidence for both slow TID behavior and fast SEE event integrity. The goal is not to restate full standards, but to define engineering acceptance criteria: linearity/monotonicity for dose trending, predictable threshold response for event chains, traceable timebase behavior, and uninterrupted monitoring across redundancy events.

TID verification (dose steps, drift curves, recovery/annealing observation)
  • What to exercise: Stepped dose exposure, temperature-tagged intervals, and post-step observation windows for recovery behavior.
  • What to deliver: Drift curves (raw vs compensated), residuals vs temperature tags, saturation/range-switch flags, and the calibration version used.
  • Acceptance focus: The trend is monotonic within the valid range (or within defined piecewise model bounds); compensation is bounded and flagged when outside validity.
SEE verification (heavy-ion/proton, threshold scans, classification consistency)
  • What to exercise: Heavy-ion / proton exposure with threshold scans (single or multi-bin), burst conditions, and queue/overflow stress.
  • What to deliver: Threshold scan summaries, counts_by_bin/type and rate windows, burst metrics (max_burst), queue depth/overflow statistics, and discriminator settings.
  • Acceptance focus: False-trigger rate controlled by policy; missed-event risk quantified by comparison to reference injection/conditions; binning remains consistent across runs.
System acceptance criteria (monitor-side)
  • Timestamp integrity: timebase_state is always visible; timebase loss/recovery is logged; timestamp accuracy is within a declared bound for correlation use.
  • Counter meaning: Counts are linear/monotonic versus the intended stimulus in valid ranges; saturation/overflow never occurs silently (flags required).
  • Redundancy continuity: Failover does not create fake events; domain_id + seq_counter continuity supports stitching; monitoring remains observable during recovery.
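The counter-meaning criterion, monotonic dose with no silent resets, can be expressed as a small acceptance check (the flag encoding is illustrative):

```python
def dose_trend_acceptable(doses, flags, tol=1e-9):
    """Acceptance sketch: dose must be non-decreasing wherever no
    reset/discharge flag applies; flagged samples are exempt, so a drop
    is only acceptable when it was explicitly signaled.

    doses: dose samples; flags: per-sample flag strings (illustrative).
    Returns (ok, reasons) for the test report.
    """
    reasons = []
    for k in range(1, len(doses)):
        if doses[k] + tol < doses[k - 1] and "reset" not in flags[k]:
            reasons.append(f"non_monotonic_at_{k}")
    return (not reasons), reasons
```

A check like this turns the acceptance wording into something a test campaign can run over every drift curve automatically.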
Deliverable evidence template (report fields)
  • Setup: sensor type, range_id, threshold policy, window settings, cal_version.
  • TID outputs: dose step table, drift curves (raw/compensated), residual summary vs temperature tags, flags summary.
  • SEE outputs: threshold scan summary, counts_by_bin/type, rate/burst metrics, queue overflow stats, dead-time settings.
  • Timing: timebase_state transitions, timestamp consistency bound.
  • Redundancy: failover logs, A/B compare metrics, continuity evidence (seq continuity, data gaps detectable).
Figure F10 — Verification matrix: functional blocks × environments, with deliverable evidence per cell

H2-11 · Failure modes & field diagnostics: distinguishing real radiation from false triggers

On-orbit anomalies often show up as a sudden rise in event rate, noisy low-threshold bins, or inconsistent telemetry between redundant domains. This section provides a monitor-side diagnostic workflow to separate real radiation-driven changes from false triggers caused by noise/EMI coupling, threshold drift, temperature steps, or single-event upsets affecting configuration and counters.

1) Common “false radiation” signatures (monitor-side only)
  • Noise/EMI triggering: low-threshold bins jump while higher bins remain flat; event width / dead-time occupancy becomes abnormal.
  • Threshold drift: event rate creeps upward over minutes to hours; strongly correlated with temperature or long-term aging.
  • Temperature step: abrupt baseline shift (bias/leakage) changes trigger probability; histogram shifts toward lower bins.
  • Register upset (concept level): counters/configuration flip (threshold_state, dead_time, bin map) causing discontinuities or impossible values.
  • Telemetry inconsistency: packet sequencing/CRC flags, timestamp disorder, or A/B domain mismatch without a plausible physical gradient.
2) Evidence chain (use at least 2 of 3 before declaring “real radiation”)
Three independent evidence types
  • Spatial consistency: Multi-point monitors rise together (or change with a stable, explainable gradient). Single-point-only spikes suggest local coupling or drift.
  • Shape consistency: Histogram/bin profile and burst statistics look physical (not only the lowest bin jumping). A sudden “all-bins-flat except low” pattern often indicates noise.
  • State consistency: threshold_state, dead_time, domain_status, and timestamp_health support the interpretation; no silent configuration changes or timebase loss.
3) Field triage checklist (do this in order)
  1. Freeze a “configuration snapshot”: threshold_state, hysteresis, dead_time, shaping mode/bandwidth, bin map, cal_version.
  2. Inspect the time shape: is the rise a single spike, a step that persists, or a slow creep?
  3. Check multi-point correlation: synchronous rise across points, or localized to one point?
  4. Check histogram shape: do higher bins move, or only the lowest bin? Is there an abnormal “burst” pattern?
  5. Check discriminator health: dead_time_occupancy (or equivalent), threshold_state stability, overflow/saturation flags.
  6. Check telemetry integrity: packet_seq continuity, CRC_ok, timestamp ordering, and timebase/timestamp_health flags.
  7. Do a minimal self-consistency test (small, safe change): apply a minor threshold/dead-time adjustment and observe sensitivity.
    Heuristic: noise-triggered rates are often extremely sensitive to small threshold shifts; physical spectra usually change more smoothly and predictably.
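The heuristic in step 7 can be quantified as a sensitivity ratio; the classification bound below is illustrative and would be tuned per design:

```python
def threshold_sensitivity(rate_before_hz, rate_after_hz, eps=1e-12):
    """Relative event-rate change for a small threshold adjustment."""
    return abs(rate_after_hz - rate_before_hz) / max(rate_before_hz, eps)

def classify_spike(sensitivity, noise_bound=0.5):
    """Heuristic only: noise-dominated rates collapse or explode under a
    small threshold move; physical spectra change more smoothly.
    The 0.5 bound is an illustrative placeholder, not a flight value."""
    return "suspect_noise" if sensitivity > noise_bound else "plausibly_physical"
```

Used after the minimal self-consistency test, this gives operators a number to log alongside the configuration snapshot rather than a gut call.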
4) Minimum required telemetry fields (monitor-side “must have”)
Minimum fields to support on-orbit diagnostics
  • Event statistics: event_rate (per bin/type), max_burst, optional queue_depth/overflow.
  • Shape: histogram_bins (low-rate report), or a compact bin-count summary.
  • Decision state: threshold_state, dead_time (or occupancy), discriminator_mode.
  • Time integrity: timestamp_health, timebase_state, monotonicity/ordering flag.
  • Redundancy integrity: domain_status (A/B active), domain_id, cross-check/vote status.
  • Telemetry integrity: packet_seq, CRC_ok, config snapshot reference (cfg_hash or cfg_version).
5) Quick interpretation table (symptom → most likely cause → next check)
Fast mapping for operators
  • Only lowest bin spikes; higher bins flat → Noise/EMI coupling into discriminator → next check: threshold sensitivity test; dead_time_occupancy; multi-point sync
  • All bins increase; stable gradient across points → Real environment change (credible radiation shift) → next check: shape consistency + timestamp_health; confirm metadata tags (mode/attitude)
  • Slow creep over hours; tracks temperature → Threshold drift / bias/leakage drift → next check: temperature tag correlation; threshold_state history; compensation validity
  • Sudden step with domain switch; A/B mismatch → Failover side effects or configuration upset → next check: domain_status timeline; cfg_hash change; packet_seq continuity; cross-check status
  • Impossible counter jump or wrap without flags → Register upset / missing overflow signaling → next check: require explicit overflow flags; verify counter width; enforce plausibility checks
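The mapping above can be condensed into a first-pass classifier. This is a sketch only: the boolean inputs and returned strings are illustrative condensations of the table columns, and real triage would weigh more evidence than single flags.

```python
def triage(low_bin_only, all_bins_rise, multi_point_sync,
           tracks_temperature, domain_switched, overflow_flagged):
    """First-pass symptom -> most-likely-cause mapping (illustrative).

    Order matters: failover artifacts are ruled out before physics,
    mirroring the operator table above.
    """
    if domain_switched:
        return "failover side effect / configuration upset"
    if low_bin_only and not multi_point_sync:
        return "noise/EMI coupling into discriminator"
    if all_bins_rise and multi_point_sync:
        return "credible radiation change"
    if tracks_temperature:
        return "threshold/bias/leakage drift"
    if not overflow_flagged:
        return "possible register upset / missing overflow signaling"
    return "inconclusive: gather more evidence"
```

Each returned label corresponds to a "next check" column entry in the table, which is where the operator goes next.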
6) Example part numbers (architecture mapping; verify grade/QML per mission)
  • MCU (event management / telemetry): Microchip SAMRH71 (rad-hard MCU)
  • FPGA (histograms / higher-rate processing): Microchip RTG4 / RT4G150 (rad-tolerant FPGA family)
  • Comparator (multi-threshold discriminator): TI TLV1704-SEP (rad-tolerant quad comparator in SEP)
  • ADC (monitor acquisition): Renesas ISL73141SEH (rad-hard 14-bit SAR ADC, 1 MSPS class)
  • Op amp (monitor chain building blocks): Renesas ISL70444SEH (rad-hard quad op amp)
  • Voltage reference (ADC/AFE stability): Renesas ISL71091SEH10 (rad-hard precision reference)
  • MUX (range/channel/self-check selection): Renesas ISL73841SEH (rad-tolerant 32:1 analog MUX)
  • Electrometer-grade prototype AFE: ADI ADA4530-1 (femtoamp input bias; typically used for ground/prototype validation of pA–nA chains)
Figure F11 — On-orbit diagnostic decision tree: real radiation vs threshold drift vs noise triggering
(Decision-tree summary) On an event-rate spike: freeze the configuration and check integrity, then ask whether the change is synchronous across monitoring points. If YES, check the histogram shape: all bins rising with timestamp health OK indicates a real radiation change (tag mode/attitude and keep trending); only the lowest bin jumping indicates strong threshold sensitivity and noise-limited triggering. If NO, suspect local coupling or drift: a temperature step pointing to bias/leakage and threshold drift indicates local false triggers (review the threshold and dead-time policy). A domain switch or bad timestamp health points instead to a configuration/counter upset or a telemetry fault. Key fields: event_rate + histogram + threshold_state + dead_time + domain_status + timestamp_health + packet_seq/CRC.


H2-12 · FAQs ×12

These FAQs focus on monitor-side decision rules and deliverable fields (dose/rate, event bins, timestamp health, and redundant-domain status), avoiding system-wide power/bus/EMC deep dives.

1) TID and SEE are often mixed up—what’s the practical boundary?

TID is a slow, cumulative effect that changes device parameters over mission time, so it is tracked as dose_total and dose_rate. SEE is a fast, discrete phenomenon, so it is tracked as event_rate, counts_by_bin (histograms), and timestamps. If the data looks like a smooth trend, it is usually TID; if it looks like bursts or spikes, it is usually SEE. See H2-1/H2-2.

2) Which metric matters for lifetime planning: krad(Si) or dose rate?

Lifetime planning is primarily driven by cumulative krad(Si) (dose_total) because many degradations integrate over time. Dose rate is still important as a context tag: it can shift drift behavior and expose “rate-sensitive” offsets in the measurement chain. A robust monitor reports both dose_total and a filtered dose_rate, plus validity flags when temperature or range conditions limit accuracy. See H2-2/H2-4.
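The "filtered dose_rate" mentioned above can be sketched as a simple exponential moving average over successive dose_total samples. This is a minimal illustration under assumed names and units; the smoothing constant and filter topology are mission-specific choices, not values from this document.

```python
def filtered_dose_rate(dose_samples, dt_s, alpha=0.2):
    """EMA-filtered dose rate from successive dose_total samples.

    dose_samples: cumulative dose readings, krad(Si), one per dt_s seconds.
    alpha: illustrative smoothing constant (0 < alpha <= 1).
    Returns the filtered rate in krad(Si)/s after the last sample.
    """
    rate = 0.0
    prev = dose_samples[0]
    for d in dose_samples[1:]:
        inst = (d - prev) / dt_s          # instantaneous rate this interval
        rate = alpha * inst + (1 - alpha) * rate  # exponential smoothing
        prev = d
    return rate
```

Reporting the filtered rate alongside dose_total lets operators use the rate as a context tag without letting noise in single intervals dominate.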

3) RADFET vs PIN diode—how to choose for a small satellite?

Choose RADFET when the priority is long-term TID trending (parameter drift mapped to dose), and choose a PIN/Si detector when the design needs current/pulse response that supports dose-rate or event-style sensing. For small satellites, also weigh calibration burden and stability: RADFET needs strong temperature/annealing awareness, while PIN-based chains need disciplined threshold/dead-time policies and noise control. See H2-3/H2-4.

4) How can pA-level leakage be prevented from looking like “dose”?

pA leakage can mimic a slow “dose” trend if the dose chain cannot separate sensor signal from bias/leakage drift. Use a signal chain that exposes drift entry points: range states, saturation/overflow flags, and temperature tags. Favor integration windows and discharge paths that avoid silent accumulation, and validate that dose_rate changes are not perfectly correlated with temperature steps. Calibration should clearly mark what is correctable vs not. See H2-4/H2-7.
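One concrete form of the "not perfectly correlated with temperature" check is a correlation coefficient between the dose-rate trend and the temperature tag. A sketch, assuming plain Pearson correlation and an illustrative 0.9 limit; real validity logic would also consider lags and range states.

```python
def temp_correlated(dose_rate, temp, limit=0.9):
    """Flag a suspicious dose trend: near-perfect correlation with
    temperature suggests bias/leakage drift rather than real dose.
    The 0.9 limit is an illustrative placeholder.
    """
    n = len(dose_rate)
    mx = sum(dose_rate) / n
    my = sum(temp) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dose_rate, temp))
    sxx = sum((x - mx) ** 2 for x in dose_rate)
    syy = sum((y - my) ** 2 for y in temp)
    if sxx == 0 or syy == 0:
        return False          # a flat series carries no correlation evidence
    r = sxy / (sxx * syy) ** 0.5   # Pearson correlation coefficient
    return abs(r) >= limit
```

A flagged window would be marked "correctable vs not" by the calibration model rather than silently folded into dose_total.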

5) What causes false SEE triggers and how is debounce/dead-time set?

False triggers usually come from noise/EMI coupling, threshold drift, or an event chain tuned too close to the noise floor. Set the threshold with margin, add hysteresis, and use debounce/dead-time so one disturbance cannot be counted repeatedly. A practical tuning method is a small threshold scan: if event_rate collapses with tiny threshold changes, the chain is likely noise-limited. Monitor dead_time occupancy and overflow flags so the monitor never “looks quiet” while saturated. See H2-5/H2-11.
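The dead-time policy described above can be sketched as a counter that ignores crossings inside a fixed window after each accepted event. This is an illustrative software model of the policy, not flight discriminator logic; names and units are assumptions.

```python
def count_with_dead_time(event_times_us, dead_time_us):
    """Count discriminator crossings under a fixed dead-time window.

    Crossings that arrive within dead_time_us of the last *accepted*
    event are discarded, so one disturbance cannot be counted repeatedly.
    """
    accepted = 0
    last = None
    for t in sorted(event_times_us):
        if last is None or t - last >= dead_time_us:
            accepted += 1
            last = t   # dead-time window restarts at the accepted event
    return accepted
```

The fraction of wall time spent inside these windows is the dead_time occupancy that the monitor should telemeter, so a saturated chain never "looks quiet".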

6) Should the monitor report raw events or histograms?

Most systems benefit from a two-level approach: (1) real-time summaries (dose_total, dose_rate, event_rate, key flags) for alerting, and (2) low-rate histograms (counts_by_bin) for interpretability and trend comparison. Raw event lists with timestamps are valuable for correlation with external upsets, but they cost bandwidth and storage. If raw events are used, keep a bounded queue and always expose queue_depth/overflow flags. See H2-6.
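The bounded queue with exposed overflow can be sketched as follows; the class name, capacity, and field names are illustrative, and a flight implementation would live in fixed memory with hardware-visible flags.

```python
class EventQueue:
    """Bounded raw-event queue that fails 'loudly': drops are counted
    and exposed via queue_depth and an overflow flag, never silently lost.
    """
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.events = []
        self.dropped = 0

    def push(self, event):
        if len(self.events) < self.capacity:
            self.events.append(event)
        else:
            self.dropped += 1   # count, don't hide, the loss

    @property
    def queue_depth(self):
        return len(self.events)

    @property
    def overflow(self):
        return self.dropped > 0
```

Downstream consumers treat `overflow` as evidence that event_rate is a lower bound for that window, not a true count.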

7) How is timestamp accuracy ensured across redundant domains?

Redundant domains must make time integrity observable. Each report should include domain_id, domain_status, and timestamp_health (timebase_state), so consumers can detect loss of lock, recovery, or failover. If domains can diverge, publish a declared bound or confidence flag rather than implying perfect time. During failover, continuity relies on packet_seq and explicit state transitions so the record can be stitched without inventing events. See H2-5/H2-8.

8) Why does dose sometimes “go backward” after an eclipse (annealing/temperature)?

A “dose going backward” is usually a sensor/physics artifact, not negative radiation. Temperature changes and annealing effects can shift RADFET-like outputs or bias terms, making the interpreted dose appear to decrease. A well-designed monitor separates raw measurement from a monotonic accumulator and reports temperature tags plus validity flags. If corrections are applied, the monitor should version coefficients and expose health checks that show when the model is out of range. See H2-7.
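The separation of raw measurement from a monotonic accumulator can be sketched as below. This is an illustrative model of the idea, assuming a simple delta-based update; real designs would also version correction coefficients and tag temperature.

```python
class DoseAccumulator:
    """Keep dose_total monotonic while still exposing the raw readback.

    Annealing/temperature dips show up in `raw` and in a `suspect`
    validity flag, but never drive the accumulator backward.
    """
    def __init__(self):
        self.dose_total = 0.0   # monotonic accumulator, krad(Si)
        self.raw = 0.0          # latest raw interpreted dose
        self.suspect = False    # set when raw implies "negative dose"

    def update(self, raw_dose):
        delta = raw_dose - self.raw
        self.raw = raw_dose
        if delta >= 0:
            self.dose_total += delta
            self.suspect = False
        else:
            self.suspect = True  # annealing/temperature artifact, not physics
```

Operators then trend `dose_total` for lifetime margin and use `raw` plus the flag to diagnose sensor-physics artifacts after eclipses.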

9) Where should monitors be placed to correlate with avionics upsets?

Use placement to match the correlation goal. A representative point supports long-term environment trending, while a worst-case point near a sensitive module captures local peaks that explain upsets. A third correlation point near a structural feature can reveal bias/lag effects. Multi-point gradients often add more interpretability than a single “perfect” location. Always tag configuration and placement metadata so trends remain comparable after hardware changes. See H2-9.

10) What are pass/fail criteria for heavy-ion SEE testing of the monitor itself?

Pass/fail should be defined in monitor terms: controlled false-trigger rate under the intended threshold policy, quantified missed-event risk, and consistent binning across threshold scans. Counters and queues must never overflow silently—overflow and saturation flags are mandatory. Timestamp integrity must remain observable (timestamp_health/timebase_state), and redundancy behavior must preserve continuity across failover without creating fake bursts. Deliverables should include scan summaries, counts_by_bin, rate windows, and continuity evidence. See H2-10.

11) How to prove a spike was real radiation, not EMI/cabling noise?

Treat it as a proof problem: require at least two independent evidence types. First, check multi-point consistency (synchronous rise or stable gradients). Second, check shape consistency (bins beyond the lowest move; burst statistics look physical). Third, confirm state consistency (threshold_state and dead_time stable; timestamp_health OK; no domain switch artifacts). If a tiny threshold change collapses the rate, the spike is likely noise-limited rather than radiation-driven. See H2-11.

12) Can a monitor survive SEL events, and what must remain operational?

The monitor should be designed so at least one domain can keep reporting a trustworthy “health envelope” during disturbances. Minimum required continuity includes: domain_status, timestamp_health/timebase_state visibility, summary counters (dose_total/event_rate), and integrity fields (packet_seq/CRC_ok). If a reset or failover occurs, it must be explicitly flagged so spikes are not misinterpreted as physics. Qualification should show that failover preserves observability and that counters/queues fail “loudly” via flags, not silently. See H2-8/H2-10.