Secure RTC / Time-Stamping turns “time” into verifiable evidence: every reboot, tamper, and power-fail event is bound to a monotonic counter and a signed proof artifact that can be audited and that resists rollback and replay.
The goal is not better accuracy—it is integrity, continuity, freshness, and auditability that still hold when the device is attacked or loses power.
Definition & Scope: What “Secure Time” Means
Secure time turns “time” into verifiable evidence: it can be checked, audited, and proven resistant to tampering and rollback.
This page focuses on integrity and provability—not on ppm accuracy, crystal selection, or battery-life math.
One-line definition
Secure time = Integrity + Continuity + Freshness + Auditability.
What each term must deliver
Integrity
Evidence cannot be modified without detection.
Artifact: signature verification / hash-chain continuity.
Continuity
Reboots and power events leave a provable timeline (or a clearly marked gap).
Artifact: monotonic counter + reset/power-fail cause.
Freshness
Old proofs cannot be replayed to masquerade as new time.
Artifact: counter/nonce binding + replay rejection.
Auditability
A third party can verify “what happened when” using public verification material.
Artifact: device identity + certificate chain reference.
Note: secure time improves trust, not necessarily absolute accuracy.
What can be proven vs what can’t
Can be proven (evidence)
Event/time records were not silently modified (signature/hash integrity).
Time did not move backward without raising a rollback/tamper condition.
Power-fail/reset causes were latched and committed in a defined order.
Time proofs are fresh (replay attempts are rejected).
Not guaranteed by secure time alone
Absolute UTC accuracy (ppm drift, crystal behavior, temperature compensation).
Network-sync correctness under adversarial delay or GNSS spoofing.
Perfect physical security if trust boundary is bypassed (requires system hardening).
Typical system boundary (trust zones)
Trusted boundary
Components allowed to hold keys and produce verifiable time tokens:
Secure RTC / Secure Element / SoC secure enclave.
Untrusted zone
Host software and buses can be monitored or manipulated. A plain I²C “time read” may be useful,
but is not evidence without a signed token or protected state.
External verifier
A server or audit tool checks signature,
freshness, and timeline consistency,
then stores evidence for later dispute resolution.
If only one thing is implemented
The minimum secure-time loop is: event + monotonic counter + signed token.
This provides integrity, freshness, and rollback detection without requiring a full secure log subsystem. At minimum, the signed token binds:
Event context (power-fail/tamper/reset cause flags).
Device identity (public-key ID / certificate reference).
Signature (verifiable integrity proof).
Acceptance check
A verifier can reject token replay, detect time rollback, and confirm event integrity by signature—without trusting the host.
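The minimal loop above can be sketched in a few lines. This is an illustrative Python model, not a device API: HMAC stands in for the asymmetric signature a real trusted boundary would produce, and all field names are assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical device key; in a real design the private key never leaves
# the trusted boundary and an asymmetric signature replaces this HMAC.
DEVICE_KEY = b"device-secret-held-inside-trusted-boundary"

def make_token(device_id: str, counter: int, event_flags: dict) -> dict:
    """Bind event context + monotonic counter + identity into one token."""
    payload = {
        "device_id": device_id,   # identity binding
        "counter": counter,       # freshness / anti-rollback binding
        "event": event_flags,     # power-fail / tamper / reset context
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_token(token: dict) -> bool:
    """Recompute the MAC over everything except the signature field."""
    body = {k: v for k, v in token.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expect = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, token["sig"])

t = make_token("dev-01", 42, {"power_fail": True})
assert verify_token(t)
t["counter"] = 41            # any field change breaks the signature
assert not verify_token(t)
```

Because the counter sits inside the signed bytes, a verifier that tracks last-seen counters can reject both modified tokens and replayed ones.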
Diagram: Time Trust Pyramid (layers of provable time)
Reading time is optional; proving time requires trusted state (counter/storage) and a verifiable proof (signature/attestation).
Threat Model: Tamper, Rollback, Spoof, and Power-Fail Abuse
Threat modeling for secure time is not about listing every security risk—it is about mapping time-specific attacks to
required proofs. Each item below defines what must be captured and verified so that later disputes can be resolved by evidence.
Threat: Rollback
High impact
Time state is forced back to an older value to bypass policy windows, audits, billing, or warranty rules.
Attacker capability
Can influence host writes, swap backup components, or restore older state from storage snapshots.
Impact
“Expired” conditions appear valid again; event timelines become disputed and unreliable.
Required proof
Monotonic counter never decreases across resets and backup swaps.
Time-backward rule triggers a rollback/tamper flag.
Signed evidence binds time + counter + device identity.
Threat: Time spoof
Evidence corruption
A forged time value is injected to create or deny events (“this happened” / “this did not happen”).
Attacker capability
Can modify host software, inject bus traffic, or overwrite unprotected time registers/state.
Impact
Logs and policy decisions become disputable; audit trails fail under scrutiny.
Required proof
Time statements must be signed by a trusted boundary, not by host software.
Proof must include event context and device identity.
Verifier rejects proofs that lack freshness binding (counter/nonce/window).
Threat: Power-glitch abuse
Commit-window risk
A controlled brownout or sudden drop is used to interrupt state updates and create inconsistent evidence.
Attacker capability
Can induce dips/interruptions so that only part of the “event record” is written before reset.
Impact
Missing power-fail evidence, ambiguous reset causes, or corrupted flags that weaken auditability.
Required proof
Reset/power cause is latched early and preserved.
Commit order is defined (cause → counter bump → flag/log marker).
Evidence includes a commit marker so partial writes are detected.
Threat: Tamper sensor bypass
Silent intrusion
Tamper inputs are held at “normal” levels or spoofed so physical intrusion is not reflected in evidence.
Attacker capability
Can short/clamp tamper pins, inject a constant state, or bypass a single sensor channel.
Impact
Evidence remains “clean” even after physical access; rollback/spoof attempts become harder to dispute.
Required proof
Use latched tamper flags and timestamped events (not only level reads).
Prefer multi-channel evidence (at least two independent signals).
Signed tokens include tamper state snapshot when relevant.
Threat: Proof replay
Freshness break
A previously valid signed-time proof is captured and replayed to impersonate a new event.
Attacker capability
Can record tokens/messages on an untrusted channel and resend them later unchanged.
Impact
Audit systems accept stale evidence; policies triggered by “fresh time” can be bypassed.
Required proof
Proof binds to freshness: nonce challenge, counter, and/or verifier window.
Verifier stores last-seen counter per device and rejects duplicates.
Device increments monotonic counter on relevant events (power-fail/tamper).
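The verifier side of these rules can be sketched as a small stateful check; names are illustrative and assume the verifier persists per-device state.

```python
class FreshnessVerifier:
    """Store last-seen counter per device; reject duplicates and regressions."""

    def __init__(self):
        self.last_seen = {}   # device_id -> last accepted counter

    def accept(self, device_id: str, counter: int) -> bool:
        last = self.last_seen.get(device_id, -1)
        if counter <= last:          # exact replay or rollback -> reject
            return False
        self.last_seen[device_id] = counter
        return True

v = FreshnessVerifier()
assert v.accept("dev-01", 10)
assert not v.accept("dev-01", 10)   # exact replay rejected
assert not v.accept("dev-01", 9)    # older counter rejected
assert v.accept("dev-01", 11)
```

The comparison must be strict (`<=`, not `<`) so that resending the most recent token is also rejected, not just older ones.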
Diagram: Attack surface map (time evidence paths)
Each threat is resolved by demanding a specific proof artifact (counter, commit marker, signed token, tamper snapshot) rather than trusting host-reported time.
Secure Time Building Blocks: Trusted Counter, Secure Storage, Keys
Secure time is not “a more accurate RTC.” It is a provable state machine built from a few purchasable blocks:
a monotonic counter (anti-rollback anchor), secure storage (evidence state),
and keys (integrity & non-repudiation). The root of trust defines where these blocks can live.
Block: Monotonic counter
Anti-rollback anchor
Responsibility
Never decreases across reset, power-fail, and backup switchover.
Binds to proofs: included in signed tokens and log head updates.
Defines “freshness”: duplicate or older counter values are rejected.
Failure modes (observable)
Counter can be overwritten or restored from older snapshots → rollback succeeds.
Tokens do not include counter → proofs can be replayed unchanged.
Counter increments are too sparse or inconsistent → events cannot be ordered reliably.
How to verify
Power-cycle stress: verify monotonicity after N rapid resets and brownouts.
Backup swap test: remove/replace backup source and confirm no counter decrease.
Replay test: resend an old token and confirm rejection via stored last-seen counter.
Block: Secure storage
Evidence state
Responsibility
Stores last-known-good (LKG) time state bound to counter.
Stores latched tamper/rollback flags (audit-relevant, not silently cleared).
Stores evidence continuity (log head / hash-chain head / commit marker).
Failure modes (observable)
Tamper flags can be erased without trace → intrusion becomes disputable.
Commit is non-atomic under power-glitch → partial state is indistinguishable from valid state.
LKG time is stored without counter binding → rollback detection is weakened.
How to verify
Fault-injection: cut power during commit and verify detection via commit marker.
Flag persistence: confirm tamper/rollback flags survive resets and backup switching.
Continuity check: verify log head/hash head updates match token counter progression.
Block: Key material
Integrity & non-repudiation
Core rule
Signatures prove that content was not modified; freshness still requires counter/nonce binding.
The private key must remain inside the trusted boundary (no export path).
How to verify
Verify signature with known public key chain; reject unknown key IDs.
Field coverage test: remove one required field and confirm verifier rejects token.
Uniqueness check: confirm device IDs are unique across production samples.
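The field-coverage test can be modeled as a simple required-field gate; the field names below are assumptions, not a specific token schema.

```python
# Hypothetical required-field set for a signed time token.
REQUIRED = {"device_id", "counter", "nonce_id", "sig"}

def has_required_fields(token: dict) -> bool:
    """Verifier precondition: every required field must be present."""
    return REQUIRED.issubset(token)

good = {"device_id": "dev-01", "counter": 7, "nonce_id": "n1", "sig": "aa"}
assert has_required_fields(good)

# Field-coverage test: removing any one required field must cause rejection.
for field in ("device_id", "counter", "nonce_id", "sig"):
    broken = {k: v for k, v in good.items() if k != field}
    assert not has_required_fields(broken)
```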
Block: Root of trust
Where the keys/counter can live
Root of trust is the boundary that can hold keys and protected time state. For secure time, the decision is not “which crypto,”
but which component is allowed to produce verifiable evidence without trusting the host.
Time-focused responsibilities
Protect private key and enforce non-export policy.
Protect monotonic counter and evidence state against rollback.
Provide an attestation API (sign token over canonical fields).
Acceptance check
If host software can rewrite time state or replay proofs without detection, the chosen trust boundary is insufficient.
Diagram: Minimal secure-time core (trusted vs untrusted)
The trusted boundary must protect counter and evidence state; signatures alone cannot prevent replay without counter/nonce binding.
Signed Time & Attestation Flow: What Gets Signed, and How It’s Verified
A signed time token is a verifiable evidence package. It is not a “time read.”
It must bind time to counter, event context,
and device identity so a verifier can detect replay, rollback, and tamper-related disputes.
Tamper Detection Pipeline: Multi-Channel Evidence
Tamper is not “one pin.” For secure time, tamper must be treated as multi-channel evidence:
sensors feed an aggregator (debounce + vote + latch), a policy engine selects the response level, and evidence outputs are recorded
(flag + counter bump + log head + optional signed token). This structure reduces disputes from false positives and makes bypass attempts observable.
Trigger strategy
latched + debounced + voted
Latched vs level: audit-relevant tamper must have a latched form (no “close the lid and erase history”).
Debounce as evidence: record a configuration ID (or debounce level) so decisions remain explainable under audit.
A tamper pipeline is effective only when it produces durable evidence: latched flags, monotonic sequencing, and explainable policy decisions.
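The aggregator stage (debounce + vote + latch) can be sketched as follows; channel count, debounce depth, and vote threshold are placeholders, not recommended values.

```python
class TamperAggregator:
    """Debounce each channel, vote across channels, latch the result.

    Latched means the flag is never cleared by returning to "normal"
    levels: closing the lid cannot erase history.
    """

    def __init__(self, n_channels: int, debounce: int = 3, votes: int = 2):
        self.debounce = debounce
        self.votes = votes
        self.streak = [0] * n_channels   # consecutive asserted samples
        self.latched = False

    def sample(self, levels) -> bool:
        """levels: per-channel booleans, True = tamper asserted."""
        for i, lvl in enumerate(levels):
            self.streak[i] = self.streak[i] + 1 if lvl else 0
        active = sum(s >= self.debounce for s in self.streak)
        if active >= self.votes:
            self.latched = True          # latched: this path never clears it
        return self.latched

agg = TamperAggregator(n_channels=3)
for _ in range(3):
    agg.sample([True, True, False])      # two channels agree for 3 samples
assert agg.latched
agg.sample([False, False, False])        # "normal" levels do not clear it
assert agg.latched
```

In a real design the latch would live in an always-on domain, and the debounce/vote configuration ID would be recorded so decisions remain explainable under audit.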
Power-Fail & Reset Evidence: Capturing Events You Can Trust
Power-fail evidence is a short-window commit protocol. The objective is not “record everything,” but to guarantee that
the system can classify writes as complete vs incomplete after any interruption. Without a commit marker and an enforced storage order,
power-glitch attacks can create ambiguous partial state that undermines secure time.
Minimal reliable record (strict order)
Latch cause: BOR/reset-cause/backup removal/clock-stop flags are latched first.
Bump monotonic counter: sequence advances so the event cannot be replayed as “old.”
Write event flag: persist event/tamper state that explains any continuity gap.
Commit marker (write it whenever the energy budget allows; always preferred): finalize the update so reboot selection is unambiguous.
If the commit step cannot be guaranteed, the design must still guarantee that incomplete state is detectable and handled by policy (mark gap / lock).
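The strict order can be modeled with an in-memory stand-in for non-volatile storage, where a power cut is simulated by stopping mid-sequence. This is an illustrative sketch, not firmware.

```python
def write_event_record(storage: dict, cause: str, counter: int,
                       cut_after=None) -> None:
    """Strict order: latch cause -> bump counter -> event flag -> marker last."""
    steps = [
        ("cause", cause),
        ("counter", counter + 1),
        ("event_flag", True),
        ("commit", True),            # marker written last, on purpose
    ]
    for i, (key, val) in enumerate(steps):
        if cut_after is not None and i >= cut_after:
            return                    # power lost mid-commit
        storage[key] = val

def record_is_complete(storage: dict) -> bool:
    """Only a present commit marker makes a record trustworthy."""
    return storage.get("commit") is True

full, partial = {}, {}
write_event_record(full, "brownout", 7)
write_event_record(partial, "brownout", 7, cut_after=2)  # cut before flag
assert record_is_complete(full)
assert not record_is_complete(partial)   # detectable-incomplete, never ambiguous
```

Because the marker comes last, every possible cut point leaves the record either complete or classifiable as incomplete; there is no valid-looking partial state.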
Event
Brownout / slow ramp
Required capture latency (target class)
Few-ms class. Brownout can hover near thresholds, causing repeated resets; evidence must remain consistent across cycles.
Storage order
Latch cause → Counter++ → Event flag → Commit marker (A/B slot or log head).
Pass criteria (verification)
Counter never decreases across repeated brownout resets.
Reboot always selects the last complete commit (no ambiguous “half-valid” state).
Continuity gaps are explicitly marked (flag + cause) rather than silently ignored.
Event
Sudden drop / power cut
Required capture latency (target class)
Sub-ms to few-ms class (system hold-up dependent). Prioritize a minimal, deterministic commit sequence.
Storage order
Latch cause → Counter++ → Event flag (commit marker if budget allows; otherwise detect incomplete on reboot).
Pass criteria (verification)
Random cut (Monte Carlo) produces only two outcomes: complete or detectable-incomplete.
No token/evidence is accepted by verifier if counter or commit state is older/ambiguous.
Reset cause and event flags match observed waveforms (scope correlation).
Event
Battery removal / backup switchover
Required capture latency (target class)
Few-ms to tens-of-ms class. The priority is continuity classification: “backup maintained” vs “backup removed.”
Storage order
Latch removal/switchover cause → Counter++ → Continuity flag → Commit marker (selectable on reboot).
Pass criteria (verification)
Counter monotonicity holds across backup removal and reinsertion.
Evidence explicitly marks continuity gaps when backup power is lost.
Verifier rejects tokens that imply continuity without a valid commit trail.
A secure power-fail record must survive worst-case cut points: it either completes cleanly or fails in a detectable way that policy can handle.
Anti-Rollback & Time Continuity: Keeping Time Trust Across Reboots
Rollback attacks do not need perfect time control. They only need the system to accept an old world
as valid. A secure design distinguishes time value (a readable number) from
time proof (a verifiable continuity claim). Time proof requires a monotonic sequence and a durable anchor,
not just an RTC counter that can be reset or rewritten.
Core concept
time value ≠ time proof
Time value (readable)
Useful for scheduling and UX, but not sufficient as evidence. It can be moved backward via backup loss, firmware rollback, or replayed state.
Time proof (verifiable)
Monotonic counter that never decreases.
Anchor (last-known-good + commit marker) that survives reboots.
Audit trail (flags/log head/token) that makes gaps explicit.
Common rollback paths
State reset & substitution
Backup battery removal / replacement
Factory reset that clears anchors/logs
Module replacement that swaps identity or storage
Software & replay
Firmware downgrade / rollback to older policies
Replay of old signed tokens or cached “good” state
Power-glitch to create ambiguous partial commits
Rollback detection rules (engineering)
Rules → action (must be deterministic)
Counter monotonicity: if counter decreases → set tamper flag, lock or degrade trust.
Time moved backward: if time < LKG − window → mark rollback (window prevents benign sync jitter disputes).
Commit classification: missing/invalid commit marker → treat update as incomplete; mark continuity gap.
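The three rules map to a deterministic classifier. The window value and the action names below are illustrative placeholders to be set by system policy.

```python
def classify(counter: int, last_counter: int,
             time_s: float, lkg_time_s: float,
             commit_ok: bool, window_s: float = 5) -> str:
    """Deterministic rollback-detection rules, evaluated in priority order."""
    if counter < last_counter:
        return "TAMPER_LOCK"          # counter monotonicity violated
    if time_s < lkg_time_s - window_s:
        return "ROLLBACK_FLAG"        # time moved backward beyond window
    if not commit_ok:
        return "CONTINUITY_GAP"       # incomplete commit -> mark gap
    return "OK"

assert classify(10, 11, 1000, 900, True) == "TAMPER_LOCK"
assert classify(12, 11, 800, 900, True) == "ROLLBACK_FLAG"
assert classify(12, 11, 1000, 900, False) == "CONTINUITY_GAP"
assert classify(12, 11, 898, 900, True) == "OK"   # jitter inside window
```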
The audit story is only as strong as the recorded fields. If a field is not persisted, disputes cannot be resolved reliably.
Recovery policy (after rollback)
Degrade
Continue providing time value but set trust_level=degraded; gaps are explicit and auditable.
Quarantine
Refuse high-grade proofs (no signed tokens) and isolate critical operations until external anchor verification succeeds.
Re-provision
Enter recover/provisioned state; re-bind anchors/identity so future proofs are consistent and non-repudiable.
Diagram: Time rollback detector state machine
A robust rollback detector must be able to classify state after any interruption and must never “silently accept” an older anchor.
Interfaces & System Integration: Host MCU, Secure Element, and Trust Boundary
A secure RTC is not a normal I²C time peripheral. A host read returns a time value;
credibility comes from a protected state and a verifiable token
that crosses a clearly defined trust boundary. Integration must prevent the host (or the bus) from rewriting anchors or replaying proofs.
Two interface planes
Data plane
Read/write commands and readable registers (time value, status, event summaries). Useful, but not inherently proof-grade.
Evidence plane
Verifiable artifacts (token / protected anchor / counter) and event causality (tamper, power-fail, commit state) that a verifier can store.
Integration patterns
Pattern A: Secure RTC holds key
Pros: proofs created inside RTC boundary; host cannot forge tokens.
Cons: key injection and lifecycle controls are stricter.
Use when: high audit strength is required under physical access risk.
Pattern B: RTC + Secure Element
Pros: RTC provides counter/events; SE provides key & signing.
Cons: RTC→SE evidence path must prevent host editing/replay.
Use when: platforms already ship with SE/TPM and want reuse.
Pattern C: SoC enclave + external RTC
Pros: enclave can store anchors and enforce policy in software.
Cons: external RTC must deliver evidence into enclave in a non-forgeable way.
Use when: integrated SoC platforms need low-power timebase + enclave policy.
Integration checklist (short)
Host reads must distinguish value vs proof; only proofs are audit-grade.
Integration succeeds when the trust boundary is explicit: who holds keys, who can update anchors, and where replay is rejected.
Key Specs & Verification Metrics: What to Measure and What “Pass” Looks Like
Secure time must be testable. A proof-grade design is defined by deterministic capture, durable evidence, verifiable tokens,
and auditable continuity. Each metric below includes a reproducible test method, required fixtures, and a pass template that can be
filled by the system budget.
Metrics map (metric → threat)
Detection
tamper latency / false positives / rollback trigger
Prevents “silent bypass” and reduces disputes by quantifying trigger behavior.
Persistence & continuity
power-fail success / commit classification / hash chain
Makes gaps explicit: complete vs detectable-incomplete, never ambiguous.
Verification
token verify / replay reject / failure taxonomy
Ensures proofs are verifiable and stale proofs are rejected deterministically.
Provisioning & Manufacturing: Keys, Certificates, and Audit Readiness
Manufacturing is part of the security boundary. Provisioning must guarantee unique identity, prevent key exposure,
and create an audit anchor that survives the entire lifecycle. The goal is a closed loop:
identity bound → key injected/derived (no readback) → secure-time state initialized → locked → baseline proof stored in audit DB.
Provisioning pipeline (step → artifact)
Assign device identity → device_id / pubkey_id recorded with uniqueness enforcement.
At end-of-line, generate a baseline record that a verifier can store as an external anchor. This baseline links identity, counter,
and initial secure-time state so future disputes can be evaluated against a known-good reference.
A production line is secure only if secrets never leave the secure boundary and every step produces an auditable artifact.
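The uniqueness-enforcement step can be sketched as an audit-DB gate; this is a hypothetical structure, and a real line would also log immutable per-unit artifacts alongside the check.

```python
class AuditDB:
    """Refuse any baseline record that duplicates an earlier unit."""

    def __init__(self):
        self.ids, self.pubkeys = set(), set()

    def register(self, device_id: str, pubkey_id: str) -> bool:
        if device_id in self.ids or pubkey_id in self.pubkeys:
            return False              # duplicate -> stop the line and inspect
        self.ids.add(device_id)
        self.pubkeys.add(pubkey_id)
        return True

db = AuditDB()
assert db.register("dev-01", "pk-01")
assert not db.register("dev-01", "pk-02")   # reused identity rejected
assert not db.register("dev-02", "pk-01")   # reused key rejected
assert db.register("dev-02", "pk-02")
```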
Applications: Where Secure Time Is Non-Negotiable
Secure time matters when time becomes evidence: logs, claims, policies, and audits must remain trustworthy across
reboots, power failures, and physical tamper attempts. Each use case below is framed as “what must be proven” instead of “how accurate the clock is”.
Secure logging & forensics
Problem
Logs can be edited, re-ordered, or “backdated” after an incident.
What must be proven
Each entry has a verifiable time token and an unbroken integrity chain.
Minimal pattern
Monotonic counter + hash-chain head stored in secure boundary + signed time token on key events.
Pass criteria
Verifier rejects any entry with counter ≤ last_seen; hash-chain continuity has 0 gaps over the audit window.
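The hash-chain pattern can be sketched as follows: SHA-256 chaining with counter binding, where removing or reordering any entry breaks verification at the exact cut point. Formats are illustrative.

```python
import hashlib

def append(head: bytes, counter: int, entry: bytes) -> bytes:
    """New head commits to the previous head, the counter, and the entry."""
    return hashlib.sha256(head + counter.to_bytes(8, "big") + entry).digest()

def verify_chain(entries) -> bool:
    """entries: list of (counter, entry_bytes, recorded_head)."""
    head = b"\x00" * 32
    last_counter = -1
    for counter, entry, recorded_head in entries:
        if counter <= last_counter:
            return False              # ordering / replay violation
        head = append(head, counter, entry)
        if head != recorded_head:
            return False              # integrity break at this entry
        last_counter = counter
    return True

head = b"\x00" * 32
log = []
for c, e in [(1, b"boot"), (2, b"tamper"), (3, b"power_fail")]:
    head = append(head, c, e)
    log.append((c, e, head))
assert verify_chain(log)
assert not verify_chain(log[:1] + log[2:])   # removed entry breaks the chain
```

In the pattern above, only the current head needs to live inside the secure boundary; the entries themselves can sit in untrusted storage.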
Warranty & usage-based billing
Problem
Time rollback can extend warranty or reduce billed runtime.
What must be proven
Runtime and key events remain monotonic and externally anchorable.
Minimal pattern
Counter-bound signed tokens on service/billing checkpoints + local rollback detector (time moved backward ⇒ tamper flag).
Pass criteria
Any rollback attempt triggers “tampered” state; backend freshness window rejects stale tokens (counter/nonce mismatch).
Industrial safety events & accountability
Problem
Power loss/reset can erase “who/when/why” of a shutdown or fault.
What must be proven
A power-fail event record was captured before backup domain collapses.
Minimal pattern
Latched reset cause + counter bump + atomic flag commit (+ optional hash head update) on brownout path.
Pass criteria
Over N forced drops, capture success rate meets target; any partial write is detectable and never “looks valid”.
Access control & device attestation
Problem
Policies based on “time since last check-in” can be bypassed by backdating.
What must be proven
Time evidence is bound to device identity and is freshness-checked by verifier.
Minimal pattern
Challenge–response signed time token (nonce + counter + context) + strict accept window on backend.
Pass criteria
Replay rejection is 100% for reused nonce/counter; verifier state never regresses across reconnects.
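The challenge–response pattern can be sketched with single-use nonces. HMAC stands in for the device signature, and all names are assumptions.

```python
import hashlib
import hmac
import secrets

# Demo key shared for the sketch; a real device signs with a private key
# that never leaves its boundary.
KEY = b"shared-demo-key"

def device_respond(nonce: bytes, counter: int) -> bytes:
    """Device binds the verifier's nonce to its monotonic counter."""
    return hmac.new(KEY, nonce + counter.to_bytes(8, "big"),
                    hashlib.sha256).digest()

class Verifier:
    def __init__(self):
        self.pending = set()          # outstanding, not-yet-used nonces

    def challenge(self) -> bytes:
        nonce = secrets.token_bytes(16)
        self.pending.add(nonce)
        return nonce

    def check(self, nonce: bytes, counter: int, tag: bytes) -> bool:
        if nonce not in self.pending:
            return False              # unknown or already-used nonce
        self.pending.discard(nonce)   # single use: replaying the tag fails
        expect = device_respond(nonce, counter)
        return hmac.compare_digest(expect, tag)

v = Verifier()
n = v.challenge()
tag = device_respond(n, 5)
assert v.check(n, 5, tag)
assert not v.check(n, 5, tag)   # replayed response rejected
```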
Metering & data integrity pipelines
Problem
Old “valid” data packets can be replayed; timestamps can be rewritten to match quotas.
What must be proven
Each record is fresh and ordered, anchored by a monotonic proof.
Minimal pattern
Signed time token included in data envelope + verifier stores last counter per device to enforce ordering.
Pass criteria
Any duplicated token or counter regression is rejected; audit can reconstruct a total order of records.
Cameras / recorders evidence chain
Problem
Footage can be spliced; “record time” can be forged to create false narratives.
What must be proven
Frames/segments carry non-repudiable time evidence and tamper/power-fail context.
Minimal pattern
Periodic signed time tokens + per-segment hash chain; power-fail marker commits before shutdown.
Pass criteria
Any removed/reordered segment breaks verification; verifier identifies exact cut point and flags tamper.
Use-case map: which evidence types each application needs
Implementation note: “Secure time” typically lands as RTC + monotonic proof + verifiable token.
If only one path gets hardened, prioritize the monotonic proof chain (counter + secure commit) because it directly blocks rollback and replay.
IC Selection & BOM Hooks: How to Choose Secure RTC / Companion SE
This is not a shopping list. It is a selection logic for building tamper + anti-rollback + signed time
into a system without letting the host interface become the trust anchor by accident.
A) Must-have capabilities (verify in datasheet)
Tamper record is sticky (latched) and included in signed context
B) Trade-offs checklist (choose intentionally)
RTC-only vs RTC+SE: RTC-only is simpler but rarely satisfies “signed time”; RTC+SE isolates keys and tightens non-repudiation.
Integrated key vs external key store: integrated reduces attack surface; external enables reuse across SKUs but increases interface hardening work.
Token frequency: more frequent tokens improve audit resolution but increase energy/bandwidth and verifier state size.
Tamper sensitivity: aggressive thresholds reduce bypass risk but can raise false positives; mitigate by recording multi-channel evidence context.
Backup domain energy: larger holdover improves commit success; verify worst-case leakage and brownout edges, not typical numbers.
Common failure mode
Treating an I²C “time read” as trusted. A secure architecture treats the signed token (or protected register inside a secure boundary) as trusted — not the host bus.
C) Integration questions (ask before committing)
Host interface
How is the challenge/nonce delivered and rate-limited?
Are tamper pins latched and routed with priority (interrupt + always-on domain)?
Is there a defined “safe readout” mode after tamper (read evidence, but no state changes)?
Backend verifier
What freshness window is acceptable (time skew + transport delay)?
What state is stored per device (last counter, last nonce, last token hash)?
How are “tampered” devices handled (deny / quarantine / degrade / re-provision)?
Power-fail path
What is the guaranteed energy/time budget for “cause → counter → commit”?
Is the storage update atomic (dual-slot / journal) with a clear “valid” marker?
How is brownout/glitch injection tested (edge rate, droop depth, repetition)?
These part numbers speed up datasheet lookup and prototyping. Always verify package/suffix, availability, and whether the security properties meet the threat model.
Secure element / authenticator (keys + signatures)
Practical rule: if non-repudiation or external audit is required, default to RTC + SE/TPM and treat the host bus as untrusted.
Then allocate verification time to glitch/power-fail injection and replay/rollback correctness.
Each answer is a four-line, testable checklist: Likely cause / Quick check / Fix / Pass criteria.
Placeholders (N, T, SR_TARGET, FP_TARGET) must be set by system budget.
Signed time verifies locally but fails on server—what’s the first replay/freshness check?
Likely cause: Verifier state uses (nonce/counter/window) but the signed payload is missing a binding field, or the verifier stores the wrong “last seen” reference.
Quick check: Compare device token fields vs verifier rules: {device_id, token_id, counter, nonce_id, issued_time}; confirm server persists last_seen_counter and last_seen_nonce per device_id.
Fix: Bind {nonce_id + counter + device_id + context} inside the signed payload; server rejects any nonce reuse and any counter ≤ last_seen_counter.
Pass criteria: verify_success_rate ≥ SR_TARGET over N requests; replay_accept_rate = 0; counter_regressions = 0 (device_id scoped).
Time moved backward after battery swap—how to confirm rollback detection is armed?
Likely cause: Monotonic proof is not enforced on boot path (time is accepted before counter/LKG validation), or the “time-backward” rule is disabled in the policy state machine.
Quick check: Power-cycle with a forced older time source; read {tamper_flag, rollback_flag, last_known_good_time, monotonic_counter}; confirm flags latch across reboot and are included in signed context.
Fix: Gate “time becomes trusted” on counter/LKG validation; if time < LKG_time or counter regresses, latch rollback_flag and force re-provision or degraded mode.
Pass criteria: rollback_detect_rate = 100% across scenario_set size N; rollback_escape_count = 0; latch_persistence = 100% across M reboots.
Tamper pin triggers randomly—how to separate sensor noise vs real intrusion?
Likely cause: Input integrity issue (floating, long trace, EMI, missing debounce) or sensor biasing outside valid range (light/temp switch thresholds drifting).
Quick check: Log per-trigger context {pin_level, duration, supply_ok, temp, light, reset_cause, counter}; test with shielded cable/short input; compare false triggers per hour with and without EMI stress.
Fix: Add input conditioning (defined pull, RC or digital debounce, shielded routing) and require multi-channel agreement before latching; record the debounce/vote configuration ID so trigger decisions stay explainable.
Pass criteria: false_positive_rate ≤ FP_TARGET (per day) under defined EMI profile; detection_latency ≤ LAT_TARGET; evidence_bundle completeness = 100% for each trigger.
Power-fail events are missing intermittently—what is the first commit-order verification?
Likely cause: Commit order places “valid marker” too early, or there is no atomic journal; brownout cuts power between flag write and marker update.
Quick check: Force N random-phase power drops; inspect records for {reset_cause, counter_bump, event_flag, valid_marker}; verify marker is written last and partial records are detected as invalid.
Fix: Use dual-slot (A/B) journal: write payload → write checksum/hash → write valid_marker last; on boot, select newest slot with valid_marker and correct checksum.
Pass criteria: capture_success_rate ≥ SR_TARGET over N cuts; undetected_partial_write = 0; record_monotonicity holds (counter strictly increases).
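The dual-slot fix can be modeled as follows; structures are illustrative, and a real journal lives in non-volatile memory with the marker write last.

```python
import hashlib

def seal(slot: dict) -> None:
    """Commit order: payload already written -> checksum -> marker last."""
    slot["checksum"] = hashlib.sha256(slot["payload"]).hexdigest()
    slot["valid"] = True             # valid_marker written last

def slot_ok(slot: dict) -> bool:
    """A slot counts only if marked valid AND the checksum matches."""
    return (slot.get("valid") is True and
            slot.get("checksum") ==
            hashlib.sha256(slot.get("payload", b"")).hexdigest())

def select_boot_slot(a: dict, b: dict):
    """Boot selection: newest slot among the valid ones, else None."""
    candidates = [s for s in (a, b) if slot_ok(s)]
    if not candidates:
        return None                   # no valid record: policy marks the gap
    return max(candidates, key=lambda s: s["seq"])

slot_a = {"seq": 7, "payload": b"event:power_fail"}
seal(slot_a)
slot_b = {"seq": 8, "payload": b"event:brownout"}  # cut before seal: no marker
assert select_boot_slot(slot_a, slot_b)["seq"] == 7   # partial slot rejected
seal(slot_b)
assert select_boot_slot(slot_a, slot_b)["seq"] == 8
```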
Token replay is accepted—what counter/nonce binding is likely missing?
Likely cause: Nonce is not inside the signed payload, counter is not strictly enforced on server, or token_id is not unique per issuance.
Quick check: Submit the same token twice; confirm verifier checks {device_id + nonce_id + counter} tuple and stores last_seen (or a sliding window) for both nonce_id and counter.
Fix: Sign nonce_id + counter + device_id + context; set verifier logic: reject if nonce_id seen before OR counter ≤ last_seen_counter.
Pass criteria: replay_accept_rate = 0 across N replays; verifier_state_update_rate = 100%; token_uniqueness collisions = 0.
After tamper, device still outputs “normal time”—what policy state should be latched?
Likely cause: Tamper is recorded but not promoted to a latched policy state (mark-only), or host path continues to serve unauthenticated “display time” as if it were trusted.
Quick check: Trigger tamper; read {tamper_flag_latched, policy_state, key_status, counter}; verify token context includes tamper_reason and policy_state and that host UI/time reads are labeled untrusted.
Fix: Latch policy_state = TAMPERED; require re-provision or restricted mode; optionally zeroize signing keys and only allow evidence readout (no state changes).
Pass criteria: tamper_latch_persistence = 100% across M reboots; post_tamper_trusted_tokens_issued = 0 until recovery; evidence_readout_success = 100%.
Reset cause is recorded but not signed—where does the trust boundary break?
Likely cause: Reset-cause capture happens in an untrusted host domain, or it is appended after signing (not part of the signed payload).
Quick check: Compare token schema vs stored reset record: is reset_cause inside the signed bytes? verify {payload_hash, signature, context_fields} alignment on both device and server.
Fix: Move reset cause into secure boundary capture or seal it: include reset_cause + commit_marker + counter inside the signed payload; reject tokens missing required context fields.
Pass criteria: context_completeness = 100% for required fields; mismatch_rate = 0 between record and signed context across N tests.
“Secure time” drifts like a normal RTC—what part is about integrity vs accuracy?
Likely cause: The design provides integrity (signatures/counters) but does not include an external disciplining source; accuracy and integrity are separate requirements.
Quick check: Verify security proof still holds under drift: counter monotonicity, replay rejection, signed context correctness; separately measure drift vs temperature/time if accuracy is required.
Fix: If accuracy is required, add a discipline source (GNSS/NTP/PTP/known-good server) while keeping the integrity proof unchanged; record “discipline status” in signed context.
Pass criteria: integrity_pass = true across N cycles (no replay/rollback); accuracy_error ≤ ACC_TARGET only if discipline is enabled and status is signed.
Field logs show gaps—how to tell storage wear-out vs attack attempt?
Likely cause: Endurance/wear causes write failures, or adversary causes repeated brownouts/glitches to prevent commits while keeping system “running”.
Quick check: Correlate gaps with {reset_cause frequency, brownout flags, tamper flags, commit failures}; read wear indicators if available; check if gaps coincide with repeated power anomalies.
Fix: Add journaling + retry budget + wear leveling (if supported) and sign “commit failed” evidence; set policy: repeated commit failures within T window ⇒ suspicious state + alert.
Pass criteria: gap_rate ≤ GAP_TARGET over audit window; commit_failure_evidence_rate = 100%; suspicious_trigger_accuracy meets target on known fault injection set.
Provisioning passed, but devices share the same identity—what manufacturing control failed?
Likely cause: Identity seed/key injection is not unique per unit (fixture reuse, duplicated programming blob, missing CA issuance control, or weak UID binding).
Quick check: Sample K units; compare {device_id, public_key_id, cert serial, token signature verification key}; check audit DB for duplicates and for a per-unit “baseline signed record”.
Fix: Enforce uniqueness at issuance: CA refuses duplicate serials; programmer requires per-unit challenge and logs immutable artifact {unit_id, pubkey hash, cert serial, baseline token hash}.
Pass criteria: identity_collision_count = 0 over lot size N; baseline_artifact_coverage = 100%; key_no_readback is verified by procedure (no export steps).
Brownout causes corrupted flags—what’s the first atomicity/double-buffer check?
Likely cause: Single-copy flags updated in place without a validity marker; brownout interrupts a multi-byte write, creating a “half-updated” state.
Quick check: Inject brownouts with varied edge rates; read both copies (A/B) or inspect checksum/marker; verify boot logic chooses newest valid copy and rejects mixed/invalid states.
Fix: Implement dual-slot + checksum + marker-last commit; treat any marker/checksum failure as invalid and fall back to the last valid record with a signed “abnormal reset” context.
Pass criteria: invalid_state_accept_count = 0 over N injections; recovery_selects_last_valid = 100%; signed_abnormal_reset_coverage = 100%.
Can I trust I²C time reads at all—what’s the minimum proof artifact to request?
Likely cause: I²C time reads can be spoofed by a compromised host or bus attacker; raw time is not evidence without an integrity proof.
Quick check: Identify which artifact is verifiable: signed token vs plain registers; confirm token includes {device_id, counter, nonce_id/seq, context flags} and is validated by a known public key/cert chain.
Fix: Use I²C only as a transport for a signed proof artifact; verifier stores state (last counter/nonce) and enforces freshness; label plain reads as “display only”.
Pass criteria: trusted_time_decisions use token only (100%); spoofed_plain_read_detection = 100% in test; verifier rejects stale tokens with replay_accept_rate = 0.