Secure RTC & Time-Stamping for Tamper-Proof Event Evidence

Secure RTC / time-stamping turns “time” into verifiable evidence: every reboot, tamper, and power-fail event is bound to a monotonic counter and to a signed proof artifact that can be audited and that rejects rollback and replay. The goal is not better accuracy—it is integrity, continuity, freshness, and auditability that still hold when the device is attacked or loses power.

Definition & Scope: What “Secure Time” Means

Secure time turns “time” into verifiable evidence: it can be checked, audited, and proven resistant to tampering and rollback. This page focuses on integrity and provability—not on ppm accuracy, crystal selection, or battery-life math.

One-line definition

Secure time = Integrity + Continuity + Freshness + Auditability.

What each term must deliver
Integrity
Evidence cannot be modified without detection. Artifact: signature verification / hash-chain continuity.
Continuity
Reboots and power events leave a provable timeline (or a clearly marked gap). Artifact: monotonic counter + reset/power-fail cause.
Freshness
Old proofs cannot be replayed to masquerade as new time. Artifact: counter/nonce binding + replay rejection.
Auditability
A third party can verify “what happened when” using public verification material. Artifact: device identity + certificate chain reference.

Note: secure time improves trust, not necessarily absolute accuracy.

What can be proven vs what can’t
Can be proven (evidence)
  • Event/time records were not silently modified (signature/hash integrity).
  • Time did not move backward without raising a rollback/tamper condition.
  • Power-fail/reset causes were latched and committed in a defined order.
  • Time proofs are fresh (replay attempts are rejected).
Not guaranteed by secure time alone
  • Absolute UTC accuracy (ppm drift, crystal behavior, temperature compensation).
  • Network-sync correctness under adversarial delay or GNSS spoofing.
  • Perfect physical security if trust boundary is bypassed (requires system hardening).
Typical system boundary (trust zones)
Trusted boundary

Components allowed to hold keys and produce verifiable time tokens: Secure RTC / Secure Element / SoC secure enclave.

Untrusted zone

Host software and buses can be monitored or manipulated. A plain I²C “time read” may be useful, but is not evidence without a signed token or protected state.

External verifier

A server or audit tool checks signature, freshness, and timeline consistency, then stores evidence for later dispute resolution.

If only one thing is implemented

The minimum secure-time loop is: event + monotonic counter + signed token. This provides integrity, freshness, and rollback detection without requiring a full secure log subsystem.

Minimal payload fields (small but complete)
  • Time value (human-readable timestamp).
  • Monotonic counter (never decreases; anchors freshness).
  • Event context (power-fail/tamper/reset cause flags).
  • Device identity (public-key ID / certificate reference).
  • Signature (verifiable integrity proof).
Acceptance check
A verifier can reject token replay, detect time rollback, and confirm event integrity by signature—without trusting the host.
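The minimal loop above can be sketched in code. This is a hypothetical host-language illustration, not a device implementation: the HMAC stands in for an asymmetric device signature (a real device would sign with a private key, e.g. ECDSA, held inside the trusted boundary so verifiers need no shared secret), and all names (make_token, verify_token, DEVICE_KEY) are invented for the example.

```python
import hmac, hashlib, json

# Illustrative stand-in for key material held inside the trusted boundary.
DEVICE_KEY = b"demo-key-never-leaves-trusted-boundary"

def make_token(time_str, counter, event_flags, device_id):
    # Canonical payload: sorted keys + compact separators give stable bytes,
    # so device and verifier sign/verify over identical serializations.
    payload = {
        "time": time_str,        # human-readable timestamp
        "counter": counter,      # monotonic freshness anchor
        "context": event_flags,  # power-fail / tamper / reset cause flags
        "device_id": device_id,  # public-key ID / certificate reference
    }
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(DEVICE_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_token(token, last_seen_counter):
    # Integrity: recompute the MAC over canonical bytes.
    blob = json.dumps(token["payload"], sort_keys=True,
                      separators=(",", ":")).encode()
    good = hmac.compare_digest(
        hmac.new(DEVICE_KEY, blob, hashlib.sha256).hexdigest(), token["sig"])
    # Freshness: a counter at or below last-seen is a replay or rollback.
    fresh = token["payload"]["counter"] > last_seen_counter
    return good and fresh
```

Note how the acceptance check maps onto the two return conditions: signature failure catches silent modification, and the counter comparison catches replay without trusting the host.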
Diagram: Time Trust Pyramid (layers of provable time)
A layered diagram: RTC and counter at the base, then tamper sensing, secure storage, signing/attestation, and external verification/audit at the top; integrity, continuity, freshness, and audit sit inside the trusted boundary.

Reading time is optional; proving time requires trusted state (counter/storage) and a verifiable proof (signature/attestation).

Threat Model: Tamper, Rollback, Spoof, and Power-Fail Abuse

Threat modeling for secure time is not about listing every security risk—it is about mapping time-specific attacks to required proofs. Each item below defines what must be captured and verified so that later disputes can be resolved by evidence.

Threat: Rollback (high impact)

Time state is forced back to an older value to bypass policy windows, audits, billing, or warranty rules.

Attacker capability
Can influence host writes, swap backup components, or restore older state from storage snapshots.
Impact
“Expired” conditions appear valid again; event timelines become disputed and unreliable.
Required proof
  • Monotonic counter never decreases across resets and backup swaps.
  • Time-backward rule triggers a rollback/tamper flag.
  • Signed evidence binds time + counter + device identity.
Threat: Time spoof (evidence corruption)

A forged time value is injected to create or deny events (“this happened” / “this did not happen”).

Attacker capability
Can modify host software, inject bus traffic, or overwrite unprotected time registers/state.
Impact
Logs and policy decisions become disputable; audit trails fail under scrutiny.
Required proof
  • Time statements must be signed by a trusted boundary, not by host software.
  • Proof must include event context and device identity.
  • Verifier rejects proofs that lack freshness binding (counter/nonce/window).
Threat: Power-glitch abuse (commit-window risk)

A controlled brownout or sudden drop is used to interrupt state updates and create inconsistent evidence.

Attacker capability
Can induce dips/interruptions so that only part of the “event record” is written before reset.
Impact
Missing power-fail evidence, ambiguous reset causes, or corrupted flags that weaken auditability.
Required proof
  • Reset/power cause is latched early and preserved.
  • Commit order is defined (cause → counter bump → flag/log marker).
  • Evidence includes a commit marker so partial writes are detected.
Threat: Tamper sensor bypass (silent intrusion)

Tamper inputs are held at “normal” levels or spoofed so physical intrusion is not reflected in evidence.

Attacker capability
Can short/clamp tamper pins, inject a constant state, or bypass a single sensor channel.
Impact
Evidence remains “clean” even after physical access; rollback/spoof attempts become harder to dispute.
Required proof
  • Use latched tamper flags and timestamped events (not only level reads).
  • Prefer multi-channel evidence (at least two independent signals).
  • Signed tokens include tamper state snapshot when relevant.
Threat: Proof replay (freshness break)

A previously valid signed-time proof is captured and replayed to impersonate a new event.

Attacker capability
Can record tokens/messages on an untrusted channel and resend them later unchanged.
Impact
Audit systems accept stale evidence; policies triggered by “fresh time” can be bypassed.
Required proof
  • Proof binds to freshness: nonce challenge, counter, and/or verifier window.
  • Verifier stores last-seen counter per device and rejects duplicates.
  • Device increments monotonic counter on relevant events (power-fail/tamper).
Diagram: Attack surface map (time evidence paths)
A block diagram showing trusted and untrusted zones: RTC and backup domain (time + counter), tamper I/O, secure storage (flags + log head) and signing key, host interface (software/OS/bus), and external verifier (verify + store). Red dots highlight the primary time-evidence attack points.

Each threat is resolved by demanding a specific proof artifact (counter, commit marker, signed token, tamper snapshot) rather than trusting host-reported time.

Secure Time Building Blocks: Trusted Counter, Secure Storage, Keys

Secure time is not “a more accurate RTC.” It is a provable state machine built from a few purchasable blocks: a monotonic counter (anti-rollback anchor), secure storage (evidence state), and keys (integrity & non-repudiation). The root of trust defines where these blocks can live.

Block: Monotonic counter (anti-rollback anchor)
Responsibility
  • Never decreases across reset, power-fail, and backup switchover.
  • Binds to proofs: included in signed tokens and log head updates.
  • Defines “freshness”: duplicate or older counter values are rejected.
Failure modes (observable)
  • Counter can be overwritten or restored from older snapshots → rollback succeeds.
  • Tokens do not include counter → proofs can be replayed unchanged.
  • Counter increments are too sparse or inconsistent → events cannot be ordered reliably.
How to verify
  • Power-cycle stress: verify monotonicity after N rapid resets and brownouts.
  • Backup swap test: remove/replace backup source and confirm no counter decrease.
  • Replay test: resend an old token and confirm rejection via stored last-seen counter.
Block: Secure storage (evidence state)
Responsibility
  • Stores last-known-good (LKG) time state bound to counter.
  • Stores latched tamper/rollback flags (audit-relevant, not silently cleared).
  • Stores evidence continuity (log head / hash-chain head / commit marker).
Failure modes (observable)
  • Tamper flags can be erased without trace → intrusion becomes disputable.
  • Commit is non-atomic under power-glitch → partial state is indistinguishable from valid state.
  • LKG time is stored without counter binding → rollback detection is weakened.
How to verify
  • Fault-injection: cut power during commit and verify detection via commit marker.
  • Flag persistence: confirm tamper/rollback flags survive resets and backup switching.
  • Continuity check: verify log head/hash head updates match token counter progression.
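The "log head / hash-chain head" idea can be shown in a few lines. This is a hedged sketch of a generic hash chain, not a specific device's log format: each new head commits to the previous head plus the new record, so any edit or truncation breaks verification against the stored head.

```python
import hashlib

def advance_head(head, record):
    # New head = H(previous head || record); editing any earlier record
    # changes every later head, so tampering is detectable from the head alone.
    return hashlib.sha256(head + record).digest()

def verify_chain(genesis, records, stored_head):
    # Replay the chain from a known genesis value and compare to the
    # head persisted in secure storage.
    head = genesis
    for rec in records:
        head = advance_head(head, rec)
    return head == stored_head
```

A verifier that holds only the genesis value and the latest committed head can detect both silent edits and silent truncation of the event log.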
Block: Key material (integrity & non-repudiation)
Core rule

Signatures prove that content was not modified; freshness still requires counter/nonce binding. The private key must remain inside the trusted boundary (no export path).

Responsibility
  • Signs canonical payload fields (time, counter, context, identity reference).
  • Anchors device identity (public key ID / certificate reference).
  • Enables later dispute resolution (non-repudiation under audit).
Failure modes (observable)
  • Token signs “time only” (no counter/context) → replay and context-misuse become feasible.
  • Identity is not unique or not bound to certificate chain → audit cannot attribute evidence.
  • Key handling allows duplication/extraction → proofs lose non-repudiation value.
How to verify
  • Verify signature with known public key chain; reject unknown key IDs.
  • Field coverage test: remove one required field and confirm verifier rejects token.
  • Uniqueness check: confirm device IDs are unique across production samples.
Block: Root of trust (where the keys/counter can live)

Root of trust is the boundary that can hold keys and protected time state. For secure time, the decision is not “which crypto,” but which component is allowed to produce verifiable evidence without trusting the host.

Time-focused responsibilities
  • Protect private key and enforce non-export policy.
  • Protect monotonic counter and evidence state against rollback.
  • Provide an attestation API (sign token over canonical fields).
Acceptance check

If host software can rewrite time state or replay proofs without detection, the chosen trust boundary is insufficient.

Diagram: Minimal secure-time core (trusted vs untrusted)
A block diagram: the trusted boundary contains the monotonic counter, secure NVM (flags + LKG + log head), key store (private key), event engine (power-fail/tamper), and policy/commit rules. The untrusted host connects via I²C/SPI; signed tokens go to an external verifier. Red dots mark untrusted interaction points (inject/replay).

The trusted boundary must protect counter and evidence state; signatures alone cannot prevent replay without counter/nonce binding.

Signed Time & Attestation Flow: What Gets Signed, and How It’s Verified

A signed time token is a verifiable evidence package. It is not a “time read.” It must bind time to counter, event context, and device identity so a verifier can detect replay, rollback, and tamper-related disputes.

Token schema (engineering fields)
Required fields
  • time value — human-readable timestamp for audit.
  • monotonic counter — freshness anchor; prevents rollback/replay.
  • event context — power-fail / tamper / reset cause flags.
  • device identity — public key ID / certificate chain reference.
  • signature — integrity and non-repudiation proof.
Common freshness bindings
  • nonce/challenge — binds proof to a verifier request.
  • last-seen counter — verifier rejects duplicates/older values.
  • server window — rejects stale tokens outside policy window.

Signing proves integrity. Freshness and rollback resistance require counter/nonce checks in addition to signature verification.

Verification steps (verifier view)
  1. Validate identity: resolve device public key via certificate chain or key ID.
  2. Verify signature: check signature over canonical payload bytes.
  3. Check freshness: nonce match and/or counter monotonicity vs last-seen value.
  4. Check policy flags: tamper/rollback/power-fail states meet acceptance rules.
  5. Store evidence: persist token, update last-seen counter, log decision metadata.
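The five verifier steps can be sketched as a single decision function that also produces the failure taxonomy used in triage below. This is a hypothetical sketch: HMAC stands in for public-key signature verification, a key-ID lookup replaces a real certificate chain, and all names (verify, KNOWN_KEYS, ACCEPT_WINDOW_S) are invented for the example.

```python
import hmac, hashlib, json, time

KNOWN_KEYS = {"kid-1": b"demo-shared-key"}  # key ID -> verification key
ACCEPT_WINDOW_S = 300                        # server-side policy window

def verify(token, state, nonce, now=None):
    """Return (accepted, reason). `state` maps device_id -> last-seen counter."""
    p = token["payload"]
    # 1) Identity: resolve verification key by key ID.
    key = KNOWN_KEYS.get(p.get("key_id"))
    if key is None:
        return False, "unknown-key-id"
    # 2) Signature over canonical payload bytes.
    blob = json.dumps(p, sort_keys=True, separators=(",", ":")).encode()
    if not hmac.compare_digest(
            hmac.new(key, blob, hashlib.sha256).hexdigest(), token["sig"]):
        return False, "bad-signature"
    # 3) Freshness: nonce binding, then counter monotonicity.
    if p.get("nonce") != nonce:
        return False, "nonce-mismatch"
    last = state.get(p["device_id"], -1)
    if p["counter"] <= last:
        return False, "counter-replay"
    now = time.time() if now is None else now
    if abs(now - p["issued_at"]) > ACCEPT_WINDOW_S:
        return False, "stale-window"
    # 4) Policy flags: reject tokens carrying an unhandled tamper state.
    if p["flags"].get("tamper"):
        return False, "tamper-flag"
    # 5) Store evidence: persist the new last-seen counter.
    state[p["device_id"]] = p["counter"]
    return True, "ok"
```

The ordering matters: crypto checks precede freshness checks so that a replayed token is classified as a replay only after its signature has been confirmed genuine.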
Failure reasons (fast triage)
Crypto failures
  • Signature invalid or payload canonicalization mismatch.
  • Unknown key ID or invalid/expired certificate chain.
  • Identity mismatch (token claims do not match expected device).
Freshness failures
  • Nonce mismatch or missing challenge binding.
  • Counter duplicate/older than last-seen value (replay/rollback).
  • Outside server-side acceptance window (stale evidence).
Semantic/policy failures
  • Tamper flag set (intrusion suspected; evidence accepted only with special policy).
  • Rollback flag set (time moved backward or state discontinuity detected).
  • Power-fail commit marker invalid (partial write suspected).
Diagram: Challenge–response signed time (minimal closed loop)
A sequence diagram: the verifier sends a nonce challenge; the trusted core reads counter and event context and signs a token; the verifier verifies and stores the evidence, updates the last-seen counter, and rejects replays. Freshness = nonce + monotonic counter + verifier last-seen tracking.

The loop is complete only when the verifier stores evidence and enforces last-seen counter rules; otherwise, valid signatures can still be replayed.

Tamper Detection Mechanisms: Electrical, Physical, Environmental

Tamper is not “one pin.” For secure time, tamper must be treated as multi-channel evidence: sensors feed an aggregator (debounce + vote + latch), a policy engine selects the response level, and evidence outputs are recorded (flag + counter bump + log head + optional signed token). This structure reduces disputes from false positives and makes bypass attempts observable.

Trigger strategy (latched + debounced + voted)
  • Latched vs level: audit-relevant tamper must have a latched form (no “close the lid and erase history”).
  • Debounce as evidence: record a configuration ID (or debounce level) so decisions remain explainable under audit.
  • Multi-sensor voting: correlate channels (e.g., case-open + light) to lower false-positive disputes.
Response level (mark / lock / zeroize)
  • Mark-only: preserve evidence (flag + counter++ + context snapshot); continue operation with audit trail.
  • Lock: restrict time updates/operations; store lock reason and the last-known-good (LKG) anchor.
  • Zeroize: destroy key capability; explicitly record that high-grade proofs cannot be produced after this state.
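The debounce + vote + latch pipeline can be sketched as a small aggregator. This is a hypothetical illustration under simplifying assumptions (fixed-rate sampling, per-channel consecutive-hit debounce, N-of-M vote); the class and field names are invented for the example.

```python
class TamperAggregator:
    """Hypothetical sketch: N-of-M vote over debounced channels, with a
    latched flag that only an audited policy action may clear."""

    def __init__(self, debounce_count=3, vote_threshold=2):
        self.debounce_count = debounce_count  # samples before a channel counts
        self.vote_threshold = vote_threshold  # channels needed to latch
        self.streaks = {}                     # channel -> consecutive hits
        self.latched = False
        self.events = []                      # (seq, voting channels) evidence

    def sample(self, seq, readings):
        """readings: channel -> bool (True = anomalous level this sample)."""
        for ch, hit in readings.items():
            self.streaks[ch] = self.streaks.get(ch, 0) + 1 if hit else 0
        voters = [ch for ch, n in self.streaks.items()
                  if n >= self.debounce_count]
        if len(voters) >= self.vote_threshold and not self.latched:
            # Latched: "close the lid" later does not erase this evidence.
            self.latched = True
            self.events.append((seq, sorted(voters)))
        return self.latched
```

The recorded `(seq, voters)` pairs are the explainable part: an auditor can see which channels voted and at which monotonic sequence number, instead of a bare level read.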
Sensor class → bypass → evidence
Physical intrusion (case-open / mesh)
What it detects

Direct enclosure access or protective loop break consistent with physical contact.

Bypass attempt

Short/bridge inputs, replace loop, clamp the sensor pin, or replay a “closed” status.

Countermeasure & evidence recorded
  • Use latched events; do not rely on level-only signals for audit.
  • Record tamper_flag, counter++, and a short context snapshot.
  • Optionally emit a signed token that includes counter and sensor vote score.
Environmental anomaly (light / temp)
What it detects

Environment changes consistent with enclosure opening, probe access, or thermal shock.

Bypass attempt

Slow-rate manipulation to stay below thresholds, shielding/blocking the sensor, or localized heating to avoid global sensors.

Countermeasure & evidence recorded
  • Debounce using duration/rate classes; record the configuration ID for audit.
  • Correlate with physical/electrical channels (vote score) to reduce false-positive disputes.
  • Record sensor_vote, counter++, and pre/post snapshot tags.
Electrical / timing anomaly (voltage / clock)
What it detects

Brownout, sudden drop, backup removal, clock-stop, or abnormal clock behavior that threatens evidence continuity.

Bypass attempt

Power-glitch during commit, injecting a fake “clock OK,” or forcing resets to create ambiguous partial state.

Countermeasure & evidence recorded
  • Use a commit marker and restart rules that can classify “complete vs incomplete” writes.
  • Bind power/clock events to counter++ and store reset_cause and clock_state.
  • Optionally lock or degrade evidence level when continuity cannot be proven.
Diagram: Tamper matrix (sensors → aggregator → policy → evidence)
A block diagram: sensors (case-open, mesh, light, temp, VCC, VBAT) feed a tamper aggregator (debounce, vote, latch); a policy engine selects mark, lock, or zeroize; evidence outputs include flag, counter increment, log head, and an optional token. Red dots mark likely bypass/replay focus points; design the evidence so disputes are resolvable.

A tamper pipeline is effective only when it produces durable evidence: latched flags, monotonic sequencing, and explainable policy decisions.

Power-Fail & Reset Evidence: Capturing Events You Can Trust

Power-fail evidence is a short-window commit protocol. The objective is not “record everything,” but to guarantee that the system can classify writes as complete vs incomplete after any interruption. Without a commit marker and an enforced storage order, power-glitch attacks can create ambiguous partial state that undermines secure time.

Minimal reliable record (strict order)
  1. Latch cause: BOR/reset-cause/backup removal/clock-stop flags are latched first.
  2. Bump monotonic counter: sequence advances so the event cannot be replayed as “old.”
  3. Write event flag: persist event/tamper state that explains any continuity gap.
  4. Commit marker (always preferred; skip only when the hold-up budget cannot cover it): finalize the update so reboot selection is unambiguous.

If the commit step cannot be guaranteed, the design must still guarantee that incomplete state is detectable and handled by policy (mark gap / lock).
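The strict order and the complete-vs-incomplete classification can be sketched as follows. This is a hypothetical illustration: a dict stands in for non-volatile memory, a hash over the record stands in for the commit marker, and a power cut is modeled as stopping before the marker write; the names (commit_event, classify) are invented for the example.

```python
import hashlib, json

def commit_event(nvm, cause, slot_key):
    # 1) Latch cause first, 2) bump monotonic counter.
    rec = {"cause": cause, "counter": nvm.get("counter", 0) + 1}
    nvm["counter"] = rec["counter"]
    # 3) Write the event record/flag.
    nvm[slot_key] = rec
    # 4) Commit marker: a digest over the record finalizes the update.
    blob = json.dumps(rec, sort_keys=True).encode()
    nvm[slot_key + "_mark"] = hashlib.sha256(blob).hexdigest()

def classify(nvm, slot_key):
    """Reboot-time classification: complete vs detectable-incomplete."""
    rec, mark = nvm.get(slot_key), nvm.get(slot_key + "_mark")
    if rec is None:
        return "empty"
    blob = json.dumps(rec, sort_keys=True).encode()
    if mark == hashlib.sha256(blob).hexdigest():
        return "complete"
    return "incomplete"  # partial write detected; policy marks a gap
```

The key property is that any interruption leaves one of exactly three classifiable states (empty, incomplete, complete); "half-valid" is impossible to confuse with "valid".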

Event: Brownout / slow ramp
Required capture latency (target class)

Few-ms class. Brownout can hover near thresholds, causing repeated resets; evidence must remain consistent across cycles.

Storage order

Latch cause → Counter++ → Event flag → Commit marker (A/B slot or log head).

Pass criteria (verification)
  • Counter never decreases across repeated brownout resets.
  • Reboot always selects the last complete commit (no ambiguous “half-valid” state).
  • Continuity gaps are explicitly marked (flag + cause) rather than silently ignored.
Event: Sudden drop / power cut
Required capture latency (target class)

Sub-ms to few-ms class (system hold-up dependent). Prioritize a minimal, deterministic commit sequence.

Storage order

Latch cause → Counter++ → Event flag (commit marker if budget allows; otherwise detect incomplete on reboot).

Pass criteria (verification)
  • Random cut (Monte Carlo) produces only two outcomes: complete or detectable-incomplete.
  • No token/evidence is accepted by verifier if counter or commit state is older/ambiguous.
  • Reset cause and event flags match observed waveforms (scope correlation).
Event: Battery removal / backup switchover
Required capture latency (target class)

Few-ms to tens-of-ms class. The priority is continuity classification: “backup maintained” vs “backup removed.”

Storage order

Latch removal/switchover cause → Counter++ → Continuity flag → Commit marker (selectable on reboot).

Pass criteria (verification)
  • Counter monotonicity holds across backup removal and reinsertion.
  • Evidence explicitly marks continuity gaps when backup power is lost.
  • Verifier rejects tokens that imply continuity without a valid commit trail.
Diagram: Power-fail waveform & evidence commit timeline
A diagram showing VCC falling past the BOR threshold, a short bounded write window, related signals (BOR assert, reset cause, backup domain, clock OK), and the commit timeline (latch → counter++ → flag → commit marker), highlighting complete vs incomplete writes. Goal: deterministic order plus a detectable completion marker under any interruption.

A secure power-fail record must survive worst-case cut points: it either completes cleanly or fails in a detectable way that policy can handle.

Anti-Rollback & Time Continuity: Keeping Time Trust Across Reboots

Rollback attacks do not need perfect time control. They only need the system to accept an old world as valid. A secure design distinguishes time value (a readable number) from time proof (a verifiable continuity claim). Time proof requires a monotonic sequence and a durable anchor, not just an RTC counter that can be reset or rewritten.

Core concept: time value ≠ time proof
Time value (readable)

Useful for scheduling and UX, but not sufficient as evidence. It can be moved backward via backup loss, firmware rollback, or replayed state.

Time proof (verifiable)
  • Monotonic counter that never decreases.
  • Anchor (last-known-good + commit marker) that survives reboots.
  • Audit trail (flags/log head/token) that makes gaps explicit.
Common rollback paths
State reset & substitution
  • Backup battery removal / replacement
  • Factory reset that clears anchors/logs
  • Module replacement that swaps identity or storage
Software & replay
  • Firmware downgrade / rollback to older policies
  • Replay of old signed tokens or cached “good” state
  • Power-glitch to create ambiguous partial commits
Rollback detection rules (engineering)
Rules → action (must be deterministic)
  • Counter monotonicity: if counter decreases → set tamper flag, lock or degrade trust.
  • Time moved backward: if time < LKG − window → mark rollback (window prevents benign sync jitter disputes).
  • Commit classification: missing/invalid commit marker → treat update as incomplete; mark continuity gap.
  • Replay rejection: token counter ≤ last_seen_counter → reject as replay immediately.
  • Anchor mismatch: external anchor counter > local → enter recover/provision workflow.
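The rules above can be sketched as one deterministic check function. This is a hedged illustration, not a firmware routine: the rule set and names (check_rollback, SYNC_WINDOW_S) are invented for the example, and the window value is a placeholder for the system budget.

```python
# Tolerance window so benign sync jitter is not flagged as rollback.
SYNC_WINDOW_S = 120

def check_rollback(counter, prev_counter, now, lkg_time,
                   commit_ok, token_counter=None, last_seen=None,
                   anchor_counter=None):
    """Apply each rule and return the list of findings (or ["ok"])."""
    findings = []
    if counter < prev_counter:
        findings.append("counter-decrease")    # -> tamper flag, lock/degrade
    if now < lkg_time - SYNC_WINDOW_S:
        findings.append("time-backward")       # -> rollback mark
    if not commit_ok:
        findings.append("incomplete-commit")   # -> explicit continuity gap
    if (token_counter is not None and last_seen is not None
            and token_counter <= last_seen):
        findings.append("replay")              # -> reject token immediately
    if anchor_counter is not None and anchor_counter > counter:
        findings.append("anchor-mismatch")     # -> recover/provision workflow
    return findings or ["ok"]
```

Determinism is the point: the same inputs always yield the same findings, so the audit trail can explain every policy transition.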
Evidence fields to record
tamper_flag, counter, LKG_time, gap_flag, rollback_reason, commit_ok, last_seen_counter

The audit story is only as strong as the recorded fields. If a field is not persisted, disputes cannot be resolved reliably.

Recovery policy (after rollback)
Degrade

Continue providing time value but set trust_level=degraded; gaps are explicit and auditable.

Quarantine

Refuse high-grade proofs (no signed tokens) and isolate critical operations until external anchor verification succeeds.

Re-provision

Enter recover/provisioned state; re-bind anchors/identity so future proofs are consistent and non-repudiable.
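The degrade/quarantine/re-provision policy can be sketched as a small state machine over Normal, Suspect, Tampered, and Recover. This is a hypothetical transition table invented for the example; a real policy table would be derived from the product's threat model.

```python
# Hypothetical transitions; events mirror rollback-detector outputs.
TRANSITIONS = {
    ("normal", "time-backward"):     "suspect",   # degrade: value ok, proof not
    ("normal", "counter-decrease"):  "tampered",  # quarantine: no signed tokens
    ("suspect", "anchor-ok"):        "normal",
    ("suspect", "counter-decrease"): "tampered",
    ("tampered", "reprovision"):     "recover",   # re-bind anchors/identity
    ("recover", "anchor-ok"):        "normal",
}

def step(state, event):
    # Unknown (state, event) pairs never silently upgrade trust:
    # the machine stays in its current state.
    return TRANSITIONS.get((state, event), state)
```

Note the asymmetry: moving toward lower trust takes a single event, while returning to Normal always requires an explicit external proof (anchor verification or re-provisioning).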

Diagram: Time rollback detector state machine
A state machine with Normal (trust ok), Suspect (needs proof), Tampered (flagged), and Recover (re-provision) states, with triggers such as time moved backward, counter decrease, commit invalid, and anchor verified. Rollback detection = monotonic sequence + anchor + commit classification. Outputs: flag, counter++, gap report.

A robust rollback detector must be able to classify state after any interruption and must never “silently accept” an older anchor.

Interfaces & System Integration: Host MCU, Secure Element, and Trust Boundary

A secure RTC is not a normal I²C time peripheral. A host read returns a time value; credibility comes from a protected state and a verifiable token that crosses a clearly defined trust boundary. Integration must prevent the host (or the bus) from rewriting anchors or replaying proofs.

Two interface planes
Data plane

Read/write commands and readable registers (time value, status, event summaries). Useful, but not inherently proof-grade.

Evidence plane

Verifiable artifacts (token / protected anchor / counter) and event causality (tamper, power-fail, commit state) that a verifier can store.

Integration patterns
Pattern A: Secure RTC holds key
  • Pros: proofs created inside RTC boundary; host cannot forge tokens.
  • Cons: key injection and lifecycle controls are stricter.
  • Use when: high audit strength is required under physical access risk.
Pattern B: RTC + Secure Element
  • Pros: RTC provides counter/events; SE provides key & signing.
  • Cons: RTC→SE evidence path must prevent host editing/replay.
  • Use when: platforms already ship with SE/TPM and want reuse.
Pattern C: SoC enclave + external RTC
  • Pros: enclave can store anchors and enforce policy in software.
  • Cons: external RTC must deliver evidence into enclave in a non-forgeable way.
  • Use when: integrated SoC platforms need low-power timebase + enclave policy.
Integration checklist (short)
  • Host reads must distinguish value vs proof; only proofs are audit-grade.
  • Anchor updates (LKG + counter) require commit classification (complete vs detectable-incomplete).
  • Tokens (if used) include counter + device ID + event context and are replay-protected.
  • Tamper/power-fail signals must be latched and persisted across reboot (no “silent clear”).
  • Rollback discovery triggers a defined policy: degrade / quarantine / re-provision.
  • Verifier stores an external anchor (last_seen_counter) and rejects stale proofs.
Diagram: Trust-boundary integration topologies (A/B/C)
Three stacked block diagrams comparing: A) Secure RTC holding the key (host MCU ↔ secure RTC over I²C, signed tokens to verifier, tamper inputs on the RTC); B) RTC plus Secure Element (RTC provides counter/events, SE holds the key and signs); C) SoC enclave plus external RTC (enclave stores anchors, external RTC supplies value/events, verifier stores the anchor; evidence must enter the enclave unforgeably). Red dots mark injection/replay risk points where counter/anchor binding and proof verification must be enforced.

Integration succeeds when the trust boundary is explicit: who holds keys, who can update anchors, and where replay is rejected.

Key Specs & Verification Metrics: What to Measure and What “Pass” Looks Like

Secure time must be testable. A proof-grade design is defined by deterministic capture, durable evidence, verifiable tokens, and auditable continuity. Each metric below includes a reproducible test method, required fixtures, and a pass template that can be filled by the system budget.

Metrics map (metric → threat)
Detection
tamper latency, false positives, rollback trigger

Prevents “silent bypass” and reduces disputes by quantifying trigger behavior.

Persistence & continuity
power-fail success, commit classification, hash chain

Makes gaps explicit: complete vs detectable-incomplete, never ambiguous.

Verification
token verify, replay reject, failure taxonomy

Ensures proofs are verifiable and stale proofs are rejected deterministically.

Metric cards (metric → test → tools → pass)
1) Tamper detect latency & false-positive rate
  • Metric: latency = t(latched_flag) − t(sensor_trigger); FPR = #false / time (or cycles).
  • How to test: controlled triggers (case-open/light/temp/voltage) + log flag & timestamp.
  • Tools/fixtures: trigger jig, logic analyzer, event-log reader script.
  • Pass template: latency_p99 ≤ T_tamper_budget; FPR ≤ FPR_budget (under stated environment set).
2) Power-fail capture success rate
  • Metric: success = #valid_records / #powercuts; “valid” includes reset_cause + counter++ + commit_ok (or gap).
  • How to test: N random-phase power cuts across ramp rates; read records after each reboot.
  • Tools/fixtures: programmable power-cut switch, MCU auto-harvester, database logger.
  • Pass template: success_rate ≥ SR_target; unexplained_gaps = 0; detectable_incomplete_rate ≤ DIR_budget.
3) Anti-rollback correctness
  • Metric: counter monotonicity (never decreases) + time-backward trigger + correct policy transition.
  • How to test: battery removal, factory reset, firmware downgrade, module swap, old-state restore.
  • Tools/fixtures: backup disconnect jig, firmware version toggles, scenario runner.
  • Pass template: counter↓ events = 0; rollback_detection = 100% across scenario set; action matches policy table.
4) Token verification & replay rejection
  • Metric: verify success rate (fresh tokens) + replay reject rate (old tokens) + failure taxonomy coverage.
  • How to test: challenge-response baseline; replay old token; mutate fields; swap device ID.
  • Tools/fixtures: verifier script, token capture tool, replay injector.
  • Pass template: verify_success ≥ VS_target; replay_reject = 100%; all failures classified (sig/field/expiry/counter/nonce).
5) Log integrity (hash chain continuity)
  • Metric: chain_valid over retained depth + gap_flag must be explained; no silent truncation.
  • How to test: mixed event stress (tamper + powercuts + reboots) with periodic chain verification.
  • Tools/fixtures: log dumper, hash verifier tool, audit DB snapshot.
  • Pass template: chain_valid = 100% (retained depth); unexplained_gap = 0; tamper edits detected deterministically.
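Metric 2's random-phase requirement can be illustrated with a small Monte Carlo sketch. This is a hypothetical software model, not a lab fixture: the power cut is simulated by truncating the strict write order at a random step, and every outcome must classify as complete or detectable-incomplete (or empty, when the cut lands before any write). All names are invented for the example.

```python
import random, hashlib, json

WRITE_STEPS = ["cause", "counter", "flag", "mark"]  # strict commit order

def run_cut(cut_after, rec):
    # Simulate a power cut after `cut_after` writes (0 = before any write).
    nvm = {}
    for step_name in WRITE_STEPS[:cut_after]:
        nvm[step_name] = rec[step_name]
    return nvm

def classify(nvm):
    if "mark" not in nvm:
        return "detectable-incomplete" if nvm else "empty"
    blob = json.dumps({k: nvm[k] for k in ("cause", "counter", "flag")},
                      sort_keys=True).encode()
    ok = nvm["mark"] == hashlib.sha256(blob).hexdigest()
    return "complete" if ok else "detectable-incomplete"

def monte_carlo(n=1000, seed=7):
    # Fixed seed keeps the run reproducible for regression comparison.
    rng = random.Random(seed)
    rec = {"cause": "cut", "counter": 1, "flag": "power-fail"}
    blob = json.dumps(rec, sort_keys=True).encode()
    rec["mark"] = hashlib.sha256(blob).hexdigest()
    return {classify(run_cut(rng.randint(0, 4), rec)) for _ in range(n)}
```

The pass condition is that the outcome set never contains anything outside {complete, detectable-incomplete, empty}; any "ambiguous" classification is a design failure, not a statistics problem.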
Pitfalls & evidence pack
Common pitfalls (secure-time specific)
  • Latency statistics must define the exact endpoints (sensor edge vs flag latch vs host read).
  • Power-cut tests must be random-phase; fixed timing produces artificially high success rates.
  • Token failures must be classified (signature vs fields vs expiry vs counter vs nonce) to prevent false conclusions.
  • Hash-chain gaps must be explicit (gap_reason). “Missing record” without a reason is an audit failure.
Evidence pack (minimum)
tamper_flag, tamper_reason, reset_cause, commit_ok, counter, LKG_time, token_id, nonce, verifier_result, replay_reject, log_head_hash, gap_flag, gap_reason

If a field is not persisted, audit disputes become “unprovable” even if the device behaved correctly.

Diagram: Verification ladder (bring-up → fault injection → environmental → field audit)
A ladder diagram with four layers: bring-up tests (snapshot, token verify, counter↑), fault injection (random cut, replay, rollback), environmental (temperature, vibration, susceptibility), and field audit (anchor check, gap review, export), with a pass gate between layers. Higher layers must keep results deterministic: complete vs detectable-incomplete, never ambiguous.

Provisioning & Manufacturing: Keys, Certificates, and Audit Readiness

Manufacturing is part of the security boundary. Provisioning must guarantee unique identity, prevent key exposure, and create an audit anchor that survives the entire lifecycle. The goal is a closed loop: identity bound → key injected/derived (no readback) → secure-time state initialized → locked → baseline proof stored in audit DB.

Provisioning pipeline (step → artifact)
  1. Assign device identity → device_id / pubkey_id recorded with uniqueness enforcement.
  2. Bind certificate chain reference → cert_ref / chain_ref (reference, not secret).
  3. Key injection or derivation → key_slot_id (private material never leaves secure boundary; no readback).
  4. Initialize secure-time state → counter_init / LKG_init / policy_version with commit marker.
  5. Lock configuration → lock_state / protected-update enabled.
  6. Factory baseline signed record → baseline_token stored in audit DB as external anchor.
  7. Post-provision verification → verify_report (token verify + replay reject + counter monotonic).
Step → risk → mitigation → evidence artifact
Key material
  • Risk: key leakage via fixture logs, debug paths, or readback interfaces.
  • Mitigation: no-readback policy + lock debug + minimal-privilege programmer.
  • Evidence: lock_state record + no-readback verification report.
Identity
  • Risk: duplicate IDs or cloned certificates from offline or inconsistent issuance.
  • Mitigation: DB uniqueness constraints + online issuance + per-workcell audit trail.
  • Evidence: issuance record (device_id, timestamp, workcell, operator).
Secure-time initialization
  • Risk: inconsistent counter/LKG initialization across lines or scripts.
  • Mitigation: standardized write order + commit marker + post-provision self-check.
  • Evidence: baseline_token + verify_report (counter↑, token verify, replay reject).
Factory baseline signed record (external anchor)

At end-of-line, generate a baseline record that a verifier can store as an external anchor. This baseline links identity, counter, and initial secure-time state so future disputes can be evaluated against a known-good reference.

device_id, counter_init, policy_version, lock_state, baseline_token
Diagram: Manufacturing trust flow (Factory CA → Programmer → Device → Audit DB)
[Figure: manufacturing trust flow — Factory CA/HSM issues certificate references to the programmer; the programmer injects/derives keys with NO READBACK, binds device_id, initializes counter/LKG state, locks the device, and stores a baseline token in the audit DB as the external anchor. Highest-risk zone: the fixture must not leak secrets. Post-provision verify: token verify + replay reject + counter↑.]

A production line is secure only if secrets never leave the secure boundary and every step produces an auditable artifact.

Applications: Where Secure Time Is Non-Negotiable

Secure time matters when time becomes evidence: logs, claims, policies, and audits must remain trustworthy across reboots, power failures, and physical tamper attempts. Each use case below is framed as “what must be proven” instead of “how accurate the clock is”.

Secure logging & forensics

Problem
Logs can be edited, re-ordered, or “backdated” after an incident.
What must be proven
Each entry has a verifiable time token and an unbroken integrity chain.
Minimal pattern
Monotonic counter + hash-chain head stored in secure boundary + signed time token on key events.
Pass criteria
Verifier rejects any entry with counter ≤ last_seen; hash-chain continuity has 0 gaps over the audit window.
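The two pass criteria above combine into one verifier loop. A minimal sketch, assuming each log entry is a (counter, payload, head) tuple — the layout is an illustrative assumption about the log format:

```python
import hashlib

# Sketch of the verifier rules above: counters must strictly increase and
# the hash chain must be unbroken; any failure names the exact entry.

def verify_log(entries, genesis_head: bytes):
    last_counter = -1
    head = genesis_head
    for counter, payload, claimed_head in entries:
        if counter <= last_counter:
            return False, f"counter regression at {counter}"
        expected = hashlib.sha256(head + payload).digest()
        if claimed_head != expected:
            return False, f"chain break at counter {counter}"
        last_counter, head = counter, expected
    return True, "ok"

def append_entry(log, genesis_head, counter, payload):
    """Device side: extend the chain; the head lives in the secure boundary."""
    head = log[-1][2] if log else genesis_head
    log.append((counter, payload, hashlib.sha256(head + payload).digest()))
```

Editing any payload, or reordering entries, makes verify_log fail at the exact break point — which is what "0 gaps over the audit window" is measured against.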

Warranty & usage-based billing

Problem
Time rollback can extend warranty or reduce billed runtime.
What must be proven
Runtime and key events remain monotonic and externally anchorable.
Minimal pattern
Counter-bound signed tokens on service/billing checkpoints + local rollback detector (time moved backward ⇒ tamper flag).
Pass criteria
Any rollback attempt triggers “tampered” state; backend freshness window rejects stale tokens (counter/nonce mismatch).
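The local rollback detector named in the minimal pattern is small enough to sketch directly. Field names are illustrative; the rule is the point — time must not be trusted before this check runs:

```python
# Sketch of the local rollback rule above: time moving backward past the
# last-known-good (LKG) value, or a counter regression, latches a tamper
# state before time becomes trusted on the boot path.

def check_rollback(time_now: float, lkg_time: float,
                   counter: int, last_counter: int) -> dict:
    rollback = (time_now < lkg_time) or (counter < last_counter)
    return {
        "rollback_flag": rollback,
        "policy_state": "TAMPERED" if rollback else "OK",
    }
```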

Industrial safety events & accountability

Problem
Power loss/reset can erase “who/when/why” of a shutdown or fault.
What must be proven
A power-fail event record was captured before the backup domain collapsed.
Minimal pattern
Latched reset cause + counter bump + atomic flag commit (+ optional hash head update) on brownout path.
Pass criteria
Over N forced drops, capture success rate meets target; any partial write is detectable and never “looks valid”.

Access control & device attestation

Problem
Policies based on “time since last check-in” can be bypassed by backdating.
What must be proven
Time evidence is bound to device identity and is freshness-checked by verifier.
Minimal pattern
Challenge–response signed time token (nonce + counter + context) + strict accept window on backend.
Pass criteria
Replay rejection is 100% for reused nonce/counter; verifier state never regresses across reconnects.
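The backend half of this pattern can be sketched as a stateful verifier. A real secure element signs with an asymmetric key it never exports; HMAC-SHA256 stands in here so the sketch stays self-contained, and the stored verifier state (last counter + seen nonces) is an illustrative assumption:

```python
import hashlib
import hmac

# Backend sketch of the challenge-response pattern above: the signed
# message binds device_id + counter + nonce + context, and the verifier
# rejects nonce reuse and any counter <= last_seen.

class TokenVerifier:
    def __init__(self, device_key: bytes):
        self.key = device_key
        self.last_counter = -1
        self.seen_nonces = set()

    def _tag(self, device_id: str, counter: int, nonce: str, ctx: str) -> bytes:
        msg = f"{device_id}|{counter}|{nonce}|{ctx}".encode()
        return hmac.new(self.key, msg, hashlib.sha256).digest()

    def accept(self, device_id, counter, nonce, ctx, tag) -> bool:
        if not hmac.compare_digest(self._tag(device_id, counter, nonce, ctx), tag):
            return False                      # signature check failed
        if nonce in self.seen_nonces:         # nonce reuse => replay
            return False
        if counter <= self.last_counter:      # counter regression => replay/rollback
            return False
        self.seen_nonces.add(nonce)
        self.last_counter = counter
        return True
```

Replaying the same token fails twice over — first on nonce reuse, then on counter ≤ last_seen — which is exactly the "100% replay rejection" pass criterion.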

Metering & data integrity pipelines

Problem
Old “valid” data packets can be replayed; timestamps can be rewritten to match quotas.
What must be proven
Each record is fresh and ordered, anchored by a monotonic proof.
Minimal pattern
Signed time token included in data envelope + verifier stores last counter per device to enforce ordering.
Pass criteria
Any duplicated token or counter regression is rejected; audit can reconstruct a total order of records.

Cameras / recorders evidence chain

Problem
Footage can be spliced; “record time” can be forged to create false narratives.
What must be proven
Frames/segments carry non-repudiable time evidence and tamper/power-fail context.
Minimal pattern
Periodic signed time tokens + per-segment hash chain; power-fail marker commits before shutdown.
Pass criteria
Any removed/reordered segment breaks verification; verifier identifies exact cut point and flags tamper.

Use-case map: which evidence types each application needs

[Figure: use-case evidence map — applications (forensics logs, warranty/billing, safety events, access control, metering integrity, video evidence) mapped to the evidence types each needs: signed time token, monotonic counter, tamper evidence, power-fail evidence, log hash-chain head, verifier audit store. Keep evidence minimal, verifiable, and monotonic.]

Implementation note: “Secure time” typically lands as RTC + monotonic proof + verifiable token. If only one path gets hardened, prioritize the monotonic proof chain (counter + secure commit) because it directly blocks rollback and replay.
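On the device side, "RTC + monotonic proof + verifiable token" reduces to signing a payload that already contains the bumped counter. A minimal sketch — a real design signs inside the SE with a non-exportable asymmetric key; HMAC-SHA256 stands in here, and the field names are illustrative:

```python
import hashlib
import hmac
import json

# Device-side sketch: the counter bump happens before signing, so every
# issued token carries a monotonic proof the verifier can check.

def issue_token(key: bytes, device_id: str, counter: int,
                nonce: str, context: str, rtc_time: int) -> dict:
    payload = {"device_id": device_id, "counter": counter,
               "nonce": nonce, "context": context, "time": rtc_time}
    msg = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "tag": hmac.new(key, msg, hashlib.sha256).hexdigest()}
```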

IC Selection & BOM Hooks: How to Choose Secure RTC / Companion SE

This is not a shopping list. It is selection logic for building tamper detection + anti-rollback + signed time into a system without accidentally letting the host interface become the trust anchor.

A) “Must-have” checklist (hard gates)

Anti-rollback
  • Monotonic counter (or equivalent non-volatile monotonic proof)
  • Local rule: time moved backward ⇒ tamper flag + policy action
  • Verifier rule: counter must strictly increase per device identity
Signed time / attestation
  • Device identity (unique ID + cert/public key reference)
  • Private key stays inside secure boundary (no “key export” path)
  • Token includes nonce/challenge + counter + context + time
Power-fail evidence
  • Latched reset cause (or equivalent) accessible after reboot
  • Deterministic commit order for (cause → counter → flag)
  • Detectable partial writes (never “looks valid”)
Tamper IO + response
  • Multiple tamper inputs (case open / light / voltage / temp / clock anomaly)
  • Policy options: mark-only / lock / zeroize (clear key material)
  • Tamper record is sticky (latched) and included in signed context
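The "detectable partial writes" gate above hinges on commit order: payload first, then a digest, then a validity marker written last. A minimal sketch of that record layout — the 17-byte format is an illustrative assumption:

```python
import hashlib
import struct

# Sketch of the "cause -> counter -> flag" commit order: a write
# interrupted at any point either lacks the marker or fails the digest
# check, so a torn record can never "look valid".

MARKER = 0xA5

def build_record(reset_cause: int, counter: int, tamper_flag: int) -> bytes:
    payload = struct.pack("<III", reset_cause, counter, tamper_flag)
    digest = hashlib.sha256(payload).digest()[:4]
    return payload + digest + bytes([MARKER])   # marker byte goes last

def record_valid(raw: bytes) -> bool:
    if len(raw) != 17 or raw[-1] != MARKER:
        return False
    payload, digest = raw[:12], raw[12:16]
    return hashlib.sha256(payload).digest()[:4] == digest
```

On real NV storage the three writes would be issued in that order with barriers between them; the marker write is the atomic "commit point".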

B) Trade-offs checklist (choose intentionally)

  • RTC-only vs RTC+SE: RTC-only is simpler but rarely satisfies “signed time”; RTC+SE isolates keys and tightens non-repudiation.
  • Integrated key vs external key store: integrated reduces attack surface; external enables reuse across SKUs but increases interface hardening work.
  • Token frequency: more frequent tokens improve audit resolution but increase energy/bandwidth and verifier state size.
  • Tamper sensitivity: aggressive thresholds reduce bypass risk but can raise false positives; mitigate by recording multi-channel evidence context.
  • Backup domain energy: larger holdover improves commit success; verify worst-case leakage and brownout edges, not typical numbers.
Common failure mode

Treating an I²C “time read” as trusted. A secure architecture treats the signed token (or protected register inside a secure boundary) as trusted — not the host bus.

C) Integration questions (ask before committing)

Host interface
  • How is the challenge/nonce delivered and rate-limited?
  • Are tamper pins latched and routed with priority (interrupt + always-on domain)?
  • Is there a defined “safe readout” mode after tamper (read evidence, but no state changes)?
Backend verifier
  • What freshness window is acceptable (time skew + transport delay)?
  • What state is stored per device (last counter, last nonce, last token hash)?
  • How are “tampered” devices handled (deny / quarantine / degrade / re-provision)?
Power-fail path
  • What is the guaranteed energy/time budget for “cause → counter → commit”?
  • Is the storage update atomic (dual-slot / journal) with a clear “valid” marker?
  • How is brownout/glitch injection tested (edge rate, droop depth, repetition)?
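The dual-slot question above has a small, testable answer on the boot path: pick the newest slot whose marker and checksum both verify, and reject a half-written slot outright. A sketch, assuming slots are held as dicts (the layout is illustrative):

```python
import hashlib

# Boot-path sketch for dual-slot (A/B) journaling: a torn slot fails the
# marker or checksum test and is never merged; if neither slot is valid,
# the result is an explicit gap, not a fabricated record.

MARKER = 0xA5

def slot_ok(slot: dict) -> bool:
    return (slot.get("marker") == MARKER and
            hashlib.sha256(slot["payload"]).hexdigest() == slot["checksum"])

def select_slot(slot_a: dict, slot_b: dict):
    valid = [s for s in (slot_a, slot_b) if slot_ok(s)]
    return max(valid, key=lambda s: s["counter"]) if valid else None
```

Returning None (rather than the newer-but-torn slot) is what keeps the brownout-injection pass criterion — "never ambiguous" — achievable.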

Reference BOM hooks (example MPNs; starting points only)

These part numbers speed up datasheet lookup and prototyping. Always verify package/suffix, availability, and whether the security properties meet the threat model.

Secure element / authenticator (keys + signatures)
ATECC608B-TNGTLS, SE050C2HQ1/Z01SDZ, SLS32AIA010MS (OPTIGA™ Trust M), STSAFE-A110, DS28C36
TPM option (standardized trust + NV)
SLB9670 (OPTIGA™ TPM)
RTC / time-log IC (time + timestamps + backup)
MCP79410-I/SN, RV-3028-C7, PCF85263ATT1/AZ, PCF2131, AB1805-T3, ISL1208IU8Z-T7A

Typical secure architecture pairs an RTC/time-log IC with a secure element (SE/TPM) that owns keys and produces signed tokens.

Power-fail capture helpers (energy + reset evidence)
Power mux / ideal diode / load switch
TPS2113A (e.g., TPS2113ADRBR), LTC4412, MAX40200, TPS22910A (e.g., TPS22910AYZVR)
Voltage supervisor / reset monitor
TPS3839K33DBZT, MCP1316T-29LE/OT, TLV803SDBZR
Tamper sensor examples (evidence inputs)
Prefer multi-channel evidence (case-open + light + temperature/voltage anomaly) and record context around trigger.
D2F-01L (case-open switch), VEML7700-TT (light), TMP117MAIDRVR (temperature)

Selection flow: requirements → architecture → capability gates → verification risks

[Figure: selection flow — evidence requirements (audit, anti-rollback, tamper, power-fail, non-repudiation) → architecture choice (RTC-only, RTC + secure element, SoC enclave + RTC) → capability gates (monotonic proof, secure commit, key isolation, tamper inputs, signed token) → verification risks (replay, rollback, glitch, false tamper). Choose by evidence needs first, then lock the trust boundary early.]

Practical rule: if non-repudiation or external audit is required, default to RTC + SE/TPM and treat the host bus as untrusted. Then allocate verification time to glitch/power-fail injection and replay/rollback correctness.


FAQs (Troubleshooting)

Each answer is a four-line, testable checklist: Likely cause / Quick check / Fix / Pass criteria. Placeholders (N, T, SR_TARGET, FP_TARGET, and similar targets) must be set from the system budget.

Signed time verifies locally but fails on server—what’s the first replay/freshness check?
Likely cause: Verifier state uses (nonce/counter/window) but the signed payload is missing a binding field, or the verifier stores the wrong “last seen” reference.
Quick check: Compare device token fields vs verifier rules: {device_id, token_id, counter, nonce_id, issued_time}; confirm server persists last_seen_counter and last_seen_nonce per device_id.
Fix: Bind {nonce_id + counter + device_id + context} inside the signed payload; server rejects any nonce reuse and any counter ≤ last_seen_counter.
Pass criteria: verify_success_rate ≥ SR_TARGET over N requests; replay_accept_rate = 0; counter_regressions = 0 (device_id scoped).
Time moved backward after battery swap—how to confirm rollback detection is armed?
Likely cause: Monotonic proof is not enforced on boot path (time is accepted before counter/LKG validation), or the “time-backward” rule is disabled in the policy state machine.
Quick check: Power-cycle with a forced older time source; read {tamper_flag, rollback_flag, last_known_good_time, monotonic_counter}; confirm flags latch across reboot and are included in signed context.
Fix: Gate “time becomes trusted” on counter/LKG validation; if time < LKG_time or counter regresses, latch rollback_flag and force re-provision or degraded mode.
Pass criteria: rollback_detect_rate = 100% across scenario_set size N; rollback_escape_count = 0; latch_persistence = 100% across M reboots.
Tamper pin triggers randomly—how to separate sensor noise vs real intrusion?
Likely cause: Input integrity issue (floating, long trace, EMI, missing debounce) or sensor biasing outside valid range (light/temp switch thresholds drifting).
Quick check: Log per-trigger context {pin_level, duration, supply_ok, temp, light, reset_cause, counter}; test with shielded cable/short input; compare false triggers per hour with and without EMI stress.
Fix: Add explicit bias + debounce + multi-sensor voting; require “evidence bundle” (tamper_reason + timestamp + counter) before entering tampered state.
Pass criteria: false_positive_rate ≤ FP_TARGET (per day) under defined EMI profile; detection_latency ≤ LAT_TARGET; evidence_bundle completeness = 100% for each trigger.
Power-fail events are missing intermittently—what is the first commit-order verification?
Likely cause: Commit order places “valid marker” too early, or there is no atomic journal; brownout cuts power between flag write and marker update.
Quick check: Force N random-phase power drops; inspect records for {reset_cause, counter_bump, event_flag, valid_marker}; verify marker is written last and partial records are detected as invalid.
Fix: Use dual-slot (A/B) journal: write payload → write checksum/hash → write valid_marker last; on boot, select newest slot with valid_marker and correct checksum.
Pass criteria: capture_success_rate ≥ SR_TARGET over N cuts; undetected_partial_write = 0; record_monotonicity holds (counter strictly increases).
Token replay is accepted—what counter/nonce binding is likely missing?
Likely cause: Nonce is not inside the signed payload, counter is not strictly enforced on server, or token_id is not unique per issuance.
Quick check: Submit the same token twice; confirm verifier checks {device_id + nonce_id + counter} tuple and stores last_seen (or a sliding window) for both nonce_id and counter.
Fix: Sign nonce_id + counter + device_id + context; set verifier logic: reject if nonce_id seen before OR counter ≤ last_seen_counter.
Pass criteria: replay_accept_rate = 0 across N replays; verifier_state_update_rate = 100%; token_uniqueness collisions = 0.
After tamper, device still outputs “normal time”—what policy state should be latched?
Likely cause: Tamper is recorded but not promoted to a latched policy state (mark-only), or host path continues to serve unauthenticated “display time” as if it were trusted.
Quick check: Trigger tamper; read {tamper_flag_latched, policy_state, key_status, counter}; verify token context includes tamper_reason and policy_state and that host UI/time reads are labeled untrusted.
Fix: Latch policy_state = TAMPERED; require re-provision or restricted mode; optionally zeroize signing keys and only allow evidence readout (no state changes).
Pass criteria: tamper_latch_persistence = 100% across M reboots; post_tamper_trusted_tokens_issued = 0 until recovery; evidence_readout_success = 100%.
Reset cause is recorded but not signed—where does the trust boundary break?
Likely cause: Reset-cause capture happens in an untrusted host domain, or it is appended after signing (not part of the signed payload).
Quick check: Compare token schema vs stored reset record: is reset_cause inside the signed bytes? verify {payload_hash, signature, context_fields} alignment on both device and server.
Fix: Move reset cause into secure boundary capture or seal it: include reset_cause + commit_marker + counter inside the signed payload; reject tokens missing required context fields.
Pass criteria: context_completeness = 100% for required fields; mismatch_rate = 0 between record and signed context across N tests.
“Secure time” drifts like a normal RTC—what part is about integrity vs accuracy?
Likely cause: The design provides integrity (signatures/counters) but does not include an external disciplining source; accuracy and integrity are separate requirements.
Quick check: Verify security proof still holds under drift: counter monotonicity, replay rejection, signed context correctness; separately measure drift vs temperature/time if accuracy is required.
Fix: If accuracy is required, add a discipline source (GNSS/NTP/PTP/known-good server) while keeping the integrity proof unchanged; record “discipline status” in signed context.
Pass criteria: integrity_pass = true across N cycles (no replay/rollback); accuracy_error ≤ ACC_TARGET only if discipline is enabled and status is signed.
Field logs show gaps—how to tell storage wear-out vs attack attempt?
Likely cause: Endurance/wear causes write failures, or adversary causes repeated brownouts/glitches to prevent commits while keeping system “running”.
Quick check: Correlate gaps with {reset_cause frequency, brownout flags, tamper flags, commit failures}; read wear indicators if available; check if gaps coincide with repeated power anomalies.
Fix: Add journaling + retry budget + wear leveling (if supported) and sign “commit failed” evidence; set policy: repeated commit failures within T window ⇒ suspicious state + alert.
Pass criteria: gap_rate ≤ GAP_TARGET over audit window; commit_failure_evidence_rate = 100%; suspicious_trigger_accuracy meets target on known fault injection set.
Provisioning passed, but devices share the same identity—what manufacturing control failed?
Likely cause: Identity seed/key injection is not unique per unit (fixture reuse, duplicated programming blob, missing CA issuance control, or weak UID binding).
Quick check: Sample K units; compare {device_id, public_key_id, cert serial, token signature verification key}; check audit DB for duplicates and for a per-unit “baseline signed record”.
Fix: Enforce uniqueness at issuance: CA refuses duplicate serials; programmer requires per-unit challenge and logs immutable artifact {unit_id, pubkey hash, cert serial, baseline token hash}.
Pass criteria: identity_collision_count = 0 over lot size N; baseline_artifact_coverage = 100%; key_no_readback is verified by procedure (no export steps).
Brownout causes corrupted flags—what’s the first atomicity/double-buffer check?
Likely cause: Single-copy flags updated in place without a validity marker; brownout interrupts a multi-byte write, creating a “half-updated” state.
Quick check: Inject brownouts with varied edge rates; read both copies (A/B) or inspect checksum/marker; verify boot logic chooses newest valid copy and rejects mixed/invalid states.
Fix: Implement dual-slot + checksum + marker-last commit; treat any marker/checksum failure as invalid and fall back to the last valid record with a signed “abnormal reset” context.
Pass criteria: invalid_state_accept_count = 0 over N injections; recovery_selects_last_valid = 100%; signed_abnormal_reset_coverage = 100%.
Can I trust I²C time reads at all—what’s the minimum proof artifact to request?
Likely cause: I²C time reads can be spoofed by a compromised host or bus attacker; raw time is not evidence without an integrity proof.
Quick check: Identify which artifact is verifiable: signed token vs plain registers; confirm token includes {device_id, counter, nonce_id/seq, context flags} and is validated by a known public key/cert chain.
Fix: Use I²C only as a transport for a signed proof artifact; verifier stores state (last counter/nonce) and enforces freshness; label plain reads as “display only”.
Pass criteria: trusted_time_decisions use token only (100%); spoofed_plain_read_detection = 100% in test; verifier rejects stale tokens with replay_accept_rate = 0.