Security & Compliance for Medical Electronics
Security & Compliance in medical electronics means building device-side trust that can be proven: only authorized firmware runs, keys stay non-exportable, debug/service access is lifecycle-controlled, and critical events are recorded as tamper-evident evidence. The practical goal is simple—make security actions verifiable and audit-ready without sacrificing serviceability.
What “Security & Compliance” means in medical electronics
Security becomes compliance only when critical actions are enforced by hardware-backed controls and produce verifiable evidence. This page frames security engineering as a closed loop: assets must be protected, attack surfaces must be bounded, and controls must emit audit-ready artifacts that stand up to reviews and root-cause analysis.
1) Assets to protect (engineer-friendly grouping)
- Identity & keys: private keys, certificates, key slots, derivation seeds, true-random outputs. (Compromise → cloning & impersonation.)
- Boot chain & code: ROM/bootloader stages, OS/app images, signature chains, version metadata. (Compromise → persistent backdoors.)
- Policies & configuration: boot policy, debug policy, recovery policy, protected configuration regions. (Compromise → downgrade & policy bypass.)
- Evidence & logs: signed event trails, monotonic counters, secure timestamps/sequence. (Compromise → “no proof” during audit.)
2) Attack surfaces (bounded by entry points)
- Debug entry: accidental exposure or mismanaged lifecycle states can enable memory reads, patching, or policy bypass.
- Boot entry: image replacement, unauthorized stage insertion, or downgrade to a vulnerable version.
- Non-volatile memory entry: raw reads, offline patching, replay of older but valid content, or metadata tampering.
- Physical entry: enclosure access, probing attempts, environmental anomalies used to force unexpected execution paths.
3) “Provability”: controls must produce evidence
A control is not complete until it yields a verifiable artifact. Reviews typically ask “What is enforced?” and “What is recorded?” The table below shows the minimum mapping that keeps security features from becoming undocumented assumptions.
| Threat outcome | Primary control | Evidence to keep |
|---|---|---|
| Device cloning / impersonation | Non-exportable private keys in HSM/SE + certificate chain | Identity certificate + key slot policy + key lifecycle records |
| Unauthorized firmware replacement | Root of trust + secure boot chain (fail-closed) | Boot verification result + image version + signer ID |
| Debug abuse / policy bypass | Secure debug lifecycle + unlock authorization | Debug state transitions + reason codes + operator/auth token ID |
| Physical tamper attempts | Anti-tamper sensing + policy-driven response | Tamper event record + response action + post-event lock state |
The rest of this page starts from secure boot because it is the first enforceable control and the earliest point where verifiable evidence can be generated.
Root of Trust & Secure Boot chain (from ROM to application)
A secure boot design is only as strong as its immutable starting point and its policy discipline. The goal is not “cryptography exists”; the goal is: only authorized images run, downgrades are blocked, and every boot produces evidence that explains what happened.
1) Chain of trust (who verifies whom)
- ROM boot (immutable): starts execution and verifies the first stage using a root key (or root key hash) anchored in hardware.
- 1st-stage bootloader: verifies the next stage and sets security posture (policy locks, debug state, memory protections).
- OS/runtime stage: verifies application packages or critical components before handing over execution.
- Application: consumes security posture as read-only inputs (identity, verified version, debug state) and can log security-relevant events.
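The handoff above can be sketched as a fail-closed verification loop. This is a minimal illustration only: a plain SHA-256 digest comparison stands in for the signature check against an anchored root key, and all names are hypothetical.

```python
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def boot_chain(stages):
    """stages: list of (name, image_bytes, expected_digest).
    Returns the boot evidence trail; stops fail-closed on the first mismatch."""
    evidence = []
    for name, image, expected in stages:
        ok = digest(image) == expected
        evidence.append({"stage": name, "verify_result": "pass" if ok else "fail"})
        if not ok:
            # fail-closed: never transition to normal run after a failure
            evidence.append({"stage": name, "action": "enter_recovery"})
            break
    return evidence

bl_img = b"first-stage-bootloader"
os_img = b"os-runtime"
trail = boot_chain([
    ("bootloader", bl_img, digest(bl_img)),
    ("os", os_img, digest(os_img)),
])
```

Note that every iteration appends a record whether verification passes or fails, which is what makes the boot explainable afterwards.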
2) Engineering gates (controls that prevent “almost secure” systems)
Gates are the decision points that must be explicit in the design spec. They also define what must be recorded for compliance evidence.
| Gate | What it enforces | What to record (evidence fields) |
|---|---|---|
| Debug lock gate | Default locked state; unlock requires explicit authorization and lifecycle state. | debug_state, unlock_reason, auth_token_id, lifecycle_state |
| Boot policy gate | Which signers are accepted; which images are allowed per device mode. | policy_id, accepted_signer_id, image_id, image_version |
| Rollback gate | Blocks downgrade to older, vulnerable versions using a monotonic counter. | anti_rollback_counter, requested_version, decision, decision_reason |
| Recovery boundary gate | Recovery is permitted only with signed images and minimal privileges. | recovery_entry, boot_fail_reason, recovery_image_id, recovery_auth |
3) Fail-closed behavior and recovery boundaries
- Fail-closed means a verification failure never transitions to normal run. The system moves to a controlled state with a restricted policy surface.
- Recovery must be authenticated: the recovery image is verified using the same root trust model (or a strictly scoped recovery signer).
- Recovery privileges are minimal: no secret extraction paths, no uncontrolled policy changes, no “silent” debug enablement.
- Every recovery entry is explainable: a reason code is persisted in protected storage and appended to the audit trail.
4) Minimum boot evidence fields (practical checklist)
These fields are small enough to keep, yet rich enough to reconstruct “what ran and why”, and they keep security from being reduced to verbal claims. At minimum, persist the fields already named in the gate tables:
- image_id, image_version, signer_id (what ran and who signed it)
- verify_result, decision, decision_reason (why it was allowed or blocked)
- policy_id, lifecycle_state, debug_state (under which policy and posture)
- anti_rollback_counter (why a downgrade was impossible)
- recovery_entry, boot_fail_reason (what happened on a failed boot)
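As a concrete illustration, such a record might look like the sketch below. The schema and values are hypothetical, not a normative format; the field names mirror the gate tables above.

```python
# Hypothetical minimal boot evidence record (illustrative schema only).
boot_evidence = {
    "image_id": "app-fw",
    "image_version": "2.4.1",
    "signer_id": "prod-signer-01",
    "verify_result": "pass",
    "decision_reason": "signature_ok",
    "anti_rollback_counter": 17,
    "debug_state": "locked",
    "policy_id": "boot-policy-7",
}

REQUIRED = ("image_id", "image_version", "signer_id", "verify_result",
            "anti_rollback_counter", "debug_state", "policy_id")

def is_complete(record) -> bool:
    # A record missing any required field cannot reconstruct "what ran and why".
    return all(k in record for k in REQUIRED)
```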
With this evidence baseline in place, later sections can extend the model to cover key lifecycle, secure storage, and signed audit logs—without changing the boot trust assumptions.
Measured boot, attestation & “prove what is running”
Secure boot answers “should this stage run?” Measured boot answers “what actually ran, in what order, under which policy?” The difference matters in compliance: a system can block unauthorized images yet still fail an audit if it cannot provide a tamper-resistant record of software identity and security posture at boot time.
| Mechanism | Core operation | What it proves | Typical evidence field |
|---|---|---|---|
| Verification (secure boot) | Signature check (allow/deny) | Only authorized images can execute | verify_result, signer_id, image_version |
| Measurement (measured boot) | Digest + hash-extend into PCR-like state | What ran and the sequence of components | pcr_value, component_digest, stage_id |
| Attestation | Sign the measurement + policy snapshot | A verifier can trust the reported state | nonce, report_sig, policy_id, debug_state |
1) Measurement records that survive audits
“Measured boot” is not a single hash. It is a sequence-aware record. The common pattern uses a TPM-like PCR concept: each stage computes a digest of the next component and extends it into a running state so that the final value is bound to both content and order. In practice, pairing a compact PCR summary with a small set of per-stage records makes incidents explainable.
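The hash-extend pattern fits in a few lines. This is an illustrative sketch, not a TPM implementation: `extend` follows the TPM-style rule new = H(old ‖ H(component)), so the final value depends on both content and order.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style extend: new_pcr = H(old_pcr || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCR-like state starts at a known all-zero value
records = []     # compact per-stage records paired with the PCR summary
for stage_id, image in [("bootloader", b"bl-image"), ("os", b"os-image")]:
    records.append({"stage_id": stage_id,
                    "component_digest": hashlib.sha256(image).hexdigest()})
    pcr = extend(pcr, image)
```

Extending the same components in a different order yields a different final value, which is exactly what makes the measurement sequence-aware.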
2) Attestation reports: turning measurements into usable evidence
Attestation packages measurements into a signed report. A usable report must include freshness to prevent replay: a verifier provides a nonce (challenge), and the report binds that nonce inside the signature. No transport details are required to define correctness; the logic is: verify signature → verify nonce → verify identity → evaluate measurements & policy.
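The verifier-side order of checks can be sketched as below. This assumes an HMAC as a stand-in for the device's signature; real attestation signs with a non-exportable asymmetric key, and all identifiers here are hypothetical.

```python
import hashlib, hmac, secrets

DEVICE_KEY = secrets.token_bytes(32)   # stand-in for the in-boundary signing key
KNOWN_DEVICE_ID = "dev-001"

def make_report(nonce: bytes, pcr_hex: str, policy_id: str) -> dict:
    body = f"{KNOWN_DEVICE_ID}|{nonce.hex()}|{pcr_hex}|{policy_id}".encode()
    return {"device_id": KNOWN_DEVICE_ID, "nonce": nonce.hex(),
            "pcr": pcr_hex, "policy_id": policy_id,
            "report_sig": hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()}

def verify_report(report: dict, expected_nonce: bytes,
                  expected_pcr: str, key: bytes) -> str:
    # 1) signature, 2) nonce (freshness), 3) identity, 4) measurements/policy
    body = (f"{report['device_id']}|{report['nonce']}|"
            f"{report['pcr']}|{report['policy_id']}").encode()
    if not hmac.compare_digest(
            hmac.new(key, body, hashlib.sha256).hexdigest(),
            report["report_sig"]):
        return "bad_signature"
    if report["nonce"] != expected_nonce.hex():
        return "stale_nonce"            # replay attempt: freshness check failed
    if report["device_id"] != KNOWN_DEVICE_ID:
        return "unknown_identity"
    if report["pcr"] != expected_pcr:
        return "measurement_mismatch"
    return "trusted"
```

Because the nonce is bound inside the signed body, replaying an old report against a fresh challenge fails at step 2 even though the signature itself is valid.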
3) Common failure patterns (what breaks “provability”)
- No freshness: reports can be replayed if the nonce/challenge is missing or not bound to the signature.
- Policy not included: measurements without debug state or rollback counter can hide risky operating modes.
- Silent gaps: missing records for critical components make the evidence chain non-auditable, even if boot succeeded.
- Unstable identity: reports must be signed by a stable, non-exportable key linked to a device identity.
HSM / Secure Element: selection dimensions and trust boundaries
HSM/SE devices are valuable because they create a hard boundary: keys do not leave, sensitive operations are mediated by a small API surface, and lifecycle states can be enforced even if the main processor is compromised. This section focuses on role definition and selection criteria, not on bus details.
1) System roles (what to offload vs what to keep on the main processor)
| Capability | Best home | Engineering rationale |
|---|---|---|
| Key isolation (non-exportable keys) | HSM / SE | Prevents cloning: code can “use” keys (sign/derive) but cannot read them. |
| Crypto acceleration (AES/SHA/ECC) | HSM/SE or SoC crypto engine | Reduces implementation risk and improves predictable latency for security-critical paths. |
| Secure storage (certs, counters, policies) | HSM/SE (preferred) + protected NVM | Stores secrets and policy-bound objects; supports controlled erase and lifecycle transitions. |
| Boot orchestration (decisions, fallbacks) | Main processor | Complex control logic stays outside; it consumes security posture as inputs. |
2) Selection checklist (what to ask in design reviews)
A selection discussion is productive only when it is grounded in measurable capabilities. The checklist below focuses on what affects threat coverage, auditability, and lifecycle safety.
| Dimension | What “good” looks like | Why it matters | Evidence / artifact |
|---|---|---|---|
| Key slots & object model | Enough slots for identity + attestation + signing; per-object permissions and usage binding | Prevents “one key for everything” failures and simplifies audits | Key policy table, object ACL, usage constraints |
| Certificate chain support | Stable identity with chain validation and signer separation (prod vs test) | Enables controlled provisioning and clean separation of environments | Provisioning record, signer IDs, chain policy |
| Hardware TRNG | TRNG with health checks; entropy status exposed to the system | Weak randomness breaks keys, nonces, and anti-replay assumptions | TRNG health logs, entropy status flags |
| Side-channel resistance | Clear resistance claims and testable behavior under stress | Protects secrets against physical observation attacks | Security evaluation report, resistance notes |
| Lifecycle management | Factory → provisioned → deployed → service; controlled erase and lock transitions | Prevents “debug left open” and enables audit-ready RMA behavior | Lifecycle state machine, transition audit log |
3) Minimal API surface (the boundary that matters)
To keep the trust boundary stable, limit interactions to a small set of operations. If a design requires “read key” or “export secret”, the boundary is already broken. A healthy boundary looks like sign(), derive(), and store() with usage restrictions and auditable counters.
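Such a boundary can be sketched as an object whose only operations are `store()`, `sign()`, and `derive()`, with an operation counter for auditability. The class is purely illustrative, and HMAC stands in for real signing.

```python
import hashlib, hmac

class SecureElement:
    """Hypothetical trust-boundary sketch: keys are usable but never readable."""
    def __init__(self):
        self._keys = {}          # private: no read path crosses the boundary
        self.op_counter = 0      # auditable usage counter

    def store(self, slot: str, key: bytes) -> None:
        self._keys[slot] = key   # write-only from the host's perspective

    def sign(self, slot: str, digest: bytes) -> bytes:
        self.op_counter += 1
        return hmac.new(self._keys[slot], digest, hashlib.sha256).digest()

    def derive(self, slot: str, purpose_tag: bytes) -> bytes:
        self.op_counter += 1
        return hmac.new(self._keys[slot], b"derive|" + purpose_tag,
                        hashlib.sha256).digest()

    # Deliberately no read_key()/export() method: if the design needs one,
    # the boundary is already broken.
```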
Key lifecycle: from factory provisioning to rotation and revocation
A “secure key” is not defined only by cryptography. It is defined by a controlled lifecycle: how the key is created, what it is allowed to do, how it is rotated without breaking continuity, and how it is revoked with an auditable reason and enforced post-action. Compliance depends on two outputs at every step: enforced policy and verifiable records.
1) Treat keys as governed objects (not as bytes)
A usable governance model starts with a KeyObject abstraction. The object is stored as non-exportable and is accessed only through controlled operations. The key’s security posture is defined by its policy and lifecycle state.
Purpose tags prevent “one key does everything” drift. Lifecycle state prevents accidental debug/service behaviors from leaking into deployed systems.
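A minimal sketch of this governance model, with all names illustrative: the key's posture comes from its policy and lifecycle state, not from its bytes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KeyObject:
    key_id: str
    purpose_tags: frozenset      # e.g. frozenset({"attest_sign"})
    lifecycle_state: str         # factory / provisioned / deployed / service
    exportable: bool = False     # governed keys stay non-exportable

def allow_operation(key: KeyObject, op_purpose: str,
                    permitted_states: set) -> bool:
    """Allow an operation only when (a) lifecycle state permits it and
    (b) the purpose tag matches the intended use."""
    return (key.lifecycle_state in permitted_states
            and op_purpose in key.purpose_tags)
```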
2) Provisioning: create / inject / bind (and record it)
Provisioning must answer three audit questions: who created the key, what the key is bound to, and what policy is enforced. A strong design binds keys to device identity and to usage constraints from the first moment. If a key can be created without leaving a record, the lifecycle is already non-auditable.
- Create: generate in a protected boundary or inject under strict authorization; immediately mark as non-exportable.
- Bind: attach usage_policy + purpose_tags + lifecycle_state; optionally bind to counters/sequences for anti-replay.
- Activate: enable the object only after policy is persisted; activation becomes a discrete auditable event.
3) Use: derive / sign / decrypt with purpose-bound audit trails
Usage controls are where systems often become “secure on paper” but weak in reality. The minimum requirement is: operations are allowed only when (a) lifecycle state permits them and (b) purpose tags match the intended use. Each operation should produce a compact record that is sufficient for post-incident reconstruction.
- derive: record the purpose tag and the input digest so derived material is traceable without exposing secrets.
- sign: record what was signed (digest), under which policy, and which key identity was used.
- decrypt: treat as highest risk; enforce narrow policy and ensure failures are not silent.
4) Rotation and revocation: continuity + enforced post-actions
Rotation is a controlled transition, not a single action. A robust approach defines an overlap window where old and new keys coexist, so systems can complete in-flight operations and then converge on the new identity. Revocation must be a policy event that changes allowed behavior and leaves a durable reason record.
- Rotation gate: new key is created + bound + activated before it is accepted for critical operations.
- Overlap window: both keys may verify/operate during a bounded transition; the window end is an auditable milestone.
- Revocation: must explicitly change allowed operations and persist a reason code; “silent disable” is not auditable.
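The overlap window can be sketched as a simple acceptance rule. Timestamps, key IDs, and the `revoked` map are all hypothetical; a real design would drive this from persisted policy.

```python
def accepted_keys(now: int, old_key: str, new_key: str,
                  overlap_start: int, overlap_end: int,
                  revoked: dict) -> set:
    """Which verification keys are accepted at time `now`.
    `revoked` maps key_id -> reason_code (a durable record, never silent)."""
    keys = set()
    if now < overlap_end and old_key not in revoked:
        keys.add(old_key)                 # old key valid until the window closes
    if now >= overlap_start and new_key not in revoked:
        keys.add(new_key)                 # new key valid once activated
    return keys
```

Before the window only the old key verifies, inside it both do, and after the auditable window-end milestone only the new key remains.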
Crypto acceleration: performance, power, and predictable latency
Acceleration is not only about speed. In real-time and compliance-driven designs, the key benefit is often bounded worst-case latency: verification and signing can be scheduled with predictable timing, reducing jitter and preventing critical paths from becoming “occasionally slow” under CPU load.
1) What to optimize: worst-case > average
- Worst-case latency: the maximum time one operation can block a boot gate or a security decision.
- Jitter: how much execution time varies for the same operation across runs.
- CPU blocking time: whether the main core is occupied or can continue other tasks.
- Energy per operation: power cost for verification, hashing, and signing at the required cadence.
- Queueing behavior: whether the engine adds wait time under bursts; timeouts must be recorded.
2) Where accelerators pay off (AES/SHA vs ECC/RSA)
Symmetric and hash operations (AES/SHA) often become “always-on” building blocks for integrity and sealing. Public-key operations (ECC/RSA) typically dominate worst-case latency in verification and signing paths. An engine’s value is highest when it reduces variability and makes the maximum time measurable and schedulable.
| Bucket | Typical role | Determinism benefit |
|---|---|---|
| AES (symmetric) | Sealing / wrapping / local protection | Stable per-op timing and lower CPU occupancy |
| SHA (hash) | Digests for boot/measurement/log integrity | Reduces boot-path variability; easier timing budgets |
| ECC / RSA (public key) | Verify / sign evidence and policies | Bounds worst-case latency and reduces jitter under load |
3) Engineering checklist: keep acceleration auditable
- Timeouts are events: when the engine is busy or unavailable, record a failure reason and the operation type.
- Queue visibility: record whether an operation waited; queueing changes worst-case latency.
- CPU fallback policy: define when CPU-only is allowed and how it is logged to avoid silent weakening.
- Energy budgeting: treat per-operation energy as a design constraint for sustained security tasks.
- Self-check: record engine health checks as part of the security evidence stream.
Unique device identity: UID, certificate chain, and anti-cloning
A unique identity is not a “secret UID.” It is a verifiable identity package: a stable unique anchor (UID/PUF/certificate), a non-exportable private key, and a certificate chain that allows a verifier to trust signatures from that device. Anti-cloning succeeds only when the private key is usable but not readable.
1) Identity anchors: where “uniqueness” comes from
A good design separates “unique” from “secret.” The anchor must be stable and non-colliding, while secrecy is enforced by non-exportable keys and policy.
| Anchor | What it provides | Anti-clone contribution |
|---|---|---|
| Fused ID | Stable unique identifier | Good anchor, not a secret by itself |
| PUF | Device-derived unique material | Useful for key material when stable |
| SE certificate | Identity + signer reference | Strong when paired with non-exportable key |
The anchor is allowed to be readable; anti-cloning comes from a private key that cannot be exported.
2) Certificate chain: how a verifier trusts the device
A certificate chain turns a device signature into trusted evidence. The device presents a device certificate, and the chain links it to an issuer hierarchy that a verifier can validate. The device identity is meaningful only if the private key used for signing is policy-restricted (for example: sign-only) and non-exportable.
- Device cert: identifies the device public key and issuer reference.
- Issuer hierarchy: allows validation without trusting the device itself.
- Key policy: restricts what the private key can do (sign only, purpose-bound).
3) Identity binding: proving “this device + this state”
Identity becomes compliance-grade evidence when it signs a compact “evidence package” that includes a freshness challenge (nonce) and a state summary (measurement/policy snapshot). This binds the device identity to what is running, instead of proving only “a key exists somewhere.”
- Nonce: prevents replay of old evidence.
- State summary: binds identity to measured/policy-relevant state.
- Signature: produced by a non-exportable private key under purpose-bound rules.
4) Anti-cloning pitfalls to avoid
- UID used as a secret: a readable UID cannot prevent cloning.
- Exportable private key: once a key can be extracted, the identity can be copied.
- Chain not validated: checking “a certificate exists” is not verification.
- No nonce binding: evidence can be replayed and still look valid.
- No policy context: identity claims that omit policy/debug/rollback context are incomplete for audits.
Secure storage: config, certificates, counters, and rollback protection
Secure storage is not “encrypt everything.” It is a combination of partitioning, access policy, power-fail safe updates, and anti-rollback. The goal is to keep public configuration manageable while protecting secret material and ensuring that counters and logs remain consistent and auditable across resets.
1) Partition by protection goal (public vs secret)
Treat storage regions as different security domains. Public configuration needs integrity and controlled writes; secret material needs non-exportability and strict policy. Counters and logs must preserve monotonicity and ordering even under power loss.
- Public config: readable; writes are authorized and integrity-protected.
- Secret vault: certificates/keys/counters; access through controlled operations only.
- Counters: monotonic updates with atomic commit.
- Logs: append-only records for audit traceability.
2) Anti-rollback: monotonic counters + atomic commits
Rollback protection fails when counters can be reverted by restoring old storage images or by interrupted updates. A robust design uses a monotonic counter (or equivalent non-decreasing sequence) and updates it through an atomic commit pattern: write the new value, write a commit record, then switch the active pointer.
- Monotonicity: the counter must never decrease.
- Power-fail safety: incomplete writes must be detectable and recoverable to a consistent state.
- Auditability: counter updates must have reason codes and sequence ordering.
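The atomic-commit pattern is often realized as two value slots plus an active pointer that flips last. This sketch simulates a power failure between the two steps; it is an illustration of the pattern, not a flash driver.

```python
class MonotonicCounter:
    def __init__(self):
        self.slots = [0, 0]
        self.active = 0          # the "commit record": which slot is valid

    def value(self) -> int:
        return self.slots[self.active]

    def increment(self, reason_code: str, fail_before_flip: bool = False):
        new = self.value() + 1
        inactive = 1 - self.active
        self.slots[inactive] = new       # step 1: write the new value
        if fail_before_flip:
            return                       # simulated power loss: old state wins
        self.active = inactive           # step 2: atomic pointer flip
        # reason_code would be appended to the audit trail here
```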
3) Secure erase and consistency under wear and power loss
Secure erase must be defined in engineering terms. For secret material, the most reliable erase outcome is often key destruction (rendering data unrecoverable by eliminating the wrapping key or the key object) rather than relying on physical overwrite. Wear leveling and interrupted writes can otherwise leave remnants and inconsistent metadata.
- Key destruction: remove the ability to decrypt or validate sealed data.
- Two-copy metadata: keep critical pointers/counters with a recoverable structure.
- Commit records: a small durable marker that indicates “new state is valid.”
4) Audit cues: what must be explainable after incidents
- Which region changed: config vs secrets vs counters vs logs.
- Why it changed: reason codes for counter increments, erase events, and policy transitions.
- What remained monotonic: counter and sequence ordering never went backward.
- What was rejected: integrity failures and rollback attempts must produce visible, durable records.
Logging & audit trail: turning security actions into evidence
An audit trail is not “more logs.” It is a verifiable evidence stream: events are normalized into canonical records, protected by a hash chain + signed checkpoints, and anchored to a trusted time/sequence. The outcome is that security decisions remain explainable after incidents, even if attackers attempt deletion, insertion, or rollback.
1) Event taxonomy: log what can be audited
Events should be recorded by category and by outcome. Each record must include enough context to answer what happened, why it happened, and what policy applied—without leaking secrets.
| Category | Examples | Audit-critical fields |
|---|---|---|
| Boot events | verify pass/fail, mode transitions | result, reason_code, policy_id, sequence |
| Auth fail | policy deny, retry limit | actor, object_id, result, reason_code |
| Tamper | sensor trigger, confidence, response | sensor_set, confidence, action_level |
| Config change | policy update, critical toggles | before_hash, after_hash, policy_id |
Use digests (hashes) for sensitive material. Evidence should be provable without exposing secrets.
2) Normalize records: make audits deterministic
Normalization is what makes logs comparable across devices and software revisions. A canonical record format avoids “free-form” log lines that cannot be consistently verified or correlated.
- Canonical fields: fixed field set for every record (type, result, reason, actor, object, policy, time, sequence).
- Stable IDs: policy_id and object_id are stable tokens that survive software changes.
- Redaction: log digests and identifiers, not plaintext secrets.
3) Non-repudiation: hash chain + signed checkpoints
Integrity must survive deletion and rewriting attempts. A strong pattern is a hash chain for every record, plus signed checkpoints so attackers cannot “recompute the chain” after tampering.
- prev_hash links records to detect insertion/removal.
- checkpoint seal signs the chain state at defined moments (periodic or on critical events).
- failure is evidence: verify failures and seal failures must generate durable events.
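The chain-plus-checkpoint pattern can be sketched as follows; HMAC with an assumed in-boundary key stands in for the checkpoint signature, and the record encoding is illustrative rather than canonical.

```python
import hashlib, hmac

SEAL_KEY = b"in-boundary-checkpoint-key"   # illustrative stand-in only

def record_hash(prev_hash: str, record: dict) -> str:
    body = prev_hash + "|" + repr(sorted(record.items()))
    return hashlib.sha256(body.encode()).hexdigest()

def append(log, record):
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"record": record, "prev_hash": prev,
                "hash": record_hash(prev, record)})

def checkpoint(log) -> str:
    # Seal the chain head; an attacker can recompute hashes but not this.
    return hmac.new(SEAL_KEY, log[-1]["hash"].encode(),
                    hashlib.sha256).hexdigest()

def verify(log, seal: str) -> bool:
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev or \
           record_hash(prev, entry["record"]) != entry["hash"]:
            return False                   # insertion/removal/rewrite detected
        prev = entry["hash"]
    return hmac.compare_digest(checkpoint(log), seal)
```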
4) Trusted time: sequence first, time second
Time must be explainable. A trusted design pairs a secure time source with a monotonic sequence. If time jumps or moves backward, it should be recorded as an auditable event; the sequence still preserves ordering for investigations.
- Monotonic sequence: cannot decrease; defines event order.
- Secure time: used to interpret “when,” not to define order alone.
- Time anomaly events: make clock changes visible and reviewable.
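A sketch of sequence-first ordering, where a backwards clock jump becomes a durable `time_anomaly` event instead of silently reordering the trail (field names are illustrative):

```python
def record_event(trail, seq, clock_time, event):
    """Append an event; the monotonic `seq` defines order, `clock_time` only
    interprets "when". A backwards clock emits an auditable anomaly event."""
    last_time = trail[-1]["time"] if trail else None
    if last_time is not None and clock_time < last_time:
        trail.append({"seq": seq, "time": clock_time,
                      "type": "time_anomaly", "detail": "clock_went_backward"})
        seq += 1
    trail.append({"seq": seq, "time": clock_time, "type": event})
    return seq + 1
```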
Anti-tamper sensing: from sensors to an explainable response policy
Tamper protection is a decision pipeline: sensor evidence is filtered and scored, then mapped to a policy-driven response. The response must be explainable (why it happened), auditable (what was done), and reproducible (the same inputs produce the same action).
1) Sensor evidence: strength and false-trigger risk
Sensors do not “prove tamper” by themselves. They provide evidence with different strengths and false-trigger sources. A controller should treat them as inputs to a confidence decision.
- Enclosure open: strong indicator; must be policy-aware for authorized service scenarios.
- Light / magnet: sensitive to opening and proximity; needs thresholds and debounce.
- Accelerometer: movement and shocks; separate transient impacts from sustained anomalies.
- Temperature anomaly: can indicate probing/abuse; often needs correlation with other evidence.
- Probe detect: high-value signal; commonly escalates response level when confirmed.
2) Tamper controller: debounce, score, latch
The controller turns noisy sensor signals into an auditable decision. The key is to make decision rules explicit and stable so the response can be reproduced in testing and defended in audits.
- Debounce / hold time: prevents instantaneous noise from becoming incidents.
- Confidence scoring: combines sensor evidence into low/medium/high confidence.
- Latching: critical events may remain latched until an authorized clear condition is met.
- Policy snapshot: record the active policy_id and lifecycle state at decision time.
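A scoring pipeline along these lines can be sketched with illustrative weights and thresholds (these are not recommended values). Note that the same inputs always produce the same action, which is the reproducibility requirement.

```python
# Hypothetical sensor weights; a real policy would tune and version these.
WEIGHTS = {"enclosure_open": 3, "probe_detect": 4,
           "light": 1, "magnet": 1, "accel_shock": 1, "temp_anomaly": 1}

def decide(sensor_set, hold_ms, policy_id="tamper-policy-1"):
    score = sum(WEIGHTS.get(s, 0) for s in sensor_set)
    if hold_ms < 50:                 # debounce: ignore instantaneous noise
        score = 0
    if score >= 6:
        level = "L3"                 # lock critical objects (latched)
    elif score >= 4:
        level = "L2"                 # restricted mode
    elif score >= 2:
        level = "L1"                 # alarm
    else:
        level = "L0"                 # log only
    return {"sensor_set": sorted(sensor_set), "confidence": score,
            "action_level": level, "policy_id": policy_id}
```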
3) Response levels: controlled, not chaotic
Responses should be tiered so that evidence strength maps to a predictable action. Extreme actions should require high confidence and must be fully recorded as evidence.
| Level | Action | Audit requirement |
|---|---|---|
| L0 | log only | sensor set + confidence + policy snapshot |
| L1 | alarm | duration + clear condition |
| L2 | restricted mode | which operations were restricted |
| L3 | lock critical objects | object_id + unlock requirements |
| L4 | zeroize key objects | reason_code + irreversible marker |
4) Compliance points: explainable, auditable, reproducible
- Explainable: every action maps to a policy rule and a confidence outcome.
- Auditable: record sensor evidence, decision, action level, duration, and clear condition.
- Reproducible: the same evidence and policy must produce the same response in tests.
Secure debug & manufacturing modes: no “lucky sealing”
Debug access is not a one-time “disable the port” decision. A defensible design treats debug as a lifecycle-governed capability: state determines what is allowed, authorization determines who can temporarily unlock it, and audit records prove when, why, and how it was used.
1) Debug port lifecycle: open → controlled → off
Define explicit lifecycle states and attach debug rules to each state. This prevents “forgotten factory settings” and blocks accidental re-enablement after deployment.
| State | Debug policy | Audit requirement |
|---|---|---|
| Factory | open or limited for production tests | record provisioning + test sign-off |
| Provisioned | controlled unlock only | unlock reason_code + session_id |
| Deployed | off by default | failed attempts must be logged |
| Service/RMA | controlled + scoped capabilities | action scope + clear condition |
“Controlled” means debug is never a permanent switch; it is a time-bounded, policy-scoped session that leaves evidence.
2) Controlled unlock: challenge/response + scoped sessions
A secure unlock scheme makes debug access verifiable and bounded. The device issues a fresh challenge (nonce); an authorized signer returns a proof; the device validates it and opens a limited session.
- Time-bounded: sessions auto-expire and return to locked state.
- Capability scope: allow only what is required (read-only vs limited write), and forbid key-region reads.
- Attempt accounting: failed unlock attempts increment counters and generate security events.
- Fail-closed: any verification failure keeps debug locked and produces durable evidence.
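The challenge/response flow can be sketched as follows, assuming an HMAC proof as a stand-in for an asymmetric signer; nonces are single-use and any failure keeps the port locked.

```python
import hashlib, hmac, secrets

AUTH_KEY = secrets.token_bytes(32)   # held by the (hypothetical) authorization service

class DebugPort:
    def __init__(self, auth_key: bytes):
        self._auth_key = auth_key    # shared-secret stand-in for a verifier key
        self.state = "locked"
        self.failed_attempts = 0
        self._nonce = None

    def challenge(self) -> bytes:
        self._nonce = secrets.token_bytes(16)      # fresh per attempt
        return self._nonce

    def unlock(self, proof: bytes, scope: str, expires_at: int) -> bool:
        if self._nonce is None:
            self.failed_attempts += 1
            return False
        expected = hmac.new(self._auth_key, self._nonce + scope.encode(),
                            hashlib.sha256).digest()
        self._nonce = None                         # nonce is single-use
        if not hmac.compare_digest(proof, expected):
            self.failed_attempts += 1              # attempt accounting
            self.state = "locked"                  # fail-closed
            return False
        self.state = f"unlocked:{scope}:until={expires_at}"
        return True
```

Binding the scope into the proof means an authorization for a read-only session cannot be replayed to open a wider one.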
3) Manufacturing vs service privileges: isolate the domains
Avoid using one “master” credential for factory, deployment, and service. Domain separation prevents a single leakage event from turning into full fleet compromise.
| Domain | Purpose | Hard rule |
|---|---|---|
| Manufacturing | program, calibrate, initial provisioning | must not remain valid for deployed devices |
| Device identity | sign evidence, authenticate actions | private key is non-exportable |
| Service/RMA | diagnose and recover safely | cannot unlock “factory unlimited” privileges |
4) RMA key policy: recover without exposing production secrets
An RMA workflow must not “hand back everything.” If device identity keys are ever suspected of exposure, the safer approach is to zeroize key objects and re-provision under controlled rules, producing an auditable record of the transition.
- Enter Service/RMA state: record policy_id and reason_code (durable event).
- Open a controlled debug session: time-bounded, scoped; log session_id and scope.
- Repair / diagnose: keep secret regions non-readable; prefer “sign-only” operations.
- Zeroize or revoke if required: destroy/disable affected key objects; record irreversible markers.
- Re-provision: establish a new certificate/identity binding; generate provisioning record (no secrets).
- Exit Service/RMA state: return to Deployed rules; log state transition.
Example parts: secure identity & controlled debug building blocks
The following part numbers are common starting points for designs that need non-exportable keys, device signing, and controlled authorization flows. Capabilities vary by variant; always confirm features and certifications in the datasheet.
| Role | Example part numbers | Typical use in this chapter |
|---|---|---|
| Secure authenticator / key vault | Microchip ATECC608B · Analog/Maxim DS28C36 · ST STSAFE-A110 · NXP SE050 · Infineon OPTIGA Trust M | non-exportable keys for signing unlock proofs and sealing audit checkpoints |
| Security-capable MCU platform (examples) | NXP LPC55S69 · Microchip SAML11 · NXP i.MX RT1170 | lifecycle state enforcement and policy-driven debug gating (platform-dependent) |
Compliance evidence pack: what to deliver for reviews and audits
Reviews and audits are evidence-driven. A practical evidence pack maps controls to tests and to deliverable artifacts (reports, signed log excerpts, configuration snapshots, provisioning records). The goal is to make security claims verifiable without exposing secrets.
1) Control points that auditors can verify
- Secure boot & policy gates: fail-closed behavior and recorded outcomes.
- Key lifecycle: provisioning, rotation, revocation/zeroize events with durable records.
- Secure logging: normalization + hash chain + signed checkpoints + trusted ordering.
- Tamper response: sensor evidence → confidence → policy → tiered actions (auditable).
- Secure debug lifecycle: controlled unlock sessions with scope, time bounds, and audit trails.
2) Tests: prove controls, not intentions
Tests should include negative cases. Evidence is stronger when a design demonstrates that unauthorized actions are rejected and that rejections generate durable audit events.
| Test type | What it demonstrates | Expected artifact |
|---|---|---|
| Design review | policy rules and lifecycle gates are defined and consistent | policy snapshot hash + review checklist |
| Functional | authorized actions succeed and are logged | test report + signed log excerpts |
| Negative | unauthorized actions fail-closed with evidence | rejection logs + checkpoint verification failure proof |
3) Artifacts: what to hand over (without leaking secrets)
- Signed log excerpts: boot/auth/tamper/debug/config events with chain and checkpoint proof.
- Provisioning record: batch_id, device identifiers, certificate serial ranges, and policy_id (no private keys).
- Configuration snapshot: hashes of critical security settings (debug policy, tamper policy, log policy).
- Test report: test IDs, expected vs observed results, firmware/policy versions, and dates.
- Lifecycle transition evidence: Service/RMA entry/exit records, zeroize/re-provision markers where applicable.
If a reviewer cannot independently validate integrity and ordering, the artifact is informational, not evidentiary.
4) A practical packaging checklist
- Freeze the control list and policy_id for the release under review.
- Run functional + negative tests that exercise fail-closed paths and evidence creation.
- Extract signed log excerpts (with checkpoint proofs) for key scenarios.
- Export configuration snapshot hashes and provisioning records (no secrets).
- Bundle artifacts with firmware/policy version identifiers and verification notes.
Example parts: linking controls to evidence
Evidence quality improves when key operations (signing checkpoints, unlock proofs, monotonic ordering) are bound to non-exportable key objects. These example part numbers are commonly used as secure key anchors.
| Control | Example anchor parts | Evidence artifact enabled |
|---|---|---|
| Signed checkpoints | ATECC608B · DS28C36 · SE050 · STSAFE-A110 · OPTIGA Trust M | checkpoint signatures + verifier proof |
| Controlled debug unlock | ATECC608B · DS28C36 · SE050 | unlock proof record + session audit logs |
| Provisioning records | STSAFE-A110 · SE050 · OPTIGA Trust M | batch_id + cert serials + policy_id snapshot |
FAQs (Security & Compliance)
These FAQs focus on device-side security controls and compliance evidence. Interface protocols and gateway operations are intentionally out of scope.