Safety Island / HSM for ADAS and Autonomous Driving

I use this page to plan the safety island / HSM in my ADAS compute platform. It helps me turn abstract security and safety requirements into concrete hardware roles for secure boot, key storage, crypto acceleration, fault-injection detection and integrity signals that feed the rest of the ECU.

  • Secure boot enforcement
  • Key storage and key vault management
  • Crypto acceleration for OTA, V2X and data protection
  • Fault-injection detection and tamper reaction
  • Integrity measurement and status signals into safety logic

What I mean by a “Safety Island / HSM” in an ADAS ECU

When I say “safety island / HSM” in an ADAS ECU, I do not mean a generic CPU with an AES engine bolted on. I mean a dedicated hardware root of trust with its own execution environment, its own key storage and clear control over how my ECU boots and proves its integrity.

This safety island sits alongside my application cores, not inside their normal software context. It owns the secure boot flow, guards my long-lived secrets and exposes a small, hardened interface for the rest of the platform to use. If the main CPU misbehaves, the safety island is still able to react.

For my ADAS and autonomous driving projects I expect four things: an isolated execution engine, a dedicated key vault, a crypto engine that can keep up with my traffic and clear status or fault signals that plug into my safety mechanisms.

Common misunderstandings vs. what I actually mean:

  • “My MCU has AES so that is already a safety island.” A real safety island gives me a separate key vault, its own execution context and control of secure boot decisions, not just a crypto function inside the main CPU.
  • “If the software library handles security, the hardware is good enough.” The hardware isolation, NVM layout, fault detection and narrow interfaces are what really define a safety island / HSM for my ADAS ECU.

[Diagram: Safety island and HSM inside an ADAS ECU. An isolated safety island / HSM, with its secure boot root of trust, key storage in OTP / secure NVM, crypto engine (AES, ECC, hash) and fault and integrity detection, sits alongside the application cores and memories inside the ECU, connected to external flash and a safety PMIC.]
I treat the safety island / HSM as an isolated hardware block alongside my application cores, with its own key vault, crypto engine and fault or integrity outputs into the rest of the ECU.

Why I cannot just rely on CPU security features

In low-risk ECUs I can sometimes get away with basic CPU security features and a software library. For ADAS and autonomous driving compute I treat that as a starting point, not the finish line. The threats, regulatory expectations and long service life push me toward a dedicated safety island / HSM.

Threats I care about

  • Bootloader or firmware being modified so the ECU runs code that was never signed off for ADAS use.
  • OTA, V2X, map or model encryption keys leaking, which would let attackers clone or tamper with my system.
  • Power or clock glitching that skips critical checks and lets untrusted code start or safety logic be bypassed.
  • Logs and counters being reset so I lose evidence of tampering or repeated attack attempts.

Limitations of basic CPU security features

  • Software-only secure boot is hard to trust if the same CPU and flash can modify the verification code or keys.
  • Generic external flash and NVM are easy to probe or dump and are not designed as a long-term key vault.
  • If my main CPU is hung, compromised or clock-glitched, it cannot be the only element judging its own integrity.
  • There is often no dedicated, hardened path to raise a safe-state request into safety monitors or PMICs.

Safety vs cybersecurity: where the safety island fits

Functional safety cares about random faults not causing unreasonable risk. Cybersecurity cares about motivated attackers not being able to steer my ECU. The safety island / HSM lives in the overlap: it anchors secure boot, protects keys, detects fault-injection attempts and exposes status signals that my safety concept can react to.

[Diagram: Why basic CPU security is not enough for ADAS. Threats on the left (tampered boot / firmware, key and secret leakage, fault-injection glitches, lost logs and evidence), basic CPU security features in the middle (AES in the main core, software secure boot, generic flash for code and data) and the dedicated safety island / HSM on the right (hardware-enforced secure boot, isolated key vault, fault-injection detection, integrity and status into safety logic), with arrows showing which threats remain uncovered until the safety island is added.]
This is how I think about the gap: basic CPU features cover some threats, but a dedicated safety island / HSM is what closes the loop for tampered code, leaked keys, glitch attacks and long-term integrity reporting.

Block diagram – Where the Safety Island sits in my ADAS compute

In my ADAS compute design I treat the safety island / HSM as a visible hardware block, not as a hidden software feature. I place it alongside the application cores inside the SoC, connect it directly to external flash and the safety PMIC and give it clear status lines back into my safety concept.

The diagram below is how I explain this layout to my team: external flash and the safety PMIC sit on the left, the ADAS SoC in the middle and vehicle networks and sensors on the right. The safety island sits inside the SoC with its own secure boot controller, key storage, crypto engine and fault or integrity outputs.

[Diagram: Where the safety island sits in my ADAS compute. External flash / NVMe and the safety PMIC on the left, the ADAS SoC in the middle with application cores and an isolated safety island / HSM (secure boot controller, key storage in OTP / secure NVM, crypto engine for AES, ECC and hashing, fault monitors, integrity outputs), and vehicle networks and sensors on the right. Lines mark code and key paths, safe-state / reset requests and integrity / status signals into the application.]
This is where I place the safety island: next to my application cores, wired directly to external flash and the safety PMIC, with integrity and safe-state signals feeding the rest of the ADAS ECU.

How I architect secure boot and runtime integrity

For ADAS compute I do not treat secure boot as a one-time box to tick. I design it as a sequence where the safety island holds control from power-on until my ADAS stack is running, and then continues to protect the platform during runtime and over OTA updates.

Power-up & secure boot sequence

  1. Power on and reset: the safety PMIC brings rails and clocks up in a controlled way and releases reset into the safety island and Boot ROM.
  2. HSM self-check: the safety island runs its own self-test, unlocks its key vault and makes sure its crypto and fault monitors are ready.
  3. Boot ROM and first-stage bootloader: the HSM and Boot ROM together verify the first boot stage stored in external flash or NVMe before any complex code runs.
  4. OS / hypervisor check: the HSM verifies the operating system, hypervisor or safety RTOS image that will host my ADAS applications and safety tasks.
  5. ADAS and safety apps check: the HSM validates the ADAS application stack and any safety-related software before allowing it to start.
  6. Handover or safe state: if all checks pass the HSM releases an “OK to run” condition; if any step fails, it requests a safe state and records what went wrong.
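The six steps above can be sketched as a chain of verify-then-advance checks: each stage is validated before the next is allowed to run, and the first failure diverts to a safe state. A minimal Python sketch, with SHA-256 digests standing in for real signature verification; the stage names, digest table and return values are illustrative assumptions, not any vendor's boot API:

```python
import hashlib

# Hypothetical golden digests; in hardware these would be anchored in the
# HSM's key vault / OTP, not in mutable flash.
TRUSTED_DIGESTS = {
    "bootloader": hashlib.sha256(b"bootloader-image-v1").hexdigest(),
    "os_hypervisor": hashlib.sha256(b"os-image-v1").hexdigest(),
    "adas_apps": hashlib.sha256(b"adas-stack-v1").hexdigest(),
}

def verify_stage(name: str, image: bytes) -> bool:
    """Stand-in for the HSM's signature check on one boot stage."""
    return hashlib.sha256(image).hexdigest() == TRUSTED_DIGESTS[name]

def secure_boot(images: dict) -> str:
    """Walk the boot chain in order; stop at the first failed stage."""
    for stage in ("bootloader", "os_hypervisor", "adas_apps"):
        if not verify_stage(stage, images[stage]):
            # In hardware this would raise a safe-state request and log the stage.
            return f"safe_state:{stage}"
    return "ok_to_run"

images = {
    "bootloader": b"bootloader-image-v1",
    "os_hypervisor": b"os-image-v1",
    "adas_apps": b"adas-stack-v1",
}
print(secure_boot(images))        # → ok_to_run
images["adas_apps"] = b"tampered"
print(secure_boot(images))        # → safe_state:adas_apps
```

The important property is that the decision logic and digests sit outside the code being checked; a compromised application image cannot skip its own verification step.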

Runtime integrity & update handling

  • During runtime I let the safety island perform periodic or event-based integrity checks on key code segments, using hashes or challenge-response patterns anchored in its own key material.
  • When an OTA update arrives, the new image is stored in a staging area of external flash or NVMe. The HSM treats it as untrusted until it validates the metadata, signatures and version.
  • Only after the HSM has verified the new image and updated its monotonic counters do I switch the active boot slot or configuration, which prevents simple rollback attacks.
  • Whenever a verification fails, the safety island refuses to activate the new image, logs the event and, if necessary, signals my safety mechanisms so the ECU can fall back to a defined safe behavior.
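The anti-rollback rule in the last two bullets reduces to very little logic: accept an update only if its version is strictly greater than the HSM's monotonic counter, then advance the counter before switching slots. A hedged sketch, with the counter modeled as a simple class (a real counter lives in OTP or secure NVM and can physically only increase):

```python
class MonotonicCounter:
    """Models an HSM monotonic counter: it can only move forward."""
    def __init__(self, value: int = 0):
        self._value = value

    @property
    def value(self) -> int:
        return self._value

    def advance_to(self, new_value: int) -> None:
        if new_value <= self._value:
            raise ValueError("monotonic counter can never decrease")
        self._value = new_value

def accept_ota_image(image_version: int, counter: MonotonicCounter) -> bool:
    """Anti-rollback rule: only strictly newer versions are activated."""
    if image_version <= counter.value:
        return False          # stale or replayed image: refuse and log
    counter.advance_to(image_version)
    return True               # safe to switch the active boot slot

ctr = MonotonicCounter(value=7)
print(accept_ota_image(8, ctr))   # newer image → True
print(accept_ota_image(6, ctr))   # rollback attempt → False
```

Because the counter sits inside the HSM, application software cannot reset it to make an old, vulnerable image acceptable again.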

[Diagram: Secure boot and runtime integrity flow anchored by the safety island. A vertical flow from power-on / reset through HSM self-check (keys, crypto, monitors), Boot ROM and bootloader image and signature checks, OS / hypervisor load and verify, ADAS / safety application verification, to a final release-to-run or safe-state request. Side branches lead to a safe-state and error-log block that records the failed step and raises a safety request, and to an OTA path where new images land in a staging slot and are verified by the HSM.]
I design secure boot as a sequence controlled by the safety island, from power-on through OS and ADAS apps. The same hardware root of trust also validates OTA images and drives safe-state and logging when checks fail.

Design hooks I need to think about when selecting a safety island / HSM

When I read a safety island or HSM datasheet, I do not just tick boxes for “AES” or “secure boot”. I translate the numbers and features into design hooks: key storage needs, crypto throughput, fault-injection coverage and how the block is isolated and connected in my ADAS ECU.

Key storage & NVM planning

  • I start by listing every key and certificate my ECU will carry: boot keys, identities, OTA, V2X, map and model keys and any fleet-level roots.
  • I estimate how many keys I need per ECU and how many I may need over its lifetime so the key vault does not run out of space.
  • I decide which values must live in OTP or eFuse (roots, monotonic counters) and which can live in secure flash as renewable certificates or feature keys.
  • I require a monotonic counter inside the HSM so version and rollback rules cannot be reset by application software.
  • I check NVM endurance against my expected number of key and certificate updates over the full vehicle life.
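As a worked example of the sizing bullets above, here is how I would budget a vault. All counts and sizes are illustrative assumptions for one hypothetical ECU, not values from any datasheet:

```python
# Rough key-vault sizing: entries are (count, bytes each).
# All numbers are illustrative assumptions for one hypothetical ADAS ECU.
vault_plan = {
    "root_keys_otp":      (4,   32),   # ECC-256 roots burned into OTP
    "ecu_identity_certs": (2,  600),   # device certificates in secure flash
    "ota_keys":           (4,   32),
    "v2x_certs":          (20, 400),   # rotating pseudonym certificates
    "map_model_keys":     (8,   32),
}

total_bytes = sum(count * size for count, size in vault_plan.values())
lifetime_margin = 2.0                  # headroom for rotation and new services
required = int(total_bytes * lifetime_margin)

print(f"planned: {total_bytes} B, with margin: {required} B")
print("fits a 32 KB vault:", required <= 32 * 1024)
```

Even a crude budget like this catches the common failure mode: a vault sized for day-one keys that runs out of space once certificates start rotating in the field.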

Crypto performance planning

  • I start from my OTA and diagnostics bandwidth and update time targets, then back-calculate how much AES/GCM and ECDSA or similar signature throughput the HSM must sustain.
  • I only ask for the crypto algorithms I really need for OTA, V2X and data protection instead of enabling every option in the catalog.
  • I treat the hardware RNG or TRNG as its own component and look for health tests, entropy quality and clear integration into my key generation flows.
  • I check whether the crypto engines can handle concurrent sessions from several domains or VMs without becoming a bottleneck.
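The back-calculation in the first bullet is simple arithmetic, but writing it out keeps the RFQ number honest. A sketch with assumed inputs (a 4 GiB OTA image and a 10-minute decrypt-and-verify budget, both illustrative):

```python
# Back-calculate minimum AES-GCM throughput from update size and time budget.
# Image size and time budget are assumptions for illustration.
image_bytes = 4 * 1024**3        # 4 GiB OTA image
time_budget_s = 10 * 60          # decrypt + verify must fit in 10 minutes

min_mbps = image_bytes * 8 / time_budget_s / 1e6
print(f"minimum sustained AES-GCM throughput: {min_mbps:.0f} Mbit/s")
```

The result (roughly 57 Mbit/s under these assumptions) becomes the floor I write into the RFQ, not the peak number a datasheet quotes for one ideal block size.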

Fault injection detection capabilities

  • I map which fault-injection vectors I care about: voltage glitches, clock glitches, temperature extremes or other environmental stresses.
  • I require a clear reaction path: interrupts, reset requests and safe-state signals that reach my safety monitors or safety PMIC.
  • I look for event logging or monotonic counters that record fault-injection attempts so I can trend or flag suspicious patterns.
  • I check how configurable the thresholds are so I can balance coverage against false positives in my real vehicle environment.
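The reaction path in these bullets can be sketched as a small state machine: count the event, raise an interrupt, and escalate to a safe-state request once a threshold is crossed. The class name, threshold and return strings are assumptions for illustration, not a real driver interface:

```python
class GlitchMonitor:
    """Sketch of an HSM fault-injection reaction path."""
    def __init__(self, escalation_threshold: int = 3):
        self.event_count = 0                # would be a monotonic counter
        self.threshold = escalation_threshold
        self.log = []

    def on_glitch(self, vector: str) -> str:
        self.event_count += 1
        self.log.append(vector)             # evidence for fleet-level trending
        if self.event_count >= self.threshold:
            return "request_safe_state"     # hardened line to safety PMIC / voter
        return "raise_interrupt"            # notify software, keep running

mon = GlitchMonitor(escalation_threshold=3)
print(mon.on_glitch("voltage"))   # raise_interrupt
print(mon.on_glitch("clock"))     # raise_interrupt
print(mon.on_glitch("voltage"))   # request_safe_state
```

Making the threshold a parameter mirrors the last bullet: I want to tune escalation against false positives measured in the real vehicle environment, not hard-coded in silicon.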

Isolation and interface

  • I note whether the HSM sits on AXI, AHB, PCIe or a mailbox-style interface and how that maps into my software architecture and isolation plan.
  • I check if the HSM can live in its own power and clock domain so it can stay trusted when the main core is stressed or misbehaving.
  • I plan how JTAG or SWD access will be locked down in production so my safety island does not become a new debug backdoor.
  • I define a small, stable API surface from the HSM into my software so only specific services are exposed to applications and middleware.
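The last bullet is about keeping the attack surface small. One way to express the idea is a mailbox-style facade that exposes only named services and never raw key material; everything here (the service names, the mailbox model, the echoed response) is an assumption for illustration, not any vendor's driver API:

```python
# Narrow HSM API surface: applications see only these services, never the
# vault or raw keys. Service names are illustrative.
class HsmMailbox:
    _SERVICES = {"sign", "verify", "get_random", "attest"}

    def call(self, service: str, payload: bytes) -> bytes:
        if service not in self._SERVICES:
            raise PermissionError(f"service not exposed: {service}")
        # A real implementation would marshal the request over the mailbox
        # interface and return the HSM's response; here we just echo.
        return b"hsm-response:" + payload

hsm = HsmMailbox()
print(hsm.call("sign", b"image-digest"))
try:
    hsm.call("export_key", b"root")    # refused: not in the API surface
except PermissionError as e:
    print(e)
```

The design point is that middleware and applications go through one choke point, which is much easier to review, fuzz and lock down than many ad-hoc interfaces.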

[Diagram: Design hooks for selecting a safety island / HSM. A central design-focus node (map datasheet numbers to real needs) surrounded by four blocks: key storage and NVM planning (keys per ECU and lifetime, OTP vs secure flash, monotonic counters, NVM endurance), crypto performance (OTA / diagnostics bandwidth, AES / GCM / signatures, RNG / TRNG quality, concurrent sessions), fault detection (voltage and clock glitches, interrupt / reset / safe-state paths, event logs and counters, threshold tuning) and isolation and interface (AXI / AHB / PCIe / mailbox, power and clock domains, debug lock strategy, narrow API surface).]
These are the hooks I map from any safety island or HSM datasheet: key storage and NVM, crypto performance, fault detection and how the block is isolated and connected in my ADAS ECU.

Example IC roles and vendor mapping

When I look at possible platforms, I group them by how the safety island or HSM is provided: built into a large ADAS SoC, added as an external secure element or combined with power and safety in a system solution. This helps me decide whether I am extending a legacy ECU or committing to a full ADAS compute platform.

  • SoC with built-in HSM: I rely on this pattern when I already commit to a high-end ADAS or gateway SoC. The internal safety island zone is tightly coupled to the CPU complex and memory, but I still check whether its key vault, counters and fault detection match my requirements.
  • External secure element / TPM: I use external secure elements or TPM-style devices when I cannot change the main MCU or SoC but still need secure boot, key storage or basic signing for a legacy or lower-cost ECU.
  • Combined safety island + PMIC solution: I see these as platform solutions where security and power safety are designed together. I still evaluate the security island on its own merits while keeping the detailed power sequencing and safety functions on my PMIC-focused pages.

[Diagram: IC role mapping for safety island and HSM options. Three roles placed along an axis from lower to higher integration: external secure element / TPM (add-on HSM over a serial bus, fits legacy MCUs, easiest retrofit for existing ECUs), SoC with built-in HSM (tight coupling to the cores, high-bandwidth access, the mainstream choice for ADAS SoCs) and combined safety island + PMIC (security and power aligned, tighter vendor ecosystem, the most integrated and platform-driven option).]
I group safety island and HSM options by how they are delivered: external secure elements, SoCs with built-in HSMs and combined safety island plus PMIC solutions, then match them to my ECU and platform strategy.

BOM & procurement notes

When I prepare an RFQ or BOM for a safety island or HSM, I turn my technical decisions into explicit fields that suppliers can respond to. The checklist below is how I write those requirements so there is no ambiguity about secure boot, key storage, crypto performance, fault detection and isolation expectations.

  • Secure boot enforcement
    BOM / RFQ: Required: hardware-enforced secure boot controlled by the safety island / HSM.
    Notes: In the RFQ I explicitly ask for hardware-enforced secure boot driven by the safety island, not just a software library running on the main CPU.
  • Key storage size
    BOM / RFQ: ≥ 32 KB secure NVM for roots, ECU identities, OTA, V2X and map/model keys.
    Notes: I treat the key vault as a dedicated resource and include both root keys and certificates, with headroom for lifetime updates.
  • Crypto algorithms
    BOM / RFQ: AES-GCM, SHA-2, ECDSA (e.g. P-256) and optional ECDH for key exchange.
    Notes: I only ask for the algorithms I actually use for OTA, V2X and data protection, and I state that ECB-only or legacy modes are not sufficient.
  • Crypto throughput
    BOM / RFQ: AES-GCM throughput ≥ X Mbps and ECDSA verify ≥ Y ops/s at the target clock.
    Notes: I derive these numbers from my OTA and diagnostics bandwidth and update-time targets instead of accepting generic marketing values.
  • Fault-injection detection
    BOM / RFQ: Voltage and clock glitch detection, optional temperature / environmental sensors, with event logging or monotonic counters.
    Notes: I specify which vectors I care about and ask for logging or counters so repeated fault-injection attempts can be observed at vehicle or fleet level.
  • Isolation
    BOM / RFQ: Dedicated power and clock domains where possible; bus firewall or access control from the main CPU / SoC.
    Notes: I call out isolation explicitly so the HSM is not treated as just another crypto block sharing all rails and buses with the main cores.
  • Interface & integration
    BOM / RFQ: On-chip HSM via AXI/AHB for high-speed access; external secure element via I²C/SPI if used.
    Notes: I state my preferred integration pattern so suppliers know whether I am planning a built-in HSM in the SoC or an external secure element on a serial bus.
  • RNG / TRNG requirements
    BOM / RFQ: Hardware TRNG with health tests and entropy quality suitable for key generation and protocols.
    Notes: I treat the random number source as a separate requirement and ask for health-test details and standards alignment where applicable.
  • Safety / security standards
    BOM / RFQ: ISO 26262 ASIL-B/C/D capable at system level; security concept aligned with ISO/SAE 21434.
    Notes: I ask for use in ASIL-capable systems and request safety manuals and security guidance so I can integrate the HSM into my overall safety and cybersecurity concepts.
  • Lifetime & environment
    BOM / RFQ: Automotive-grade temperature range, data retention and endurance consistent with vehicle lifetime.
    Notes: I confirm that NVM retention, endurance and operating conditions match my ADAS ECU lifetime and environment assumptions, not just lab conditions.


FAQs about planning and sourcing a safety island / HSM

These are the questions I keep coming back to when I plan or source a safety island or HSM for ADAS compute. I use them as a quick checklist during architecture reviews, RFQ discussions and supplier calls so I do not miss any of the security and safety hooks I care about.

When do I truly need a dedicated safety island instead of relying only on basic CPU security features?

In my projects I start looking for a dedicated safety island when secure boot, key storage and fault detection become shared services for several domains, or when ASIL levels are high. If the main CPU security block cannot be isolated, independently monitored or given its own resources, I treat it as insufficient.

Can an external HSM IC realistically support an ASIL D safety goal, and what documentation should I ask vendors for?

I treat an external HSM as a candidate for ASIL D systems only when the vendor provides a safety manual, FMEDA data and clear integration guidance. In my RFQ I ask how the device fits into a system safety concept and which diagnostics and failure modes it actually supports.

What conditions must be met before I treat secure boot as hardware enforced rather than just a software convention?

For me secure boot is hardware enforced only when the root of trust and decision logic live in the safety island or boot ROM, not in changeable application code. The boot chain must be unskippable, keys must be protected in hardware and failures must lead to a defined safe state.

How do I estimate the crypto throughput I need for OTA updates and diagnostics on my ADAS ECU?

I work backwards from my largest update image, my available network bandwidth and the maximum allowed update time. From there I calculate how many megabytes per second need to be decrypted and verified. That figure becomes my minimum AES and signature throughput requirement for any safety island or HSM.

How should I decide whether keys and certificates live in OTP style memory or in secure flash when I plan my vault?

I keep long term roots, monotonic counters and debug lock fuses in OTP style memory that cannot be rolled back. Certificates and renewable keys usually live in secure flash managed by the HSM. I size the vault so certificates can be rotated over vehicle life without running out of space.

After the HSM detects a fault injection attempt such as a voltage or clock glitch, what should the typical system reaction look like?

In my safety concept a detected fault injection is never ignored. The HSM should raise an interrupt, increment an event counter and where needed request a reset or safe state through the safety PMIC or voter. I also log attempts so I can spot patterns across vehicles or over time.

How can I add an external HSM to an existing ECU platform without redesigning the entire software and safety architecture?

I start by limiting the scope to what the external HSM can realistically own, such as secure boot, key storage and a small set of crypto services. Then I introduce a narrow driver and API layer so existing applications call the HSM through one place instead of touching many interfaces.

How should I wire HSM health or integrity signals into my safety PMIC or safety monitor so they actually influence safe state decisions?

I treat HSM health and integrity outputs as inputs to my safety monitors, not as informational flags. In practice that means routing them into the safety PMIC, a safety microcontroller or voter logic so repeated failures can trigger resets, degraded modes or shut down paths according to the safety concept.

How do I plan my JTAG or SWD debug strategy so bring up stays practical but production units do not ship with an open backdoor?

During bring up I rely on full JTAG or SWD access, but I also plan from day one how it will be locked for production. That usually means permanent fuses or secure debug authentication managed by the HSM. My BOM and manufacturing flow both document when and how the lock happens.

For an ADAS domain controller, what practical roles can the HSM play in V2X, map or AI model protection beyond secure boot?

In ADAS domain controllers I use the HSM as the anchor for V2X keys, map update signatures and protection of sensitive AI model parameters. It can sign or decrypt data streams, control which software images are trusted and provide evidence that configurations and models have not been tampered with.

How do I factor NVM endurance and lifetime for the HSM into a vehicle level lifetime that can easily exceed ten years?

I estimate how often keys, certificates and counters will be updated over the vehicle life and compare that to the rated endurance of the secure NVM. If the margin is small I redesign my update strategy, batch changes or increase vault size so I am not forced into premature field replacements.
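A quick margin check makes this concrete. The figures here are assumptions for illustration: a secure-flash sector rated for 100,000 program/erase cycles against weekly certificate rotations over a 15-year vehicle life:

```python
# NVM endurance margin check: all figures are illustrative assumptions.
rated_cycles = 100_000        # rated P/E cycles for the secure-flash sector
updates_per_year = 52         # weekly certificate rotation
vehicle_life_years = 15

lifetime_writes = updates_per_year * vehicle_life_years
margin = rated_cycles / lifetime_writes
print(f"writes over life: {lifetime_writes}, endurance margin: {margin:.0f}x")
```

With these numbers the margin is comfortable; if an estimate came out close to 1x, that is where I would batch updates, enlarge the vault or change the rotation strategy.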

How does my choice of security and evaluation standards such as FIPS, Common Criteria or ISO SAE 21434 change my supplier options?

My choice of standards directly shapes my supplier pool. Some devices target FIPS or Common Criteria and come with heavy certification evidence, while others focus on automotive safety and ISO SAE 21434 alignment. In my RFQ I state which evidence I really need so vendors can respond realistically.