ADAS Data Logger / EDR: Hold-Up Logging and Events
This page is my practical guide for planning an ADAS data logger and EDR on a production vehicle: I turn crash and ADAS event requirements into clear choices for signals, logging windows, storage and hold-up time, plus integrity, security and RFQ checklists I can send directly to suppliers.
What an ADAS Data Logger / EDR actually does
In ADAS development we usually start with powerful external data loggers that sit in the trunk or on a rack. They stream raw camera frames, lidar packets and full bus traffic for hours or days, feeding training pipelines, calibration tools and lab analysis. These units assume stable power, large removable SSDs and an engineer who can swap disks and manage terabytes of data.
A production ADAS Data Logger or Event Data Recorder (EDR) has a very different job. It lives permanently inside the vehicle, shares space and power with the ADAS domain controller and must survive crashes, brown-outs and harsh temperature cycles. Instead of recording everything, it focuses on a short time window around critical events, writing only essential signals into NVMe or UFS so that the last seconds before power loss are preserved.
This page focuses on in-vehicle ADAS loggers and EDRs in series-production programs. The emphasis is on how to structure event windows, ring buffers and hold-up-based flush strategies so that evidence survives a crash or brown-out. External development recorders, lab rigs and training-data boxes are referenced only as a contrast, not as the primary design target.
- Development loggers: maximum bandwidth, long-duration recording, focus on raw signals for training and debugging, usually with external power and bulk SSD storage.
- Production EDR: constrained storage and power, short pre/post event windows, must meet regulatory and OEM EDR specifications and remain readable long after the crash.
- ADAS focus: this topic assumes the logger is tightly coupled to an ADAS compute domain and captures only the signals needed to reconstruct system behaviour around a safety-relevant event.
Some of the signals that eventually end up in the EDR originate from HD map, localization and TSN-based sensor backbones. Here we treat them as input streams to the logger; the map engines, TSN switches and sensor-fusion algorithms themselves are covered in their own topics.
What needs to be captured: signals, buses and time window
Once we distinguish development logging from a production EDR, the next question is much more practical: what exactly needs to be recorded, and for how long? The answer drives storage capacity, NVMe or UFS selection, ring-buffer size and even the hold-up budget you need from the power system.
A typical ADAS EDR does not store raw camera or lidar frames. Instead it preserves a compact set of signals that let you reconstruct vehicle motion, driver inputs and ADAS decisions around a safety-relevant event. Most of these signals already exist on in-vehicle networks; the logger's job is to sample them at an appropriate rate and keep a rolling history so that a defined pre- and post-event window is always available.
- Vehicle dynamics and driver input: speed, steering angle, brake pressure, pedal position and yaw-rate signals from CAN or FlexRay form the backbone of any EDR dataset.
- ADAS status and object summaries: high-level states such as AEB active, lane-keeping status or obstacle classifications are usually exported over Automotive Ethernet or a gatewayed CAN channel.
- Acceleration and orientation: IMU-based longitudinal and lateral acceleration, plus roll and pitch information, provide context for crash severity and vehicle attitude.
- Power and brown-out indicators: ignition status, battery and domain supply voltages, brown-out flags and reset reasons tie the event to the actual power conditions the EDR had to survive.
- Optional sensor summaries: depending on the program, you may also log compressed camera metadata, radar target lists or road-friction estimates when they materially change how a crash is interpreted.
| Signal type | Typical rate | Bytes / sample | Mandatory? |
|---|---|---|---|
| Speed, steering, brake (CAN/FlexRay) | 10–50 Hz | 8–16 B | Yes for most EDR specs |
| ADAS mode and object status (Ethernet) | 10–20 Hz | 32–64 B | Often mandatory |
| IMU acceleration / rotation | 100–500 Hz | 12–24 B | Strongly recommended |
| Supply voltage / brown-out flags | 50–100 Hz | 4–8 B | Yes for power analysis |
| Compressed sensor summaries | Frame-based | 1–5 kB | Program-dependent |
For a given program you can estimate the required ring-buffer size by summing the per-signal bandwidth and multiplying by the desired pre- and post-event durations. That calculation, plus your hold-up time, determines whether an NVMe or UFS-based logger can reliably flush the entire window during a crash or brown-out sequence.
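This estimate can be sketched as a short script. The signal names, rates and bytes-per-sample below are illustrative placeholders in the ranges from the table above, not a mandated field set:

```python
# First-order ring-buffer sizing: sum per-signal bandwidth, multiply by
# the desired pre+post event window and a margin for headers and metadata.
# All signal names, rates and sample sizes here are illustrative assumptions.

SIGNALS = {
    # name: (rate in Hz, bytes per sample)
    "vehicle_dynamics_can": (50, 16),
    "adas_status_eth": (20, 64),
    "imu": (200, 24),
    "power_flags": (100, 8),
}

def ring_buffer_bytes(signals, t_pre_s, t_post_s, margin=1.5):
    """Aggregate bandwidth times the total window length, plus margin."""
    rate_bps = sum(hz * size for hz, size in signals.values())
    return int(rate_bps * (t_pre_s + t_post_s) * margin)

size = ring_buffer_bytes(SIGNALS, t_pre_s=6.0, t_post_s=3.0)
print(f"{size} B (~{size / 1024:.0f} kB)")  # 103680 B (~101 kB)
```

The same function can be rerun per event class with its own `t_pre`/`t_post` pair to compare candidate windows against the available hold-up budget.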
Some IMU and localization fields are generated by dedicated fusion engines and HD map processors. In this topic they are treated purely as input signals to the logger; the fusion details and map-matching algorithms belong to the HD Map & Localization Assist and sensor fusion topics.
Architecture: from ADAS compute to NVMe/UFS with event triggers
At system level an ADAS data logger or EDR is a narrow bridge between the ADAS compute domain and a piece of non-volatile storage. On one side it sees time-stamped signals coming from ECUs, networks and sensors. On the other side it has to turn those samples into a compact event window written into NVMe or UFS before power disappears. The internal architecture can be viewed as three main blocks: a logging interface at the ADAS compute boundary, a ring buffer with trigger logic, and a storage backend.
The ADAS SoC or domain controller exports selected signals over internal buses or high-speed interfaces. Some implementations use PCIe to connect a dedicated NVMe SSD, others keep the logger inside the SoC and talk directly to a UFS device. In both cases the EDR logic sees a stream of time-aligned samples: vehicle dynamics, ADAS mode and object summaries, power status, crash and diagnostic flags. These samples are written into a ring buffer that covers the configured pre- and post-event durations.
Under normal driving conditions the ring buffer runs in a steady state: older entries are continuously overwritten as new ones arrive. A separate trigger block monitors events such as airbag deployment from the crash ECU, high-g acceleration, critical ADAS fault codes or brown-out indicators. When any configured trigger fires, the logger freezes the relevant window in the ring buffer, marks the pre- and post-event bounds and initiates an immediate flush to NVMe or UFS using the available bandwidth.
All of this only works if the samples share a common notion of time. The logger does not invent its own clock; instead it consumes time stamps from the vehicle's synchronization domain, typically based on PTP, 802.1AS or another time-distribution scheme. From the logger's perspective this appears as a monotonic time base attached to each sample so that signals from different ECUs and network segments line up correctly in the stored event window.
- ADAS compute boundary: exports selected signals and status fields with a unified time stamp and provides access to PCIe or UFS for storage.
- Ring buffer and trigger logic: continuously records samples, evaluates trigger conditions and freezes the configured event window when a condition is met.
- Storage backend: NVMe or UFS media that accepts burst writes to persist the frozen window within the available hold-up time.
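The freeze behaviour of the ring buffer and trigger block can be illustrated with a minimal sketch; the class name, sample counts and API below are invented for illustration, not taken from any real EDR implementation:

```python
from collections import deque

class EventRingLogger:
    """Minimal sketch of pre-event history plus post-event capture.
    Names and window sizes are illustrative, not from a real EDR spec."""

    def __init__(self, pre_samples, post_samples):
        self.pre = deque(maxlen=pre_samples)  # oldest entries overwritten
        self.post_needed = post_samples
        self.frozen = None                    # becomes the event window

    def push(self, sample):
        if self.frozen is None:
            self.pre.append(sample)           # steady-state ring buffer
        elif self.post_needed > 0:
            self.frozen.append(sample)        # fill the post-event part
            self.post_needed -= 1

    def trigger(self):
        """Freeze the pre-event history; post-event samples keep arriving."""
        if self.frozen is None:
            self.frozen = list(self.pre)

log = EventRingLogger(pre_samples=5, post_samples=2)
for t in range(10):
    log.push(t)
log.trigger()          # freeze the last 5 samples: 5..9
log.push(10)
log.push(11)
print(log.frozen)      # [5, 6, 7, 8, 9, 10, 11]
```

In a real logger `push` would run on every time-stamped sample and `trigger` would be driven by the crash ECU, fault codes or brown-out flags, with the frozen window handed to the flush path.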
Time distribution, TSN scheduling and network-level synchronization are covered in the Time Sync & Interfaces topic. Here we focus on how the logger consumes those synchronized signals and turns them into a well-defined event window in non-volatile storage.
Power-fail / brown-out events and hold-up logging
In a real vehicle power never fails at a convenient moment. Ignition can be turned off while the ADAS domain is still processing data, a wiring fault can pull a rail down abruptly, or a crash can cause the high-voltage system to disconnect. From the logger’s point of view these all look like brown-out events: the supply voltage starts to fall and there is only a short, finite time window to push the frozen event window into non-volatile storage.
The basic sequence is simple. During normal operation the logger writes to the ring buffer and monitors brown-out indicators and trigger sources. When a trigger occurs — for example an airbag deployment or a critical ADAS fault — the logger marks the current pre- and post-event window. If at any point the supply voltage crosses a brown-out threshold, the logger switches into an emergency flush mode and writes the marked window to NVMe or UFS as fast as possible, using the remaining hold-up energy in the power system.
The amount of data that can be written in this phase is bounded by write bandwidth and hold-up time. To first order you can approximate the flush capacity as bytes_writeable ≈ write_bandwidth × t_holdup. If the ring buffer window is larger than this capacity, some part of the window will not make it into storage. That forces a design trade-off between window length, signal selection and the type of storage device you choose.
- With a fast NVMe drive and a generous hold-up interval, it may be realistic to flush the entire event window, even if it includes higher-rate IMU data and ADAS summaries.
- With a slower eMMC or limited hold-up, the same window might be only partially written, forcing you to prioritise which signals to keep and whether to compress some of them into metadata.
- In practice many designs define a minimum guaranteed window that can always be written under worst-case hold-up, and an extended window that is written opportunistically when bandwidth allows.
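The trade-off above can be checked with a small sketch. The bandwidth, hold-up and derating figures are illustrative assumptions only; real values must come from storage characterisation and the power design:

```python
def flush_capacity_bytes(write_bw_mb_s, holdup_ms, derate=0.5):
    """bytes_writeable ~ write_bandwidth * t_holdup, with a derating
    factor for end-of-life and high-temperature bandwidth loss."""
    return int(write_bw_mb_s * 1e6 * derate * holdup_ms / 1000.0)

def plan_flush(window_bytes, guaranteed_bytes):
    """Split the window into a guaranteed part and an opportunistic
    remainder that is only written when bandwidth allows."""
    if window_bytes <= guaranteed_bytes:
        return window_bytes, 0
    return guaranteed_bytes, window_bytes - guaranteed_bytes

# Illustrative numbers: a slow eMMC at 40 MB/s burst, 50 ms hold-up,
# 50 % derating for worst-case temperature and aging.
cap = flush_capacity_bytes(write_bw_mb_s=40, holdup_ms=50)
kept, deferred = plan_flush(window_bytes=2_000_000, guaranteed_bytes=cap)
print(cap, kept, deferred)  # 1000000 1000000 1000000
```

With these numbers only half of a 2 MB window survives the worst case, which is exactly the situation that forces signal prioritisation or a faster storage device.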
How much hold-up time can be provided and how the power stage achieves it is addressed in the Hold-Up & Safe-State Power topic. Here we concentrate on what the logger does with that limited time budget and how storage choice and window length must be sized so that the most valuable data is flushed before the rails collapse.
NVMe vs UFS (vs eMMC): endurance, bandwidth and logging patterns
A production ADAS logger stresses storage in a very specific way. It does not behave like an infotainment download cache or a simple black-box recorder. For most of its life it performs a continuous, low-to-medium bandwidth ring-buffer write pattern, and then occasionally it executes a very aggressive burst flush when an event window must be preserved under limited hold-up time. Choosing between NVMe, UFS and eMMC is therefore less about peak benchmark numbers and more about how each device type handles this workload over years of vehicle life.
NVMe SSDs offer the highest bandwidth and usually the best behaviour for random writes, but they come with higher power consumption, stricter thermal requirements and a stronger need for careful mounting and cooling in the ADAS domain. UFS provides a more balanced profile: good burst throughput, a mobile-friendly power envelope and tight integration with many automotive SoCs. eMMC can still satisfy low-to-medium bandwidth EDR requirements in cost-sensitive programs, as long as the window size and signal set are aligned with its endurance and throughput limits.
Endurance and write amplification are just as important as raw speed. A logger that relies on very small, unaligned writes to maintain its ring buffer will quickly burn through the available TBW and may expose weak spots in the controller firmware. To avoid turning the EDR into an SSD killer, it is common to pre-allocate a dedicated logging area, align writes to erase or page boundaries and bound the number of event windows that can be stored or rewritten over the life of the vehicle. Temperature ratings and derating behaviour must also be considered, since ADAS storage often sits in one of the hottest zones of the vehicle.
| Storage type | Logger use case | Bandwidth profile | Endurance & thermal notes |
|---|---|---|---|
| NVMe SSD | High-end ADAS loggers with many sensors and wide windows | Very high burst write, strong random performance | Good TBW but sensitive to heat; needs cooling and aligned writes |
| UFS | Mainstream ADAS EDR coupled closely to the SoC | High sustained and burst throughput, optimised for mobile patterns | Balanced endurance and power; good fit for ring-buffer + flush |
| eMMC | Cost-sensitive EDR with short windows and minimal fields | Limited bandwidth, more sequential-friendly | Lower TBW and higher thermal stress; requires tight control of window size |
- Pre-allocate the logging region: keep the ring buffer within a fixed area to limit fragmentation and make wear more predictable.
- Align writes to device granularity: choose record sizes that map cleanly to pages or erase blocks so that each update causes minimal write amplification.
- Bound the number of stored windows: define how many crash and diagnostic events can be retained and rotated over the lifetime of the logger to keep total writes within the TBW budget.
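The alignment rule can be made concrete with a sketch: pick a fixed record size that divides the device's program-page size so each flush is a whole number of pages. The 4 KiB page and the record layout below are assumptions to be checked against the actual device datasheet:

```python
import struct

PAGE_BYTES = 4096       # assumed device program-page size; check datasheet

# Fixed-width record: timestamp (us), signal id, flags, payload.
# The layout is illustrative; the point is the fixed 32-byte size.
RECORD_FMT = "<QHH20s"  # 8 + 2 + 2 + 20 = 32 bytes
RECORD_BYTES = struct.calcsize(RECORD_FMT)

def records_per_page(page=PAGE_BYTES, rec=RECORD_BYTES):
    """Require the record size to divide the page so that no update
    straddles a page boundary and causes extra write amplification."""
    assert page % rec == 0, "record size must divide the page size"
    return page // rec

print(RECORD_BYTES, records_per_page())  # 32 128
```

A record size chosen this way keeps every ring-buffer write page-aligned, which is what bounds write amplification and makes TBW projections trustworthy.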
This section focuses on storage technology from the logger’s workload perspective. General memory hierarchy and flash selection for the whole vehicle are covered in the Memory Planning & Selection topic.
Event classes and how to size your logging window
Not every event deserves the same logging window. A high-energy crash needs several seconds of history to reconstruct driver behaviour, road conditions and ADAS decisions, while a transient brown-out may only need a short window to explain why the system reset. Sizing the pre- and post-event durations is therefore one of the most important design steps when planning an ADAS EDR.
It helps to start from event classes instead of individual signals. Typical categories include crash events where an airbag deploys, ADAS functional failures such as an AEB abort or loss of a critical sensor, and power-related events like brown-outs or unexpected resets. Each class has its own characteristic time scale and diagnostic goals, which lead naturally to a range for t_pre and t_post.
| Event class | Typical intent | Suggested t_pre | Suggested t_post |
|---|---|---|---|
| Crash / airbag deployment | Reconstruct driver inputs, vehicle dynamics and ADAS actions | 3–8 s | 2–4 s |
| ADAS functional failure (AEB/LKA/LCC) | Understand why a function disengaged or failed to trigger | 2–5 s | 1–3 s |
| Power brown-out / reset | Tie system resets to load changes and supply behaviour | 0.5–2 s | 0.5–1 s |
Once you have a candidate window for each event class, you can translate it into storage capacity. For each signal group in the EDR, determine its sampling rate and bytes per sample. Summing these across the selected signals and multiplying by the total window length yields a first-order estimate of the required capacity:
Required_capacity ≈ Σ( sampling_rate × bytes_per_sample ) × ( t_pre + t_post ) × margin
where margin accounts for protocol overheads, metadata and implementation details.
As an example, consider 50 Hz of CAN data at 16 bytes per sample, 200 Hz of IMU data at 12 bytes and 10 Hz of power status at 8 bytes, with a crash window of 6 s pre-event and 3 s post-event. The aggregate data rate is roughly 50×16 + 200×12 + 10×8 ≈ 3.3 kB/s. Over 9 s and with a safety margin of 1.5 this corresponds to roughly 44 kB for this subset. Adding ADAS summaries and optional metadata can raise this into the multi-hundred kilobyte or low megabyte range, which must then be checked against the hold-up and storage choices derived in the previous sections.
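A quick script makes this kind of estimate repeatable across event classes; the signal mix matches the example figures, and the function simply evaluates the capacity formula:

```python
def window_capacity_bytes(signals, t_pre_s, t_post_s, margin=1.5):
    """Required_capacity ~ sum(rate * bytes_per_sample)
    * (t_pre + t_post) * margin. Returns (rate B/s, capacity B)."""
    rate = sum(hz * size for hz, size in signals)
    return rate, int(rate * (t_pre_s + t_post_s) * margin)

# Example mix: 50 Hz CAN @ 16 B, 200 Hz IMU @ 12 B, 10 Hz power @ 8 B,
# crash window of 6 s pre-event and 3 s post-event.
mix = [(50, 16), (200, 12), (10, 8)]
rate_bps, capacity = window_capacity_bytes(mix, t_pre_s=6.0, t_post_s=3.0)
print(rate_bps, capacity)  # 3280 B/s, 44280 B (~44 kB)
```

Swapping in the shorter windows for functional-failure or brown-out events shows immediately how much cheaper those classes are to retain in larger numbers.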
This calculation does not attempt to restate every regulatory clause. Instead it provides a practical way to size windows for crash, functional and power events using the signal sets that matter most to your ADAS platform, while detailed field lists remain aligned with the applicable EDR standards.
Safety, regulations and data integrity hooks
An ADAS EDR is only useful if the data it records is trustworthy and still available when a vehicle reaches a workshop or test lab. That means the logger must do more than write bytes to storage. It needs explicit hooks for data integrity, anti-tamper measures and regulatory expectations, so that homologation and safety teams can sign off with confidence.
Data integrity: from records to complete windows
At the lowest level each record should be self-checking. That typically means a CRC or checksum per record, sequence numbers or monotonic counters and clear start/end markers for each event window. Above that, a simple journal or log-structured layout helps the logger distinguish between fully committed windows and partial fragments left behind by a power interruption. On the next power-up the logger can scan the journal, discard incomplete tails and present only consistent windows to downstream tools.
- Per-record integrity: CRC or checksum, timestamp and sequence number for each record make bit-level corruption easy to detect.
- Window framing: explicit headers and trailers for each event window allow the logger to reconstruct where a flush started and ended.
- Journal / commit markers: a simple commit flag or index protects against half-written windows when power fails mid-flush.
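The per-record framing can be sketched in a few lines. The start marker, field layout and payload are illustrative inventions; only the pattern — marker, sequence number, timestamp, payload, CRC — reflects the structure described above:

```python
import struct
import zlib

HDR = 0xEDA0  # illustrative start-of-record marker, not a real standard

def pack_record(seq, timestamp_us, payload: bytes) -> bytes:
    """Record = marker, sequence number, timestamp, payload, CRC32."""
    body = struct.pack("<HIQ", HDR, seq, timestamp_us) + payload
    return body + struct.pack("<I", zlib.crc32(body))

def check_record(raw: bytes) -> bool:
    """Per-record integrity: recompute the CRC over everything
    except the trailing CRC field and compare."""
    body, (crc,) = raw[:-4], struct.unpack("<I", raw[-4:])
    return zlib.crc32(body) == crc

rec = pack_record(seq=7, timestamp_us=1_000_000, payload=b"speed=23.4")
assert check_record(rec)
# Overwriting a payload byte is detected by the CRC check:
assert not check_record(rec[:-5] + b"\x00" + rec[-4:])
```

A window-level header/trailer plus a commit flag written only after the last record lands then lets the power-up scan separate fully committed windows from partial fragments.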
Security and anti-tamper considerations
Access to EDR data is usually restricted to authorised diagnostic tools and, in some cases, law-enforcement or certified labs. The logger should therefore rely on secure channels for reading and clearing events, and on cryptographic protection to make tampering detectable. Keys, certificates and signature operations belong in dedicated security hardware such as a safety island or HSM; the logger consumes those services to sign windows or compute authenticity tags.
- Controlled access: EDR read-out should go through a secure diagnostics or OTA gateway, not a raw file-system interface.
- Tamper evidence: event windows can be signed or covered by a MAC so that post-processing tools can detect changes or missing segments.
- Erase and overwrite policies: clearing events should follow defined diagnostic procedures rather than ad-hoc file deletion to avoid accidental or malicious loss of evidence.
Regulatory and OEM expectations: questions to align on
Regulatory frameworks and OEM standards define which events must be recorded, for how long data must be retained and under which conditions it may be accessed or cleared. Instead of restating full legal texts, it is more practical to use the EDR architecture as a checklist driver when talking to homologation, legal and safety teams.
- Event coverage: which crash, functional and power events must be captured, and how many occurrences of each must be retained?
- Field set: which signals are mandatory per event type according to the target regulations and OEM standards?
- Retention: how long must data remain available, and under what conditions may it be overwritten or cleared?
- Read-out process: which tools, connectors and formats are required for post-crash data extraction?
- Owner and privacy expectations: what constraints apply to personally identifiable information, and how is access logged or audited?
Cryptographic algorithms, key lifecycles and broader memory hierarchy choices are handled in the Safety Island / HSM and Memory Planning topics. Here the focus is on the integration hooks an EDR must provide so that integrity, security and regulatory teams can rely on its output.
Thermal, mechanical and placement considerations
Storage for an ADAS EDR rarely lives in a comfortable environment. It shares a housing with a high-power SoC, sits in a confined area with limited airflow and must survive years of temperature cycling and vibration. Even if bandwidth and capacity look acceptable on paper, the actual placement and cooling strategy can make or break the long-term reliability of the logger.
When choosing where to place NVMe, UFS or eMMC devices, consider both proximity to the ADAS compute and local hot spots. Mounting storage directly in the hottest corner of a domain controller may simplify routing but will accelerate ageing and increase the risk of thermal throttling. Many designs offset the storage slightly, add thermal pads or heat spreaders and treat the EDR log area as a critical component in thermal simulations and validation tests.
- Thermal: verify that storage temperature over drive cycles and worst-case conditions remains within the intended operating range, including derating for lifetime and TBW.
- Mechanical: use mounting and PCB layouts that tolerate vehicle vibration and shocks without fretting, connector issues or solder fatigue.
- Serviceability: decide whether the logger or its storage is a field-replaceable unit and design access, connectors and fastening accordingly.
These considerations do not replace full ECU mechanical and thermal design, but they act as a reminder that EDR reliability depends as much on placement and cooling as it does on bandwidth and capacity calculations.
Design checklists for sourcing & RFQ
When I prepare an RFQ for an ADAS data logger or EDR, I do not want to re-read technical design documents every time. Instead, I keep a short checklist that turns the architecture into concrete questions for suppliers. The points below are the items I want to capture in my sourcing templates so that storage, logger firmware and system integration are all sized correctly from the beginning.
1. Bandwidth and data type overview
Before asking for logger or storage proposals, I sanity-check which signal groups must be recorded and at which rate. This table can go straight into my RFQ so that suppliers see the expected payload, not just the words “ADAS EDR”.
| Signal group | Typical rate (Hz) | Bytes per sample | Mandatory / optional | Source domain |
|---|---|---|---|---|
| Vehicle dynamics (speed, steering, brake) | 10–50 | 8–16 | Mandatory | Chassis / vehicle ECU |
| ADAS mode and decision summaries | 10–50 | 16–64 | Mandatory | ADAS ECU / domain controller |
| IMU / acceleration / yaw | 100–200 | 12–24 | Recommended | Sensor / fusion ECU |
| Power rails, IGN, brown-out flags | 5–20 | 8–16 | Mandatory | Power / body ECU |
- In my RFQ I include a table like this so suppliers can size bandwidth and storage correctly.
- For each project I mark which groups are truly mandatory and which are nice-to-have.
2. Event types and logging windows
Different event classes justify different pre- and post-event windows. I capture these ranges in my RFQ so suppliers can check whether their proposed logger and storage can support them with margin.
| Event class | Target t_pre (s) | Target t_post (s) | Min. events retained |
|---|---|---|---|
| Crash / airbag deployment | 3–8 | 2–4 | At least 1–2 |
| ADAS functional failure (AEB/LKA/LCC) | 2–5 | 1–3 | Recent 5–10 |
| Power brown-out / reset | 0.5–2 | 0.5–1 | Recent 10–20 |
3. Storage medium, endurance and temperature class
In the RFQ I ask suppliers to spell out the storage type and how it behaves under my logger workload, not just its peak benchmark numbers. For each proposal I expect at least:
- Storage type and interface (NVMe over PCIe, UFS version, eMMC version and density).
- Endurance rating (TBW or equivalent drive writes over lifetime) and data retention at elevated temperature.
- Operating temperature range and any thermal derating or throttling behaviour relevant to logger workloads.
- Recommended maximum sustained write rate for a ring-buffer + burst-flush pattern.
4. Hold-up time and flush requirements
Because the logger typically writes the event window during a power-fail hold-up interval, I make hold-up and flush behaviour explicit in my RFQ. I describe:
- The minimum hold-up time I expect the power system to provide for logger flush (for example 50 ms or 100 ms).
- The typical and maximum event window size to be flushed (in kilobytes or megabytes).
- The required guarantee level (for example, “full window flush at minimum hold-up time in worst-case temperature and aging conditions”).
I then ask suppliers to respond with the amount of data their solution can reliably flush under these hold-up conditions, highlighting any assumptions about write bandwidth or prioritisation of signals.
5. Diagnostics, health reporting and self-test
Finally, I want to know how the logger and storage will report their own health so that field issues can be detected before data is lost. In my sourcing templates I reserve fields for:
- Health metrics exposed by the storage (remaining life percentage, error counts, bad-block statistics).
- Logger-side diagnostics (event log overflow counters, missed flush counters, consistency check results).
- Interfaces and protocols used to expose these metrics (for example UDS identifiers, log formats or Ethernet diagnostics).
- Any built-in self-tests and their recommended execution interval in the vehicle.
IC roles & vendor mapping (lightweight)
The EDR path touches several IC families, from storage controllers to power-loss protection and secure elements. I use this section as a lightweight map of the main roles, so that when I browse vendor portfolios I know which device classes I am actually looking for.
Key IC roles in the EDR logging path
- NVMe / UFS / eMMC controllers: manage the underlying NAND flash, expose logical storage to the ADAS SoC and determine how well ring-buffer and flush patterns are handled over lifetime.
- Bridges, PHYs and retimers: extend PCIe or UFS links, compensate for long traces or connectors and preserve signal integrity between the ADAS compute and the storage device.
- Power-loss protection and hold-up controllers: monitor supply rails for brown-out, coordinate hold-up capacitors or dedicated power stages and provide signals that trigger emergency flush in the logger.
- EEPROM or secure elements for metadata: store EDR configuration, software versions, calibration data and, when needed, cryptographic keys and authenticity tags associated with event windows.
- Temperature and health monitoring ICs: measure local temperature around storage and support diagnostics for long-term reliability.
Datasheet fields I pay attention to
When I open a datasheet for any IC on the EDR path, I scan for a small set of fields first. These tell me quickly whether the device is likely to survive the logger workload and fit into the thermal and safety budget of the ADAS domain.
- Temperature and lifetime: operating temperature range, endurance or cycle ratings, TBW equivalents and data retention versus temperature curves.
- Power-fail and error behaviour: specified power-loss protection features, shutdown and flush timings, error reporting mechanisms and any guarantees for data integrity under brown-out conditions.
- Diagnostics and monitoring interfaces: availability of health counters, error logs, remaining life indicators and how these are exposed to the host (registers, SMART-like attributes, UDS mappings or vendor-specific diagnostics).
- Security and integration hooks: support for secure erase, authenticated firmware updates and integration with external safety islands or HSMs for keys and signatures used in EDR workflows.
Detailed selection of PCIe switches, TSN switches, Ethernet PHYs and security ICs is handled in their own topics. Here I only keep a concise view of the IC roles and datasheet fields that matter directly for an ADAS EDR logging path.
ADAS Data Logger / EDR — FAQs
These are the twelve questions I use to sanity-check my ADAS data logger and EDR design. Each answer is short enough that I can reuse it in design reviews, RFQs and structured data, and together they cover storage choice, event windows, hold-up, integrity, security and sourcing.
When do I actually need an NVMe-based ADAS logger instead of a simple eMMC EDR?
When my ADAS platform needs to log many high-bandwidth signals, long windows and repeated events, I move to an NVMe-based logger. A simple eMMC EDR is only realistic when my signal set is small, my windows are short and my write rate plus TBW stay well inside what automotive-grade eMMC can handle.
How do I decide the pre- and post-event logging window for crash and brown-out events?
I start by listing the event classes I care about, such as crash, ADAS functional failure and brown-out. Then I ask how much history I need to understand each one, and how long recovery takes. That gives me a reasonable pre- and post-event window range to verify against storage capacity.
How much hold-up time do I really need to guarantee my EDR window is fully flushed?
I estimate the total bytes I need to flush per event window and divide that by a conservative, worst-case write bandwidth at end-of-life and high temperature. That tells me the minimum hold-up time. I then add margin and validate it with real power-fail tests on representative hardware.
What data types are usually mandatory versus optional in an ADAS EDR?
I treat some data types as non-negotiable: vehicle dynamics, ADAS mode and decision summaries, power state and basic acceleration data. Optional layers include richer sensor summaries, detailed diagnostics and development-only traces. For a cost-sensitive EDR I start from the mandatory set and only add fields that clearly change investigations.
How do I estimate storage endurance (TBW) for continuous ring-buffer logging?
I approximate daily written data from my ring buffer and event flushes, then project that over the vehicle lifetime. Dividing by the TBW rating tells me how much margin I have. If the margin is small, I shrink windows, reduce sampling or move up to a higher-endurance storage option.
How should I coordinate time synchronisation between my ADAS domain and the logger?
I make sure the logger consumes timestamps that already sit on a common time base, usually from the ADAS domain’s PTP or 802.1AS stack. In my requirements I clearly state that every stored record must carry this unified time, so post-processing can align logs from different ECUs without guessing.
What failure modes should my logger detect and report to the vehicle?
I expect my logger to detect and report its own problems, not just application faults. At minimum I want counters for failed flushes, incomplete windows, storage write or erase errors, health or lifetime warnings and any consistency check failures on start-up, all exposed through the vehicle’s diagnostics path.
How do I protect EDR data from tampering while still allowing service access?
I combine technical and process controls. Technically I rely on signatures or MACs from a safety island or HSM and I restrict read-out and erase operations to authenticated diagnostic tools. Process-wise I define who is allowed to access EDR data, how long it is retained and how access is logged.
What’s the real difference between a development data logger and a series-production EDR?
A development logger is designed for exploration: huge bandwidth, long sessions, flexible configurations and frequent manual interaction. A series-production EDR is much more constrained. It focuses on specific events, short windows, fixed field sets, strict power-fail behaviour and long-term reliability, all tailored to regulatory and OEM requirements instead of lab experiments.
How granular should my event triggers be so I don’t fill storage with minor events?
I start with a generous trigger list and then simulate how often it would fire on real drive data. If storage would fill quickly with minor events, I tighten thresholds, add hysteresis or require multiple conditions. The goal is to keep enough sensitivity for real issues without drowning the EDR in noise.
How do I test that my logger really captures the last milliseconds before power-loss?
I build a repeatable test where I drive the logger into a heavy write scenario and then drop supply using realistic power profiles from the vehicle. I capture timing with scopes or logic analysers and compare the stored data to the expected window, repeating at hot, cold and aged conditions.
What should I put into an RFQ to make sure suppliers quote the right type of ADAS logger or EDR?
In my RFQ I spell out signal groups, event windows, storage type, TBW expectations, temperature class, hold-up time and diagnostic hooks. I ask suppliers to show how their proposal meets these points and to highlight any assumptions. That way I get quotes for logger solutions that really fit my ADAS project.