Edge Security Probe for Verifiable Network Evidence
An Edge Security Probe is an “evidence box” that captures traffic/events and turns them into verifiable, time-credible, tamper-evident records with durable retention. Its success is measured by provable completeness (loss visibility), explainable timestamps, cryptographic integrity, and power-loss-safe recovery—not by blocking or policy decisions.
What is an Edge Security Probe
An Edge Security Probe is an evidence-focused node placed on mirrored or inline links to turn network activity into time-ordered, cryptographically verifiable, and durably persisted audit records. Its job is to make incidents explainable with a provable chain of custody—not to enforce policy or block traffic.
The practical meaning of “probe = evidence box” is a closed loop: capture what happened, prove the records are authentic and untampered, and keep them intact through power loss and device faults. If any one of these steps is missing, the output becomes an opinionated log rather than verifiable evidence.
Typical inputs include mirrored packets or flow/metadata, ingress counters (to expose loss), and device measurements used as integrity context (firmware/config digests for attestation). Typical outputs are sealed log segments (hash-chained entries, periodic roots, signatures/MACs), plus a compact audit summary that lets a third party verify ordering, integrity, and completeness without trusting the probe’s runtime state.
| Dimension | Edge Security Probe | Firewall / IDS / IPS (boundary only) |
|---|---|---|
| Primary goal | Produce verifiable evidence (time order + integrity + durability). | Detect/block/enforce policy; evidence is secondary. |
| Traffic role | May be out-of-path or inline; does not decide allow/deny. | Often inline; decisions and enforcement are core. |
| Outputs | Signed/sealed logs, audit trail, gap/loss reports, evidence bundles. | Alerts, policy hits, allow/deny actions, threat scoring. |
| “Quality” metric | Third-party verification, tamper evidence, power-loss recoverability. | Detection efficacy, false positives, throughput/latency. |
| Engineering focus | Timestamp placement, integrity sealing, durable storage + PLP, loss visibility. | Inspection depth, signatures/rules/models, enforcement reliability. |
Where it sits in the network
Placement is a choice between evidence completeness and fault domain. Out-of-path (TAP/SPAN) avoids impacting production traffic but must make mirror loss visible. Inline (bump-in-the-wire) can preserve full-path evidence but introduces availability and bypass requirements.
Out-of-path (TAP/SPAN) is preferred when uptime is paramount and the probe must never become a point of failure. The main technical risk is that mirrored evidence can be incomplete without obvious symptoms. A probe designed for evidence quality therefore treats loss visibility as a first-class output: ingress statistics, gap detection, and configuration context become part of the audit trail.
Inline (bump-in-the-wire) is chosen when evidence needs to reflect the actual forwarding path and mirrored links cannot be trusted. The main risk is fault domain: power loss, software hangs, or link negotiation issues can disrupt traffic unless a well-defined bypass strategy exists. Inline deployment also requires that any bypass transition is itself recorded, otherwise the chain of custody breaks exactly when it matters most.
| Decision question | Out-of-path (TAP/SPAN) | Inline (bump-in-the-wire) |
|---|---|---|
| Primary benefit | Zero impact on production traffic (separate fault domain). | Evidence follows the true traffic path; fewer mirror artifacts. |
| Main risk | Silent incompleteness (mirror loss / filtering / oversubscription). | Availability impact if bypass and failure handling are weak. |
| Must-have evidence feature | Loss visibility + mirror context recorded in logs. | Bypass transitions and failure states recorded in logs. |
| Recommended when | Operations demands “never touch the traffic path.” | Investigation demands “evidence must be path-faithful.” |
Practical boundary: this section discusses placement for evidence quality only; it does not expand into enforcement, detection algorithms, or cloud/SIEM pipelines.
Traffic acquisition: TAP/SPAN, link modes, and loss detection
Evidence capture is only useful when completeness is measurable. Mirrored traffic can be silently incomplete, so a security probe must output loss visibility: a defensible estimate of what was captured, what was dropped, and why.
Mirroring is not a binary “works / does not work” feature. It is a data path with its own contention, buffering, and configuration surfaces. The practical difference between TAP and SPAN is therefore not a marketing distinction; it is how often, and how invisibly, evidence can become incomplete under burst, oversubscription, or filtering.
| Topic | TAP (passive/active tap link) | SPAN (switch mirror) | Probe requirement |
|---|---|---|---|
| Burst & contention | Usually stable if link budget is correct; still subject to capture-side buffering. | Prone to silent drops when mirror path or egress port is oversubscribed. | Loss visibility must be logged as evidence. |
| Filtering / truncation | Less common; depends on tap gear and capture configuration. | Common: direction/VLAN selection, sampling, truncation, policy changes. | Record mirror context (what was configured). |
| Priority effects | Not usually a “priority” concept; depends on physical path. | Mirror replication can be deprioritized during congestion. | Expose counter closure across stages. |
| Error visibility | Physical errors show up as link/FCS/PCS counters on capture interface. | Mirror loss may not surface as link errors. | Use multi-layer evidence, not one counter. |
Loss visibility is strongest when it is built from independent layers of evidence, rather than a single “RX dropped” number. A probe-grade capture path typically maintains a closed accounting loop: source-side counters + probe ingress counters + content-level gap signals. If the loop cannot be closed, the uncertainty must be disclosed in the signed audit summary.
Layer 1 — Source counters
Mirror/TAP source link statistics used to estimate what should have been observable.
Layer 2 — Probe ingress counters
Capture-interface and buffering drops/errors that quantify what was actually ingested.
Layer 3 — Content gaps
Gap detection on sequence-like signals (event IDs, segment counters) to localize missing evidence windows.
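As a minimal sketch, assuming hypothetical counter names and a per-record sequence field (real deployments would map these onto the actual switch and NIC statistics), closing the accounting loop across the three layers might look like this:

```python
from dataclasses import dataclass

@dataclass
class LossReport:
    expected: int       # what the mirror source claims was sent (Layer 1)
    ingested: int       # what the capture path actually accepted (Layer 2)
    counted_drops: int  # drops the probe observed and explicitly counted
    gaps: list          # (first_missing, last_missing) sequence ranges (Layer 3)
    unexplained: int    # residual no counter accounts for -> must be disclosed

def close_counter_loop(source_mirrored: int, probe_ingested: int,
                       probe_dropped: int, seen_sequences: list) -> LossReport:
    """Close the three-layer accounting loop; any residual becomes
    disclosed uncertainty in the signed audit summary."""
    # Layer 3: content-level gap detection on a sequence-like signal.
    gaps = []
    seq = sorted(set(seen_sequences))
    for prev, cur in zip(seq, seq[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    # Layers 1+2: source claim vs probe ingest vs explicitly counted drops.
    unexplained = source_mirrored - probe_ingested - probe_dropped
    return LossReport(source_mirrored, probe_ingested, probe_dropped,
                      gaps, max(unexplained, 0))

# 100 mirrored, 97 ingested, 2 counted drops -> 1 packet of unexplained loss,
# localized by the content gap at sequences 50..52.
report = close_counter_loop(100, 97, 2, list(range(1, 50)) + list(range(53, 101)))
print(report.gaps, report.unexplained)   # [(50, 52)] 1
```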
Capture granularity is a second, separate decision. Metadata-first capture maximizes throughput and retention while reducing privacy exposure; full packet capture increases forensic detail but amplifies storage pressure, power-loss risk, and long-term retention cost. A robust evidence design often defaults to metadata and upgrades to fuller capture only for constrained windows or high-value triggers, keeping the evidence chain durable and verifiable.
Diagram note: the core message is to quantify missing evidence instead of hiding it.
Hardware timestamping: what “good time” means for evidence
“Good time” for evidence is not about chasing nanoseconds across a whole network. It is about stable ordering, consistent error bounds, and explainable time state that can be verified after an incident.
Evidence time has four engineering properties: ordering (events sort correctly), consistency (jitter and drift are bounded and stable), traceability (time-source state is recorded), and explainability (a reviewer can understand the error budget). Without those properties, timestamps become decorations rather than admissible evidence.
| Timestamp location | What it captures | Typical error contributors | When it is sufficient |
|---|---|---|---|
| PHY-edge | Closest to wire ingress; best reflects arrival time. | Minimal queue effects; dominated by PHY processing variation. | Small error windows and strong ordering requirements. |
| MAC boundary | Frames as they enter MAC logic. | Internal arbitration, buffering, MAC scheduling variability. | Moderate accuracy with manageable implementation cost. |
| Ingress pipeline / DMA | Packets as they are handed to capture pipeline. | Queueing, burst buffering, interrupt/DMA timing, software pressure. | Ordering-first evidence where jitter bounds are acceptable and disclosed. |
Power-loss and restart behavior must be part of the time story. Holdover (RTC + energy reserve such as a supercap) is used to preserve time continuity long enough to seal the final log segment and to label the probe’s time state after reboot. If time is uncertain (drift beyond bounds, step events, or a reset), the audit trail must mark that interval explicitly so event ordering remains defensible.
Measurable jitter
Track p99/p999 timestamp jitter as an evidence-quality statistic.
Measurable drift
Record drift rate and holdover status to bound uncertainty windows.
Detect step events
Time jumps and source changes must be logged as signed state transitions.
Continuity under PLP
Seal final segments on power loss and recover deterministically after reboot.
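A minimal sketch of a time-state tracker covering these four points, assuming an external reference-offset feed and illustrative thresholds (the 1 ms step threshold and the percentile choice are placeholders, not normative values):

```python
import statistics

class TimeStateTracker:
    """Track jitter/drift and emit loggable time-state transitions.
    Thresholds here are illustrative placeholders, not normative values."""
    MAX_STEP_NS = 1_000_000   # declare a step event above a 1 ms jump

    def __init__(self):
        self.offsets_ns = []  # measured offset vs an external reference
        self.state = "time-ok"
        self.events = []      # signed state transitions in a real probe

    def sample(self, offset_ns: int) -> None:
        if self.offsets_ns and abs(offset_ns - self.offsets_ns[-1]) > self.MAX_STEP_NS:
            self._transition("time-uncertain", reason="step-event")
        self.offsets_ns.append(offset_ns)

    def jitter_p99_ns(self) -> float:
        # p99 of sample-to-sample deltas: an evidence-quality statistic.
        deltas = [abs(b - a) for a, b in zip(self.offsets_ns, self.offsets_ns[1:])]
        if len(deltas) < 100:
            return float(max(deltas, default=0))
        return statistics.quantiles(deltas, n=100)[98]

    def drift_ppm(self, span_ns: int) -> float:
        # Net drift rate over the observation span, for the audit summary.
        if len(self.offsets_ns) < 2 or span_ns <= 0:
            return 0.0
        return (self.offsets_ns[-1] - self.offsets_ns[0]) / span_ns * 1e6

    def _transition(self, new_state: str, reason: str) -> None:
        self.events.append({"from": self.state, "to": new_state, "reason": reason})
        self.state = new_state
```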
Diagram note: the probe does not need to explain a full network timing system here; it must explain its own timestamp error bounds and time-state transitions.
Identity & attestation: proving “who generated this log”
Evidence logs must be attributable. Identity proves which device produced the log, while attestation proves what trusted state the device was running when the log was created. Both must be independently verifiable after the fact.
A practical identity stack has three layers. A device identifier makes the source addressable, a certificate chain makes it verifiable, and a protected signing key (anchored in TPM/HSM/SE hardware) makes it hard to impersonate the device. The goal is not naming; it is producing a signature that reviewers can validate without trusting the runtime environment.
Layer 1 — Device identity
Stable device identifier bound to a signing capability.
Layer 2 — Certificate chain
Verification path to a trusted issuer root (offline-verifiable).
Layer 3 — Key protection anchor
TPM/HSM/SE prevents key export and signs inside a hardened boundary.
Attestation evidence should be described as fields rather than narratives. The verifier needs a compact package that reports firmware and configuration state as measurements (digests), and binds those measurements to a fresh challenge so that old claims cannot be replayed. This page focuses on the evidence fields and verification logic only.
| Evidence field | What it proves | Verifier check |
|---|---|---|
| Firmware / version ID | Which software build is running. | Matches expected version policy for this device class. |
| Boot state / secure boot flag | Whether measured/verified boot requirements were met. | Flag is signed and consistent with allowed boot states. |
| Measurement digest | Boot/firmware components as hash measurements. | Digest matches a known-good allowlist for this device. |
| Config digest | Security-relevant configuration summarized as a digest. | Digest matches an approved configuration baseline. |
| Monotonic / boot counter | Hints against rollback and stale state replay. | Counter is non-decreasing across evidence segments. |
| Time window / time state | When the claim is valid and whether time was stable. | Window bounds are plausible; “time-uncertain” is disclosed. |
The minimum verifiable statement is intentionally small: a signed measurement plus a nonce and a time window. The nonce makes the claim fresh, the time window bounds its validity, and the signature binds the claim to the device identity. Without this minimum set, the log source and state become disputable.
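A minimal sketch of that statement, using an HMAC as a stand-in for the device's hardware-anchored signature (a real probe signs inside the TPM/HSM/SE, so DEVICE_KEY would never exist in host memory; all field names are illustrative):

```python
import hashlib, hmac, json, os

DEVICE_KEY = os.urandom(32)  # stand-in only: a real probe signs inside a
                             # TPM/HSM/SE, so the key never leaves the anchor.

def build_claim(fw_digest: str, cfg_digest: str, boot_counter: int,
                nonce: bytes, window: tuple) -> dict:
    """Minimum verifiable statement: measurements + nonce + time window."""
    body = {
        "fw_digest": fw_digest,
        "cfg_digest": cfg_digest,
        "boot_counter": boot_counter,
        "nonce": nonce.hex(),   # freshness: binds the claim to the challenge
        "window": window,       # validity bounds for this claim
    }
    msg = json.dumps(body, sort_keys=True).encode()
    return {**body, "sig": hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()}

def verify_claim(claim: dict, expected_nonce: bytes,
                 fw_allowlist: set, last_counter: int) -> bool:
    body = {k: v for k, v in claim.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    good_sig = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(claim["sig"], good_sig)
            and body["nonce"] == expected_nonce.hex()   # fresh, not replayed
            and body["fw_digest"] in fw_allowlist       # known-good build
            and body["boot_counter"] >= last_counter)   # no rollback hint
```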
Diagram note: TPM/HSM/SE is shown only as a key anchor (no platform workflow).
Log integrity: hash chains, signatures, and tamper evidence
Integrity does not assume storage is perfect. A robust evidence log is tamper-evident: modifications, truncation, replay, and rollback attempts become detectable through chained hashes and signed seals.
An append-only log links each record to the previous record via a hash reference. This structure makes insertion and deletion detectable. For high-rate capture, sealing records in segments improves throughput and verification efficiency. Segment sealing computes a compact root over a chunk of entries and signs that root, producing a verifiable checkpoint.
Append-only entries
Each entry includes a reference to the previous hash plus a monotonic counter.
Segment sealing
Compute a segment root over many entries and sign the root as a checkpoint.
Verification
Recompute chain/roots and validate signatures against the device identity.
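A minimal sketch of the chain-plus-seal structure, with a flat hash over entry hashes standing in for a Merkle root and an HMAC standing in for the device signature (EvidenceLog and SEAL_KEY are illustrative names):

```python
import hashlib, hmac, os

SEAL_KEY = os.urandom(32)  # stand-in for the device's hardware signing anchor

class EvidenceLog:
    """Append-only hash chain with periodic segment seals. The segment
    root is a flat hash over entry hashes, standing in for a Merkle root."""
    def __init__(self):
        self.entries = []          # (counter, prev_hash, entry_hash, payload)
        self.seals = []            # (first_counter, last_counter, root, mac)
        self.prev = b"\x00" * 32
        self.counter = 0

    def append(self, payload: bytes) -> None:
        h = hashlib.sha256(self.prev + self.counter.to_bytes(8, "big") + payload).digest()
        self.entries.append((self.counter, self.prev, h, payload))
        self.prev, self.counter = h, self.counter + 1

    def seal_segment(self) -> None:
        start = self.seals[-1][1] + 1 if self.seals else 0
        root = hashlib.sha256(b"".join(e[2] for e in self.entries[start:])).digest()
        mac = hmac.new(SEAL_KEY, root, hashlib.sha256).digest()  # signed checkpoint
        self.seals.append((start, self.counter - 1, root, mac))

    def verify(self) -> bool:
        prev, n = b"\x00" * 32, 0
        for counter, p, h, payload in self.entries:
            expect = hashlib.sha256(p + counter.to_bytes(8, "big") + payload).digest()
            if counter != n or p != prev or h != expect:
                return False       # insertion, deletion, or reorder detected
            prev, n = h, n + 1
        for first, last, root, mac in self.seals:
            seg = b"".join(e[2] for e in self.entries[first:last + 1])
            good = hmac.new(SEAL_KEY, root, hashlib.sha256).digest()
            if hashlib.sha256(seg).digest() != root or not hmac.compare_digest(mac, good):
                return False       # sealed checkpoint does not match the chain
        return True
```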
| Signing strategy | Benefit | Cost / risk | Good fit |
|---|---|---|---|
| Per-record signing | Finest-grain non-repudiation; each record stands alone. | High CPU/power; frequent key use; throughput sensitivity. | Low-rate critical events and small logs. |
| Segment signing | High throughput; fewer signatures; efficient verification checkpoints. | Unsealed interval depends on chain/counters; segment length is a trade-off. | High-rate capture and long retention with periodic sealing. |
Tamper evidence must explicitly cover common failure and attack patterns: counter rollback (log truncation), time rollback (step events), and storage replacement (cloning or swapping media). The audit trail should record monotonic counters and sealed-root indices, and it must log time-state transitions so that timeline uncertainty is visible rather than hidden.
Rollback / truncation
Detect non-monotonic counters or missing sealed-root indices.
Time step / replay
Detect time rollback or step events and mark “time-uncertain” windows.
Storage swap
Detect inability to continue the signed chain under the same device identity.
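A minimal sketch of those restart checks, assuming a persisted truth anchor (for example FRAM-held commit metadata, as discussed in the storage section) and illustrative field names:

```python
def check_restart_evidence(anchor: dict, observed: dict) -> list:
    """Compare the persisted truth anchor (e.g., FRAM-held commit metadata)
    with what the bulk medium presents after restart. Field names are
    illustrative; each finding becomes a signed audit event."""
    findings = []
    if observed["boot_counter"] < anchor["boot_counter"]:
        findings.append("counter-rollback")      # truncation or stale clone
    if observed["last_seal_index"] < anchor["last_seal_index"]:
        findings.append("missing-sealed-roots")  # log truncation
    if observed["wall_time"] < anchor["last_commit_time"]:
        findings.append("time-rollback")         # mark window "time-uncertain"
    if observed["chain_head"] != anchor["chain_head"]:
        findings.append("chain-discontinuity")   # possible media swap/replacement
    return findings
```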
Diagram note: “Merkle root” is presented as a segment seal concept (no math).
Durable storage & power-loss protection (PLP)
Evidence must survive failure. PLP is not “never lose power” — it is a guarantee that a clear commit point exists, so crash recovery can prove what was durably stored and what remains uncertain.
Storage choices should be evaluated through an evidence lens: can partial writes be detected, can sequential append be sustained for long retention, can endurance be predicted under high-rate logging, and can crash recovery rebuild a trustworthy index without hidden corruption. The media name matters less than the ability to implement detectable and recoverable persistence.
| Media | Why it can fit evidence logs | Key risk surface | Selection emphasis |
|---|---|---|---|
| eMMC | Common embedded storage; can support sequential append with careful commit markers. | FTL behavior under power loss; partial-program ambiguity if not structured. | Detectability: strong segment envelope + scan-based recovery. |
| UFS | Higher performance for sustained logging and large retention. | Complex write pipeline; still needs a clear commit model for evidence. | Throughput: segment sealing cadence + disclosure of unsealed window. |
| Serial NAND | Cost-effective bulk retention; natural fit for sequential writes. | Bad blocks and wear: recovery must tolerate gaps and remaps. | Recoverability: scan, validate, rebuild index, disclose gaps. |
| FRAM | Fast and robust for small critical state (counters, last-commit index). | Capacity limits; not a bulk payload store. | Use as “truth anchor” for commit metadata and recovery state. |
PLP should protect two actions: flush (drain capture buffers to a durable write path) and commit (finalize a segment with a verifiable marker and checksum/root). A well-designed log accepts that an in-flight segment may be incomplete during sudden power loss, but it ensures that incomplete data is detectable and that the last completed commit remains provable.
Flush under hold-up
Drain buffers to storage so “missing evidence” is bounded to a known window.
Commit as a checkpoint
Write a segment tail marker and integrity check so the checkpoint is verifiable.
Crash recovery disclosure
On reboot, scan to the last valid commit and record the uncertainty window.
A recoverable data structure is built around sequential append and segment envelopes. Each segment carries a compact header (version, segment index, counter/time window) and a tail commit marker. On restart, the device performs a linear scan to locate the last valid segment commit, then rebuilds indexes from segment headers. The recovery outcome becomes an evidence event: last valid commit, detected gaps, and time-state at restart.
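A minimal sketch of such a segment envelope and scan-based recovery, with illustrative constants and layout (a real format would add entry framing and digest/root fields):

```python
import struct, zlib

MAGIC, COMMIT = 0x45564C47, 0xC0DECAFE  # illustrative constants
HDR = struct.Struct(">IHQQI")   # magic, version, segment index, first counter, payload len
TAIL = struct.Struct(">II")     # commit marker, crc32 over payload

def append_segment(buf: bytearray, index: int, first_counter: int, payload: bytes) -> None:
    buf += HDR.pack(MAGIC, 1, index, first_counter, len(payload))
    buf += payload                                 # "flush": payload reaches media
    buf += TAIL.pack(COMMIT, zlib.crc32(payload))  # "commit": verifiable checkpoint

def scan_last_commit(buf: bytes) -> tuple:
    """Linear scan to the last segment whose tail commit validates. The byte
    offset after it bounds the uncertainty window recovery must disclose."""
    pos, last_index = 0, None
    while pos + HDR.size <= len(buf):
        magic, _ver, idx, _first, plen = HDR.unpack_from(buf, pos)
        end = pos + HDR.size + plen + TAIL.size
        if magic != MAGIC or end > len(buf):
            break                                  # torn header or truncated write
        payload = bytes(buf[pos + HDR.size : pos + HDR.size + plen])
        marker, crc = TAIL.unpack_from(buf, end - TAIL.size)
        if marker != COMMIT or crc != zlib.crc32(payload):
            break                                  # segment was never committed
        last_index, pos = idx, end
    return last_index, pos   # pos = start of any detectable incomplete tail

# Simulated power cut mid-segment: segment 0 commits, segment 1 is torn.
log = bytearray()
append_segment(log, 0, 0, b"records...")
append_segment(log, 1, 42, b"more records...")
torn = bytes(log[:-5])                             # power loss before commit tail
print(scan_last_commit(torn))                      # (0, <offset after segment 0>)
```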
Diagram note: PLP is shown as “hold-up to finish flush + commit”; recovery is scan-based to avoid hidden corruption.
Privacy & data minimization: capture what you need (and no more)
More capture is not better evidence. A probe should maximize evidence value density: keep durable, verifiable metadata by default and only elevate detail within bounded, auditable windows.
A privacy-resilient capture design starts with a metadata-first baseline that still supports loss visibility, ordering, and verification. The baseline should preserve flow identity, direction, size, counters, and compact digests — enough to explain “what happened” without persistently storing sensitive payloads. Capture policy itself should be recorded in the audit trail so reviewers know exactly what was collected.
Flow key
5-tuple / session identity to group evidence into explainable contexts.
Direction & size
Ingress/egress plus length and byte counters for burst and volume evidence.
Counters & timestamps
Monotonic counters and timestamps to support ordering and loss closure.
Compact digest
Hash/sketch summaries to validate consistency without retaining full payload.
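A minimal sketch of a metadata-first record built from these four elements, with illustrative field names and a truncated SHA-256 standing in for the compact digest:

```python
import hashlib, time
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowEvidence:
    """Metadata-first record: enough to explain behavior without retaining
    payload. Fields mirror the baseline above; names are illustrative."""
    flow_key: tuple       # (src_ip, dst_ip, src_port, dst_port, proto)
    direction: str        # "ingress" or "egress"
    byte_count: int
    counter: int          # monotonic record counter for loss closure
    ts_ns: int            # probe timestamp
    payload_digest: str   # compact digest; the payload itself is NOT stored

def summarize(flow_key: tuple, direction: str,
              payload: bytes, counter: int) -> FlowEvidence:
    return FlowEvidence(flow_key, direction, len(payload), counter,
                        time.monotonic_ns(),
                        hashlib.sha256(payload).hexdigest()[:16])
```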
When payload visibility is required, redaction should be explicit and bounded. Redaction methods trade interpretability for privacy. The safest approach is to choose the minimum method that still preserves verifiability, then disclose that method in the signed audit summary.
| Method | What is stored | Privacy posture | Boundary / disclosure |
|---|---|---|---|
| Hash-only | Content digest without raw payload. | Strong minimization. | Cannot reconstruct content; disclose digest algorithm and scope. |
| Truncation | Short header or fixed-length prefix. | Moderate exposure. | Parsing may be incomplete; disclose truncation length and fields. |
| Selective window | Fuller detail only for a bounded time window or trigger. | Controlled exposure. | Disclose trigger rules and window bounds; keep windows small. |
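A minimal sketch of the three methods as a single redaction step, with illustrative parameter names; in practice the chosen method and its bounds would also be recorded in the signed audit summary:

```python
import hashlib

def redact(payload: bytes, method: str, limit: int = 64,
           window: tuple = None) -> dict:
    """Apply the minimum method that preserves verifiability; the method
    and its bounds must also be disclosed in the signed audit summary."""
    if method == "hash-only":
        return {"method": method, "algo": "sha256",
                "digest": hashlib.sha256(payload).hexdigest()}
    if method == "truncation":
        return {"method": method, "kept_bytes": limit,
                "prefix": payload[:limit].hex()}
    if method == "selective-window":
        # Fuller detail only within a bounded, disclosed trigger window.
        return {"method": method, "window": window, "payload": payload.hex()}
    raise ValueError(f"unknown redaction method: {method}")
```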
Access control is part of evidence quality. Logs that are widely readable tend to leak. A probe should enforce least-privilege read access and produce a read-audit trail: who accessed what, when, and under which authorization context. Read audits should be tamper-evident, so “who saw the evidence” is itself provable.
Diagram note: “metadata-first” is the durable baseline; any added detail is bounded and auditable to reduce privacy risk.
Performance engineering: burst, buffering, and backpressure
Evidence quality fails first under bursts. The goal is not peak throughput claims, but a measurable chain where drops, jitter, flush latency, and commit time remain bounded and explainable.
A probe’s data path can be treated as three linked stages: ingress capture (port → DMA → ring), record packaging (timestamp + labeling + segment builder), and persistence (flush → segment commit → storage). Bursts stress each stage differently; bottlenecks are identified by where counters move and where latency expands.
Ring buffer under burst
Track occupancy watermarks and overrun drops. When full, the drop policy must be explicit and counted.
DMA batch & contention
Memory/bus competition can amplify latency variance. Provide counters for descriptor shortage and reclaim delay.
Zero-copy intent
Reduce copy pressure so tail latency does not explode under burst. Validate via flush latency and commit time trends.
Backpressure is a safety mechanism: if persistence slows, the system must avoid silent loss. Instead, it should apply bounded reduction (rate-limited intake, sampling, or metadata-only mode) and record a backpressure event containing cause, time window, and impact. This keeps missing coverage explainable and auditable.
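A minimal sketch of an ingress ring that counts drops explicitly and records backpressure as an evidence event, with illustrative capacity and watermark values:

```python
import time
from collections import deque

class CaptureRing:
    """Bounded ring with explicit, counted drops and recorded backpressure
    events. Capacity and watermark values are illustrative."""
    def __init__(self, capacity: int = 4096, high_watermark: float = 0.9):
        self.ring = deque()
        self.capacity = capacity
        self.high = int(capacity * high_watermark)
        self.drops = 0            # explicit drop policy: every drop is counted
        self.peak = 0             # occupancy watermark for the audit trail
        self.mode = "full"        # degrades to "metadata-only" under pressure
        self.events = []          # backpressure events become evidence records

    def push(self, record) -> bool:
        occ = len(self.ring)
        self.peak = max(self.peak, occ)
        if occ >= self.high and self.mode == "full":
            self.mode = "metadata-only"   # bounded reduction, never silent loss
            self.events.append({"event": "backpressure",
                                "cause": "high-watermark",
                                "t_ns": time.monotonic_ns(),
                                "occupancy": occ})
        if occ >= self.capacity:
            self.drops += 1
            return False                  # drop is visible, not hidden
        self.ring.append(record)
        return True
```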
“Logging is slower than expected” is typically explained by write amplification: a record is not just data, but evidence structure. Segment headers/footers, commit markers, checks/digests, and recovery-friendly boundaries add work. The correct engineering approach is to measure amplification via flush latency and segment commit time, then tune the segment sealing cadence to keep the uncommitted window bounded.
| Metric | What it indicates | Evidence risk if uncontrolled |
|---|---|---|
| Ingress drop | Coverage loss at capture/queues (explicitly counted). | Missing evidence window; must be disclosed with counters and time bounds. |
| Timestamp jitter | Ordering stability under load and contention. | Ambiguous event ordering; can undermine correlation and timeline explanations. |
| Flush latency | Backpressure precursor and queue buildup severity. | Expanding uncertainty window; elevated drop probability during bursts. |
| Segment commit time | How quickly a provable checkpoint is created. | Long uncommitted tail; crash can invalidate more in-flight evidence. |
Diagram note: metrics are bound to stage boundaries; backpressure is recorded as an evidence event, not a silent throttle.
Validation & test setup: how to prove evidence quality
Evidence quality is a claim that must be reproducible. A practical test bench provides ground-truth traffic, a time reference, and power-fail injection to validate coverage, timing, tamper evidence, and crash recovery.
A minimal validation setup is built around three instruments. First, a traffic generator provides deterministic sequences and controlled bursts so measured drops and gap detection can be compared against known truth. Second, a time reference provides an external baseline to interpret timestamp consistency and load-dependent jitter. Third, power-fail injection forces failures at different points of flush/commit so the last valid commit, detectable incomplete segments, and recovery disclosure can be verified.
Traffic generator
Controlled rate/burst + known sequences to compare against capture counts and gap events.
Time reference
External baseline to quantify timestamp consistency and interpret jitter under contention.
Power-fail injection
Repeatable cut during flush/commit to validate last valid commit and disclosure behavior.
Validation should produce explicit proofs, not screenshots. Each proof item should have an input condition, a measured outcome, and a machine-checkable artifact. The goal is to show that missing coverage cannot be hidden, time uncertainty is disclosed, tamper attempts are detected, and crash recovery rebuilds a verifiable state.
| Proof item | What to measure | What “pass” looks like |
|---|---|---|
| Drop rate | Generator truth vs captured counts + ingress drop counters. | Drops are bounded, counted, and disclosed with time windows. |
| Gap detection | Injected sequence gaps vs detected gap events and range. | Gaps are detected with correct bounds and low false positives. |
| Time consistency | Jitter under load; step/uncertain windows against time reference. | Uncertainty is explained and disclosed; ordering remains interpretable. |
| Tamper evidence | Modify/truncate/rollback attempts vs verification failures. | Tampering cannot be silently accepted; failure point is explainable. |
| Crash recovery | Power cuts at multiple phases; last valid commit and index rebuild. | Recovery finds last commit, discards incomplete tails, writes disclosure. |
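A minimal sketch of one machine-checkable artifact for the drop-rate proof item, using an illustrative (non-standardized) report schema:

```python
import json

def drop_rate_artifact(truth_sent: int, captured: int, counted_drops: int,
                       window_ns: tuple) -> dict:
    """Machine-checkable artifact for the 'drop rate' proof item: pass means
    all loss is counted and disclosed with time bounds. Field names are
    illustrative, not a standardized report schema."""
    unexplained = truth_sent - captured - counted_drops
    return {
        "proof": "drop-rate",
        "input": {"truth_sent": truth_sent, "window_ns": window_ns},
        "measured": {"captured": captured,
                     "counted_drops": counted_drops,
                     "unexplained": unexplained},
        "pass": unexplained == 0,
    }

# Generator truth: 10,000 packets; probe captured 9,950 and counted 50 drops.
print(json.dumps(drop_rate_artifact(10_000, 9_950, 50, (0, 1_000_000_000)),
                 indent=2))
```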
Record format consistency is a test target by itself. Logs should be versioned and parseable so verifiers can recompute integrity checks. A minimal contract includes schema version, stable field types for counters and timestamps, segment boundaries that survive partial writes, and unambiguous rules for recomputing digests/roots. Policy and state events (backpressure mode, redaction method, recovery disclosure) should be represented as structured records, not free-form text.
Diagram note: the verifier recomputes integrity checks and validates measured outcomes against ground truth, producing auditable artifacts.
Design checklist & IC/BOM roles (selection mindset)
This section converts “evidence quality” into a bill-of-materials checklist: coverage (no silent loss), time (explainable ordering), integrity (provable origin + tamper evidence), durability (power-loss safe), and auditability (minimal access + read trails).
Start from evidence goals, then pick BOM roles
Evidence goals → what must be measurable
- Coverage: burst-safe ingest; if loss happens, it must be visible (counters + gaps).
- Time: packet/event ordering must remain explainable under load (timestamp position + jitter).
- Integrity: logs must be verifiable (signatures / hash chains / anti-rollback signals).
- Durability: power-loss must not create silent corruption; last commit must be recoverable.
- Auditability: who read/exported evidence must be recorded; capture policy must be versioned.
BOM roles (what to buy, not what to “market”)
Tip: avoid “PTP feature” checkbox thinking—require measurable jitter, time-state disclosure, and loss visibility under burst.
TAP / mirror ingest, PHY/MAC stats, switching silicon
What to require (capability checklist)
- Loss visibility: ingress/drop/overrun counters must be readable and logged; gaps must be detectable.
- Burst handling: small-packet Mpps and microbursts must not silently overflow internal buffers.
- Mirror integrity: SPAN can be incomplete under congestion—design must prove completeness (or quantify loss).
- Timestamp proximity: if timestamps are taken near MAC/ingress, the error terms are explainable.
Concrete material numbers (common patterns)
- NIC / capture MAC (1G): Intel I210-AT (hardware IEEE 1588 / 802.1AS timestamping).
- Managed switch w/ 1588 (mirror fabric): Microchip KSZ9567RTXI (7-port GbE switch with IEEE 1588v2 support).
- Timing-capable switch alternative: Marvell 88E6390X (product brief lists IEEE 1588v2 one-step PTP support).
Selection note: for higher rates (2.5G/10G), keep the same “proof obligations”: drop counters + gap detection + timestamp jitter under load.
Hardware timestamping, RTC/holdover, low-jitter reference
What to require (evidence-grade time)
- Explainable timestamp position: PHY vs MAC vs ingress pipeline changes what contributes to error.
- Measurable jitter under load: log timestamp jitter/latency metrics alongside evidence segments.
- Time-state disclosure: if holdover is lost or time steps, emit “time-uncertain” events (tamper-evident).
- Backup time continuity: RTC + backup supply supports continuity across brownout / power cycling.
Concrete material numbers (time building blocks)
- HW timestamp at NIC level: Intel I210-AT (hardware timestamping for IEEE 1588 / 802.1AS packets).
- RTC (battery/backup input): DS3231M (I²C RTC with backup supply input).
- Low-jitter clock generator: Si5341 (low-jitter clock generator family; used as clean reference distribution in appliances).
The key is timestamp stability and the disclosure of uncertainty windows, not protocol internals.
Identity anchor, attestation proof, crypto throughput
What to require (prove “who generated this log”)
- Hardware key protection: keys must be non-exportable; signing occurs inside the anchor.
- Minimal attestation set: signed measurement + nonce + time-window statement (verifier replay-resistant).
- Anti-rollback signals: detect counter rollback / time rollback / storage swaps via signed state.
- Crypto under burst: segment signing rate must not collapse capture pipeline (measure commit time).
Concrete material numbers (anchors and secure elements)
- TPM 2.0 anchor (SPI): Infineon OPTIGA™ TPM SLB 9670 (VQ2.0, TPM 2.0).
- IoT secure element (I²C, Plug & Trust family): NXP EdgeLock SE050 (example orderable: SE050C2HQ1/Z01SDZ).
- Secure element for auth/secure channel: ST STSAFE-A110.
Use TPM/SE as the root of provenance: “log origin” is a cryptographic claim, not a software label.
Durable media, power-loss protection, commit semantics
What to require (durability is evidence, too)
- Append-only segments: sequential write + segment envelope + commit marker (recover last valid commit).
- No silent corruption: power-loss turns into a detectable state (recovery event + gap window).
- Wear & consistency: avoid write amplification surprises; measure flush latency and commit time.
- PLP window: hold-up protects flush + commit; evidence records “commit succeeded/failed”.
Concrete material numbers (media + PLP power path)
- Supercap backup controller: LTC3350 (supercapacitor backup power controller/monitor). Alternative class: BQ33100 (supercapacitor health manager over SMBus).
- Hot-swap / eFuse (24V-class front-end): TI TPS25982 (smart eFuse with current monitoring).
- Durable storage examples: Macronix MX35LF1GE4AB (Serial NAND), Infineon FM25V10 (SPI FRAM), Micron MTFC16GAPALBH-IT (eMMC example PN).
Avoid recommending a single “perfect” medium: pick by recoverability, commit guarantees, endurance, and available power-loss window.
Minimal management surface, read-audit, and acceptance checklist
Minimal management (what “must exist”)
- Read-audit trail: every export/read action emits an append-only audit record (tamper-evident).
- Policy versioning: capture/redaction/backpressure policies are recorded with evidence segments.
- Least privilege: management interface exposes only what is required for retrieval and verification.
Acceptance checklist (copy/paste for bring-up)
- Coverage: drops are bounded, counted, and disclosed with time windows (generator truth vs captured counts).
- Time: jitter and drift are measured; step events and “time-uncertain” windows are logged as state transitions.
- Integrity: modify/truncate/rollback attempts fail verification with an explainable failure point.
- Durability: power cuts during flush/commit recover to the last valid commit with a disclosure record.
- Auditability: every read/export emits a tamper-evident audit record; capture policy is versioned with evidence segments.
BOM selection is “evidence-driven”: any component that blocks measurability (loss/time/commit/audit) is a design risk, even if throughput looks fine.
FAQs (evidence-chain focused)
These FAQs stay inside the Edge Security Probe boundary: capture completeness, timestamp credibility, identity/attestation, tamper-evident logging, durable storage with PLP, privacy minimization, burst performance, and testable validation.
Answers stay inside the evidence chain
Security probe vs firewall/IDS — what is the practical boundary?
Why does out-of-path (TAP/SPAN) often “hide loss”, and how can loss be made visible?
Same mirrored traffic, different switch/SPAN config — which three counter categories should be checked first?
For forensics, what does “timestamp accuracy” really mean: absolute time, relative ordering, or consistency?
If time sources drift or are rolled back, how can logs remain acceptable to audit/forensics?
What does attestation prove, and what is the minimum set of evidence fields?
Signing every record is too slow — what risks come with segment signing, and how to balance?
What are the most common “unrecoverable” log storage failures, and how do segment commit and index rebuild prevent them?
In power-fail-heavy environments, what two PLP details are most often missed?
Packet capture touches privacy — how to keep evidence useful while minimizing data collection?
Under Gbps bursts, why is the bottleneck often not the port speed — and how to pinpoint it?
How can a single test bench prove: no loss, credible time, tamper detection, and power-loss recovery?
Each answer stays evidence-centric: when a guarantee cannot be made (e.g., SPAN completeness), the design records explicit uncertainty windows and verifiable counters, not silent assumptions.