
Production Test & Documentation for Digital Isolation


Core Takeaway

Production compliance is won by an auditable evidence chain—tests, documents, and traceability must stay aligned under a fixed Pass/Fail policy. When results conflict, normalize conditions first, eliminate fixture-driven false fails, and gate every PCN with bridging evidence before release.


Scope & Deliverables

This page standardizes production isolation testing and the documentation evidence chain required for audits, customer safety reviews, and field disputes. It is written to prevent scope creep and to keep each topic inside its proper subpage.

Primary goal: provide a repeatable path from device → test → document → traceability → acceptance.

Deliverables (What this page produces)

Each deliverable below is defined as a hand-off artifact that can be used in production, supplier quality, and audit workflows without re-interpretation.

Production Test Plan

  • Purpose: convert DWV/Hi-pot, PD, and related checks into a repeatable SOP.
  • Includes: test type, setup rules, sampling model, retest policy, data fields.
  • Used by: Manufacturing, QA, SQE, and customer audit teams.

Pass / Fail Criteria

  • Purpose: make “pass/fail” reproducible under defined conditions.
  • Includes: trip criteria, guardband logic, fixture-vs-DUT decision rules.
  • Used by: production stations, FA teams, and field dispute resolution.

Evidence Pack

  • Purpose: answer “prove it” questions with a single audit-ready bundle.
  • Includes: certificates, CB reports, test reports, traceability, calibration records.
  • Used by: compliance reviews, supplier audits, and customer PPAP-like gates.

Change Control Checklist

  • Purpose: keep compliance intact across PCNs and manufacturing changes.
  • Includes: change triggers, impact assessment, minimum re-qualification actions.
  • Used by: SQE, quality systems, and engineering change boards.

Typical Use Cases (Disputes this page closes)

  • Supplier / factory audit: “Where is the evidence, and can a shipped lot be traced to it in minutes?”
  • Customer safety review: “Reinforced isolation is claimed—what documents prove it under the stated conditions?”
  • Regulatory compliance check: “Why are parameters set this way, and what is the retest policy?”
  • Field rework dispute: “Production Hi-pot fails—does it implicate the DUT, or the fixture/environment first?”

A dispute is considered closed when the path from claim → evidence → test record → shipped lot is traceable and repeatable.
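The closure condition above can be checked mechanically. The sketch below is a minimal illustration, assuming simple in-memory lookup tables standing in for the MES and document systems; the field names (`record_id`, `report_id`, `cert_id`) are hypothetical, not a mandated schema.

```python
# Minimal sketch: resolve the dispute-closure chain
# lot -> test record -> report revision -> certificate ID.
# Lookup tables and field names are illustrative assumptions.

def resolve_chain(lot, test_records, reports, certificates):
    """Return the full evidence chain for a shipped lot, or the first gap."""
    record = test_records.get(lot)
    if record is None:
        return {"lot": lot, "gap": "no test record"}
    report = reports.get(record["report_id"])
    if report is None:
        return {"lot": lot, "gap": "no report for " + record["report_id"]}
    cert = certificates.get(report["cert_id"])
    if cert is None:
        return {"lot": lot, "gap": "no certificate for " + report["cert_id"]}
    return {
        "lot": lot,
        "test_record": record["record_id"],
        "report_rev": report["revision"],
        "certificate": cert["cert_id"],
        "gap": None,
    }
```

A dispute stays open exactly when this lookup reports a gap: the first missing link names the artifact that must be produced.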

Hard Boundary (Not covered on this page)

The topics below are intentionally excluded to avoid overlap with sibling pages. Only brief references are allowed.

  • Creepage/Clearance geometry and calculations: see Creepage & Clearance.
  • Impulse/Surge waveform classes and system protection design: see Impulse / Surge Withstand.
  • Fail-safe state design rules for interfaces/drivers: see Fail-Safe State.
  • Device internal architecture (capacitive/magnetic coding, etc.): see relevant Device Class subpages.
  • Interface protocol timing details (USB/Ethernet/CAN semantics): see Isolated Interface subpages.

Diagram · Evidence Chain Map (Traceability Spine)

The diagram summarizes how production tests and documents form an audit-grade evidence chain, anchored by lot-level traceability.

Evidence chain from device tests to documents, anchored by lot-level traceability for audit and field disputes.

Vocabulary & Test Taxonomy

Terminology alignment prevents false conclusions and mismatched acceptance criteria. The definitions below standardize what each test stresses, what it detects, what it does not prove, and how it is commonly misused in audits and production lines.

Rule of thumb: if a term can be interpreted in two ways, it will be used against acceptance during review.

Core Tests: What They Stress vs What They Prove

This comparison is written to eliminate three common failures: (1) using the wrong test for a claim, (2) assigning lifetime meaning to a short-duration screen, and (3) accepting a report without normalized conditions.

DWV / Hi-pot
  • Stresses: short-duration high-voltage stress across the isolation barrier (AC or DC).
  • Detects: gross insulation weakness, contamination/fixture leakage paths, setup errors that cause a current trip.
  • Does NOT prove: lifetime endurance, creepage geometry compliance, surge/impulse robustness.
  • Common misuse: interpreting a single pass as “safe for life”, or treating any fail as a DUT fault without fixture isolation.

IR (Insulation Resistance)
  • Stresses: low-energy DC measurement intended to quantify leakage through insulation paths.
  • Detects: moisture-related leakage, surface contamination trends, gross insulation degradation over time.
  • Does NOT prove: absence of partial discharge, ability to survive high dv/dt environments, surge class performance.
  • Common misuse: using IR as a substitute for DWV, or mixing meter ranges/settling times and comparing results incorrectly.

PD (Partial Discharge)
  • Stresses: voltage conditions that can trigger discharge activity within voids/defects before breakdown.
  • Detects: early discharge activity (PDIV/PDEV behavior) under defined conditions; evidence of internal insulation quality.
  • Does NOT prove: guaranteed long-term lifetime (needs models), guaranteed immunity to external EMC events, barrier geometry compliance.
  • Common misuse: chasing “higher is better” without specifying conditions, or accepting PD numbers without normalized setup parameters.

Type / Routine / Sampling
  • Stresses: test layering by intent: qualification vs 100% screen vs statistical monitoring.
  • Detects: Type: design/process capability; Routine: gross defects; Sampling: drift and trend detection.
  • Does NOT prove: a routine screen cannot replace qualification; sampling cannot justify a missing certificate/report.
  • Common misuse: asking production for type-test evidence, or treating sampling data as a compliance certificate.

Normalization fields that must appear in any meaningful report: waveform type, ramp/dwell, trip criterion, environment, and sample size.
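That rule can be enforced mechanically at report intake. A minimal sketch, assuming each report arrives as a flat dictionary; the field names are illustrative, not a mandated schema:

```python
# Reject a test report that is missing any normalization field.
# Required-field names below are illustrative assumptions.
REQUIRED_FIELDS = {"waveform", "ramp", "dwell", "trip_criterion",
                   "environment", "sample_size"}

def missing_normalization_fields(report: dict) -> set:
    """Return the set of required fields absent or empty in the report."""
    return {f for f in REQUIRED_FIELDS if not report.get(f)}
```

A non-empty return set means the report is not comparable and should not enter the evidence pack until the fields are supplied.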

Test Taxonomy: Who Owns Which Evidence

Many audit failures are not technical failures—they are ownership failures. Each test layer below is mapped to the party that typically owns the evidence and the artifacts that should be expected.

Type Test (Qualification)

  • Owner: vendor qualification team / accredited third-party lab.
  • Artifacts: CB report, test report, certificate, stated conditions and scope.
  • Answers: “Is the device/process capable under the claimed standard?”

Routine Test (Production Screen)

  • Owner: factory production line (vendor or contract manufacturer).
  • Artifacts: station log, pass/fail record, equipment calibration reference.
  • Answers: “Did this shipped unit/lot pass the defined production screen?”

Sampling Test (Monitoring)

  • Owner: QA/IQC/SQE (buyer) and/or vendor quality.
  • Artifacts: sampling plan, trend charts, deviation/containment records.
  • Answers: “Is there drift, contamination, or process change risk?”

Dispute Closure (Evidence Chain)

  • Owner: cross-functional (SQE + QA + engineering).
  • Artifacts: traceable mapping from lot → test record → report revision → certificate ID.
  • Answers: “Can the claim be defended and traced to shipped hardware?”

Myth vs Reality (Preventing Misuse)

Myth: Hi-pot pass means lifetime is guaranteed.

Reality: Hi-pot is a short-duration stress screen. It cannot replace lifetime models, PD evidence, or change control.

Ask instead: require a documented mapping from claim → certificate/report fields, and ensure lot-level traceability for shipped units.

Myth: A higher PD number is always “better”.

Reality: PD interpretation depends on the full condition set (waveform, frequency, environment, threshold). Without normalization, results are not comparable.

Ask instead: request PDIV/PDEV with complete condition fields and a clear pass criterion tied to the intended insulation class.

Diagram · Test Taxonomy Tree (Layer + Owner + Artifact)

The diagram organizes testing into qualification, production screening, and monitoring. Each layer is labeled with ownership and expected evidence.

Test taxonomy tree that maps each layer to its owner and audit-grade artifacts.

Standards & Certificate Landscape

Vendor claims are accepted only when they map to auditable evidence. This section explains how to read datasheet statements without copying standard text—focusing on evidence shape, where key fields appear, and validity and scope limits.

Evidence rule: a claim is defensible only when document type + field location + conditions + scope match the shipped part.

How to Read Datasheet Claims (3-step Method)

  • Step 1 — Identify the claim type: insulation class (basic/reinforced), working voltage (VIORM/VIOTM), PD evidence, or certification wording.
  • Step 2 — Demand the correct evidence shape: certificate, CB report, test report, and/or production test record (depending on claim).
  • Step 3 — Validate conditions and scope: pollution degree, altitude, temperature class, package/material/site scope, and report revision.

Marketing risk pattern: a large numeric rating without an attached condition set is not an acceptance criterion.

Evidence Shape Differences (What to Request)

The items below are listed only to guide evidence requests and reading patterns. The goal is to know which document to ask for and what section to locate.

  • UL 1577: typically supports dielectric withstand claims with certificate-style evidence; focus on scope and ratings tables.
  • VDE 0884-11: often provides structured insulation coordination evidence; focus on working voltage terms and conditions/limitations.
  • IEC 62368-1: system safety framework that may drive CB reporting; focus on CB report sections and applicability notes.
  • IEC 60601-1: medical safety; frequently tight on leakage and documentation rigor; focus on conditions and evidence completeness.
  • IEC 61010: test/measurement/control equipment; evidence is often presented in report form; focus on rated conditions and limitations.

Wording filter: “Certified to …” should map to certificates/reports; “Designed to meet …” requires extra verification and may be insufficient for audit closure.

Where to Find VIORM / VIOTM / Working Voltage Evidence

Working-voltage terms are meaningful only when tied to document location and a condition set. The table below standardizes what must be checked.

VIORM
  • Usually appears in: insulation coordination section, ratings table, or appendix of a certification/report document.
  • Must-match conditions: pollution degree, altitude/derating, temperature class, and package/material/site scope.
  • Common pitfall: using the number without verifying scope limits and standard edition.

VIOTM
  • Usually appears in: over-voltage or transient-related ratings fields in report/certificate tables.
  • Must-match conditions: test conditions, waveform assumptions (if stated), and limitation notes that change applicability.
  • Common pitfall: assuming the transient rating replaces surge/impulse qualification.

Working Voltage
  • Usually appears in: certificate/CB report rating tables and “conditions/limitations” sections.
  • Must-match conditions: environment class, installation assumptions, and exact part family/package coverage.
  • Common pitfall: mixing datasheet “typical” values with report-rated values.

Validity check: any working-voltage statement should be accompanied by document ID, revision, and applicability scope.

Certificate Validity Checklist (Revision · Conditions · Scope)

  • Revision: standard edition, document version, and certificate/report revision match the referenced claim.
  • Conditions: pollution degree, altitude/derating, temperature class, and stated test conditions are consistent with intended use.
  • Scope: part family, package, molding/material/process, and manufacturing site coverage match the shipped hardware; PCN history is included.

Audit failure pattern: correct certificate type but incorrect scope (package/material/site) after a change notice.

Diagram · Claim → Evidence Mapping

The diagram maps common datasheet claims to evidence document types and the report sections where the verifying fields typically live.

Claims become defensible only after mapping to the correct evidence document type and report sections under matching conditions and scope.

Hi-Pot / DWV Fundamentals

Hi-pot (dielectric withstand) is a short-duration stress screen across the isolation barrier. It is powerful for catching gross weaknesses, but it can also generate false fails when the test chain, environment, and parameters are not controlled.

Core interpretation: a hi-pot result is meaningful only when setup + ramp/dwell + trip criteria + environment are defined and repeatable.

AC vs DC Hi-Pot (Why Trip Behavior Differs)

AC and DC hi-pot can produce different outcomes on the same hardware because the measured current is composed of different dominant parts. The table below focuses on interpretation, not on fixed numeric settings.

AC Hi-Pot
  • Dominant current: capacitive current through the barrier capacitance (plus leakage components).
  • Trip behavior drivers: frequency, ramp shape, measurement window, filtering, and fixture capacitance.
  • Common false-fail cause: trip threshold set below normal capacitive current, or a fast ramp producing transient spikes.

DC Hi-Pot
  • Dominant current: leakage and absorption behavior after settling (more sensitive to moisture/contamination).
  • Trip behavior drivers: ramp time, dwell time (settling), and leakage path stability under the test environment.
  • Common false-fail cause: dwell too short (no settling), or humidity/ionic contamination creating unstable leakage paths.

Comparison rule: AC and DC results are not comparable unless the condition set is normalized and recorded.

Ramp · Dwell · Trip (Parameter Logic for Production)

The three parameters below shape both yield and defect detection. The objective is to (1) avoid killing good parts via measurement artifacts, and (2) preserve sensitivity to real leakage and surface discharge paths.

Ramp time: control transient spikes and stabilize the measurement window.

  • Too fast → transient current spikes and false trips (especially AC).
  • Too slow → cycle time inflation and increased exposure to environmental drift.
  • Target behavior: a ramp that reaches the setpoint without creating a measurement spike larger than the steady-state envelope.

Dwell time: allow settling so the measurement reflects the intended current component.

  • Too short → DC absorption/settling not complete; pass/fail becomes non-repeatable.
  • Too long → cycle time and unnecessary stress increase; humidity sensitivity can increase.
  • Target behavior: a stable current reading within a defined time window.

Trip current: separate normal capacitive current from abnormal leakage/discharge paths.

  • Too low → yield collapse due to normal capacitive current or fixture variability.
  • Too high → weak screening; gross defects and contamination paths may escape detection.
  • Target behavior: trip threshold sits above the known-good envelope but below the known-bad envelope (established by controlled experiments).
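The dwell and trip targets above can be expressed as two small checks. This is a sketch under simplifying assumptions: current readings arrive as a flat list, stability is judged by a relative band, and the good/bad envelopes come from controlled experiments. The function names, window size, and margins are illustrative.

```python
def is_settled(readings, window=5, band=0.05):
    """Dwell check: True when the last `window` current readings stay
    within a relative stability band, i.e. settling/absorption is done."""
    if len(readings) < window:
        return False
    tail = readings[-window:]
    mean = sum(tail) / window
    return mean > 0 and (max(tail) - min(tail)) / mean <= band

def trip_threshold(good_currents, bad_currents, margin=0.2):
    """Trip placement: above the known-good envelope with a relative
    margin, but below the known-bad envelope; raise if there is no gap."""
    candidate = max(good_currents) * (1 + margin)
    if candidate >= min(bad_currents):
        raise ValueError("no usable gap between good and bad envelopes")
    return candidate
```

If `trip_threshold` raises, the envelopes overlap and the screen cannot separate good from bad parts at that setting: the setup, not the threshold, needs rework.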

False-Fail Sources (Fast Isolation Before Blaming the DUT)

Most production hi-pot escalations start as fixture/environment problems. A fast isolation approach prevents incorrect disposition and unnecessary rework.

Fixture leakage
  • Typical symptom: fails cluster on one station or one socket position.
  • Quick check: run an “empty fixture” test and swap sockets/fixtures.
  • Containment action: clean/replace fixture materials; enforce fixture maintenance and inspection.

Humidity / contamination
  • Typical symptom: fail rate tracks weather or storage time; intermittent behavior.
  • Quick check: record RH and repeat after a controlled drying/bake protocol.
  • Containment action: set RH guardrails; improve cleaning and storage controls; add ionic contamination checks if needed.

Cable routing / shielding
  • Typical symptom: outcome changes after cable movement or replacement.
  • Quick check: swap cables and fix routing; check insulation integrity.
  • Containment action: standardize cable type and routing; add strain relief and periodic inspection.

Parameter mismatch
  • Typical symptom: trips occur instantly on ramp or only near the setpoint.
  • Quick check: verify ramp/dwell/trip and measurement window settings against the controlled baseline.
  • Containment action: lock settings with revision control; require sign-off for any parameter change.

Disposition rule: classify failures only after isolating fixture and environment contributions.

High-Voltage Safety & SOP (Minimum Requirements)

Hi-pot testing must be executed as a controlled process. The minimum set below is written for production readiness and audit defensibility.

  • Interlocks & enclosure: guarded fixtures, interlock switches, and controlled access during HV enable.
  • Discharge path: defined discharge hardware and verification steps before handling any DUT/fixture.
  • Operator control: authorization, training records, and PPE policies appropriate to the HV system.
  • Calibration chain: calibration reference for the tester and a daily/shift self-check log.
  • Stop-and-isolate rules: containment workflow for spikes in fail rate; prevent “retest until pass” behavior.

Diagram · Hi-Pot Equivalent Model (Cbar + Rleak + Trip Path)

The diagram shows the measurement chain and the dominant current paths that drive trip behavior, including fixture leakage paths that cause false fails.

Hi-pot measurement is shaped by barrier capacitance and leakage paths; fixture and surface paths often dominate false failures.

Hi-Pot in Production

Production DWV must balance yield, cycle time, and risk containment. This section standardizes when to run 100% vs sampling, how to convert type-test evidence into production settings (guardband), and how to prevent “retest-to-pass” behavior while maintaining audit-grade traceability.

Production rule: a DWV result is accepted only when the parameter pack, setup verification, and record fields are controlled and repeatable.

100% vs Sampling (Decision Matrix)

The choice between 100% screen and sampling should be justified by consequence and evidence strength—not by habit. The decision below is designed to be defensible in audits and customer safety reviews.

Consequence
  • 100% typically justified when: there is a high consequence of insulation failure (safety / compliance / critical uptime).
  • Sampling typically justified when: the consequence is lower, or system-level mitigations reduce hazard exposure.

Evidence demand
  • 100% typically justified when: a customer or regulatory gate expects unit-level screening evidence.
  • Sampling typically justified when: the audit accepts statistical monitoring because type evidence and traceability are complete.

Process maturity
  • 100% typically justified when: a new process, frequent changes, or unstable yield indicates the need for stronger screening.
  • Sampling typically justified when: the process is stable, with a controlled change history and trending controls in place.

Risk model
  • 100% typically justified when: risk containment requires immediate detection at unit level.
  • Sampling typically justified when: risk is better controlled via sampling, trend alarms, and containment triggers.

Evidence trap: sampling is not acceptable if traceability cannot connect lot → station → settings revision → record.
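The decision matrix collapses into a conservative rule: sampling is defensible only when every driver permits it. A sketch with hypothetical boolean drivers (the names are illustrative, not a standard vocabulary):

```python
def screening_strategy(high_consequence, unit_evidence_required,
                       process_stable, traceability_complete):
    """Return '100%' or 'sampling': any driver that demands unit-level
    evidence, or any missing precondition, forces the 100% screen."""
    if high_consequence or unit_evidence_required or not process_stable:
        return "100%"
    if not traceability_complete:
        return "100%"  # sampling without traceability is not acceptable
    return "sampling"
```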

Guardband (Type Evidence → Production Settings)

A production DWV setting is not a single voltage number. It is a controlled parameter pack that translates qualification evidence into a repeatable production screen while avoiding unnecessary overstress.

Inputs to anchor: evidence document ID, revision, waveform, ramp/dwell, trip criterion, and stated conditions.

  • Conversion goal: catch gross defects and contamination paths while staying within controlled stress limits.
  • Normalization fields: AC/DC mode, ramp, dwell, trip, measurement window, filtering, environment gates.
  • Locking rule: parameter pack must have a revision ID and change-control approval.

Common failure mode: copying type-test settings into production without accounting for fixture variance and measurement window behavior.
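One way to make the locking rule enforceable is to derive the pack revision fingerprint from the parameter values themselves, so any unapproved change is detectable in the station log. A sketch assuming the pack is a flat, JSON-serializable dictionary; the fingerprint length and field names are illustrative:

```python
import hashlib
import json

def pack_fingerprint(pack: dict) -> str:
    """Deterministic fingerprint of a DWV parameter pack; any field change
    (mode, ramp, dwell, trip, window, filter) changes the fingerprint."""
    canonical = json.dumps(pack, sort_keys=True)  # key order independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]
```

Logging this fingerprint with every unit/lot record makes a silent parameter drift show up as a fingerprint mismatch against the approved revision.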

Retest Policy (Prevent “Retest-to-Pass”)

Retest exists to isolate fixture and environment effects—not to erase a failure. The policy below is structured for audit defensibility and consistent disposition.

Fail event
  • Required action: immediate containment: tag the unit/lot and freeze the station if fails spike.
  • Required record fields: fail code + timestamp + station/fixture ID + parameter pack revision.
  • Stop condition: fail rate exceeds threshold → stop and isolate the station/lot.

Fixture check
  • Required action: empty-fixture test, socket swap, cable inspection, guard verification.
  • Required record fields: fixture maintenance action + cable/fixture serial + check result.
  • Stop condition: fixture suspected → quarantine the station until resolved.

Environment check
  • Required action: verify RH/temperature within the gate; apply a controlled dry/clean step if required.
  • Required record fields: temp/RH values + any clean/dry/bake action ID.
  • Stop condition: out-of-gate environment → halt until within limits.

Retest (max once)
  • Required action: one retest only, under controlled conditions with an unchanged parameter pack.
  • Required record fields: retest count + retest result + verification checklist completion.
  • Stop condition: a second retest is not allowed; escalate the disposition workflow.

Disposition
  • Required action: classify as fixture issue / environmental / DUT suspect; trigger sampling escalation if needed.
  • Required record fields: disposition category + containment actions + escalation trigger.
  • Stop condition: unclassified failures cannot be shipped; hold the lot for review.

Audit expectation: retest is acceptable only when the record proves why the retest was allowed and what changed (fixture/environment) before repeating.
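The single-retest policy can be encoded as a gate that station software evaluates before re-arming the test. A sketch with hypothetical record field names:

```python
def may_retest(record):
    """Allow exactly one retest, and only after fixture and environment
    checks are recorded; otherwise route to the disposition workflow."""
    if record["retest_count"] >= 1:
        return False, "second retest not allowed - escalate disposition"
    if not (record["fixture_checked"] and record["env_in_gate"]):
        return False, "complete fixture/environment checks first"
    return True, "retest permitted under unchanged parameter pack"
```

Because the gate reads only recorded fields, the record itself proves why the retest was allowed, which is exactly what the audit expects.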

Production Records (Minimum Field Set)

Records must support rapid traceability from a shipped lot back to the exact test chain and settings revision. The field set below is designed for MES/station logs and audit evidence packs.

  • Traceability: lot/date code, unit ID (if applicable), work order, station ID.
  • Test chain: tester asset ID, calibration reference, fixture ID/revision, cable ID.
  • Parameter pack: AC/DC mode, ramp, dwell, trip, measurement window, filter/window settings, pack revision.
  • Environment: temperature, relative humidity, altitude category (if tracked), gate pass/fail.
  • Accountability: operator/shift, exception code, retest count, disposition category.

Traceability target: lot evidence should be resolvable in minutes to who / where / with what settings / under what conditions.

Audit Q&A (Why This Value and This Policy)

These answers are written as consistent audit language to explain screening strategy and parameter choices without invoking standard text.

Why 100% screening here (or why sampling is acceptable)?
The screening strategy is selected by consequence and evidence demand. High-consequence isolation failures require unit-level screening evidence. Sampling is used only when type evidence is complete and the process is stable with trend alarms and containment triggers.
Why are ramp/dwell/trip values set this way?
The parameter pack is guardbanded from qualification evidence to remain repeatable in production while avoiding unnecessary overstress. Ramp controls transient spikes, dwell ensures settling, and trip separates the known-good envelope from abnormal leakage/discharge paths.
Why is retest limited to once?
Retest is allowed only to isolate fixture/environment effects. Unlimited retest enables “retest-to-pass” and breaks audit defensibility. A single controlled retest with completed verification steps preserves traceability and integrity of the screen.
How is calibration validity proven?
Each station log links the tester asset ID to a calibration reference and includes shift self-check records. Any calibration exceptions trigger station quarantine until resolved.
How is environment influence controlled?
Temperature and RH are treated as explicit gate conditions. Out-of-gate environments require stop-and-isolate actions. Any clean/dry/bake actions are logged with IDs to preserve repeatability and audit traceability.
How is unauthorized parameter change prevented?
Settings are locked as a revisioned parameter pack under change control. Station logs record the pack revision for every unit/lot. Any deviation requires an approved change record before resuming shipment.

Diagram · Production DWV Flow (Setup → Test → Fail Branch → Disposition)

The flow below is written as a production-ready evidence chain: setup verification and environment gates precede testing, and the failure path is controlled to prevent retest abuse.

Production DWV flow with setup verification, environment gates, controlled retest, and disposition—built to preserve yield and audit traceability.

Partial Discharge Basics

Partial discharge (PD) evidence indicates early discharge activity under defined conditions. It is not the same as breakdown. This section focuses on how to interpret PDIV/PDEV, what fields must be normalized to compare results, and how to handle lab-to-lab variation without losing evidence integrity.

PD rule: PD numbers are meaningful only when measurement chain + thresholds + environment + excitation are stated and repeatable.

PDIV / PDEV (Report Reading, Not Physics)

  • PDIV: the voltage level where detectable discharge activity starts under the stated detection threshold and noise floor.
  • PDEV: the voltage level where discharge activity disappears when voltage is reduced, often showing hysteresis under the stated conditions.
  • Apparent charge: a measurement-chain quantity; interpretation requires the coupling network and detector settings.

Misuse to avoid: treating a PD report as a direct replacement for surge/impulse qualification or creepage/clearance compliance evidence.
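To make the hysteresis behavior concrete, the sketch below extracts PDIV and PDEV from an up-then-down voltage sweep. It is a simplified illustration: it assumes the sweep arrives as a list of (voltage, apparent charge) points in test order, and that the detection threshold comes from the report's stated noise floor.

```python
def pdiv_pdev(sweep, threshold):
    """Extract inception (PDIV) and extinction (PDEV) voltages from an
    up-then-down sweep of (voltage, apparent_charge) points."""
    pdiv = pdev = None
    active = False
    for voltage, charge in sweep:
        if not active and charge >= threshold:
            active = True          # discharge activity becomes detectable
            if pdiv is None:
                pdiv = voltage
        elif active and charge < threshold:
            active = False         # activity disappears on the way down
            pdev = voltage
    return pdiv, pdev
```

Note that with a different threshold or noise floor the same sweep yields different PDIV/PDEV values, which is why normalization fields are non-negotiable.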

What Moves PD Numbers (Condition Sensitivity)

PD outcomes change with materials, environment, excitation, and the measurement chain. Comparisons are valid only after normalization.

Material / process
  • Examples: resin, voids, interfaces, curing variability.
  • Typical impact: changes discharge inception behavior and repeatability.
  • Record field to demand: material/process scope, lot correlation, report revision.

Environment
  • Examples: humidity, temperature, altitude.
  • Typical impact: shifts PDIV/PDEV; alters noise and surface behavior.
  • Record field to demand: temperature/RH/altitude stated and controlled.

Excitation
  • Examples: waveform, frequency, ramp/dwell.
  • Typical impact: moves inception/extinction points and discharge patterns.
  • Record field to demand: waveform/frequency, ramp/dwell, dwell window.

Measurement chain
  • Examples: coupling capacitor, measuring impedance, detector bandwidth, gating.
  • Typical impact: changes apparent charge and the detection threshold.
  • Record field to demand: coupling network values, detector settings, noise floor.

Why Two Labs Disagree (Normalization Checklist)

Lab-to-lab differences are often caused by setup and threshold mismatches rather than by hardware changes. The table below standardizes what must be normalized before declaring a disagreement meaningful.

Noise floor / threshold
  • What it changes: PDIV shifts up/down; weak activity becomes invisible/visible.
  • Normalization fields to require: detector threshold, gating settings, measured noise floor, and pass rule.

Coupling network
  • What it changes: apparent charge magnitude changes; detection sensitivity shifts.
  • Normalization fields to require: coupling capacitor value, measuring impedance, calibration injection details.

Excitation definition
  • What it changes: different discharge behavior under different waveform/frequency.
  • Normalization fields to require: waveform type, frequency, ramp rate, dwell window, and sequence.

Environment control
  • What it changes: humidity/altitude alters inception behavior and surface effects.
  • Normalization fields to require: temperature/RH/altitude values, conditioning time, and control method.

Minimum normalization set: waveform/frequency, ramp/dwell, threshold/noise floor, coupling network, environment, sample size, and pass rule.
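Before comparing two labs' PDIV numbers, the minimum normalization set can be diffed field by field. A sketch assuming each report is a flat dictionary; the field names are illustrative stand-ins for the set listed above:

```python
# Illustrative normalization set; real field names come from the
# agreed report schema, not from this sketch.
NORMALIZATION_SET = ("waveform", "frequency", "ramp", "dwell",
                     "threshold", "coupling", "environment", "sample_size")

def comparability_mismatches(report_a: dict, report_b: dict):
    """Two PD reports are comparable only if every normalization field
    matches; return the mismatched fields (empty list == comparable)."""
    return [f for f in NORMALIZATION_SET
            if report_a.get(f) != report_b.get(f)]
```

Only when the returned list is empty is a PDIV difference between the two reports attributable to the hardware rather than the setup.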

How PD Evidence Enters the Evidence Pack

  • Document control: report ID, revision, issuing lab, and standard reference fields.
  • Condition set: excitation + measurement chain + environment captured as explicit fields.
  • Claim mapping: which datasheet claim the PD evidence supports and under what stated conditions.
  • Traceability: part family/package/material/site scope and linkage to lot or qualification batch.
  • Change triggers: PCN/material/site changes that require PD re-evidence under the same normalized condition set.

Diagram · PD Measurement Chain (Source → Coupling → Detector → DUT)

The diagram highlights the measurement chain and control points (noise and calibration injection) that drive PD comparability.

PD comparability depends on the full measurement chain, thresholds, and environment—not on a single number.

PD in Reality

Partial discharge becomes valuable only when it is written as a requirement, supported by the right evidence, and verified by a repeatable acceptance rule. This section converts PD into purchase-ready clauses and review language, and keeps PD and DWV in their correct complementary roles.

Practical rule: PD evidence is acceptable only when conditions + measurement chain + thresholds + scope are explicitly stated.

When PD Evidence Is Typically Required

PD evidence is most defensible when audits and claims demand proof of early discharge activity under controlled conditions. The triggers below are written as review-ready criteria rather than standard text.

Reinforced claim
  • Why PD is demanded: the audit expects evidence beyond a datasheet statement; PD indicates sensitivity to early discharge activity under defined settings.
  • How to scope the request: specify device family/package/material scope and normalized conditions.

High working voltage
  • Why PD is demanded: near-boundary operation increases scrutiny of void/interface activity and of comparability across production variations.
  • How to scope the request: require PDIV/PDEV with explicit environment and detector threshold fields.

Long-life programs
  • Why PD is demanded: such programs demand early-activity evidence and stable process controls, not a single stress-test number.
  • How to scope the request: request sample size, pass rule, and change triggers for re-evidence.

Strict audit industries
  • Why PD is demanded: evidence completeness becomes a gate; missing fields break traceability and invalidate comparisons.
  • How to scope the request: prefer third-party report sections or controlled vendor reports with the full field set.

Evidence trap: a “PD tested” claim without conditions and thresholds cannot be used as an acceptance basis.

How to Specify PD (Fields That Must Be Locked)

A PD requirement must lock a minimum set of fields to keep results comparable across labs and suppliers. The list below is the smallest set that prevents “non-comparable” disputes.

  • Metrics: PDIV, PDEV, apparent charge (if reported), and pass rule.
  • Excitation: waveform, frequency, ramp rate, dwell window, and test sequence.
  • Measurement chain: coupling network (Cc), measuring impedance (Zm), detector bandwidth, gating method.
  • Threshold controls: detection threshold and stated noise floor (or equivalent statement).
  • Environment: temperature, relative humidity, and altitude category (or stated conditioning method).
  • Statistics: sample size, lot/date code (or qualification batch), and part variant/package.
  • Document control: report ID, revision, issuing lab, date, and scope limitations.

Acceptance precondition: if any field above is missing, the PD report is not comparable and cannot close the requirement.
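As an illustration only, the field lock above can be enforced mechanically. The sketch below assumes a PD report is represented as a flat dictionary; every field name is an illustrative placeholder, not a standard identifier.

```python
# Hypothetical sketch: verify a PD report carries the minimum locked field set.
# Field names are illustrative placeholders, not taken from any standard.
REQUIRED_PD_FIELDS = {
    "pdiv", "pdev", "pass_rule",                      # metrics
    "waveform", "frequency", "ramp_rate", "dwell",    # excitation
    "coupling_network", "measuring_impedance",
    "detector_bandwidth", "gating_method",            # measurement chain
    "detection_threshold", "noise_floor",             # threshold controls
    "temperature", "humidity", "altitude_category",   # environment
    "sample_size", "lot_code", "variant",             # statistics
    "report_id", "revision", "lab", "date", "scope",  # document control
}

def missing_pd_fields(report: dict) -> set:
    """Return the locked fields absent (or empty) in a PD report record."""
    return {f for f in REQUIRED_PD_FIELDS if report.get(f) in (None, "")}

def pd_report_is_comparable(report: dict) -> bool:
    """Per the acceptance precondition: any missing field blocks closure."""
    return not missing_pd_fields(report)
```

A report with even one empty field returns non-comparable, which mirrors the review rule that a partial field set cannot close the requirement.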

Requirement → Evidence → Acceptance (Review Logic)

A PD clause is closed only when evidence matches the stated condition set and the acceptance rule can be evaluated without assumptions.

Stage | What must be stated | What is checked
Requirement | PDIV/PDEV targets, normalized conditions, sample size, pass rule. | Condition set is complete and references a document revision policy.
Evidence | Report ID/rev, lab, scope (package/material/site), full field set. | Fields are present; scope matches the supplied parts; revisions are valid.
Acceptance | Explicit evaluation rule (pass threshold + pass rate + conditions match). | Conditions match; pass rule is satisfied; change triggers are defined.
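The three-gate review logic reduces to a pure boolean rule. The sketch below is illustrative; argument names are assumptions chosen to match the stage descriptions, not standard terms.

```python
# Hypothetical sketch of the Requirement -> Evidence -> Acceptance review logic.
def clause_closed(conditions_match: bool,
                  fields_complete: bool,
                  scope_matches: bool,
                  pass_rule_satisfied: bool,
                  change_triggers_defined: bool) -> bool:
    """A PD clause closes only when all three review gates hold together."""
    requirement_ok = conditions_match and change_triggers_defined
    evidence_ok = fields_complete and scope_matches
    return requirement_ok and evidence_ok and pass_rule_satisfied
```

The point of writing it this way is that no single gate (for example, a satisfied pass rule) can close the clause on its own.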

Boundary rule: PD and DWV are complementary. PD does not replace production DWV screening, and DWV does not replace PD comparability evidence.

Spec Template (Copy Into PRD / AVL)

The template below is written to be pasted directly into procurement documents and supplier qualification checklists. Replace bracketed terms with project-specific values.

PD Evidence Requirement
1) Supplier shall provide PD evidence for [device family / package / material scope] supporting [reinforced / high working voltage program].
2) Evidence shall be a report with document ID, revision, issuing lab, date, and stated limitations.

Mandatory Fields
– Metrics: PDIV, PDEV, apparent charge (if reported), pass rule
– Excitation: waveform, frequency, ramp rate, dwell window, sequence
– Measurement chain: coupling network (Cc), measuring impedance (Zm), detector bandwidth, gating method
– Threshold: detection threshold and stated noise floor (or equivalent)
– Environment: temperature, RH, altitude category (or conditioning statement)
– Statistics: sample size, lot/date code or qualification batch, part variant/package
– Scope: package/material/site coverage and exclusions

Acceptance & Change Control
– Acceptance is valid only when conditions match the requirement and the pass rule is evaluable without assumptions.
– Any PCN impacting material, package, site, or process triggers re-evidence under the same normalized field set.

Use case: vendor comparison, audit defense, and preventing “field-missing” disputes during reviews.

Diagram · Requirement → Evidence → Acceptance

The diagram shows the three gates required to close a PD clause: the requirement definition, evidence completeness, and acceptance evaluation.

A PD clause closes only when requirements, evidence completeness, and acceptance evaluation align—with change triggers defined.

CTI & Material Evidence

CTI is often treated as a number, but audits treat it as a documentation and process control problem. This section organizes where CTI evidence lives, how it is tied to real production materials and processes, and how to answer common audit questions without drifting into creepage geometry rules.

Boundary rule: creepage/clearance geometry and distance calculations belong in the Creepage/Clearance page. This section covers evidence and traceability only.

CTI in One Page (Evidence View)

  • What CTI represents: a material classification input used in surface tracking risk discussions and documentation.
  • Where it appears: laminate/material documents, PCB selection records, and pollution-related audit conversations.
  • What matters for reviews: the evidence scope (material, supplier, revision) and its linkage to production usage.

Review framing: CTI evidence is valid only when the document revision and production material traceability are known.

Where CTI Evidence Lives (Minimum Document Set)

Evidence should cover materials and the processes that keep the surface condition stable in production. The list below is structured as an evidence pack index.

Evidence layer | Minimum documents | Fields to capture
Laminate / material | Laminate datasheet or certificate covering CTI classification and material identification. | Material grade, supplier, document ID/rev, scope/limitations, batch linkage method.
Coating (if used) | Coating material evidence + process window + inspection records. | Coating part number, coverage definition, cure window, thickness/verification, record IDs.
Cleanliness control | Cleaning process spec + ionic contamination checks + exception-handling records. | Process parameters, sampling frequency, pass rule, out-of-control actions, traceability fields.
Change control | PCN policy and re-review triggers for laminate, coating, and cleaning changes. | Trigger list, re-evidence scope, approval workflow, revision history.

Common Audit Questions (Answer Skeletons)

The questions below reflect how audits probe CTI-related evidence. Each answer should point to a controlled document, a record field, and a change trigger.

Where is the CTI evidence, and what is the document revision?
CTI evidence is referenced from laminate/material documentation with document ID and revision. The evidence pack links the revision to the approved material grade used in production.
How is production material tied back to the CTI document scope?
Production traceability links laminate supplier/grade and batch identifiers to the approved material entry. Receiving records and work orders preserve the linkage through the build history.
If coating is used, where is coating evidence and control data?
Coating evidence includes material identification, process window, and inspection records. Coverage definition and cure controls are documented, and records are stored with revision control.
How is cleanliness controlled to prevent surface tracking risk escalation?
Cleanliness control uses a defined process spec and a measurable check (e.g., ionic contamination rule). Sampling frequency, pass criteria, and out-of-control containment actions are recorded for traceability.
What changes trigger re-review or re-evidence?
PCNs affecting laminate grade, coating materials, site/process changes, or cleaning parameters trigger re-review. Triggers are listed in change control and tied to evidence pack updates.
Where are creepage/clearance geometry rules handled?
Creepage and clearance geometry and calculations are handled in the dedicated Creepage/Clearance page. This section covers documentation, traceability, and process control evidence only.

Evidence Pack Checklist (Folder-Ready Index)

  • Materials/PCB-Laminate: laminate evidence, CTI field, document ID/rev, approved grade list.
  • Materials/Coating: coating material IDs, coverage definition, cure window, inspection records.
  • Process/Cleanliness: cleaning spec, sampling plan, measurement method, pass rule, exceptions.
  • Process/Records: receiving traceability, batch linkage fields, out-of-control containment.
  • Change-Control/PCN: trigger list, re-review scope, approvals, revision history.

Audit strength comes from consistent linkages: document revision → approved material → production traceability → change triggers.

Diagram · Material → Process → Risk

The diagram frames CTI as a documentation chain that connects materials and process controls to surface tracking risk and audit questions.

CTI is defended through a chain: material evidence and controlled processes drive stable surface conditions and audit-ready answers, with PCN triggers defined.

Test Setup & Fixture Engineering

In production isolation testing, the most expensive failures are often false fails caused by fixtures, contamination, cabling leakage paths, missing self-checks, or an uncontrolled environment. This section turns setup engineering into repeatable controls that protect takt time, yield, and audit defensibility.

Principle: isolate the test chain first. Only then interpret a DUT fail as a product issue.

Fixture Basic Requirements (Design and Verification)

A production fixture must enforce consistent geometry, controlled insulation surfaces, and a predictable discharge path, while preventing operator exposure to hazardous voltage.

Geometry: clear separation and stable contact

  • Maintain controlled spacing with rigid positioning and repeatable clamping.
  • Eliminate sharp edges and burrs that concentrate electric field.
  • Use insulating materials with appropriate dielectric strength and stable surface finish.

Safety: enclosure, interlock, and discharge

  • Safety enclosure with interlock; HV enable requires closed and verified state.
  • HV indicator and emergency stop; define fail-safe behavior on interlock open.
  • Controlled discharge path with verify-zero step before access.

Safety note: fixture design must prevent access to energized nodes and must enforce a verified discharge step before handling.

Cables & Connections (Leakage Path Control)

Cabling and connectors frequently form unintentional leakage paths. Control is achieved by routing discipline, consistent shielding strategy, and a defined cleaning and fastening routine.

  • HV lead routing: keep leads away from grounded metal edges, sharp corners, and parallel proximity to sensitive wiring.
  • Shield strategy: document the shield termination method and keep it consistent across stations and shifts.
  • Connector integrity: define fastening torque and strain relief; prevent looseness-induced micro-arcing.
  • Clean surfaces: schedule cleaning of sockets, guards, and cable ends; track frequency and exceptions.

False-fail signature: failures cluster by station/fixture and disappear when cable routing or cleaning is corrected.

Calibration & Shift Self-Check (Minimum Set)

Self-check is not a yield tool. It is a data-validity gate that makes results defensible. The minimum set below is designed for every shift.

1) Empty-fixture baseline

Measure leakage baseline with no DUT. Any drift indicates contamination, cabling leakage, or fixture damage.

2) Reference part / standard check

Run a known-good reference to validate repeatability and confirm the current trip logic behaves as expected.

3) Discharge verification

Verify discharge path and verify-zero step before access. Record discharge time and confirmation result.

4) Interlock + safety function check

Confirm interlock disables HV, indicator behavior is correct, and emergency stop forces a safe state.

5) Record control fields

Log timestamp, station ID, operator, equipment ID, software revision, and pass/fail of the self-check.

If the self-check is missing or incomplete, test data should be treated as non-defensible in audits and investigations.
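One way to encode the five-check gate is a small record type whose validity flag is computed from all checks at once. This is a sketch only: field names, the leakage limit, and the record-field list are illustrative placeholders for site-defined values.

```python
# Hypothetical sketch: the shift self-check as a data-validity gate.
from dataclasses import dataclass, field

@dataclass
class ShiftSelfCheck:
    baseline_leakage_ua: float   # 1) empty-fixture leakage measurement
    baseline_limit_ua: float     #    site-defined limit (the "X uA" placeholder)
    reference_part_pass: bool    # 2) known-good reference unit result
    discharge_verified: bool     # 3) verify-zero step confirmed and recorded
    interlock_ok: bool           # 4) interlock / indicator / E-stop check
    record_fields: dict = field(default_factory=dict)  # 5) control fields

    REQUIRED_RECORDS = ("timestamp", "station_id", "operator",
                        "equipment_id", "software_rev")

    def data_is_defensible(self) -> bool:
        """All five checks must pass before DUT results count as valid."""
        records_complete = all(self.record_fields.get(k)
                               for k in self.REQUIRED_RECORDS)
        return (self.baseline_leakage_ua <= self.baseline_limit_ua
                and self.reference_part_pass
                and self.discharge_verified
                and self.interlock_ok
                and records_complete)
```

Modeling the gate as a single computed flag makes the audit rule explicit: an incomplete self-check record invalidates the shift's data, not just one check.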

Environmental Gates (Humidity, Dust, Ionic Contamination)

Environment turns surface leakage and discharge into a random variable. Production testing requires defined gates and exception handling.

  • Humidity: treat RH as a gate variable; define action when RH exceeds the allowed window.
  • Dust control: enclosure and cleaning prevent conductive films that cause intermittent surface discharge.
  • Ionic contamination: process control and periodic verification prevent silent leakage escalation.
  • Environment records: log temperature/RH and link to batch records for traceability.

Gate behavior: if environment is out of window, pause testing or execute conditioning and re-verify baselines before resuming.
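The gate behavior can be sketched as a simple pre-test check that returns a run/pause decision with a reason. The temperature window and RH limit below are placeholder values for illustration, not recommended limits.

```python
# Hypothetical sketch: environment as a hard gate before testing resumes.
def environment_gate(temp_c: float, rh_pct: float,
                     temp_window=(15.0, 35.0), rh_max=60.0):
    """Return ('RUN', reason) or ('PAUSE', reason) per the gate behavior.

    Window and limit defaults are illustrative placeholders; a real station
    would load them from the revision-controlled test program.
    """
    if not (temp_window[0] <= temp_c <= temp_window[1]):
        return ("PAUSE", "temperature out of window")
    if rh_pct > rh_max:
        return ("PAUSE", "humidity above gate")
    return ("RUN", "in window")
```

A PAUSE result implies conditioning plus a baseline re-verification before restart, matching the gate behavior stated above.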

False Fail Triage (Containment Without “Re-test to Pass”)

A controlled triage sequence prevents costly scrap and prevents “re-test to pass” behavior that destroys audit credibility.

Step A: Pattern sanity check

Check clustering by station, fixture ID, operator, shift, or environment window before touching the DUT.

Step B: Empty-fixture baseline + reference part

If baseline or reference fails, isolate the station and service fixture/cabling before any DUT disposition.

Step C: Clean and inspect

Inspect socket, guards, and lead ends; clean; verify routing and shield termination; re-run baseline.

Step D: Discharge path verification

Confirm discharge function and verify-zero behavior; residual charge can create misleading trip events.

Step E: One controlled re-test (max)

Allow one re-test only after a documented corrective action. Log what changed and why it isolates the test chain.

Step F: Disposition

If failure persists under a verified chain, treat as DUT failure per policy; otherwise treat as station issue and contain.

Re-test rule: repeat testing without documented station correction is not acceptable for audit defense.
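The Step A through F sequence can be sketched as an ordered decision function. Inputs and messages are illustrative; a real triage flow would also log the evidence behind each gate.

```python
# Hypothetical sketch of the false-fail triage sequence (Steps A-F).
def triage(fail_clusters_by_station: bool,
           baseline_ok: bool,
           reference_ok: bool,
           cleaned_and_reverified: bool,
           discharge_ok: bool,
           controlled_retest_passed: bool) -> str:
    # Steps A/B: station-level signatures point at the test chain, not the DUT.
    if fail_clusters_by_station or not baseline_ok or not reference_ok:
        return "STATION_ISSUE: isolate station, service fixture/cabling"
    # Steps C/D: chain must be verified clean with a working discharge path.
    if not cleaned_and_reverified or not discharge_ok:
        return "STATION_ISSUE: clean/inspect, verify discharge, re-run baseline"
    # Step E: at most one controlled re-test, after documented corrective action.
    if controlled_retest_passed:
        return "PASS: log the corrective action that isolated the test chain"
    # Step F: persistent failure under a verified chain is a DUT failure.
    return "DUT_FAIL: disposition per policy"
```

The ordering is the point: the function cannot reach a DUT disposition until every test-chain gate has been cleared, which is what blocks "re-test to pass" behavior.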

Diagram · Fixture Block Diagram

The block diagram shows where false fails originate: HV routing, enclosure safety, socket surfaces, and the discharge and verification path.

False fails typically originate from the test chain: routing, enclosure controls, socket surfaces, discharge verification, and gated environment.

Documentation Pack

A production-ready evidence pack is not a document pile. It is a version-controlled archive that links claims to reports, makes key fields easy to retrieve, and answers audit questions in minutes. This section defines what to request, what to archive, and how to structure evidence for traceability and change control.

Evidence pack success metric: a reviewer can locate key fields for a part number within 10 minutes.

Must-Have Documents (Audit-Defensible Minimum)

  • Certificates: certificate ID, revision, validity window, scope and limitations.
  • CB report / test report: report ID, revision, key sections that support insulation claims and conditions.
  • Traceability: device family, part number, package/variant, lot/date code linkage, test station records.
  • PCN/PDN policy: change triggers, notification timing, re-evidence rules, superseded document handling.

Without document ID, revision, and scope coverage, a certificate or report cannot close a claim in audits.

Optional Documents (High Value for Fast Reviews)

  • Process control summaries: station self-check policy, environment gates, containment rules.
  • Incoming inspection: receiving checks and acceptance criteria for critical variants.
  • Storage and handling spec: moisture and contamination controls for sockets, fixtures, and sensitive materials.
  • SOP excerpts: operator steps for discharge verification and exception handling.

Optional evidence reduces audit cycle time by making process stability and traceability visible.

Evidence Pack Table (What to Request and Where to Find Key Fields)

This table converts audit requirements into a retrieval index. It clarifies who issues each document, how long it is valid, and where key fields typically appear in the document.

Doc name | Who issues | Validity | Where key fields appear | Key fields to extract
Certificate | Certification body | Validity window + scope | Front page + scope section | ID, rev, scope, limitations
CB report | Accredited lab | Report rev controlled | Ratings + conditions sections | Working-voltage evidence, conditions, coverage
PD report | Lab or vendor | Rev + condition set | Setup + threshold + results | PDIV/PDEV, threshold, chain, sample size
DWV plan | Manufacturing | Program rev controlled | Test program + trip logic | Ramp/dwell/trip, retest policy, records
Traceability log | Manufacturing | Per lot/batch | MES / station records | Lot/date code, station ID, program rev, env
PCN/PDN policy | Vendor | Rev + history | Change trigger section | Triggers, notification, re-evidence rules

Archiving Rules (Structure and Version Control)

  • Indexing: Device family → Part number → Revision → Validity window → Conditions and scope.
  • Mandatory metadata: source, date, doc ID, revision, scope, superseded-by reference.
  • One-page summary: maintain a summary sheet per part/variant to list extracted key fields and where they came from.
  • Superseded handling: keep older documents read-only with explicit superseded markers and replacement links.

Audit failure pattern: valid documents exist but cannot be linked to the exact part variant, revision, and condition set used in production.
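A minimal sketch of the indexing and metadata rules, assuming a folder-per-revision layout and a flat metadata dictionary. Both the layout and the field names are illustrative, not a mandated schema.

```python
# Hypothetical sketch: archive indexing and mandatory-metadata checks.
import os

MANDATORY_METADATA = ("source", "date", "doc_id", "revision", "scope")

def archive_path(root: str, family: str, part_number: str, revision: str) -> str:
    """Build the archive path per the indexing rule:
    device family -> part number -> revision (layout is illustrative)."""
    return os.path.join(root, family, part_number, revision)

def metadata_gaps(doc_meta: dict) -> list:
    """Return missing mandatory fields; a superseded document must also
    carry a 'superseded_by' reference to its replacement."""
    gaps = [k for k in MANDATORY_METADATA if not doc_meta.get(k)]
    if doc_meta.get("superseded") and not doc_meta.get("superseded_by"):
        gaps.append("superseded_by")
    return gaps
```

Running `metadata_gaps` at archive time is one way to catch the audit failure pattern above before a reviewer does: a document that exists but cannot be linked to its part, revision, and replacement.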

What Auditors Ask (Fast Retrieval Targets)

  • Which report section supports the insulation claim and under what conditions?
  • What is the document revision and validity window, and what is explicitly excluded?
  • Does the report scope cover the package/material/site used in production?
  • Where are PD conditions and thresholds stated, and are they comparable across evidence?
  • Where is production screening policy (100% vs sampling) documented, including retest containment?
  • What are PCN triggers and how do they force re-evidence and re-approval?
  • Can a specific field return be traced to a lot/date code and a station program revision?
  • How is superseded evidence prevented from being used for new builds?

Retrieval target: part number + variant + rev → key fields → evidence source within 10 minutes.

Diagram · Doc Pack Folder Map

The folder map shows an archive structure that scales with part families, controls revisions, and keeps auditors focused on the right evidence.

Archive by family/part/revision and keep certificates, reports, traceability, and change control together with a summary sheet for fast audits.

Change Control & Re-qualification (PCN happens—how compliance stays intact)

Why this matters (audit & field failure modes)

  • Evidence can become invalid after package/material/site/process changes, even if the device still functions electrically.
  • Breaks typically occur in traceability: mixed inventory, unclear date-code mapping, or missing “impact assessment” documents.
  • Re-test without control (retest-to-pass) creates audit exposure and hides fixture contamination or setup drift.

Outcome: every PCN must map to (1) impacted claims, (2) impacted evidence scope, (3) required bridging/re-qualification actions, and (4) batch segregation & release gates.

Change types that usually trigger re-qualification

  • Mold compound / encapsulant → may alter partial discharge behavior and lifetime applicability of prior reports.
  • Leadframe / internal geometry → may shift electric-field distribution; report scope may no longer match construction.
  • Barrier process → direct impact to isolation claim evidence (routine/type test correlation may break).
  • Subcontractor / assembly flow → process window changes require re-linking consistency evidence to production reality.
  • Site transfer → requires proof that the qualified construction & process are preserved at the new site.

What to demand in a PCN package (minimum set)

  • Advance notice window (weeks/months) plus last-order / last-ship dates.
  • Impact assessment: which claims (reinforced / working voltage / PD tested / DWV) are affected and why.
  • Bridging plan: tests, conditions, sample size, and delivery schedule for updated evidence.
  • Traceability mapping: old vs new date codes / site codes / marking differences and how receiving can detect them.

Audit-friendly wording: “Evidence scope remains valid under the new construction and process window, proven by bridging results and traceable segregation of affected lots.”

Batch segregation & field disposition (non-negotiables)

  • Incoming segregation by date code / lot / site marking; no mixed bins until approval is explicit.
  • WIP segregation in MES/routers: build history must preserve pre/post-change lineage.
  • Release gate rule:
    • Pre-change lots → released under the pre-change evidence pack.
    • Post-change lots → released only after bridging / re-qualification evidence is approved.
    • Unverifiable lots → hold; trigger supplier clarification or controlled re-test per policy.
  • Substitution control: alternate sources require claim-to-evidence mapping and documented equivalency bridging.

Re-qualification checklist (close the loop)

  • PCN received, logged, and assigned an internal change ID.
  • Claims impacted are listed (reinforced / VIORM / PD / DWV / insulation system).
  • Evidence scope checked: package, material set, site, pollution degree/altitude conditions.
  • Bridging tests defined with conditions & sample size; results linked to report ID and revision.
  • Receiving/MES updated to recognize new markings/date codes; old documents marked superseded.
  • Mixed inventory disposition defined; hold/release gates documented and trained.
  • Audit binder updated: certificates, CB report references, test reports, traceability evidence, PCN/PDN policy.
[Diagram · Change → Impact → Action (PCN decision map): change types (mold/encapsulant, leadframe/geometry, barrier process, site/subcon) feed impact checks (claims affected? evidence scope match? traceability changed? bridging required?), which branch to actions: re-qualification (major), bridging (moderate), document update (minor), or hold/segregate lots. Release gate: pre-change lots → pre-change evidence; post-change lots → approved bridging/re-qual evidence; unknown → hold.]
Diagram intent: convert PCN notifications into an auditable decision path with explicit release gates and lot segregation.

Engineering Checklist (Design → Bring-up → Production Gates)

Design Gate (spec & sourcing — lock the evidence chain)

  • Isolation claims are written as requirements (reinforced/basic, working voltage, lifetime conditions).
  • Production DWV policy is defined: 100% vs sampling, guardband logic, and controlled retest policy.
  • PD requirement decision is explicit (required / not required) with report fields and conditions listed.
  • Evidence pack requirements are embedded into PRD/AVL (certificate/CB report sections, validity, scope).
  • Traceability fields are defined: lot/date code, test station ID, program revision, environment record.
  • PCN/PDN terms are contracted: notice window, impact assessment, bridging evidence deliverables.
  • Second-source strategy includes claim-to-evidence mapping and bridging rules before substitution approval.

Bring-up Gate (pilot runs — validate fixture & judgement)

  • Fixture safety is proven: interlock, guarding, discharge path, and E-stop behavior are verified and recorded.
  • Baseline repeatability is measured using a reference unit and a “blank fixture” leakage baseline.
  • Environment controls are enforced (humidity/contamination controls) with a stop-the-line threshold.
  • Fail handling is rehearsed: fixture check → controlled single retest → disposition with corrective action logged.
  • Evidence indexing template is created (device family/part number/revision/validity/conditions).
  • Training is executed: operators can locate key evidence fields within a fixed audit script.

Practical objective: false fails are eliminated before ramping volume, and the Pass/Fail definition is frozen for audits.

Production Gate (sustainment — calibration, traceability, and closed-loop action)

  • Shift-level self-checks are mandatory: leakage baseline, discharge function, interlock test, and program revision check.
  • Calibration control is enforced: instrument cal status, seal labels, and “out-of-cal” containment procedure.
  • SPC monitoring is active: fail clustering by fixture, station, time window, environment, and operator.
  • Evidence pack governance is maintained: revisions, superseded documents locked out, and validity windows tracked.
  • PCN response is routinized: affected inventory auto-hold, segregation verification, release only by approved bridging evidence.
  • Field return workflow is aligned: lot mapping, evidence snapshot, and change-boundary verification first.

Reference fixture BOM (example material part numbers)

Example items commonly used to build a safe DWV/PD station. Verify voltage category, local safety rules, and availability before adoption.

Category | Vendor | Part number | Use in a production test station
Safety relay | Pilz | PNOZ X2.1 (774306) | Monitors E-stop / safety-gate channels; enables HV only when interlocks are valid.
Door/guard interlock switch | Omron | D4N-412G | Fixture enclosure door-state sensing (guard closed) as a hard interlock input.
Emergency stop (E-stop) | Schneider Electric | XB4BS8445 | Operator-accessible latching stop; forces HV disable and safe discharge action.
HV relay / HV enable | TE Connectivity (Kilovac) | K70A841 (1618277-1) | Controlled HV switching under safety-relay logic (HV on/off state management).
SHV straight plug | Kings | 1705-1 | High-voltage coax interface for HV lead connection with safer recessed-contact design.
SHV bulkhead adapter | Kings | 1709-1 | Bulkhead feedthrough/adapter for enclosure panel routing while maintaining outer-ground continuity.
SHV jack | Amphenol RF | 31-221-RFX | Panel jack option for the SHV interface; supports clean enclosure wiring and serviceability.
SHV plug (cable attach) | Pasternack | PE4497 | Crimp/solder cable-end plug option for controlled HV lead builds and replacements.
Discharge/bleeder resistor (HV) | Caddock | Type MX (e.g., MX485-100M-1%) | Controlled discharge path after HV disable; reduces residual energy before fixture access.
Traceability & segregation labels | Brady | B33-91-499 | Lot/date-code segregation labels for bins, trays, and fixtures; supports audit traceability.

Station rule: the HV relay is permitted only when interlock + E-stop + safety relay chain is valid, and a verified discharge path exists.

[Diagram · Quality gates timeline (Design → Pilot → MP → Audit/Field): Design Gate carries spec clauses, the evidence list, and PCN terms; Bring-up Gate carries fixture verification, fail handling, and the index template; Production Gate carries shift checks, calibration control, and traceability; Audit/Field carries the evidence pack, PCN history, and lot mapping. Single rule: tests, documents, and traceability must advance together; any gap becomes an audit finding or a field dispute.]
Diagram intent: treat production test and documentation as a gated system—requirements, fixtures, records, and PCN handling move together.


FAQs (Review / Acceptance / Field Disputes)


Each answer is constrained to the production evidence chain: DWV/Hi-pot, PD, CTI evidence, fixture/setup, traceability, and PCN control. Numeric placeholders are standardized as X (threshold), Y (time window / sample size), N (count limit).

Hi-pot passes at R&D, fails on the line—first suspect DUT or fixture?

Likely cause: False fail from fixture leakage/contamination, HV lead routing, missing discharge verification, or environment drift—not a DUT defect.

Quick check: Run blank-fixture baseline at the same test voltage; baseline leakage ≤ X µA for Y min. Verify a reference unit pass-rate ≥ (100 − X)% and check clustering across N stations/shifts.

Fix: Clean socket/guards, re-route the HV lead away from grounded edges, verify discharge/verify-zero, then re-run the baseline; enforce at most N controlled retests after corrective action.

Pass criteria: Blank baseline ≤ X µA stable for Y min; reference unit passes; retest count ≤ N; no station-specific fail clustering.

AC hi-pot trips instantly—how to separate capacitive current vs leakage?

Likely cause: Capacitive inrush during ramp triggers the trip, or the trip threshold is set below normal capacitive/background current.

Quick check: Compare current vs time: a spike that decays within Y ms is capacitive; sustained current > X µA indicates leakage. Repeat with a slower ramp and check if instant trips drop below N per 1000 units.

Fix: Increase ramp time, delay trip-enable until after settle, and keep dwell separate from ramp; if needed, use DC to isolate steady leakage while keeping retest limit ≤ N.

Pass criteria: Instant-trip rate ≤ N/1000 with ramp ≥ Y ms; steady leakage ≤ X µA under the defined dwell window.
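One hedged way to implement the capacitive-versus-leakage separation is to classify a post-ramp current trace by whether current above the limit persists past the decay window. Sample spacing and thresholds below stand in for the X/Y placeholders and are illustrative only.

```python
# Hypothetical sketch: separate capacitive inrush from sustained leakage.
def classify_trip(samples_ua: list, dt_ms: float,
                  decay_window_ms: float, leakage_limit_ua: float) -> str:
    """samples_ua: current samples taken after HV reaches the target level.

    A spike that decays within the window (the 'Y ms' placeholder) reads as
    capacitive; current above the limit (the 'X uA' placeholder) that
    persists past the window reads as leakage.
    """
    n_window = max(1, int(decay_window_ms / dt_ms))
    settled = samples_ua[n_window:] or samples_ua[-1:]
    if max(settled) > leakage_limit_ua:
        return "LEAKAGE"      # sustained current beyond the decay window
    if max(samples_ua[:n_window]) > leakage_limit_ua:
        return "CAPACITIVE"   # spike confined to the decay window
    return "NORMAL"
```

This also motivates the fixes in the answer above: slowing the ramp and delaying trip-enable until after the settle window keeps the CAPACITIVE case from ever registering as a trip.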

DC hi-pot passes, but audit asks for PD—can DWV substitute PD?

Likely cause: Requirement mismatch: DWV is a withstand/screening proof, while PD is evidence of discharge activity under defined measurement conditions.

Quick check: Confirm whether the contract/regime explicitly requires PD evidence. If PD fields (PDIV/PDEV, threshold, conditions, sample size) are missing, treat it as an evidence gap with unresolved items count N.

Fix: Request a PD report or run bridging PD with a locked condition set; do not claim DWV “covers PD” unless the acceptance document explicitly allows that substitution (set a deadline window Y for closure).

Pass criteria: PD report provided with threshold ≤ X pC, sample size ≥ Y units, and nonconforming count ≤ N under the stated conditions.

Same part, different lab results—what test-condition fields must be normalized?

Likely cause: The condition set is not aligned (waveform/frequency, detector bandwidth, threshold, humidity/altitude, sample size), so results are not comparable.

Quick check: Build a “must-match” list and verify ≥ Y fields are present in both reports; if missing-field count > N, the comparison is invalid. Enforce numeric alignment within ±X% for key electrical settings.

Fix: Re-test with a single normalized condition sheet (same threshold/bandwidth/waveform/environment) or mark the pair as non-comparable and request an accredited bridging report within Y days.

Pass criteria: Required fields completeness = 100%; mismatch count ≤ N; numeric conditions within ±X%; results referenced by report ID + revision.

PD result is “good” but field failures happen—what PD can’t tell you?

Likely cause: PD is not a full-field reliability guarantee; it cannot prove absence of contamination/handling damage, mixed-lot issues, or missing change-control boundaries.

Quick check: Trace a return to lot/date code + evidence revision. If trace time > X min or the PCN boundary is unknown in N cases, the evidence chain is the actual gap.

Fix: Tighten traceability, segregation, and release gates; treat PD as one evidence element and close the process gaps (PCN handling, archival index, fixture gates) before escalating PD limits.

Pass criteria: Trace time ≤ X min for Y sampled lots; mixed-lot count N = 0; PCN boundary is explicit for every shipped lot.

Trip current was reduced to “be safe” and yield collapsed—what changed?

Likely cause: The trip threshold dropped below normal capacitive/background current, converting normal behavior into fail decisions (false fails dominate).

Quick check: Compare program revisions and measure blank-fixture peak/steady current. If baseline exceeds X µA or instant trips exceed N/1000 within Y minutes, the station setting—not the DUT—is responsible.

Fix: Set trip above the measured baseline with a defined guardband, delay trip-enable until after settle, and enforce baseline gates each shift; cap retest to N.

Pass criteria: Baseline ≤ X µA (stable for Y min); yield recovery meets target; false-fail rate ≤ N/shift with revision-controlled settings.
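A minimal sketch of guardbanded trip setting, assuming a multiplicative guardband over the measured blank-fixture baseline. The factor and floor are illustrative defaults, not policy values.

```python
# Hypothetical sketch: set the trip above the measured station baseline
# with a defined guardband, instead of lowering it "to be safe".
def trip_threshold_ua(baseline_peak_ua: float,
                      guardband_factor: float = 2.0,
                      floor_ua: float = 1.0) -> float:
    """Trip = baseline x guardband, never below a floor; both knobs are
    placeholders for revision-controlled program settings."""
    return max(baseline_peak_ua * guardband_factor, floor_ua)
```

Deriving the trip from a measured baseline (and re-measuring it each shift) is what prevents a program-revision change from silently converting normal capacitive/background current into fail decisions.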

Retest makes fails disappear—what’s the correct retest policy?

Likely cause: Uncontrolled retesting masks fixture contamination, environment drift, or operator variance (“retest-to-pass” destroys audit defensibility).

Quick check: Audit retest logs: if retest count per unit > N or corrective action is missing in the last Y failures, the policy is broken.

Fix: Allow at most N controlled retests, and only after documented station corrective action (clean/reroute/verify discharge + baseline pass). Otherwise hold the lot and service the station.

Pass criteria: Retest count ≤ N; corrective-action log coverage = 100%; station baseline passes before release; closure within Y minutes for each event.
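A retest gate in this spirit can be sketched as below. MAX_RETESTS = 1 and the log-record field names are illustrative assumptions:

```python
# Sketch: gate a retest request against the controlled-retest policy.
# MAX_RETESTS and the log-record shape are illustrative assumptions.

MAX_RETESTS = 1  # the SOP's "at most N"; 1 is an assumed value

def retest_allowed(unit_log: dict) -> bool:
    """Permit a retest only after documented station corrective action
    and a fresh baseline pass; anything else means hold the lot."""
    return bool(
        unit_log.get("retest_count", 0) < MAX_RETESTS
        and unit_log.get("corrective_action_logged", False)
        and unit_log.get("station_baseline_pass", False)
    )
```

Making the corrective-action record a hard precondition is what turns "retest-to-pass" into an auditable event instead of a silent yield lever.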

Humidity spikes correlate with false fails—what’s the fastest containment?

Likely cause: Elevated humidity increases surface leakage/flashover probability; contamination amplifies sensitivity in fixtures and sockets.

Quick check: Align the RH log with fail timestamps; re-run the blank baseline at the current RH. If RH > X% and baseline drift appears within Y min, stop-line containment is required; track the impacted-unit count N.

Fix: Pause testing, dehumidify/condition, clean fixtures/sockets, then re-verify baseline and reference unit before restart; segregate N impacted units for controlled disposition.

Pass criteria: RH ≤ X% for Y min before restart; post-restart false fails ≤ N/1000; baseline returns to within threshold.
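Aligning the RH log with fail timestamps can be done mechanically. The 60% limit and 5-minute matching window below are assumed placeholders for the SOP's X and Y:

```python
# Sketch: flag fails that coincide with high-RH readings.
# The 60% limit and 5-minute window are illustrative assumptions.

RH_LIMIT_PCT = 60.0
WINDOW_MIN = 5.0

def fails_during_high_rh(rh_log, fail_times):
    """rh_log: list of (minute, rh_pct) samples; fail_times: list of minutes.
    Returns the fail timestamps that fall within WINDOW_MIN of an RH
    reading above the limit -- candidates for stop-line containment."""
    flagged = []
    for t in fail_times:
        for minute, rh in rh_log:
            if abs(minute - t) <= WINDOW_MIN and rh > RH_LIMIT_PCT:
                flagged.append(t)
                break
    return flagged
```

Fails that do not correlate with RH spikes stay in the normal FA path; only the flagged population is segregated for controlled disposition.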

Vendor says “reinforced” but can’t provide CB report—what do you do?

Likely cause: A marketing claim without acceptable third-party evidence, or the evidence scope does not cover the exact PN/package/conditions used.

Quick check: Request certificate/report ID, revision, scope, and limitations. If any key artifact is missing or unverifiable within Y business days, log it as an evidence gap (count N).

Fix: Hold approval or downgrade the claim in the spec; require the missing report or choose an alternate part with a complete evidence pack and explicit scope coverage.

Pass criteria: Evidence pack completeness = 100%; scope matches PN/package/conditions; unresolved gap count N = 0; retrieval time ≤ X min.
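The completeness-plus-scope decision above reduces to two checks, sketched below. The artifact and scope field names are illustrative assumptions:

```python
# Sketch: verify a vendor "reinforced" claim against its evidence pack.
# Field names are illustrative; the scope must match the exact PN,
# package, and conditions in use.

REQUIRED_ARTIFACTS = {"certificate_id", "cb_report_id", "report_revision", "scope"}

def claim_accepted(evidence: dict, pn: str, package: str) -> bool:
    if not REQUIRED_ARTIFACTS.issubset(evidence):
        return False  # completeness < 100% -> evidence gap, hold approval
    scope = evidence["scope"]
    return bool(pn in scope.get("part_numbers", []) and package == scope.get("package"))
```

Note that a complete pack for the wrong package still fails: scope coverage, not document count, is the acceptance criterion.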

CTI evidence missing—what documents are acceptable substitutes?

Likely cause: CTI is a materials evidence-chain problem (documented material class + process control), not only a “design number.”

Quick check: Collect acceptable evidence: a material datasheet stating CTI, recognized material listings, or a CoC showing the material set and revision. If the CTI class/value cannot be proven ≥ X from ≥ N sources within Y days, treat the material as non-acceptable.

Fix: Lock an approved material evidence pack (source + revision + scope) and link it to the part family; require incoming documentation to match the approved revision before use.

Pass criteria: CTI evidence sources ≥ N; CTI proven ≥ X; document revisions match; evidence is retrievable within Y min.

PCN issued for molding compound—what re-qual tests are minimal?

Likely cause: Encapsulant change can shift insulation-system behavior; prior evidence scope may no longer be valid for PD and related claims.

Quick check: Obtain the PCN impact assessment + traceability mapping. If the affected-claim list is incomplete or lots cannot be separated within Y days, place affected inventory on hold (count N).

Fix: Minimal bridging set: PD under the matched condition set + correlation to production DWV screening behavior; update evidence pack revision and enforce pre/post-change lot segregation.

Pass criteria: PD threshold ≤ X pC with sample size ≥ Y and nonconforming count ≤ N; lots are segregated and released only by approved evidence revision.
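The bridging-sample acceptance rule (PD ≤ X pC, sample ≥ Y, nonconforming ≤ N) can be sketched as follows; the specific limits here are assumed placeholders, not qualified values:

```python
# Sketch: accept a post-PCN PD bridging sample. The threshold, sample
# size, and allowed nonconforming count are illustrative placeholders
# for the SOP's X, Y, and N.

PD_LIMIT_pC = 5.0
MIN_SAMPLE = 30
MAX_NONCONFORMING = 0

def bridging_accepted(pd_results_pC: list[float]) -> bool:
    if len(pd_results_pC) < MIN_SAMPLE:
        return False  # sample size below Y: evidence is inconclusive, not passing
    nonconforming = sum(1 for v in pd_results_pC if v > PD_LIMIT_pC)
    return nonconforming <= MAX_NONCONFORMING
```

An undersized sample is rejected outright rather than passed provisionally, which is what keeps pre- and post-change lots from mixing under an unproven evidence revision.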

How do you archive evidence so auditors can trace a shipped lot in 5 minutes?

Likely cause: Scattered files, missing index, superseded docs not locked out, and no explicit mapping from shipped lot → evidence revision/conditions.

Quick check: Pick a shipped lot and time the retrieval. If trace time > X min for any of Y sampled lots, the archive is not audit-ready; log the retrieval-failure count N.

Fix: Enforce folder schema (family/PN/rev/validity/conditions) + a one-page summary index; mark old documents superseded and prevent use in new builds; keep traceability logs linked by lot/date code.

Pass criteria: Trace time ≤ X min for Y sampled lots; retrieval failure count ≤ N; superseded-doc usage count = 0.
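The timed trace audit can be sketched as below. The 5-minute budget mirrors the question above; the lookup dict stands in for the real archive index and the timing data is assumed to come from the audit drill:

```python
# Sketch: time-boxed trace audit over sampled shipped lots.
# The index dict and timing inputs are assumptions standing in for
# the real archive index and measured retrieval times.

TRACE_BUDGET_MIN = 5.0

def audit_archive(index: dict, sampled_lots: list[str], trace_times_min: dict):
    """Returns (audit_ready, failures): a lot fails if it is missing
    from the index or its retrieval exceeded the time budget."""
    failures = [
        lot for lot in sampled_lots
        if lot not in index or trace_times_min.get(lot, float("inf")) > TRACE_BUDGET_MIN
    ]
    return (len(failures) == 0, failures)
```

Running this drill on randomly sampled lots each quarter is a cheap way to prove the folder schema and summary index actually work before an auditor tries them.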