Cable Diagnostics (TDR, Return-Loss, SNR) for Industrial Ethernet
Cable diagnostics turns “link problems” into measurable evidence: where the fault is (TDR distance), how good the channel is (return-loss margin), and how stable it runs online (SNR/counters).
With a consistent setup and pass criteria, field teams can quickly separate real cable/harness defects from setup artifacts, and produce a one-page report that is repeatable from lab to production to service.
H2-1 · What Cable Diagnostics Can and Cannot Solve
Cable diagnostics turns “link instability” into repeatable evidence by separating hard faults, quality degradation, and runtime instability. The goal is to produce three deliverables: distance, margin, and logs.
Problem types (classify first)
- Hard faults: opens/shorts, broken conductor, intermittent contact (bend/vibration/plug triggers).
- Quality degradation: impedance discontinuities, reflections, loss/crosstalk reducing margin (often speed/temperature sensitive).
- Runtime instability: noise bursts, sporadic errors, training/margin dips seen only under load or events.
Fast rule: no-link → suspect hard fault first; runs but errors → margin/runtime evidence is required.
Outputs (deliverables, not just numbers)
Distance: “discontinuity at X m from port (±Y m).”
Margin: “worst return-loss band / minimum SNR margin is X against threshold Y (placeholders).”
Logs: settings + timestamp + environment + counters window definition + screenshots (audit-ready).
When to run (minimal-cost escalation)
- Field service: start with port self-test / online counters → quick TDR for location → deeper RL only if needed.
- Production: quick screen for gross faults; escalate to quality tests on sample/triggered failures.
- Lab: build golden baselines (known-good cable/fixture/port) and define thresholds before deployment.
Heavy tools come after quick evidence: establish repeatability first, then pursue precision.
Common traps (avoid false conclusions)
- Protocol error ≠ bad cable: confirm physical evidence (distance/margin) before replacing hardware.
- EMC event ≠ return-loss failure: use a time-aligned log (event → counters → symptom) before labeling the cable.
- Counter “spikes” can be accounting artifacts: standardize time window and denominator across tests.
- Fixture can dominate the measurement: always capture a fixture baseline and apply the same setup.
Out-of-scope reminder: grounding/ESD/surge return-path design and protection component placement belong to sibling pages; this page focuses on diagnostic evidence.
H2-2 · Field Symptoms → Which Test First
The fastest field workflow uses escalation levels: start with online evidence, apply quick location tests, and reserve deep quality tools for cases that demand proof. Each symptom entry below defines a first test, a stop rule, and the next action.
Escalation levels (keep expensive steps last)
Level 0: online counters / training status / event timeline.
Level 1: quick TDR / open-short / segment isolation.
Level 2: return-loss / deeper frequency-domain quality proof.
Quick decision tree (symptom → first test → next action)
Symptom: Link down / no link
First test: Level 1 quick TDR + open/short checks.
Stop rule: discontinuity distance repeats within ±X (setup fixed).
Next action: repair/replace the segment nearest the detected discontinuity.
Symptom: CRC / PCS errors
First test: Level 0 counters + margin/training status (same time window).
Stop rule: error rate stays below X per Y minutes across identical windowing.
Next action: if errors track speed/temperature → Level 2 return-loss; otherwise isolate by segment.
Symptom: Link flap / re-negotiation
First test: Level 0 timeline (events → counters → link state).
Stop rule: a consistent trigger is identified (motion, connector touch, temperature step).
Next action: if trigger suggests contact → Level 1 TDR repetition + segment swap to confirm.
Symptom: Only under motion / temperature
First test: Level 0 event-aligned evidence (same window/denominator).
Stop rule: margin dips or counter bursts align with the trigger within X seconds.
Next action: capture repeatability, then use Level 1/2 to assign location or quality cause.
Scenario: Field service
- Goal: 10–30 minutes to a defensible conclusion.
- Path: Level 0 → Level 1 → Level 2 only if proof is required.
- Output: one-page evidence (distance/margin/logs) with timestamps.
Scenario: Production screening
- Goal: throughput with low false rejects.
- Path: fast hard-fault screen + triggered deep quality tests.
- Rule: standardize fixtures and window definitions across stations.
Scenario: Lab baseline
- Goal: define thresholds and golden references.
- Path: fixture baseline → known-good cable → controlled defects.
- Output: pass criteria placeholders (X/Y) ready for field and production.
H2-3 · Measurement Map: What Each Metric Really Means
Cable evidence comes from three domains: time (where the discontinuity is), frequency (how good the match/loss is), and runtime (how much margin remains under real traffic). A consistent pass criteria format keeps results comparable across tools and teams.
TDR (time domain)
Meaning: reflection vs time → impedance discontinuity → distance (needs VF).
Best for: open/short, connector step, branch/stub location, segment isolation.
Common pitfall: distance drift from VF/temperature/fixture; intermittent contact needs repetition.
Pass criteria format: “discontinuity at X m (±Y m) under setup S.”
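To make the “reflection → impedance” step concrete, here is a minimal Python sketch; the 100 Ω reference impedance and the reflection-coefficient values are illustrative assumptions, not instrument output.

```python
# Minimal sketch: map a TDR reflection coefficient (gamma) to an impedance
# estimate. z0 and the gamma values are illustrative placeholders.

def impedance_from_gamma(gamma: float, z0: float = 100.0) -> float:
    """Z = Z0 * (1 + gamma) / (1 - gamma); gamma -> +1 is an open, -1 a short."""
    if abs(1.0 - gamma) < 1e-9:
        return float("inf")  # full positive reflection: open circuit
    return z0 * (1.0 + gamma) / (1.0 - gamma)

# Example: a +0.05 step at a connector on a nominally 100-ohm pair
print(impedance_from_gamma(0.05))   # ~110.5 ohm -> mild high-impedance step
print(impedance_from_gamma(-1.0))   # 0.0 ohm    -> short signature
```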
Return-Loss (frequency domain)
Meaning: frequency-dependent mismatch → reflections → eye closure risk.
Best for: quality comparison, acceptance, “which cable/segment is worse,” speed-sensitive failures.
Common pitfall: mixing bands/setups; a good RL curve does not guarantee runtime margin.
Pass criteria format: “RL ≥ X dB from f1–f2 under setup S.”
Insertion Loss (frequency domain)
Meaning: signal attenuation vs frequency → less eye amplitude → heavier EQ burden.
Best for: long runs, aging cables, cases where RL looks acceptable but runtime margin collapses.
Common pitfall: ignoring fixture loss; compare only under identical setups/baselines.
Pass criteria format: “IL ≤ X dB at f* under setup S.”
SNR / Margin (runtime)
Meaning: runtime noise/margin estimate (PHY-dependent) → stability under real traffic/events.
Best for: event correlation (temp/motion/load), “works on bench but fails in system.”
Common pitfall: inconsistent time window/denominator; different PHYs use different margin definitions.
Pass criteria format: “margin ≥ X for Y minutes (window W).”
BER / CRC / PCS errors (runtime outcome)
Meaning: observed failures; useful for correlation, not root cause by itself.
Best for: proving improvement/regression under identical conditions and windowing.
Common pitfall: counter spikes from inconsistent sampling; “more errors” can be a denominator mismatch.
Pass criteria format: “error rate ≤ X per Y (window W).”
Unified wording rules (keep metrics comparable)
- Always record setup S: fixture, reference plane, VF (if used), and instrument profile.
- Always record window W: time range, denominator, and sampling interval for counters/BER.
- Pass criteria should read as a single sentence with placeholders: X / Y / W.
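As a minimal sketch of these wording rules, the record below forces setup S and window W to travel with every pass criterion; the field names and values are illustrative, not a mandated schema.

```python
# Minimal sketch: a pass-criterion record that cannot exist without setup S
# and window W. Field names are illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class SetupS:
    access_point: str       # "port" / "cable end" / "service connector"
    fixture_chain: str      # model + length of every adapter in the path
    velocity_factor: float  # VF assumption used for distance, if any
    instrument_profile: str

@dataclass(frozen=True)
class WindowW:
    length_s: float         # total observation time
    interval_s: float       # sampling interval for counters
    denominator: str        # e.g. "frames received" or "seconds"

@dataclass(frozen=True)
class PassCriterion:
    metric: str             # "RL", "IL", "margin", "error_rate", ...
    sentence: str           # single-sentence criterion with X / Y / W placeholders
    setup: SetupS
    window: WindowW

crit = PassCriterion(
    metric="RL",
    sentence="RL >= X dB from f1 to f2 under setup S",
    setup=SetupS("service connector", "2x adapter, 0.3 m", 0.69, "profile-A"),
    window=WindowW(600.0, 1.0, "seconds"),
)
print(crit.sentence)
```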
H2-4 · TDR Deep Dive: From Reflection to Distance
TDR converts reflections into a repeatable distance estimate. The most useful field result is not a single trace, but a stable pattern under a fixed setup: same fixture, same window, and a recorded VF assumption.
Propagation speed & VF (distance accuracy)
- Distance depends on VF; VF shifts with cable type, batch, temperature, and moisture.
- Use a golden reference to calibrate VF before quoting distances.
- Always report: “X m (±Y m)” where Y reflects VF uncertainty (placeholder).
Stop rule: accept a distance only when it repeats across N runs under identical setup S (N placeholder).
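A minimal distance-calculation sketch, assuming a recorded round-trip time and a VF with a stated uncertainty; every number is a placeholder, not a calibrated value.

```python
# Minimal sketch: TDR distance from round-trip time under an assumed VF,
# reported as "d +/- y" where y reflects the VF uncertainty.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tdr_distance(round_trip_s: float, vf: float) -> float:
    """One-way distance: d = (VF * c * t_round_trip) / 2."""
    return vf * C * round_trip_s / 2.0

def distance_with_uncertainty(round_trip_s: float, vf_nom: float, vf_err: float):
    d = tdr_distance(round_trip_s, vf_nom)
    return d, d * (vf_err / vf_nom)  # VF error propagates linearly into distance

# Example: 120 ns round trip, VF = 0.69 +/- 0.02 (batch + temperature spread)
d, err = distance_with_uncertainty(120e-9, 0.69, 0.02)
print(f"discontinuity at {d:.1f} m (+/- {err:.1f} m)")
```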
Resolution & near-end blind zone (avoid self-deception)
- Faster edges and tighter sampling improve resolution, but increase sensitivity to fixture artifacts.
- The near-end blind zone can hide connector defects; move the reference plane with a controlled extension only after capturing its baseline.
- Use windowing/gating to separate primary reflections from multi-bounce echoes.
Pass criteria format: “blind zone ≤ X m” and “fixture baseline stable within X” (placeholders).
Differential pair TDR (diagnostic meaning)
- Differential reflections indicate signal-path discontinuities; use them for location and segment ranking.
- Asymmetric responses suggest imbalance; treat this as a diagnostic hint, not a full EMC analysis.
- When imbalance correlates with external events, capture logs and escalate to grounding/protection pages (out-of-scope here).
Output focus: location + repeatability, not component-level root-cause claims.
Reflection fingerprint library (use the same 4-line structure)
Open
Signature: large reflection at cable end, stable across runs.
Action: confirm distance, inspect/replace nearest termination/connector.
Pass: location repeats within ±X m (setup S).
Short
Signature: strong reflection with opposite polarity at the fault point.
Action: isolate by segments; inspect crushed cable / pin-to-pin shorts.
Pass: fault distance repeats within ±X m.
Impedance step
Signature: small but repeatable step at a connector or transition.
Action: compare against golden baseline; rank segments by step magnitude.
Pass: step stays below X (threshold placeholder).
T-branch / stub
Signature: split reflection and secondary echoes from the branch point.
Action: measure branch distance; remove/shorten stub to confirm improvement.
Pass: branch location and spacing repeat under gating.
Intermittent contact
Signature: reflection appears only under trigger (bend/touch/temp step).
Action: log trigger + run repeated captures; compare distance distribution.
Pass: trigger-to-reflection alignment within X seconds (placeholder).
Near-end blind zone
Signature: early reflection masked by launch/fixture response.
Action: capture fixture baseline; adjust reference plane or test from the other end.
Pass: baseline remains stable within X across runs.
H2-5 · Return-Loss Deep Dive: Quality, Not Just Pass/Fail
Return-loss (RL) is a frequency-dependent view of mismatch. A curve shape often carries more diagnostic value than a single pass/fail label, especially when stability changes with speed, temperature, or fixtures.
“Good” curve (baseline)
- RL stays high across the band of interest.
- Use as a golden reference for regression checks.
- If runtime still fails, look at IL/XTALK or online margin next.
Local notch (single weak region)
- Often points to a local discontinuity (connector, crimp, transition).
- Instability may correlate with a specific speed/feature using that band.
- Swap ends / replace the suspect transition for quick confirmation.
Global drop (systemic shift)
- Suggests overall mismatch/loss change (material, process, aging, wrong cable).
- Compare to golden baseline under the same setup S.
- Fix typically requires changing cable/assembly or reducing length/speed.
Inference cards (use the same 4-line diagnostic wording)
Connector vs cable (first split)
Hypothesis: a local notch is driven by a single transition (connector/crimp).
Quick check: swap ends or replace the suspected connector and re-measure under setup S.
Fix: re-crimp / replace transition / remove extra adapters.
Pass: RL ≥ X dB in band f1–f2 under setup S.
Ripple / waviness
Hypothesis: multi-bounce reflections from multiple discontinuities or end mismatch.
Quick check: change reference plane/fixture and confirm the pattern moves with the setup, not the cable.
Fix: simplify adapters, tighten termination quality, segment the harness to isolate the dominant contributor.
Pass: RL curve repeatability within X dB across N runs (placeholders).
No VNA available (field approximation)
Hypothesis: runtime instability is driven by mismatch near a specific band/speed.
Quick check: use port self-test + training results + margin trends to rank “likely bad segments.”
Fix: replace suspect transition/cable, then confirm improvement with the same online window W.
Pass: margin ≥ X for Y minutes (window W) + error rate ≤ X (placeholders).
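Where curve data can be exported, the band check itself is simple. Below is a minimal sketch applying “RL ≥ X dB from f1 to f2” to measured points and reporting the worst in-band sample; the threshold and sample data are illustrative.

```python
# Minimal sketch: verdict for "RL >= X dB in band f1-f2" from (freq_Hz, RL_dB)
# points, plus the worst-band sample for the report. Data are illustrative.

def rl_verdict(points, f1, f2, threshold_db):
    """points: iterable of (freq_hz, rl_db). Returns (passed, worst_freq, worst_rl)."""
    in_band = [(f, rl) for f, rl in points if f1 <= f <= f2]
    if not in_band:
        raise ValueError("no samples inside the requested band")
    worst_freq, worst_rl = min(in_band, key=lambda p: p[1])
    return worst_rl >= threshold_db, worst_freq, worst_rl

measured = [(1e6, 28.0), (10e6, 24.5), (40e6, 19.8), (100e6, 16.2)]
ok, f_worst, rl_worst = rl_verdict(measured, 1e6, 100e6, threshold_db=18.0)
print(f"pass={ok}, worst band near {f_worst/1e6:.0f} MHz at {rl_worst} dB")
```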
H2-6 · SNR / Margin / Counters: Online Diagnostics
Online diagnostics provides a non-invasive evidence chain: events → margin/training state → error counters. The first priority is consistent windowing to avoid misleading comparisons.
Margin / SNR estimate
Field definition: runtime margin estimate (algorithm depends on the PHY).
Sampling window: W (length), Δt (interval), reset policy.
Threshold: margin ≥ X for Y minutes (placeholders).
Interpretation: event-linked dips are more actionable than raw absolute numbers.
Training / EQ state
Field definition: link training outcome, EQ level, re-train count.
Sampling window: count per link-up interval or per fixed W.
Threshold: retrain ≤ X per hour (placeholder) or stable EQ band (placeholder).
Interpretation: frequent renegotiation + rising EQ demand often tracks channel degradation.
Physical/PCS error counters
Field definition: PCS/block errors, symbol errors, FEC corrections (if present).
Sampling window: W (fixed) with a consistent denominator; log resets explicitly.
Threshold: error rate ≤ X per Y (placeholders), plus “burst” detection (placeholder).
Interpretation: pair with margin to separate “channel weak” vs “measurement window artifact.”
Higher-layer outcomes (CRC / drops)
Field definition: observed failures; use for correlation, not root cause alone.
Sampling window: same W and denominator as PHY/PCS for fair comparison.
Threshold: CRC ≤ X per Y and stable over Y minutes (placeholders).
Interpretation: always pair with margin/training to avoid “false blame” on cables.
Windowing rules (avoid misleading statistics)
- Always log W (window length), Δt (sampling), and denominator.
- Explicitly log reset boundaries (reboot, link-down, counter clear).
- Compare only runs with the same setup S and the same window W.
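A minimal sketch of these windowing rules, assuming counters and the frame denominator are read at the start and end of the agreed window W; the field names and numbers are illustrative, not a vendor counter format.

```python
# Minimal sketch: normalize raw counter deltas to a fixed window W and an
# explicit denominator so two runs can be compared fairly.

def normalized_rate(err_start, err_end, frames_start, frames_end,
                    window_s, agreed_window_s):
    """Return errors per frame only if the window matches the agreed W."""
    if abs(window_s - agreed_window_s) > 1e-6:
        raise ValueError("window W mismatch: runs are not comparable")
    frames = frames_end - frames_start
    if frames <= 0:
        raise ValueError("denominator did not advance; check for counter resets")
    return (err_end - err_start) / frames

# Example: two captures over the same 600 s window W
rate_a = normalized_rate(10, 34, 1_000_000, 9_000_000, 600, 600)
rate_b = normalized_rate(5, 17, 500_000, 4_500_000, 600, 600)
print(f"run A: {rate_a:.2e} err/frame, run B: {rate_b:.2e} err/frame")
# Run A shows "more errors" in absolute terms, yet the normalized rates match.
```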
H2-7 · Crosstalk & Pair Faults: NEXT/FEXT and Pair Swap
Pair faults and crosstalk problems often look like “random link instability,” but they have distinct fingerprints. Fast diagnosis focuses on symptom shape and segmentation, not detailed EMI theory.
True coupling crosstalk (NEXT/FEXT)
- Often event-driven bursts (motion, load changes, nearby switching).
- May correlate with a specific harness bundle segment.
- A/B isolation: separate the suspect segment and compare the same window W.
Pair swap (pair-to-pair mismatch)
- Link may not come up or training fails consistently.
- Symptoms are strongly port/connector mapping dependent.
- Fast confirmation: pair mapping / pinout sanity check.
Split pair (broken twist integrity)
- RL can look “not terrible,” yet margin collapses and errors spike at speed.
- More sensitive to environment and bundle adjacency.
- Fast confirmation: pair integrity test or replace a short suspect segment.
Connector-induced coupling
- Issues localize to one port / one adapter / one batch.
- A/B test: replace the connector/adapter only.
- Record the adapter as part of setup S for reproducibility.
Segmentation steps (Step 1–5)
Step 1 · Lock the window
- Define W (time window) and setup S (fixture + mode).
- Log margin + training state + counters in the same denominator.
- Pass: stable baseline within X (placeholders).
Step 2 · Port vs cable split
- Swap ports (same device) or swap the remote endpoint.
- Check if failures follow the port or follow the harness.
- Pass: behavior classification is consistent across N trials.
Step 3 · Segment replacement
- Replace the closest adapter/short patch first (fast A/B).
- Move outward until the dominant segment is isolated.
- Pass: error rate ≤ X in window W (placeholders).
Step 4 · Pair integrity check
- Rule out pair swap / split pair early for “mystery” instability.
- Use mapping tests or a known-good short segment.
- Pass: mapping matches spec; no split pair signatures.
Step 5 · Hotspot A/B
- Temporarily separate the suspect bundle segment.
- Re-run the same window W and compare burst density.
- Pass: burst reduction ≥ X% (placeholder).
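As a sketch of the Step 5 comparison, burst density can be computed from error timestamps over the same window W before and after separating the bundle; the gap threshold and timestamps are illustrative assumptions.

```python
# Minimal sketch: burst density (error groups per second) for A/B comparison
# of "bundle together" vs "bundle separated" runs over the same window W.

def burst_density(error_timestamps, window_s, gap_s=1.0):
    """Count bursts (error groups separated by more than gap_s) per window."""
    if not error_timestamps:
        return 0.0
    ts = sorted(error_timestamps)
    bursts = 1
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev > gap_s:
            bursts += 1
    return bursts / window_s

baseline  = burst_density([3.0, 3.1, 3.2, 40.0, 40.2, 95.5], window_s=600)
separated = burst_density([210.0], window_s=600)
reduction = 100.0 * (1.0 - separated / baseline)
print(f"burst reduction: {reduction:.0f}%")  # compare against the X% placeholder
```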
H2-8 · Test Setup & Fixtures: Port, Magnetics, and Access Points
Test setup choices can dominate results. Use a consistent setup S definition (access point + fixture + mode) and treat fixtures as part of the measurement environment.
Access points (priority)
- A: service connector / dedicated test point (best repeatability).
- B: direct cable-end measurement (most direct, may disrupt service).
- C: device port measurement (most convenient, most port-dependent).
Do / Don’t (fixture discipline)
Do: minimize adapters; log fixture model/length as part of setup S.
Do: keep a fixture baseline for repeatability checks (same mode, same window).
Don’t: compare results across different adapter chains without re-baselining.
Port / magnetics influence (principles)
- Near-end blind zones can hide the first defect; change the access point if needed.
- Port magnetics/protection can distort measured signatures; treat as setup S context.
- Use port measurements for trends and ranking, not absolute acceptance alone.
Fixture calibration card (4-line wording)
Goal: make measurements comparable across runs (same setup S).
Quick check: run the fixture-only baseline and verify repeatability.
Correction: shorten adapter chain, re-baseline, and record reference plane.
Pass: baseline drift ≤ X and setup S fully logged (placeholders).
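A minimal sketch of that check: compare a fresh fixture-only capture against the stored baseline and flag drift above the X placeholder before trusting any cable data; the traces below are illustrative.

```python
# Minimal sketch: quantify fixture baseline drift as the maximum point-wise
# delta between the stored and the fresh fixture-only trace.

def baseline_drift(stored, fresh):
    """Both are equal-length amplitude traces captured under the same setup S."""
    if len(stored) != len(fresh):
        raise ValueError("baselines captured with different settings")
    return max(abs(a - b) for a, b in zip(stored, fresh))

stored = [0.00, 0.02, 0.05, 0.03, 0.01]
fresh  = [0.00, 0.03, 0.05, 0.04, 0.01]
print(f"fixture baseline drift = {baseline_drift(stored, fresh)}")  # vs X placeholder
```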
H2-9 · Diagnostic Algorithms: Windowing, De-embedding, and Confidence
Distance drift usually comes from mixed reflections, inconsistent reference planes, or unstable velocity factor assumptions. Practical algorithms separate reflection clusters, control setup S, and attach an explicit confidence grade to every conclusion.
Windowing / gating (separate reflection clusters)
Window-0 · Near-end (port/fixture zone)
Goal: isolate port/fixture signature to avoid false “first defect”.
How: lock setup S and align the reference plane before comparison.
Pitfall: swapping adapters without re-baselining.
Output: baseline fingerprint (repeatability ≤ X, placeholders).
Window-1 · Dominant event cluster
Goal: locate the dominant step/open/short/branch signature.
How: gate by amplitude/slope threshold to capture a reflection cluster.
Pitfall: treating multiple reflections as one defect point.
Output: defect distance d1 ± X (placeholders).
Window-2 · Far-end / termination zone
Goal: validate total length and remote-end state (end reflection).
How: place a terminal window around expected end reflections.
Pitfall: wrong VF shifts the window and breaks consistency.
Output: length sanity check (error ≤ X, placeholders).
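A minimal gating sketch, assuming the TDR trace is available as (position, amplitude) samples: group above-threshold points into reflection clusters so Window-1 reports one dominant cluster instead of merging several reflections; threshold, gap, and trace values are illustrative.

```python
# Minimal sketch: amplitude-gated clustering of a TDR trace into reflection
# clusters (start, end, peak). Values are illustrative, not instrument data.

def reflection_clusters(samples, threshold, min_gap=3):
    """samples: list of (position, amplitude). Points above threshold that are
    closer than min_gap samples belong to the same cluster."""
    clusters, current = [], []
    for i, (_, amp) in enumerate(samples):
        if abs(amp) >= threshold:
            if current and i - current[-1] >= min_gap:
                clusters.append(current)
                current = []
            current.append(i)
    if current:
        clusters.append(current)
    return [(samples[c[0]][0], samples[c[-1]][0],
             max(abs(samples[i][1]) for i in c)) for c in clusters]

trace = list(enumerate([0.00, 0.01, 0.12, 0.15, 0.02, 0.00, 0.00, 0.30, 0.28, 0.01]))
for start, end, peak in reflection_clusters(trace, threshold=0.10):
    print(f"cluster from sample {start} to {end}, peak |reflection| = {peak}")
```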
De-embedding (calibrate → align → compare-to-baseline)
Step A · Fixture baseline
- Run the same mode on fixture-only or a known short sample.
- Store baseline waveform/curve under the same setup S label.
- Pass: baseline repeatability ≤ X (placeholder).
Step B · Reference plane alignment
- Make “t=0 / d=0” consistent across runs.
- Keep access point and adapter chain recorded as setup S.
- Pass: aligned near-end clusters overlap within X.
Step C · Delta signature
- Compare against baseline rather than trusting absolute magnitude.
- Use gated windows to attribute deltas to segments.
- Pass: dominant delta is stable across N repeats.
Distance error budget (fill-in template)
Inputs: VF_nom = X; ΔVF_temp = X; ΔVF_batch = X; sampling/rise-time = X; window placement = X.
Outputs: typical error = ±X; worst-case = ±X; conditions = temp range / cable type.
Use: report distance as “d ± X” rather than a single absolute number.
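A minimal sketch of the error-budget arithmetic, assuming per-contributor distance errors have already been estimated (all numbers are placeholders): the worst case adds contributions linearly, while a typical figure can use root-sum-square.

```python
# Minimal sketch: combine distance-error contributions into typical (RSS)
# and worst-case (sum) figures for the "d +/- X" report line.
import math

budget_m = {
    "vf_temperature":   0.15,  # distance shift over the stated temp range
    "vf_batch":         0.20,  # batch-to-batch VF spread
    "sampling_rise":    0.10,  # resolution limit from edge rate / sampling
    "window_placement": 0.05,  # gating / reference-plane placement
}

worst_case = sum(budget_m.values())
typical = math.sqrt(sum(v * v for v in budget_m.values()))

d_nominal = 12.4  # m, from the TDR measurement
print(f"report: d = {d_nominal} m +/- {typical:.2f} m (worst case +/- {worst_case:.2f} m)")
```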
Confidence grade (A / B / C)
A · High confidence
Stable across N repeats, delta vs baseline is consistent, segmentation confirms the same dominant segment.
B · Medium confidence
Dominant cluster is visible, but distance drifts with temperature/fixture; report as a suspect zone with next checks.
C · Low confidence
Noise dominates or denominators are inconsistent; avoid hard conclusions and request additional captures under fixed setup S.
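A minimal grading sketch, assuming repeatability is summarized as the spread of the reported distance across N runs; the thresholds and inputs are placeholders to be replaced by site-specific values.

```python
# Minimal sketch: assign an A/B/C confidence grade from run-to-run spread
# plus two consistency flags from the baseline and segmentation checks.

def confidence_grade(distances_m, max_spread_m, baseline_delta_consistent,
                     segmentation_confirmed):
    if len(distances_m) < 2:
        return "C"  # a single capture never earns more than low confidence
    spread = max(distances_m) - min(distances_m)
    if spread <= max_spread_m and baseline_delta_consistent and segmentation_confirmed:
        return "A"
    if spread <= 2 * max_spread_m and baseline_delta_consistent:
        return "B"
    return "C"

runs = [12.3, 12.5, 12.4, 12.4]
print(confidence_grade(runs, max_spread_m=0.3,
                       baseline_delta_consistent=True,
                       segmentation_confirmed=True))  # -> "A"
```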
H2-10 · Pass/Fail Criteria and Report Template
Field handoff succeeds when results are reproducible and traceable. Use a one-page report with a consistent setup S, explicit placeholders (X/Y/Z), and a minimal evidence chain (raw captures + settings + timestamp + environment).
One-page report (copy-ready structure)
Header
- Cable ID / Asset / Port
- Operator / Timestamp
- Location / Work order
Setup S
- Access point (port / cable end / service)
- Fixture/adapter chain (model + length)
- Mode + window W definition
Measurements
- Length estimate ± X
- Fault distance d ± X (confidence A/B/C)
- RL worst band + online margin/counters snapshot
Conclusion & Actions
- Pass/Fail (X/Y/Z placeholders)
- Recommended action (replace segment / re-terminate / re-test)
- Evidence links (raw captures + settings + environment)
Pass criteria (placeholders X / Y / Z)
- Electrical quality: RL ≥ X in band Y (placeholders).
- Operational margin: margin/SNR ≥ X or error rate ≤ X in window W.
- Localization reliability: distance error ≤ X and confidence ≥ B.
Minimal evidence chain (checklist)
- Raw capture (waveform/curve) with markers and axes visible.
- Setup S recorded (access point + fixture + mode + window W).
- Timestamp + environment (temp / power state / load).
- If counters used: denominators and sampling window W documented.
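As a sketch, the one-page report can travel as a simple dictionary whose keys mirror the structure above; every value is a placeholder to fill in, and a final check confirms the minimal evidence chain is present.

```python
# Minimal sketch: copy-ready report skeleton with placeholder values and an
# evidence-chain completeness check. Keys mirror the one-page structure.

report = {
    "header": {"cable_id": "X", "asset_port": "X", "operator": "X",
               "timestamp": "ISO-8601", "location_work_order": "X"},
    "setup_s": {"access_point": "port | cable end | service",
                "fixture_chain": "model + length",
                "mode_window_w": "W definition"},
    "measurements": {"length_m": "X +/- X",
                     "fault_distance_m": "d +/- X (confidence A/B/C)",
                     "rl_worst_band": "f1-f2, X dB",
                     "online_snapshot": "margin + counters (window W)"},
    "conclusion": {"verdict": "pass | fail (X/Y/Z placeholders)",
                   "action": "replace segment | re-terminate | re-test",
                   "evidence": ["raw capture", "settings", "timestamp", "environment"]},
}

required = {"raw capture", "settings", "timestamp", "environment"}
missing = required - set(report["conclusion"]["evidence"])
print("evidence chain complete" if not missing else f"missing: {sorted(missing)}")
```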
Field dictionary (write-once, reuse everywhere)
Cable ID
Unique identifier for traceability across repairs and re-tests.
setup S
Access point + fixture/adapter chain + mode + window W definition.
Fault distance
Report as d ± X with confidence A/B/C; avoid single-number claims.
RL worst band
Worst frequency region aligned to the target rate/application (Y placeholder).
Counters snapshot
Include window W and denominator; otherwise comparisons are misleading.
Evidence
Raw capture + setup S + timestamp + environment; link or embed references.
H2-11 · Engineering Checklist (Design → Bring-up → Production → Field)
Goal: build diagnostics into the product so measurements are repeatable, traceable, and actionable across lab, line, and field. Keep pass-criteria thresholds (X placeholders) consistent site-wide across the phases below.
Phase A · Design (Design-for-Diagnostics)
- Industrial RJ45 PHY w/ TDR: TI DP83869HM (TDR + BIST), TI DP83867 (TDR app-note family)
- Industrial switch w/ cable diagnostics: Microchip KSZ9477S (LinkMD cable diagnostics)
- Industrial PHY cable diagnostics suite: Microchip VSC8541-05 / VSC8541ET (VeriPHY cable diagnostics)
- Multi-port copper PHY diagnostics: Marvell Alaska 88E1545 (Advanced VCT)
- SPE / Automotive examples: TI DP83TD510E (10BASE-T1L diagnostics toolkit), TI DP83TG720S-Q1 (1000BASE-T1 w/ cable diagnostics), Marvell 88Q2112 (VCT feature)
Note: part numbers are listed to anchor “what capability lives where”; they are not a purchasing list.
Phase B · Bring-up (Golden baseline + initial thresholds)
- Copper certification (return-loss/insertion-loss suite): Fluke Networks DSX-8000 / DSX-5000
- Portable TDR fault location: Megger TDR2000/3 (dual-channel TDR)
- Lab VNA / time-domain option: Keysight E5061B (ENA VNA; time-domain/fault-location options)
- Field link/PoE visibility (triage companion): NetAlly EtherScope nXG
Phase C · Production (Screen → Retest → Upload → Escalate)
- In-system copper cable diagnostics (SMI/MDIO-driven): Microchip VSC8541 family (VeriPHY), Microchip KSZ9477S (LinkMD)
- Multi-port copper PHY diagnostics: Marvell 88E1545 (VCT)
- SPE line screening hooks: TI DP83TD510E diagnostics toolkit (TDR/SQI/ALCD)
Phase D · Field (Soft-first triage → Segment isolate → Decide replace)
- Portable link triage: NetAlly EtherScope nXG
- Portable TDR locate: Megger TDR2000/3
- Certification-level cabling verdict (when policy requires): Fluke Networks DSX-8000
H2-12 · Applications & IC Selection Logic (Diagnostics Capability Planning)
This section maps use cases to a tool chain and to on-chip diagnostics capabilities. It avoids brand-driven design and focuses on coverage / time / evidence strength.
Escalation triggers (use case → when to move up a tier)
On-chip / in-system screening
Escalate when: marginal SQI/SNR or burst counters exceed X.
Evidence: one-page report fields + configuration profile ID.
Example IC anchors: TI DP83869HM, Microchip VSC8541, Microchip KSZ9477S, Marvell 88E1545.
Portable locate (handheld TDR / link tester)
Escalate when: cable-likeness is high but location is unknown → run portable TDR locate.
Evidence: snapshot + timestamp + Setup S + confidence grade (A/B/C).
Example tools: NetAlly EtherScope nXG, Megger TDR2000/3.
Lab quality proof (VNA)
Escalate when: repeated ambiguity → lab VNA to settle frequency-band quality.
Evidence: “worst band” + “distance estimate” + de-embedded note.
Example lab anchor: Keysight E5061B.
Intermittent opens/shorts (SPE links)
Escalate when: suspected intermittent opens/shorts → schedule repeated measurements + confidence grading.
Example IC anchor: TI DP83TD510E (10BASE-T1L diagnostics toolkit).
Tool tiers (coverage / time / evidence strength)
On-chip / in-system diagnostics
Speed: typically fastest (screening tier).
Evidence strength: medium unless traces/curves are exported.
Example part numbers: TI DP83869HM (TDR/BIST), Microchip VSC8541 (VeriPHY), Microchip KSZ9477S (LinkMD), Marvell 88E1545 (VCT), TI DP83TG720S-Q1 (cable diagnostics), TI DP83TD510E (diagnostic toolkit).
Portable / handheld instruments
Speed: medium (requires access and sometimes disconnect).
Evidence strength: medium–high depending on exportability.
Example models: Megger TDR2000/3, NetAlly EtherScope nXG, Fluke Networks DSX-8000.
Lab VNA
Speed: slowest but most decisive for “which band fails which rate” questions.
Evidence strength: highest (curves + settings + calibration record).
Example model: Keysight E5061B.
Workflow cards (steps → stop rule → escalate rule)
Screening tier (online / on-chip)
Steps: (1) lock window W → (2) read SQI/SNR + EQ summary → (3) read counters → (4) align with events → (5) grade confidence (A/B/C).
Stop rule: stable margin and low burst rate for X minutes → cable unlikely root cause.
Escalate rule: margin collapse or repeat bursts → run locate tier (TDR/handheld).
Example IC anchors: TI DP83869HM, Microchip VSC8541, Marvell 88E1545.
Locate tier (portable TDR / handheld)
Steps: (1) record Setup S → (2) segment isolate once → (3) run portable TDR locate → (4) confirm with retest → (5) capture evidence bundle.
Stop rule: consistent fault distance d with confidence ≥ X → execute repair/replace gate.
Escalate rule: distance drifts across runs → move to lab tier with calibration/de-embed.
Example models: Megger TDR2000/3, NetAlly EtherScope nXG.
Lab tier (VNA quality proof)
Steps: (1) calibrate reference plane → (2) measure RL band(s) → (3) correlate to speed/mode → (4) export curves + settings → (5) update thresholds X and baseline.
Stop rule: worst band identified with clear margin threshold → update production/field gates.
Example model: Keysight E5061B.
H2-13 · FAQs (Cable Diagnostics: TDR / Return-Loss / SNR / Crosstalk)
Scope: close out long-tail troubleshooting without expanding new topics. Format is fixed per FAQ: Quick check / Fix / Pass criteria (threshold placeholders X).
TDR shows a fault at 12 m, but replacing the cable did not fix stability — what is the first sanity check?
Quick check: re-run with the same access point + same fixture chain + same window W; compare against a known-good baseline under identical Setup S.
Fix: lock VF per cable type/batch, rebuild fixture baseline, and apply consistent de-embed / gating rules before interpreting distance.
Pass criteria: distance repeatability within X across X runs (same Setup S).
Return-loss “passes”, but high-speed CRC still spikes — what is the first diagnosis split?
Quick check: confirm RL band coverage matches the operating mode; then correlate CRC bursts with SNR/margin and counter windows W (same denominator).
Fix: align measurement band to the actual mode, standardize window W, and use “band + margin + counters” together for the verdict.
Pass criteria: worst-band margin ≥ X (placeholder) and burst error rate ≤ X per X minutes (windowed).
Link drops only at certain temperature or during bending — how to catch an intermittent-contact “signature”?
Quick check: run repeated TDR snapshots (N runs) under controlled “stimulus” (temperature step or bend) and look for a moving/appearing reflection cluster in a fixed gate window.
Fix: capture before/after evidence with timestamps, then segment-isolate (swap connector/segment) to localize the unstable section.
Pass criteria: reflection cluster stability across X cycles; no new discontinuity above X threshold (placeholder).
Production test passes, but the harness fails after installation — what is the first topology suspicion?
Quick check: compare TDR traces before vs after installation (same Setup S), and look for an added reflection step or secondary reflection sequence consistent with a branch.
Fix: isolate by disconnecting suspected branch points one at a time; update the acceptance workflow to include a branch-detection screen.
Pass criteria: no new branch signature above X; error bursts remain ≤ X per X minutes.
Only one pair shows frequent issues — how to quickly screen NEXT/FEXT vs split-pair wiring error?
Quick check: compare per-pair SNR/margin and error counters; if available, run a quick pair-fault/crosstalk screen and re-test with a known-good patch/harness segment.
Fix: re-terminate or correct pairing; if coupling is suspected, localize by segment isolation (connector-to-connector) and confirm symptom moves with the segment.
Pass criteria: per-pair margin difference ≤ X (placeholder) and pair-specific error rate ≤ X per X minutes.
Counters look “huge”, but each report uses a different time window — what is the first fix?
Quick check: normalize counters to a fixed window W (same duration and sampling cadence) and store the denominator fields explicitly.
Fix: standardize telemetry: window W, sampling rate, and reset rules; enforce Setup S versioning for counter definitions.
Pass criteria: normalized error rate stable within X across X windows (same W).
The same cable shows very different “fault distance” on different testers — where to look first?
Quick check: align VF, reference plane, and gating window; validate both testers using the same known-good calibration artifact/fixture baseline.
Fix: lock a site-wide “Setup S profile” per cable class and require exporting tester configuration with every report.
Pass criteria: cross-instrument distance agreement within X after alignment (placeholder).
A near-end blind zone hides a connector defect — what is the fastest workaround?
Quick check: add a known short extension lead (documented length) or shift gating to move the connector response out of the blind zone; retake with the same Setup S otherwise.
Fix: standardize a “near-end access kit” (extension + profile) and keep its baseline for de-embed consistency.
Pass criteria: connector discontinuity becomes visible and repeatable; measurement spread ≤ X.
RL curve shows periodic ripple — how to localize which segment creates the standing-wave pattern?
Quick check: segment-isolate (remove/swap one section) and re-measure RL; also compare TDR for matching spaced reflections that imply two reflectors.
Fix: localize via binary segmentation (halve the harness path); once identified, treat that segment as the replacement/repair target with evidence.
Pass criteria: ripple amplitude reduces below X (placeholder) in the target band after segment action.
SNR margin occasionally collapses, but RL does not change — is it a noise event or unstable training?
Quick check: correlate SNR drops with event timeline (power, temperature step, link renegotiation) and check whether EQ/training status changes at the same timestamp.
Fix: lock window W and improve black-box logging (event + counters + margin); use repeated runs to classify “external event” vs “training state”.
Pass criteria: no margin collapse below X for X minutes under stable conditions (placeholder).
Open/short checks are fine, yet the link still drops — what is the first “pairing/coupling” diagnostic check?
Quick check: compare per-pair counters and margin; swap in a known-good segment/patch at the suspected end to see if the symptom moves with wiring.
Fix: correct pairing/termination and enforce a pairing check in the production/field playbook; keep evidence with Setup S.
Pass criteria: per-pair symmetry restored and drop events ≤ X over X minutes (placeholder).
After changing fixtures/adapters, everything fails — what is the first “fixture baseline” recovery step?
Quick check: measure a known-good cable with old vs new fixture under matched Setup S; quantify the delta as “fixture signature”.
Fix: rebuild fixture baseline, re-apply consistent de-embed rules, and version the fixture chain ID in Setup S for every report.
Pass criteria: known-good cable returns to pass with fixture delta within X (placeholder).