Traction Motor & Axle Speed Sensing
Rail traction does not rely on “a speed number” but on trustworthy time evidence that stays valid under high dv/dt, long cables, and large common-mode noise. This page explains how resolver/encoder and Hall/MR axle chains are built, cross-checked, time-stamped, and maintained so speed remains reliable for traction, braking stability, and protection decisions over a long service life.
Why Speed Truth Matters in Rail Traction
Speed/position is a safety-critical evidence signal shared by traction, braking, and protection logic.
In rail traction, the “speed” signal is not merely a sensor output. It is the physical truth that multiple closed loops trust at the same time: traction effort control, anti-slip decisions, regenerative braking stability, and protection gating. The engineering goal is not only to measure speed, but to preserve a speed/position evidence chain that remains trustworthy under high dv/dt switching, long harnesses, large common-mode disturbances, and harsh drift over temperature and lifetime.
Core requirement: speed feedback must remain credible and explainable (provable via logs/waveforms/thresholds) even with ~kV-level common-mode noise, tens of meters of cabling, and high dv/dt environments.
Traction control does not “use speed” in isolation. It uses speed evidence to decide:
- Traction force control: Biased speed evidence skews torque commands, producing force ripple, current stress, and comfort issues during acceleration.
- Anti-slip / wheel-spin decisions: If axle or motor speed truth is compromised, slip detection becomes overly conservative (performance loss) or unsafe (loss of adhesion control).
- Regenerative braking stability: Speed jitter injects disturbance into torque/current coordination, causing oscillation, jerks, or unstable energy return during braking transitions.
- Protection & safety gating outcomes: Missing or inconsistent speed evidence triggers safety fallback modes, protective trips, or unnecessary traction cut-offs that directly impact service availability.
Failure consequences in rail context (symptom → mechanism → operational impact):
- Low-speed reads too high: Timing uncertainty dominates at low speed → braking distance estimation becomes optimistic → stop-accuracy margin is reduced.
- High-speed reads too low: Torque demand is overstated → adhesion margin is exceeded → wheel slip, rail/wheel wear, and torque shock increase.
- Jitter / speed “flutter”: Noise converts amplitude disturbance into timing noise → closed-loop oscillation and ride discomfort → nuisance protection events.
- Dropout / speed goes to zero intermittently: Pulse loss or threshold ambiguity → logic assumes loss of evidence → traction/brake safety fallback or trip.
Measurement Topologies Used on Rolling Stock
Rolling stock uses multiple physical speed sources and cross-checks them to avoid single-point “truth”.
Rail traction platforms rarely “pick one sensor” for speed. Instead, they build at least two independent physical measurement chains and continuously cross-audit them. A motor-shaft channel provides high-resolution commutation/torque control evidence, while an axle channel better represents wheel-to-rail motion. Cross-check logic detects slip, sensor drift, wiring faults, and intermittent dropouts before they propagate into unsafe torque/brake decisions.
Design rule of thumb: treat motor speed and axle speed as separate evidence sources. Use correlation windows and fault states to decide when disagreement indicates slip vs when it indicates a measurement integrity problem.
Three topology families commonly seen on rolling stock:
- Motor shaft sensing: Resolver or absolute encoder near the traction motor. Best for commutation/torque control evidence, but not always identical to wheel-to-rail motion under slip or gearbox dynamics.
- Axle sensing (independent of motor): Incremental encoder, Hall, or MR sensing on axle/gear-tooth targets. Provides motion evidence closer to wheel behavior, but is sensitive to mechanical gap, contamination, and long-cable interference.
- Redundant cross-check: Motor vs axle correlation and slip detection. Disagreement can be “expected” (adhesion events) or “diagnostic” (pulse loss, threshold ambiguity, drift), depending on operating state.
What each topology must provide as an integrity signal (kept high-level here; deeper methods appear in later chapters):
- Evidence quality indicators: Signal present/absent, saturation/overrange, edge ambiguity, plausible range, and continuity across time windows.
- State-aware interpretation: Acceleration, braking transitions, and low-speed operation require different confidence windows than steady cruise.
- Actionable fault outputs: When evidence is degraded, the system needs clear fallback states (de-rate, switch source, or raise a trip) rather than silent bias.
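As a sketch, the integrity indicators above can travel with each measurement window as one record per channel. All names and the all-or-nothing `usable()` rule here are illustrative assumptions, not from any rail standard:

```python
from dataclasses import dataclass

@dataclass
class EvidenceQuality:
    """Per-window integrity indicators for one speed channel (illustrative names)."""
    signal_present: bool      # carrier/pulses detected in the window
    saturated: bool           # front-end hit its limits
    edge_ambiguous: bool      # borderline threshold crossings observed
    in_plausible_range: bool  # value consistent with vehicle physics
    continuous: bool          # no gap versus the previous window

    def usable(self) -> bool:
        """Evidence is usable only when every indicator is healthy."""
        return (self.signal_present and not self.saturated
                and not self.edge_ambiguous and self.in_plausible_range
                and self.continuous)

q = EvidenceQuality(True, False, False, True, True)
print(q.usable())          # healthy window → True
q.edge_ambiguous = True
print(q.usable())          # any degraded flag → False
```

The point of the structure is that downstream logic consumes a quality state, never a bare number.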
Resolver Interfaces in High-Power Drives
Resolver sensing is valued for rail-grade robustness: analog amplitude encoding, long-cable tolerance, high-temperature reliability, and predictable loss-of-evidence behavior.
On rolling stock, resolver-based sensing is often selected not because it is “more complex,” but because it behaves like a robust evidence source when the electrical environment is hostile. Resolver outputs carry position information as analog sin/cos amplitudes, which helps preserve usable information when common-mode disturbance and EMI would otherwise corrupt edge timing. Over long harnesses and across temperature extremes, gradual amplitude drift tends to remain observable and diagnosable, enabling clear “confidence states” rather than silent bias.
What matters in rail: a resolver chain can degrade in a predictable way (amplitude/lock indicators), supporting controlled fallback instead of intermittent false edges.
Rail-driven reasons resolvers are often preferred:
- Analog amplitude encoding: Sin/cos amplitudes provide “shape” and health cues; interference is more likely to be seen as distortion or SNR loss than as believable-but-wrong edge timing.
- Long-cable stability: With long routes and mixed grounding references, amplitude-based signaling can remain interpretable when edge-based decoding becomes ambiguous.
- High-temperature reliability: Temperature drift tends to appear as amplitude/phase changes that can be monitored as health indicators instead of sudden discontinuities.
- Predictable loss-of-evidence: Undervoltage or wiring faults often manifest as lock loss / amplitude collapse, which supports clear diagnostic states and safe fallback actions.
Interface challenges that define performance in high-power traction environments (kept at system level):
- Excitation drive integrity: Excitation amplitude/frequency stability affects demod consistency; supply dips can bias the chain unless “confidence states” are explicit.
- Synchronous demod robustness: Demod must tolerate EMI and common-mode injection; reduced SNR should surface as a measurable quality drop, not as angle jitter.
- Phase error & quadrature health: Phase offsets translate into position/velocity bias; low-speed operation is especially sensitive to phase-related uncertainty.
- Temperature drift consistency: Drift impacts cross-temperature repeatability; monitoring amplitude balance and lock state is key for diagnosis and maintenance decisions.
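The amplitude-monitoring idea can be made concrete: for demodulated sin/cos samples, the vector magnitude should stay near the excitation envelope, so its collapse or drift maps directly onto the confidence states above. A minimal sketch, with illustrative thresholds and labels:

```python
import math

def resolver_health(sin_v, cos_v, nominal_amp=1.0, tol=0.2):
    """Check the sin/cos amplitude radius against the nominal envelope.

    Returns (radius, status). A collapsed radius suggests loss of signal
    (wiring fault, excitation failure); a drifting radius suggests amplitude
    imbalance or temperature drift. Thresholds are illustrative only.
    """
    radius = math.hypot(sin_v, cos_v)   # ideally constant at nominal_amp
    if radius < 0.25 * nominal_amp:
        return radius, "LOCK_LOSS"      # amplitude collapse: loss of evidence
    if abs(radius - nominal_amp) > tol * nominal_amp:
        return radius, "DEGRADED"       # drift: monitor, plan maintenance
    return radius, "OK"

angle = math.radians(30)
print(resolver_health(math.sin(angle), math.cos(angle))[1])  # healthy → OK
print(resolver_health(0.05, 0.05)[1])                        # collapse → LOCK_LOSS
```

Because the check is independent of the decoded angle, it degrades in the predictable, diagnosable way the section describes.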
Encoder & Digital Position Interfaces
Encoders remain common in rail platforms for high-speed precision, serviceability, and deterministic startup position—while requiring careful integrity design under long-cable and grounding realities.
Rail platforms keep encoder-based sensing because it can deliver very high resolution at speed, simplifies replacement workflows in maintenance cycles, and enables deterministic startup position when absolute sensing is required. The tradeoff is that digital edge integrity can be fragile in traction environments: ground reference shifts, common-mode injection, and long-cable reflections can turn clean edges into jitter, missing pulses, or believable glitches. The engineering objective is to make the link fail “loudly” with measurable counters and confidence states—rather than silently biasing speed evidence.
Rail focus: preserve edge integrity + explainability (jitter/miss/glitch counters) across grounding variation and long harness effects.
Why encoders are still selected in rolling stock programs:
- High-speed precision: Fine edge timing supports stable speed estimation at high rotation rates, improving smooth torque and braking transitions.
- Serviceability: Replacement and field maintenance workflows often favor modular encoder assemblies with clear pass/fail checks.
- Deterministic startup position (absolute): Known position at power-up can reduce ambiguous initialization states in safety-sensitive sequences.
Where encoder chains commonly fail on rolling stock (mechanism → symptom signature):
- Jitter (edge timing flutter): Threshold margin varies with noise and reference movement → speed estimate fluctuates, especially at low speed and during switching transients.
- Missing pulses: Attenuation/reflection and noise can erase weaker edges → intermittent “speed dips” or sudden zero-speed events appear in logs.
- Ground loops / shield current injection: Vehicle ground potential differences and mixed shield termination drive current on shields → interference couples into receiver thresholds.
- Long-cable reflections: Impedance mismatch creates edge ringing or double transitions → false counts or intermittent glitch patterns emerge under certain temperatures/humidity.
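The double-edge and missing-pulse signatures above can be counted from edge timestamps alone, which is what makes the link fail “loudly.” A minimal sketch, assuming a known nominal tooth period and illustrative classification thresholds:

```python
def audit_edges(timestamps_us, nominal_us):
    """Classify edge intervals against a nominal pulse period.

    Intervals far shorter than nominal suggest double edges (ringing,
    reflections); far longer intervals suggest missing pulses. The 0.5x
    and 1.8x bounds are illustrative, not from any standard.
    """
    glitches = missing = 0
    for t0, t1 in zip(timestamps_us, timestamps_us[1:]):
        dt = t1 - t0
        if dt < 0.5 * nominal_us:
            glitches += 1          # suspiciously short: likely a false extra edge
        elif dt > 1.8 * nominal_us:
            missing += int(round(dt / nominal_us)) - 1  # estimated lost edges
    return glitches, missing

# 1000 us nominal; one double edge at 3005, one pulse lost between 4000 and 6000
edges = [1000, 2000, 3000, 3005, 4000, 6000]
print(audit_edges(edges, 1000))  # → (1, 1)
```

Counters like these turn “believable glitches” into logged, trendable quality metrics instead of silent speed bias.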
Hall & MR Axle Speed Sensing AFEs
Axle sensing is an independent wheel-motion evidence chain used for anti-slip and safety decisions, not a “nice-to-have” measurement.
Axle speed sensing is commonly treated as a safety-relevant evidence source because it tracks wheel-to-rail motion more directly than motor-side signals during adhesion events. In rolling stock, this chain is expected to remain interpretable in the presence of vibration, contamination, and harsh electromagnetic conditions. The key challenge is that the input is a physical-world waveform: tooth geometry, magnetic target properties, and sensor placement all shape the analog signature long before any digital edge is produced.
Key idea: Hall/MR axle sensing measures physical state (air gap, target condition, debris), so integrity must be managed as waveform quality → edge decision → time evidence, not as a drop-in encoder replacement.
Rail-relevant physical mechanisms that drive axle-signal variability:
- Tooth wheel / speed ring realities: Runout, missing teeth, and surface condition change the waveform shape and periodicity, creating “structured” anomalies rather than random noise.
- Amplitude variation with speed and target condition: Edge slope and peak amplitude can vary across speed and magnetics, shifting comparator timing unless margins are designed for worst-case.
- Temperature drift: Magnetic strength and sensor sensitivity drift across temperature, changing amplitude balance and raising the risk of borderline threshold behavior.
- Air-gap variation (vibration, bearing, installation): Dynamic air-gap movement modulates amplitude; the same tooth can look “strong” then “weak” within a short window.
- Metal debris contamination: Iron particles alter the magnetic path and distort the waveform, often producing slow degradation and intermittent edge ambiguity.
What the AFE must accomplish (system-level functions; no part numbers):
- Condition the analog signature: Filter and scale the signal to keep tooth-induced variation inside a robust decision window across drift and air-gap changes.
- Make edge decisions repeatable: Comparator thresholds (with suitable stability) convert a variable analog waveform into stable digital timing evidence.
- Expose integrity as counters/flags: Missing/glitch/jitter indicators should surface “degraded evidence” states rather than allowing silent bias in speed estimation.
- Produce time-based evidence: Time capture turns the problem into interval statistics, enabling plausibility checks and cross-audit against motor-side speed.
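The “repeatable edge decision” function is commonly built on hysteresis: two thresholds instead of one, so noise near the boundary cannot toggle the output. A minimal sketch in software form, with illustrative threshold values:

```python
def hysteresis_edges(samples, v_high=0.6, v_low=0.4):
    """Convert a noisy analog waveform into rising-edge indices.

    A single threshold can toggle repeatedly on noise near the boundary;
    separate rising (v_high) and falling (v_low) thresholds make the edge
    decision repeatable. Threshold values are illustrative.
    """
    edges = []
    state = samples[0] > v_high
    for i, v in enumerate(samples):
        if not state and v > v_high:
            state = True
            edges.append(i)        # rising edge accepted
        elif state and v < v_low:
            state = False          # falling edge re-arms the detector
    return edges

# Noise wandering around 0.5 never crosses both thresholds;
# only the true transition at index 4 produces an edge.
sig = [0.1, 0.45, 0.5, 0.48, 0.9, 0.85, 0.3, 0.1]
print(hysteresis_edges(sig))  # → [4]
```

In hardware the same role is played by a comparator with a stable hysteresis band; the sketch only shows why the two-threshold decision window absorbs tooth-induced amplitude variation.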
Isolation & High Common-Mode Immunity
Rail traction environments distort signal shape through common-mode events; isolation separates measurement reference from power-domain chaos to keep timing evidence valid.
In rolling stock, signal integrity failures often originate from common-mode events rather than classic differential noise. High dv/dt switching in traction drives, common-mode currents on motor cables, vehicle-body ground potential differences, and lightning/surge return paths can all shift reference levels and deform waveforms. When reference movement and coupling become large, edge timing and comparator thresholds stop being stable “measuring tools” and become moving targets. Isolation is therefore not just about safety separation; it is a system requirement to keep measurement-domain timing evidence interpretable.
Rail-specific reality: common-mode inputs can warp waveform shape (threshold shift, edge smear, glitches, saturation). Isolation is the boundary that prevents power-domain reference motion from corrupting timing evidence.
Common-mode sources that are especially rail-relevant:
- Traction inverter dv/dt: Fast switching couples through parasitics and raises common-mode transitions that ride on sensing lines.
- Motor-cable common-mode current: HF currents flow on shields/body return paths, injecting interference into nearby harnesses and receivers.
- Vehicle-body ground potential differences: Different cabinets and body nodes are not at the same potential under load; long references drift and shift.
- Lightning / surge return paths: Return currents create fast, large reference excursions that can saturate or clip front-ends and receivers.
How common-mode turns into measurement failure (signal deformation, not part numbers):
- Threshold shift: Receiver reference moves → the “same” waveform crosses different effective thresholds → time capture becomes biased or jittery.
- Edge smear: Effective amplitude/edge slope reduces → crossing time wanders → low-speed and transition regions become unstable.
- Glitch / double-edge behavior: Coupling and reflections create ringing → false toggles appear → counts and speed estimates become intermittently wrong.
- Saturation / clipping: Front-end or receiver hits limits under common-mode excursions → output can look “stable” while evidence is no longer trustworthy.
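The threshold-shift mechanism reduces to simple arithmetic on a linear edge: a common-mode excursion effectively moves the receiver threshold, and the resulting timing error scales inversely with edge slope. The numbers below are illustrative, chosen only to show why a smeared edge amplifies the same reference shift:

```python
def crossing_time_us(slope_v_per_us, v_threshold, cm_shift_v=0.0):
    """Time for a linear edge starting at 0 V to cross the effective threshold.

    A common-mode shift at the receiver moves the *effective* threshold,
    so the same physical edge is captured earlier or later. Illustrative
    values; the point is the amplitude-to-time conversion.
    """
    return (v_threshold + cm_shift_v) / slope_v_per_us

slow_edge = 0.05   # V/us: a smeared edge (long cable, reduced amplitude)
fast_edge = 1.0    # V/us: a healthy, sharp edge
# The same 0.2 V reference shift costs ~4 us on the slow edge but ~0.2 us on the fast one
print(crossing_time_us(slow_edge, 1.0, 0.2) - crossing_time_us(slow_edge, 1.0))  # ≈ 4.0 us
print(crossing_time_us(fast_edge, 1.0, 0.2) - crossing_time_us(fast_edge, 1.0))  # ≈ 0.2 us
```

This is why edge smear and threshold shift compound: once slope degrades, every reference excursion converts into a larger timing error.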
Noise, Jitter & Missing Pulse Phenomena
Field issues often look like “algorithm problems,” but many originate from uncertain signal boundaries: amplitude and reference motion convert into timing errors and corrupted speed evidence.
Common rolling-stock symptoms are recognizable across platforms: low-speed speed “flutter,” sudden high-speed steps, intermittent loss of valid edges, and slip flags that appear without a corresponding physical change. These behaviors frequently share the same root cause: the measurement chain’s boundary is not stable. When waveform shape, threshold margin, or reference level moves, the same physical event can be detected early, late, twice, or not at all. The result is not just a noisy signal—it is distorted timing evidence.
Core chain: Amplitude uncertainty + reference movement → crossing-time uncertainty (Δt) → jitter / missing / false edges → speed misread and false slip indications.
Rail-relevant symptom patterns (what they usually indicate at the evidence level):
- Low-speed jitter: Small crossing-time shifts become a large fraction of the interval → speed estimate “shakes” and traction/brake transitions feel unstable.
- High-speed steps: Double edges or occasional false toggles create sudden count errors → speed jumps and then recovers.
- Intermittent missing pulses: Borderline margins under certain temperature/humidity/grounding → edges disappear in bursts, producing temporary “gaps” in evidence.
- Slip false flags: Cross-check sees divergence, but the divergence aligns with evidence quality collapse (jitter/missing) rather than true adhesion change.
Physical pathways that turn “noise” into timing failure (no control math, no part numbers):
- Amplitude perturbation → time perturbation: Changing amplitude or edge slope shifts the threshold crossing moment, especially when margins are tight.
- Reference movement (ground/CM shift): Effective threshold moves relative to the waveform, so identical inputs yield different capture times from one moment to the next.
- Waveform deformation (ringing/clipping): Reflections and saturation create multiple crossings or suppressed crossings, causing false edges or missing edges.
Evidence-level indicators that should make degraded sensing “loud” rather than silent:
- Interval jitter statistics: Rising spread of tooth/edge intervals is a direct sign of crossing-time uncertainty.
- Missing / glitch counters: Counts of absent edges and abnormal extra toggles separate true motion change from measurement collapse.
- Continuity / plausibility flags: Evidence should carry a confidence state, allowing safe downgrade when the boundary becomes unstable.
- Cross-check correlation drop markers: Divergence aligned with quality collapse points to sensing integrity issues, not necessarily slip.
Cross-Checking Motor vs Axle (Slip Detection Logic Basis)
Rail safety avoids single-sensor dependence: two independent evidence chains are cross-checked, and disagreements trigger structured downgrade rather than silent trust.
Motor-side speed and axle-side speed are intentionally different evidence sources. Their error mechanisms differ: motor-side sensing can be influenced by drivetrain dynamics and electrical environment, while axle sensing is shaped by target physics, air gap, and contamination. Cross-checking is therefore not only about improving accuracy; it is about exposing inconsistency. When the two chains agree within a plausible zone, confidence rises. When they diverge, the system must decide whether it is a true adhesion event (slip/slide) or a sensing integrity collapse—and then apply a conservative downgrade strategy.
Credibility source: agreement between independent physics. Disagreement is not a “bug”; it is an input to fault detection + safe downgrade.
What cross-checking enables (without exposing control algorithms):
- Traction decision sanity: Agreement supports stable torque/brake transitions; disagreement triggers caution and evidence quality review.
- Slip / slide recognition basis: Persistent structured divergence suggests adhesion change; short spikes aligned with quality collapse suggest sensing faults.
- Anomaly identification: Patterns that violate physical continuity (jumps, reversals, intermittent gaps) point to chain integrity issues.
- Downgrade strategy: When evidence is degraded, the system reduces reliance (weighting/freeze/record/alert) rather than trusting corrupted speed.
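A toy version of the disagreement decision, assuming each chain already publishes a quality flag. The agreement band and labels are illustrative; a real implementation would be state-aware (acceleration, braking transition, low speed) and persistence-filtered:

```python
def classify_divergence(motor_mps, axle_mps,
                        quality_ok_motor, quality_ok_axle,
                        agree_band_mps=0.5):
    """Decide whether motor/axle disagreement looks like slip or integrity loss.

    Thresholds and labels are illustrative placeholders.
    """
    if abs(motor_mps - axle_mps) <= agree_band_mps:
        return "AGREE"
    if quality_ok_motor and quality_ok_axle:
        return "SLIP_CANDIDATE"      # both chains healthy: likely physical divergence
    return "EVIDENCE_DEGRADED"       # divergence aligned with quality collapse

print(classify_divergence(20.0, 19.8, True, True))   # → AGREE
print(classify_divergence(22.0, 19.0, True, True))   # → SLIP_CANDIDATE
print(classify_divergence(22.0, 19.0, True, False))  # → EVIDENCE_DEGRADED
```

The key structural point matches the section: the same numeric divergence maps to different conclusions depending on the quality flags, which is why disagreement is an input to fault detection rather than a bug.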
Time Capture, Timestamping & Deterministic Sampling
Speed sensing becomes reliable when treated as time evidence: edge timing, bounded latency, synchronized time bases, and sampling that stays deterministic under disturbance.
In rolling stock, “speed” is rarely a pure analog measurement problem. It is a time measurement problem: teeth, pulses, or decoded edges only become speed after intervals are measured, filtered, and time-aligned. When the capture moment is unstable or the sampling schedule drifts, the system can produce plausible-looking numbers that are not trustworthy evidence. A robust design therefore focuses on timing: stable edge capture, controlled latency, synchronization between domains, and sampling that remains deterministic across EMI and load changes.
Key idea: stable speed evidence requires time integrity (edge timing + bounded latency + synchronized clocks). Analog amplitude matters mainly because it perturbs crossing time.
Timing building blocks that determine whether speed is “truth” or “guess”:
- Edge capture (timestamp at the boundary): Capture should be tied to a stable time base. The goal is repeatable event timing, not merely a clean waveform.
- Sampling latency (bounded and observable): Every path adds delay: front-end conditioning, digital filtering, transport, and software handling. Reliability improves when latency is bounded and tracked.
- Synchronization (common notion of “now”): Motor-side and axle-side evidence must be compared in the same time context. Without alignment, cross-check disagreement can be a clock artifact.
- Aliasing (sampling makes ghosts): If sampling is not fast and deterministic relative to edge dynamics, rare disturbances can fold into false periodic behavior and corrupt speed estimates.
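The building blocks above can be sketched as one rule: a speed sample is evidence only if its interval is sane and its capture timestamp is fresh. All parameters here (tooth count, wheel circumference, latency bound) are illustrative assumptions:

```python
def speed_from_edges(t_edge_us, t_now_us, teeth_per_rev, wheel_circ_m,
                     max_latency_us=5000):
    """Turn two consecutive tooth timestamps into a speed sample plus a validity flag.

    A stale capture (bounded-latency violation) invalidates the sample even
    if the number looks plausible. Parameters are illustrative.
    """
    t0, t1 = t_edge_us
    dt_s = (t1 - t0) * 1e-6
    if dt_s <= 0:
        return None, "BAD_INTERVAL"          # non-physical interval: reject
    speed = wheel_circ_m / teeth_per_rev / dt_s   # metres per second
    if t_now_us - t1 > max_latency_us:
        return speed, "STALE"                # a value exists, trust does not
    return speed, "VALID"

# 80-tooth target, 2.7 m wheel circumference, 1000 us tooth period, fresh capture
print(speed_from_edges((10_000, 11_000), 11_500, 80, 2.7))  # ≈ (33.75, 'VALID')
# Same edges read 9 ms later: value unchanged, trust revoked
print(speed_from_edges((10_000, 11_000), 20_000, 80, 2.7)[1])  # → STALE
```

Returning value and validity together is what lets downstream consumers (traction, braking, protection) agree on whether the same physical moment was trustworthy.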
Why this connects to TCMS / braking / protection (as evidence consumers):
- Consistency across subsystems: When braking, traction, and protection depend on speed evidence, time-aligned event stamps prevent contradictory interpretations of the same physical moment.
- Deterministic audit trails: Event records become actionable when they contain timestamps, latency markers, and quality flags that explain why the system trusted (or rejected) evidence.
- Safe downgrade triggers: When timing integrity degrades (jitter/missing bursts), subsystems can downgrade consistently rather than responding to corrupted speed readings.
Diagnostics & Field Failure Patterns
Field failures are best debugged by separating “what the evidence looks like” from “what the hardware is”: sensor, front-end, and cable faults leave different fingerprints in jitter, missing, and correlation behavior.
Rolling stock diagnostics benefit from treating speed sensing as an evidence chain with explicit quality signals. Real-world failures frequently present as intermittent anomalies: rain-day false alarms, high-temperature drift that slowly erodes margin, post-maintenance offsets due to changed air gap or harness routing, and post-lightning intermittents caused by insulation and reference disturbances. The goal is not to guess a component; the goal is to use observable fingerprints—interval statistics, missing/glitch counters, and motor-vs-axle correlation—to isolate whether the likely culprit is the sensor, the front-end decision boundary, or the cable/harness environment.
Debug principle: isolate by fingerprints. Sensor issues tend to change the analog signature; front-end issues distort threshold behavior; cable issues correlate with environment, grounding, and intermittency.
Common field patterns and what to check first (evidence-first, not part-first):
- Rain / humidity false reports: Look for bursts of glitches/missing aligned with wet conditions and grounding changes; check whether correlation collapses without a physical slip context.
- High-temperature drift: Track the gradual trend: amplitude/edge slope changes → rising jitter before outright missing; verify whether the degradation is monotonic with temperature.
- Post-maintenance bias or new intermittency: Air gap, mounting alignment, and harness rerouting can shift margins; look for a step change in jitter baseline or a new sensitivity to certain operating modes.
- Post-lightning intermittent behavior: Expect reference/insulation disturbances: sporadic saturation-like behavior or sudden missing bursts; check whether failures correlate with high dv/dt moments and transient events.
How to distinguish sensor vs front-end vs cable (practical fingerprints):
- Sensor-side fingerprints: Slowly changing quality with temperature/air gap; degradation often shows as amplitude/edge-slope variability and structured interval wobble tied to mechanical conditions.
- Front-end (decision boundary) fingerprints: Threshold instability yields double edges, inconsistent crossing time, and “borderline” behavior that appears across multiple sensors or channels under the same electrical stress.
- Cable / harness fingerprints: Intermittency with vibration, moisture, connector state, or grounding; correlation collapse can be sudden and environment-driven, often producing missing bursts rather than smooth drift.
- Cross-check as the tie-breaker: If motor-vs-axle divergence aligns with quality collapse in one chain, treat it as integrity loss; if divergence is persistent without quality collapse, treat it as a physical state change candidate.
Verification & Maintenance Strategy
Rail traction is a long-lifecycle environment: speed sensing is not “tune once.” Trust comes from verifiable evidence—continuous self-test, maintainable checks, and auditable logs that explain why data was accepted or downgraded.
A reliable speed chain is an operational system. Its health must remain measurable across seasons, maintenance actions, and high-energy events. Verification should therefore focus on evidence quality (interval jitter, missing/glitch bursts, correlation stability, timebase sanity) rather than only the displayed speed value. When a boundary becomes unstable, the system should not silently “average it out”; it should mark evidence quality, record context, and trigger a structured downgrade for downstream consumers.
Hard rule: speed truth requires verifiable time evidence. The log must capture what happened (timestamps) and why it was trusted (latency + quality flags + correlation state).
Maintenance checks (field-repeatable, designed to expose boundary problems):
- Baseline jitter check: Compare interval jitter statistics to the known “healthy” baseline after sensor replacement, cable rework, grounding changes, or air-gap adjustments.
- Missing / glitch scan window: Run a defined observation window and review missing bursts, double-edge/glitch counts, and correlation drops. Borderline margins reveal themselves as bursts, not as steady errors.
- Correlation sanity (motor vs axle): Verify that normal operating states land in the normal correlation zone and that divergence is explainable by quality collapse or physical context, not by clock misalignment.
- Environmental sensitivity spot-check: Where possible, repeat checks across temperature bands or after wet-condition exposure; margin issues often surface as increased jitter before hard failure.
Verification logic (make pass/fail explainable, not “expert intuition”):
- Green (trusted): Speed value plausible + jitter controlled + missing/glitch near-zero + correlation stable + time alignment valid.
- Yellow (degrading): Value plausible but quality metrics trend worse (jitter rising, occasional missing bursts, correlation weakening). Trigger maintenance planning and increased logging.
- Red (unsafe evidence): Quality collapse (bursts, repeated glitches, timebase issues) or persistent implausible divergence. Force downgrade, record event context, and request service action.
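The three tiers above can be expressed as a small decision function, which is what makes pass/fail explainable rather than intuitive. Every threshold here is an illustrative placeholder for platform-specific limits:

```python
def evidence_tier(value_plausible, jitter_cv, missing_bursts,
                  corr_stable, time_aligned):
    """Map quality metrics to the Green/Yellow/Red tiers (thresholds illustrative)."""
    if not value_plausible or missing_bursts > 3 or not time_aligned:
        return "RED"       # unsafe evidence: force downgrade, record context
    if jitter_cv > 0.05 or missing_bursts > 0 or not corr_stable:
        return "YELLOW"    # degrading: plan maintenance, increase logging
    return "GREEN"         # trusted

print(evidence_tier(True, 0.01, 0, True, True))   # → GREEN
print(evidence_tier(True, 0.08, 1, True, True))   # → YELLOW
print(evidence_tier(True, 0.08, 5, True, True))   # → RED
```

Because the tier is computed from logged metrics, the same function can be re-run offline against recorded data to audit why a past sample was trusted or rejected.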
In-run self-test (continuous supervision without protocol details):
- Trend monitors: Track jitter baseline drift, missing burst detectors, and correlation watchdog states over operating hours and environmental cycles.
- Time integrity checks: Watch for sampling-latency anomalies and timebase sanity failures that can masquerade as slip or sensor fault.
- Action outputs: When integrity degrades, record → downgrade → notify. Avoid silent “masking” that hides boundary collapse until a safety incident.
Logging evidence (what to record so failures are debuggable and auditable):
- Timestamp + latency marker: Attach event timestamps and processing-latency markers to every captured interval or speed update.
- Quality flags: Jitter statistics, missing/glitch counters, and saturation/threshold-instability indicators (concept-level) should accompany measurements.
- Correlation state: Motor-vs-axle correlation zone state (normal / slip-like / sensor-fault-like / degraded-evidence) should be logged with context.
- Service context tags: Maintenance marker, wet-condition marker, temperature band marker, and “post-event” marker enable root-cause triage.
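One possible shape for such a log entry, covering the four bullet groups above in a single record. All field names are illustrative assumptions, not a defined schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class SpeedEvidenceRecord:
    """One auditable speed-evidence log entry (field names are illustrative)."""
    timestamp_us: int            # capture time on the shared time base
    latency_us: int              # processing delay from capture to publication
    speed_mps: float
    jitter_cv: float             # interval-spread quality metric
    missing_count: int
    glitch_count: int
    correlation_state: str       # e.g. "normal" / "slip-like" / "degraded-evidence"
    context_tags: list = field(default_factory=list)  # "post-maintenance", "wet", ...

rec = SpeedEvidenceRecord(1_000_000, 800, 18.4, 0.012, 0, 0,
                          "normal", ["wet-condition"])
print(asdict(rec)["correlation_state"])  # → normal
```

Keeping value, timing, quality, and context in one flat record is what makes later triage ("why was this trusted?") a query rather than a reconstruction.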
Example manufacturer part numbers (MPNs) commonly seen in speed/time-evidence chains (illustrative categories; select per rail standard, isolation level, and platform):
Use MPN examples as reference anchors for the categories. The verification plan should remain evidence-driven: the chain is trustworthy only when jitter/missing/correlation/timebase health can be measured, logged, and maintained over years.
FAQs
Each answer follows the same evidence-first structure: 1-sentence conclusion, 2 evidence checks, and 1 first fix. Mappings point back to the chapter evidence chain.
Low-speed readings jump high/low — insufficient resolution or a noisy edge boundary?
Conclusion: Low-speed instability is usually edge-time uncertainty (noise → time jitter), not “wrong average speed.”
- Evidence to check #1: Interval jitter distribution at low speed: does Δt variance explode while average Δt stays plausible? (H2-7)
- Evidence to check #2: Timestamp/latency determinism: do capture timestamps arrive with variable latency or occasional gaps that fold into speed spikes? (H2-9)
- First fix: Increase edge robustness before filtering: tighten the decision boundary (cleaner transition / stable threshold behavior) and log jitter + missing counters to confirm the boundary stabilized. (H2-7/H2-9)
During regenerative braking the speed suddenly spikes — motor measurement error or wheel slip?
Conclusion: Treat regen spikes as a cross-check problem first—slip-like states and sensor-evidence collapse look similar unless both chains are compared.
- Evidence to check #1: Motor vs axle correlation zone: does divergence land in a slip-like region while both chains still show good quality flags? (H2-8)
- Evidence to check #2: Axle AFE quality: are missing/glitch bursts present near the event (tooth sensing chain under disturbance)? (H2-5)
- First fix: Freeze the diagnosis to evidence: record both channels’ timestamps, correlation state, and quality flags during regen events; only then decide if it is physical slip or a degraded axle chain. (H2-8)
High-speed is stable but low-speed jitters — resolver issue or encoder issue?
Conclusion: If only low-speed degrades, suspect edge boundary margin and time quantization effects before blaming the sensor type.
- Evidence to check #1: Compare low-speed jitter fingerprints: does the issue present as pulse-time jitter (encoder) or phase/amplitude-related instability (resolver chain)? (H2-3/H2-4)
- Evidence to check #2: Glitch/missing counters: low-speed “jumps” often correlate with occasional missed edges or double edges rather than continuous drift. (H2-7)
- First fix: Run a controlled low-speed window test and log jitter + missing/glitch statistics per chain; stabilize the worse chain’s boundary (transition quality) before changing sensor architecture. (H2-7)
After maintenance replacement, braking distance increased — installation error or calibration loss?
Conclusion: Post-maintenance distance changes are most often verification regressions—air-gap/mounting shifts or lost parameters—rather than a new “control behavior.”
- Evidence to check #1: Service marker step-change: did jitter baseline or correlation behavior shift immediately after service? (H2-10)
- Evidence to check #2: Verification tier status: did the chain move from Green to Yellow/Red (quality flags or alignment issues) after replacement? (H2-11)
- First fix: Re-run the maintenance check window: confirm mounting/air gap consistency and restore the validated parameter set; record “pass/fail tier” evidence as part of the service report. (H2-11)
After thunderstorms, intermittent stall alarms — insulation interference or interface damage?
Conclusion: Thunderstorm-related intermittents usually manifest as common-mode integrity loss first; true interface damage shows persistent quality collapse patterns.
- Evidence to check #1: Common-mode sensitivity: do missing bursts/glitches correlate with high dv/dt operating moments or transient events? (H2-6)
- Evidence to check #2: Field fingerprint: does the issue appear only after the event and remain intermittent (connector/harness/insulation), or become consistently degraded (damage)? (H2-10)
- First fix: Force an event log capture on the next occurrence (timestamps + quality flags + correlation) and perform insulation/reference integrity checks before replacing components. (H2-10)
No-load bench test is fine, but on-track operation fails — cable common-mode or poor grounding?
Conclusion: If the bench passes but the train fails, suspect common-mode injection and reference shifts that distort edge timing in real dv/dt conditions.
- Evidence to check #1: Jitter vs. operating state. Does the jitter or glitch rate increase only under traction-inverter activity and long-cable conditions? (H2-6/H2-7)
- Evidence to check #2: Missing-burst patterns. Common-mode and grounding issues often appear as bursts aligned with electrical stress, not as steady offsets. (H2-7)
- First fix: Shorten and harden the reference path: verify grounding continuity, reduce loop exposure, and validate that isolation/common-mode immunity is intact using the same on-track stress window. (H2-6)
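The jitter-vs-operating-state comparison can be reduced to a single ratio, assuming each logged window carries an inverter-activity flag and edge/glitch counts (a hypothetical log layout):

```python
def glitch_rate_by_state(windows):
    """windows: list of (inverter_active: bool, glitch_count: int, edge_count: int).

    Returns the glitch rate per 1000 edges, split by operating state.
    A large active/idle ratio points at common-mode injection under real
    dv/dt stress rather than a steady sensor or reference fault.
    """
    def rate(active):
        g = sum(w[1] for w in windows if w[0] == active)
        e = sum(w[2] for w in windows if w[0] == active)
        return 1000.0 * g / e if e else 0.0
    return {"active": rate(True), "idle": rate(False)}
```

A chain that is clean on the bench but shows an order-of-magnitude higher active-state rate on track fits the common-mode/grounding hypothesis in this entry.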
Axle speed looks normal but traction control alarms — did cross-check strategy trigger?
Conclusion: This pattern often indicates the system rejected evidence quality or correlation context even when a single channel’s value “looks normal.”
- Evidence to check #1: Correlation-zone state. Did the motor-vs-axle correlation briefly enter a fault-like region or lose alignment, triggering a safety reaction? (H2-8)
- Evidence to check #2: Quality flags around the alarm. Look for jitter spikes, missing bursts, or timebase sanity errors that would invalidate “normal-looking” speed. (H2-8)
- First fix: Pull the event record and verify that both channels had aligned timestamps and stable quality flags; treat the alarm as evidence rejection until proven to be a true physical anomaly. (H2-8)
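The "treat it as evidence rejection first" rule can be sketched as a decision helper over the pulled event record. The field names and tolerances are assumptions for illustration:

```python
def classify_alarm(motor, axle, t_align_ms=2.0, corr_tol=0.05):
    """motor/axle: dicts with 'ts_ms', 'speed', 'quality_ok' at the alarm instant.

    Only when both channels were time-aligned AND quality-clean, yet still
    disagreed beyond corr_tol, does the event qualify as a possible physical
    anomaly; otherwise the system most likely rejected evidence quality.
    """
    aligned = abs(motor["ts_ms"] - axle["ts_ms"]) <= t_align_ms
    clean = motor["quality_ok"] and axle["quality_ok"]
    ref = max(abs(motor["speed"]), abs(axle["speed"]), 1e-9)
    disagree = abs(motor["speed"] - axle["speed"]) / ref > corr_tol
    if not (aligned and clean):
        return "evidence_rejection"
    return "physical_anomaly" if disagree else "transient_correlation_excursion"
```

Note that a "normal-looking" single-channel value passes none of these checks on its own; the verdict depends on alignment and quality context, which is the point of this FAQ entry.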
Cold-start deviation is large but warm operation is normal — temperature drift or phase error?
Conclusion: Cold-start-only issues typically indicate temperature-dependent margin shifts (phase/offset behavior) rather than random noise.
- Evidence to check #1: Temperature-band fingerprint. Does the error shrink monotonically as temperature rises, consistent with drift/phase behavior? (H2-10)
- Evidence to check #2: Resolver-chain stability markers. Look for phase-related inconsistency or amplitude-related edge instability at cold start. (H2-3)
- First fix: Log cold-start sessions with temperature context and verify the chain remains within the Green/Yellow criteria; treat cold start as a dedicated verification scenario, not an occasional anomaly. (H2-11 via evidence practice)
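The monotonicity fingerprint above is easy to automate over a logged warm-up session. This is a minimal sketch, assuming (temperature, absolute-error) pairs are available from the cold-start log:

```python
def is_drift_like(samples, slack=0.0):
    """samples: list of (temp_C, abs_error) pairs logged during warm-up, any order.

    Returns True when |error| falls (within slack) as temperature rises --
    the signature of deterministic temperature drift / phase behavior,
    as opposed to random noise, which shows no temperature ordering.
    """
    ordered = sorted(samples)                 # order by temperature
    errs = [e for _, e in ordered]
    return all(b <= a + slack for a, b in zip(errs, errs[1:]))
```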
Only the high-speed region is wrong — sampling latency or missing pulses?
Conclusion: High-speed-only errors are usually caused by missed edges or non-deterministic capture latency, not a slow analog drift.
- Evidence to check #1: Missing/glitch counters under high edge rate. Do missing bursts rise with speed, indicating boundary failures at high frequency? (H2-7)
- Evidence to check #2: Latency determinism. Does capture-to-availability delay vary with load, causing alias-like artifacts in the computed speed? (H2-9)
- First fix: Run a deterministic sampling/capture audit at the target speed band and record missing/latency markers; stabilize edge integrity and timing determinism before tuning any downstream filters. (H2-9)
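The sampling/capture audit can be summarized per speed band to separate the two failure modes. The record layout is an assumption; the split it produces is the point:

```python
def highspeed_audit(records):
    """records: list of (speed_band, missing_count, edge_count, latency_us).

    Aggregates per band: a missing-edge rate that rises with speed points at
    edge-boundary failure under high edge rate (H2-7); a widening latency
    spread points at non-deterministic capture timing (H2-9).
    """
    acc = {}
    for band, miss, edges, lat in records:
        b = acc.setdefault(band, {"miss": 0, "edges": 0, "lat": []})
        b["miss"] += miss
        b["edges"] += edges
        b["lat"].append(lat)
    return {band: {"missing_per_1k": 1000.0 * b["miss"] / b["edges"],
                   "latency_spread_us": max(b["lat"]) - min(b["lat"])}
            for band, b in acc.items()}
```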
After changing wheel diameter, braking becomes abnormal — parameter issue or sensor issue?
Conclusion: Wheel-diameter changes first affect the physical mapping from pulses to speed; verify parameter integrity before suspecting sensor failure.
- Evidence to check #1: Cross-check consistency. Does the motor-vs-axle correlation shift systematically (scaling mismatch) while the quality flags remain healthy? (H2-8)
- Evidence to check #2: Verification record. Was the new parameter set validated and logged as Green after the change, or was it applied without a verification-tier update? (H2-11)
- First fix: Restore a verified parameter set and re-run the maintenance verification window; move on to sensor/harness checks only if evidence quality and correlation still fail. (H2-11)
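The physical mapping at stake here is v = π · D · f / PPR, so a wheel-diameter parameter error scales every axle-speed reading by D_wrong / D_true, which the cross-check sees as a constant ratio with healthy quality flags. A minimal sketch (parameter values are examples only):

```python
import math

def axle_speed_mps(pulse_freq_hz, wheel_diameter_m, pulses_per_rev):
    """Linear speed from pulse frequency: v = pi * D * f / PPR.

    A wrong wheel-diameter parameter multiplies every reading by
    D_wrong / D_true -- a systematic scaling error, not a sensor fault.
    """
    return math.pi * wheel_diameter_m * pulse_freq_hz / pulses_per_rev

def scaling_mismatch(axle_speed, motor_derived_speed):
    """A ratio that stays constant (and != 1.0) across the speed range
    points at a parameter error rather than degraded evidence quality."""
    return axle_speed / motor_derived_speed
```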
Drift increases gradually over long runtime — magnetic decay or electronic drift?
Conclusion: Gradual drift is best distinguished by fingerprints: mechanical/magnetic effects often correlate with environment and contamination, while electronics drift correlates with temperature and timebase stability.
- Evidence to check #1: Axle-sensing signature trend. Does the drift track air-gap changes, debris/contamination, or amplitude-margin changes typical of tooth-sensing chains? (H2-5)
- Evidence to check #2: Field pattern. Does the drift follow temperature cycles and present as a jitter-baseline shift, suggesting changes in electronics/time-measurement stability? (H2-10)
- First fix: Trend-log the quality metrics (jitter, missing, correlation) and perform a targeted physical inspection/cleaning of the axle-sensing environment before swapping electronics. (H2-10)
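Trend-logging only helps if the trend is quantified; a least-squares slope over (runtime, metric) samples gives a single number to correlate with temperature logs (electronics drift) or inspection findings (air gap, debris). A minimal sketch, assuming a list of logged pairs:

```python
def metric_trend(samples):
    """samples: list of (runtime_h, metric) pairs, e.g. jitter baseline over months.

    Returns the least-squares slope in metric units per hour. A steady
    positive slope is the 'gradual drift' fingerprint worth correlating
    with environment (mechanical/magnetic) or temperature (electronic) logs.
    """
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(m for _, m in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * m for t, m in samples)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```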
Occasionally speed becomes 0 and protection trips — missing edges or timestamp anomaly?
Conclusion: Sudden “speed = 0” events are usually evidence-chain failures (missing bursts or time-integrity collapse) rather than a true instantaneous stop.
- Evidence to check #1: Missing-burst detector. Confirm whether a gap in edge arrival coincides with the zero reading (cable/edge-boundary collapse). (H2-7)
- Evidence to check #2: Timestamp continuity and latency markers. A timebase or capture anomaly can manufacture a zero-speed estimate even when edges exist. (H2-9)
- First fix: Enable “zero-event” capture: record timestamps, missing/glitch counters, and correlation state around the event, then correct the first failing link (edge integrity or timebase). (H2-7/H2-9)
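Once the zero-event capture is in place, the "first failing link" decision can be sketched as a triage over the recorded flags. The flag names and the gap threshold are illustrative assumptions:

```python
def classify_zero_event(edge_gap_ms, max_gap_ms, timebase_jump, capture_latency_ok):
    """Triage a captured speed=0 event.

    edge_gap_ms        - longest interval without edges around the event
    max_gap_ms         - gap beyond which the estimator legitimately reports 0
    timebase_jump      - True if the timestamp continuity check failed
    capture_latency_ok - True if capture-to-availability delay stayed bounded

    Time integrity is checked first: a timebase/capture fault can fabricate
    a zero-speed estimate even when edges were actually arriving.
    """
    if timebase_jump or not capture_latency_ok:
        return "timebase_or_capture_anomaly"   # correct timing first (H2-9)
    if edge_gap_ms > max_gap_ms:
        return "missing_edge_burst"            # correct edge integrity first (H2-7)
    return "unexplained_check_estimator"
```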
Implementation note: this FAQ section is designed for mobile-first WP rendering (single column, no side-by-side). Each item is an evidence-first diagnostic entry point and links back to the chapter evidence chain via the mapping chips.