
Small Cell / DAS RU: RF Tx/Rx, Timing/Sync, Ethernet & PoE


Small Cell/DAS RUs succeed when RF performance, timing/sync, Ethernet robustness, and PoE power/thermal limits are engineered as one measurable system. This page shows where each function lands inside the RU and how to validate stability with evidence (counters, waveforms, and temperature correlation) for production and field troubleshooting.

H2-1 · What this page covers (Small Cell / DAS RU as a deployable box)

This page focuses on how a Small Cell / DAS Remote Unit turns three constraints—RF chain, timing/sync, and PoE-powered Ethernet—into a manufacturable, maintainable field box. The goal is not a textbook RF overview, but an engineering map of what must be inside, where failures show up, and how to validate the design before deployment.

Scope boundary (keeps the page vertically deep)
  • In-scope: RU box partitioning (RF/IF/timing/Ethernet/PoE), power & thermal limits, sync integration as an endpoint, and field validation.
  • Out-of-scope: DU/CU baseband compute, O-RU system stack (eCPRI/JESD/DPD deep dive), and AAS/massive-MIMO multi-channel beamforming.

Small Cell RU vs DAS RU (differences that change the hardware)

Design dimension: Output power & thermal density
  • Small Cell RU: higher risk of PA derating in outdoor enclosures; the thermal path dominates EVM/ACLR stability.
  • DAS RU: output power may be lower, but distributed nodes amplify maintenance/ESD exposure and long-term drift.
Design dimension: Channel count & RF partitioning
  • Small Cell RU: often 1–2 chains; RF performance is limited by PA/LNA bias, filtering, and port protection.
  • DAS RU: multiple remote endpoints make uniformity and field swappability key; calibration and monitoring must be simple.
Design dimension: Interface & power delivery
  • Small Cell RU: Ethernet backhaul/fronthaul with strict uptime; PoE limits can cap Tx power or duty cycle.
  • DAS RU: longer cabling and more touch points increase surge/ESD and mis-wiring risk; protection must be tolerant.

Three bottlenecks to design around (and how they show up in the field)

1) PoE power budget (not a static number)
  • Why it bites: cable loss + temperature derating + PD current limits reduce usable power under peak load.
  • Typical symptoms: unexpected resets, Tx power caps, link flaps during bursts.
  • What to log/measure: PD state, input droop, DC/DC current limit events, reboot counters.
2) Thermal budget (fanless box physics)
  • Why it bites: PA and power stages drift with temperature; small heatsinks turn minor inefficiency into major EVM/ACLR loss.
  • Typical symptoms: EVM rises after warm-up, ACLR degrades, PA derates at repeatable case temperatures.
  • What to log/measure: hotspot sensors, derating events, Tx power vs temperature curves.
3) Timing & link reliability (sync as an RU endpoint)
  • Why it bites: jitter/wander and sync loss turn into RF impairment or service drops; recovery behavior matters as much as lock accuracy.
  • Typical symptoms: PTP lock loss, holdover entry/exit “glitches”, brief outages that look like RF faults.
  • What to log/measure: lock state, holdover active time, offset threshold crossings, link reconnect count.
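The PoE budget arithmetic behind bottleneck 1 can be sketched as a quick worst-case calculation. The class levels below are the 802.3af/at/bt PSE output powers; the cable-loss, derating, and reserve inputs are illustrative assumptions, not vendor figures.

```python
# Sketch: usable PoE power at the PD under worst-case cable loss, thermal
# derating, and a reserve margin. Class levels are the 802.3 PSE outputs;
# the loss/derating/reserve inputs below are illustrative assumptions.

POE_CLASS_PSE_W = {3: 15.4, 4: 30.0, 6: 60.0, 8: 90.0}  # 802.3af/at/bt PSE output

def usable_poe_power_w(poe_class: int,
                       cable_loss_w: float,
                       thermal_derate_pct: float,
                       reserve_pct: float) -> float:
    """Usable budget = PSE output - worst-case cable loss, derated for hot
    conditions, minus a reserve for inrush/bursts/recovery states."""
    p = POE_CLASS_PSE_W[poe_class] - cable_loss_w
    p *= 1.0 - thermal_derate_pct / 100.0
    p *= 1.0 - reserve_pct / 100.0
    return round(p, 2)

# Example: 802.3at class 4 over 100 m of poor cable (~4.5 W loss, assumed),
# hot enclosure (5% derate, assumed), 10% reserve for inrush and bursts.
BUDGET_W = usable_poe_power_w(4, cable_loss_w=4.5, thermal_derate_pct=5, reserve_pct=10)
```

The point of the sketch is that the number bound to RF output modes should be BUDGET_W, not the nameplate class power.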

What this page delivers (so designs can be signed off)
  • A one-glance block diagram (RF / IF AFE / timing & sync / Ethernet & PoE power).
  • Practical budgets: PoE usable power vs peak load, thermal derating triggers, sync holdover targets.
  • Where to place timing/sync blocks in an RU, and what to monitor for stable field behavior.
  • A validation and troubleshooting path from symptoms → evidence → root cause.
Figure F1 — Small Cell / DAS RU “one-glance” box: RF, Sync, Ethernet & PoE power
[Figure: block diagram of the RU highlighting the RF Tx/Rx chain (PA, LNA, filters, duplex/TDD switch), IF AFE (up/down conversion, VGA/AGC), timing/sync (PTP client, SyncE slave, jitter cleaner, holdover), and Ethernet & PoE power (PHY, isolation, PD, DC/DC rails), annotated with where typical failures appear: EVM/ACLR rise, link drop, thermal derate, PTP unlock.]
Use this overview to keep the design discussion anchored to the RU box: RF chain, endpoint sync behavior, and PoE-powered Ethernet reliability.

H2-2 · Deployment-driven requirements (turn installation reality into measurable targets)

Small cell and DAS remote units fail in the field for reasons that rarely appear in lab-only block diagrams: cabling losses, temperature swings, ESD/surge events, and sync recovery behavior. This section converts deployment conditions into six measurable requirements that drive every later design choice.

Deployment scenarios (what they break first)

Indoor DAS (distributed endpoints, non-expert touch)
  • Risk pattern: frequent touch points and long cable runs amplify ESD/surge exposure and mis-wiring.
  • Common “silent failure”: protection/grounding changes increase Ethernet errors or raise RF noise floor over time.
  • Early validation focus: post-ESD functional drift checks (EVM/BER vs baseline) and remote alarms/logging completeness.
Outdoor small cell (heat, weather, lightning)
  • Risk pattern: solar load and enclosure thermal resistance push PA and power stages into derating.
  • Common misdiagnosis: thermal drift looks like an RF design problem (EVM/ACLR worsens after warm-up).
  • Early validation focus: temperature sweeps with repeatable derating thresholds and stable sync recovery behavior.
Mixed deployments (constraint conflicts)
  • Risk pattern: “fix one thing, break another” (e.g., added EMI parts increase PHY loss or PoE droop sensitivity).
  • Common failure: borderline designs pass the lab but fail under combined stress (heat + load + sync disturbances).
  • Early validation focus: A/B testing of protection/filtering changes against BER, EVM, and reboot counters.

Six measurable requirements (each must have a pass/fail check)

  1. P_budget (usable PoE power): measured under peak traffic, worst-case cable, and temperature—validated by droop + PD/DC/DC event counters.
  2. T_case (enclosure thermal limit): pass/fail defined by derating onset and EVM/ACLR stability after warm-up.
  3. Surge/ESD robustness: not only “no damage”, but “no performance drift” (BER/EVM baseline compare after strikes).
  4. Holdover target: how long the RU maintains service when sync disappears, and how cleanly it recovers (no large step/glitch).
  5. Ethernet link quality: beyond link-up—BER/packet loss/reconnect rates under EMI and cable stress.
  6. Remote management coverage: alarms + logs must explain field symptoms (sync state, thermal derate, PD state, error counters).
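The six requirements above only work as acceptance criteria if each one reduces to a machine-checkable pass/fail. A minimal sketch, assuming illustrative field names and thresholds (the 0.5 dB EVM drift limit and one-reconnect-per-day bound are placeholders, not spec values):

```python
# Sketch: the six measurable requirements as named pass/fail checks over
# one campaign's measurements. Field names and thresholds are illustrative
# placeholders, not normative limits.

REQUIREMENT_CHECKS = {
    "P_budget":    lambda m: m["peak_load_w"] <= m["usable_poe_w"],
    "T_case":      lambda m: m["case_temp_c"] < m["derate_onset_c"],
    "surge_esd":   lambda m: abs(m["evm_after_db"] - m["evm_before_db"]) <= 0.5,
    "holdover":    lambda m: m["holdover_ok_s"] >= m["holdover_target_s"],
    "eth_link":    lambda m: m["reconnects_per_day"] <= 1,
    "remote_mgmt": lambda m: m["unexplained_reboots"] == 0,
}

def evaluate(measurements: dict) -> dict:
    """Return {requirement: True/False} for one test campaign."""
    return {name: check(measurements) for name, check in REQUIREMENT_CHECKS.items()}
```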

With these requirements defined, the rest of the page maps them to concrete blocks: RF chain (H2-3/4), IF AFE (H2-5), clocking and sync (H2-6/7), Ethernet robustness (H2-8), and the PoE power tree (H2-9).

Figure F2 — Deployment scenario → requirement mapping (what drives what)
[Figure: mapping of indoor DAS, outdoor small cell, and mixed deployments to the six measurable requirements (P_budget usable PoE power, T_case thermal limit, surge/ESD no-drift, holdover recovery, Ethernet link quality, remote management), with primary and secondary drivers marked.]
Treat these six requirements as the top-level acceptance criteria. Later sections show how each requirement maps to specific circuits and test points.

H2-3 · RF signal chain partition (Tx/Rx blocks, responsibility, and test points)

A remote unit RF front-end should be partitioned as measurable blocks with clear spec ownership and test points. This prevents “RF mystery failures” by tying field symptoms (EVM drift, power drop, receive sensitivity loss) to where to measure and which block is responsible.

Tx partition (driver → PA → filter/duplex → antenna)
  • Spec focus: output power, linearity, ACLR/EVM stability across temperature.
  • Failure signature: warm-up drift (EVM/ACLR worsens after minutes) often correlates with PA thermal state and supply headroom.
  • Evidence to collect: PA output coupler power vs temperature, and coupler power vs supply droop under peak load.
Rx partition (antenna → filter/duplex → LNA → downconvert → IF out)
  • Spec focus: noise figure (NF), blocker tolerance, and intermodulation behavior under strong adjacent signals.
  • Failure signature: “good in lab, bad in site” often indicates blocker-driven compression or front-end leakage rather than weak-signal issues.
  • Evidence to collect: post-LNA noise floor and gain under controlled blocker injection.
TDD/FDD selection constraints (isolation and self-interference)
  • TDD switching: switch isolation and leakage determine how much Tx energy re-enters the Rx chain.
  • Common symptom: Rx degradation that appears “random” but is repeatable at specific duty cycles or Tx power levels.
  • Evidence to collect: leakage proxy measurements (before LNA) correlated with Rx sensitivity/EVM changes.
Coupling & detection chain (power control and protection evidence)
  • Coupler placement: PA-output vs antenna-side coupling measures different “power truths” (including filter/duplex effects).
  • Detector role: provides a stable observable for closed-loop power control and protection triggers.
  • Validation action: compare detector reading vs external power meter over temperature and band to confirm calibration granularity.
Partition checklist (pass/fail oriented)
  • TPs exist and are accessible: PA output coupler, post-LNA, and IF out.
  • Each block has a primary owner: PA linearity, LNA noise, filter out-of-band rejection, switch isolation.
  • Each symptom has evidence: EVM drift ↔ thermal/power; Rx loss ↔ compression/leakage; power drop ↔ derating/protection.
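The symptom-to-evidence pairs in the checklist can be kept as a small lookup that tells a field engineer where to measure first. A sketch, with entries taken from this section; the structure and function name are illustrative:

```python
# Sketch: map field symptoms to test points (TP1-TP3) and the evidence to
# collect, following the partition checklist. Entries are taken from the
# symptom/evidence pairs in this section; the structure is illustrative.

SYMPTOM_EVIDENCE = {
    "evm_drift": {
        "test_point": "TP1 (PA output coupler)",
        "evidence": ["coupler power vs temperature", "coupler power vs supply droop"],
        "suspect_blocks": ["PA thermal state", "supply headroom"],
    },
    "rx_sensitivity_loss": {
        "test_point": "TP2 (post-LNA)",
        "evidence": ["noise floor under blocker injection", "leakage proxy before LNA"],
        "suspect_blocks": ["blocker compression", "switch isolation"],
    },
    "tx_power_drop": {
        "test_point": "TP1 (PA output coupler)",
        "evidence": ["derating events", "protection trigger log"],
        "suspect_blocks": ["thermal derate", "protection loop"],
    },
}

def next_measurement(symptom: str) -> str:
    """First measurement to take for a reported symptom."""
    entry = SYMPTOM_EVIDENCE[symptom]
    return f"Measure at {entry['test_point']}: {entry['evidence'][0]}"
```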
Figure F3 — RF Tx/Rx partition with test points (TP) and spec owners
[Figure: Tx and Rx branches with duplex/filter, TDD switch, driver/PA, coupler + detector, LNA, and downconverter to IF out; three test points (TP1 PA output coupler, TP2 post-LNA, TP3 IF out) and per-block spec owners: out-of-band rejection, switch isolation, PA linearity, LNA noise.]
Keep RF discussions anchored to blocks with owners and test points. Most field issues become solvable when symptoms map to TP evidence.

H2-4 · PA/LNA biasing & protection (from “can transmit” to “stays stable in the field”)

In small cell and DAS remote units, the highest field failure rate often comes from biasing and protection, not from the RF block diagram itself. A robust design treats PA/LNA bias as a controlled system with measured sensors, debounced thresholds, and explicit recovery behavior.

PA bias control (static bias + temperature compensation + start/stop ramp)
  • Ramp behavior matters: soft enable prevents supply droop and avoids false protection triggers.
  • Temperature compensation goal: stabilize linearity and reduce warm-up drift, not maximize bias current.
  • Validation action: capture input droop, PA current peak, and time-to-stable EVM for multiple start/stop profiles.
LNA bias integrity (low-noise rails and ripple-to-noise conversion)
  • Noise path: supply ripple and ground movement can raise the post-LNA noise floor or inject modulation artifacts.
  • Blocker realism: ripple effects become visible under strong adjacent signals where headroom is reduced.
  • Validation action: measure post-LNA noise floor while sweeping supply ripple and verifying sensitivity degradation thresholds.
Protection loop (current / temperature / VSWR → action strategy)
  • Trip: immediate shutoff for safety-critical events; highest service impact.
  • Derate: controlled reduction (power, duty, bias) to keep service stable; requires smooth curves to avoid oscillation.
  • Latch-off: prevents repeated damage, but demands clear remote diagnostics and safe recovery procedures.
Field symptom mapping (evidence-first prioritization)
  • Power drop: check thermal derate events → current limit → VSWR triggers → bias stability.
  • EVM degradation: check PA bias/thermal stability → rail noise → leakage/compression evidence.
  • Intermittent resets: check input droop and protection trip loops; confirm the event log explains every restart.
Protection acceptance criteria (must be testable)
  • No false trips: thresholds include debounce/blanking, and normal transients do not trigger protection.
  • Predictable behavior: trip/derate/latch-off actions are deterministic and recover cleanly.
  • Actionable logs: every protection event writes a reason code, sensor snapshot, and a timestamp.
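The acceptance criteria above (debounce, deterministic actions, actionable logs) can be sketched as a small protection state machine. Thresholds, debounce counts, and the latch policy below are illustrative assumptions, not recommended values:

```python
# Sketch: debounced protection policy with deterministic trip/derate/
# latch-off actions and a logged reason + snapshot + timestamp for every
# event. Thresholds and counts are illustrative assumptions.
import time

class ProtectionLoop:
    TEMP_DERATE_C = 85.0   # start derating (assumed)
    TEMP_TRIP_C   = 105.0  # immediate shutoff (assumed)
    DEBOUNCE_N    = 3      # consecutive over-limit samples required
    LATCH_AFTER   = 5      # trips before latch-off (assumed)

    def __init__(self):
        self.state = "run"
        self.over_count = 0
        self.trip_count = 0
        self.events = []   # (timestamp, action, reason, sensor snapshot)

    def _log(self, action, reason, snapshot):
        self.events.append((time.time(), action, reason, snapshot))

    def sample(self, temp_c: float, current_a: float) -> str:
        snapshot = {"temp_c": temp_c, "current_a": current_a}
        if temp_c >= self.TEMP_TRIP_C:
            self.over_count += 1
            if self.over_count >= self.DEBOUNCE_N:  # debounce: no single-sample trips
                self.trip_count += 1
                if self.trip_count >= self.LATCH_AFTER:
                    self.state = "latch_off"
                    self._log("latch_off", "repeated overtemp trips", snapshot)
                else:
                    self.state = "trip"
                    self._log("trip", "overtemp", snapshot)
        elif temp_c >= self.TEMP_DERATE_C:
            self.over_count = 0
            if self.state == "run":
                self.state = "derate"
                self._log("derate", "approaching overtemp", snapshot)
        else:
            self.over_count = 0
            if self.state in ("derate", "trip"):
                self.state = "run"
                self._log("recover", "temp back in range", snapshot)
        return self.state
```

Note how a single over-limit sample (a normal transient) never triggers protection, while every state change leaves a log entry that explains it.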
Figure F4 — Bias and protection loop (sensors → controller → actuators → PA/LNA)
[Figure: closed loop in which sensors (temperature, VSWR, current) feed a controller with debounced thresholds, state/policy logic, and an event log; actuators (bias DAC, LDO/rail, gate control) drive the PA and LNA. Actions: trip, derate, latch-off, each logged with reason, snapshot, and timestamp.]
A stable RU requires debounced thresholds, deterministic actions (trip/derate/latch-off), and logs that explain every protection event.

H2-5 · IF AFE (up/down conversion + VGA/AGC) — dynamic range and interference recovery

The IF analog front end determines whether a remote unit keeps working under real interference: it sets the usable dynamic range, controls how quickly saturation recovers, and shapes which noise/spur paths become visible as EVM/ACLR degradation. This section stays on analog chain + clocks + dynamic range + lightweight calibration, and does not cover JESD or baseband protocol details.

Frequency plan (IF selection trade-offs)
  • Image vs filtering: IF placement shifts how hard image suppression and channel filtering must work.
  • LO leakage visibility: poor isolation can create fixed “signature tones” that look like unexplained spurs.
  • Spur classification: distinguish fixed-location spurs (reference/divider leakage) from configuration-linked spurs (synth/mixing products).
VGA/AGC stability (loop bandwidth and recovery behavior)
  • Loop bandwidth: too fast causes gain “hunting”; too slow causes long outage after bursts or blockers.
  • Saturation recovery: the most practical metric is time-to-stable after a strong interferer disappears.
  • Detection point choice: gain control is only as good as the measurement point used for AGC decisions.
Noise / linearity paths (how they reach EVM without deep math)
  • Noise floor path: IF noise + VGA gain distribution sets the baseline EVM floor under weak signal.
  • Compression path: blocker → IF stage compression → in-band distortion products → EVM rises under load.
  • Phase noise path (at IF): LO phase noise can translate into close-in noise skirts that look like “mysterious” EVM loss.
Lightweight calibration (minimum viable DC offset + IQ imbalance)
  • DC offset: measure a “quiet” baseline condition and apply a small correction to avoid false clipping and biased AGC decisions.
  • IQ imbalance: detect image residue and apply a minimal coefficient update (factory + periodic service window).
  • When to rerun: temperature transitions, reference changes, or repeated saturation events that shift baseline behavior.
IF AFE acceptance checklist (testable)
  • No “hidden outage”: saturation recovery time is bounded and repeatable under burst/blocker tests.
  • AGC is stable: no sustained oscillation or gain hunting across expected blocker levels.
  • Spurs are classifiable: fixed vs configuration-linked spurs can be separated and tied to a path.
  • Calibration is serviceable: DC/IQ calibration can run without deep baseband involvement and leaves logs/flags.
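The "time-to-stable" recovery metric from the checklist can be computed directly from a logged AGC gain trace. A minimal sketch, assuming a sampled trace and illustrative settling tolerance and window:

```python
# Sketch: measure time-to-stable after a blocker disappears, the IF AFE
# recovery metric described above. Operates on a logged gain trace; the
# settle tolerance and stability window are illustrative assumptions.

def recovery_time(gain_trace, t_blocker_off, settle_db=0.5, window=5):
    """Return the sample offset (after t_blocker_off) at which the gain
    first stays within +/-settle_db of its final value for `window`
    consecutive samples, or None if it never settles."""
    final = gain_trace[-1]
    tail = gain_trace[t_blocker_off:]
    run = 0
    for i, g in enumerate(tail):
        run = run + 1 if abs(g - final) <= settle_db else 0
        if run >= window:
            return i - window + 1   # first sample of the stable run
    return None
```

Running this over repeated burst/blocker tests gives the bounded, repeatable recovery number the checklist asks for, and a None result flags a "hidden outage".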
Figure F5 — IF AFE dynamic-range “bucket” (signal, blocker, noise floor, VGA window, clipping)
[Figure: bar-style dynamic-range "bucket" showing the IF-chain noise floor, desired signal window, blocker/burst interferer level, VGA gain range, and ADC/next-stage clipping limit, highlighting the compression and slow-recovery regions, AGC loop bandwidth, saturation + recovery, and spur visibility.]
The practical outage risk is not “whether saturation happens,” but how fast the IF chain recovers and how stable AGC remains under bursts.

H2-6 · Frequency synthesis & clock tree — phase noise, spurs, and domain-aware distribution

In a compact RU, clocking problems often appear as “RF issues” (EVM drift, ACLR shoulders, intermittent lock loss). A workable design treats frequency synthesis as a system: reference source choice, synthesizer spur management, and a clock tree that respects domain sensitivity. This section builds a minimum jitter-cleaning chain and prepares the ground for timing/sync sections without diving into PTP/SyncE protocol mechanics.

Reference source (TCXO vs OCXO) — “RU needs” only
  • Power/thermal limits: PoE and sealed enclosures constrain warm-up power and steady dissipation.
  • Stability goal: pick the source based on required drift/hold behavior of the RU, not as a standalone “best clock.”
  • Verification: track drift and lock behavior across temperature ramps and power cycles.
PLL / synthesizer (phase-noise and spur paths)
  • Phase-noise path: LO phase noise can translate into modulation error and spectral regrowth.
  • Spur management: classify spurs by behavior (fixed vs configuration-linked) and tie them to reference/divider/mixing origins.
  • Verification: record spur presence and EVM/ACLR correlation across channel plans and temperature corners.
Clock distribution (domain-aware sensitivity)
  • RF LO domain: sensitive to phase noise and spur injection.
  • IF sampling domain: sensitive to jitter that degrades sampling accuracy and error vector stability.
  • Ethernet PHY domain: sensitive to wander and lock/relock behavior over long time scales.
Minimum jitter-cleaning chain (ref in → cleaner → fanout)
  • Ref in: stable reference delivered to a single “cleaning boundary.”
  • Jitter cleaner: provides a controlled output and (optionally) a hold behavior for short disturbances.
  • Fanout: isolates domains so noisy loads do not contaminate RF/IF sensitive paths.
Clocking validation checklist (practical)
  • Spurs are explainable: each major spur class can be tied to a known origin and configuration dependency.
  • Domain isolation works: PHY activity or management clocks do not inject spurs/jitter into RF/IF domains.
  • Lock behavior is deterministic: relock time and temperature drift behavior are measurable and repeatable.
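The fixed-vs-configuration-linked spur split in the checklist can be automated over spur tables captured per channel plan. A sketch under the assumption that spur observations are stored as frequency lists keyed by plan; the tolerance is illustrative:

```python
# Sketch: classify spurs as "fixed" (same frequency across channel plans,
# e.g. reference/divider leakage) or "configuration-linked" (frequency
# moves with the synth setting). The observation format is an assumption.

def classify_spurs(observations, tol_hz=1e3):
    """observations: {channel_plan: [spur_freq_hz, ...]}.
    A spur seen at (nearly) the same frequency in every plan is 'fixed';
    everything else is 'configuration-linked'."""
    plans = list(observations.values())
    fixed, linked = [], []
    for f in plans[0]:
        if all(any(abs(f - g) <= tol_hz for g in plan) for plan in plans[1:]):
            fixed.append(f)
        else:
            linked.append(f)
    # spurs that only appear in some other plan are configuration-linked too
    for plan in plans[1:]:
        for g in plan:
            if not any(abs(g - f) <= tol_hz for f in fixed + linked):
                linked.append(g)
    return {"fixed": fixed, "configuration_linked": sorted(set(linked))}
```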
Figure F6 — Clock domains map (RF LO / IF sampling / Ethernet PHY) with sensitivity tags
[Figure: reference source (TCXO/OCXO) feeding the PLL/synthesizer (phase noise, spurs) and jitter cleaner (clean boundary, holdover), then a fanout providing domain isolation to RF LO (phase noise), IF sampling (jitter), and Ethernet PHY (wander); verification points for lock, spurs, and EVM/ACLR correlation.]
Domain-aware distribution prevents “non-RF clocks” from injecting spurs or jitter into RF/IF paths, and keeps lock behavior predictable.

H2-7 · Timing & sync integration — PTP/SyncE at the RU endpoint

A Small Cell / DAS RU must treat timing as an endpoint engineering problem: where sync lands in hardware, how lock health is monitored, and how the RU behaves under link disturbances. The focus here is placement + observability + holdover + controlled degradation inside the RU, not network-wide timing switch design.

RU as a PTP client/slave (hardware assist and landing points)
  • Hardware timestamps: timestamps must be taken at a deterministic boundary (near MAC/PHY) to avoid variable latency.
  • 1PPS / ToD hooks: a simple pulse/time interface makes step events and drift measurable during commissioning.
  • Clock health exposure: jitter-cleaner lock state and reference selection state must be readable by management firmware.
SyncE at the RU (PHY frequency reference, RU-side view only)
  • Input relationship: SyncE arrives with Ethernet PHY frequency and becomes one candidate for the RU reference selector.
  • Output relationship: once selected and cleaned, the RU clock tree distributes frequency to RF/IF and transport blocks.
  • Health signals: “present/absent,” “quality change,” and “switch events” are more actionable than protocol terminology.
Holdover (what stays valid, for how long, and when to degrade)
  • Triggering: holdover starts when SyncE/PTP inputs are lost or declared unhealthy (link-down is not the only trigger).
  • Business constraints: define acceptance using RU-visible KPIs (frequency error budget, EVM stability, relock behavior).
  • Policy: short holdover may freeze settings; prolonged holdover should enter a controlled degradation mode.
Observability (logs and counters that enable field diagnosis)
  • State: locked / holdover / free-run, plus selected reference source and cleaner lock state.
  • Events: ToD step detected, lock-loss event, reference switch event, relock attempt event.
  • Counters: PTP offset over-limit count, holdover enter count, relock attempts, ToD step count.
Endpoint timing acceptance checklist (testable in the RU)
  • Deterministic lock behavior: relock time and reference switching are repeatable across power cycles and temperature ramps.
  • ToD step visibility: time-step detection produces a log event and a counter increment with a timestamp.
  • Holdover policy is enforced: after a defined holdover duration or KPI drift, the RU enters a controlled degrade mode.
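The endpoint behavior in this section (locked / holdover / free-run, counters, and a holdover duration policy) can be sketched as a small state machine. The one-hour holdover limit and method names are illustrative assumptions, not values from any timing profile:

```python
# Sketch: RU endpoint sync state with the counters listed above. The
# holdover limit is an assumed policy; real triggers come from PTP/SyncE
# health declarations, not just link-down.

class SyncEndpoint:
    HOLDOVER_LIMIT_S = 3600  # assumed policy: controlled degrade after 1 h

    def __init__(self):
        self.state = "free_run"
        self.holdover_enter_count = 0
        self.relock_attempts = 0
        self.tod_step_count = 0
        self.holdover_elapsed_s = 0
        self.degraded = False

    def on_lock(self):
        if self.state != "locked":
            self.relock_attempts += 1
        self.state = "locked"
        self.holdover_elapsed_s = 0
        self.degraded = False

    def on_reference_lost(self):
        # lostSyncE / lostPTP / declared-unhealthy all land here
        if self.state == "locked":
            self.state = "holdover"
            self.holdover_enter_count += 1

    def on_tod_step(self):
        self.tod_step_count += 1   # ToD step: log event + counter increment

    def tick(self, dt_s: int):
        if self.state == "holdover":
            self.holdover_elapsed_s += dt_s
            if self.holdover_elapsed_s >= self.HOLDOVER_LIMIT_S:
                self.degraded = True   # controlled degradation mode
```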
Figure F7 — RU sync chain with failure states (lostSyncE / lostPTP / holdoverActive)
[Figure: Ethernet in (SyncE + PTP) feeding a hardware timestamp unit (PTP slave) and jitter cleaner, then the local clock tree distributing to RF LO and IF sampling; three failure states (lostSyncE, lostPTP, holdoverActive) mapped to alarm, degrade, and relock actions.]
The RU must expose sync state transitions (lostSyncE / lostPTP / holdoverActive) and tie them to alarm + degrade + relock behavior.

H2-8 · Ethernet fronthaul/backhaul interface — PHY, isolation, and surge-resilient stability

RU Ethernet design is not just “link up”: it must remain stable under outdoor ESD/surge events, unpredictable cabling, and tight EMI constraints in a sealed enclosure. This section focuses on RU-side port shape, PHY selection, isolation placement, and protection parasitics that can quietly degrade signal integrity.

Port topology (single/dual port, redundancy, ring — hardware impact only)
  • Dual ports: require two complete protection + magnetics chains to avoid cross-coupled failures.
  • Service reality: field plug/unplug events and cable unknowns demand robust ESD handling and link retraining.
  • Ring mention only: treat ring as “fast recovery needed,” without diving into ring protocol details.
PHY selection criteria (RU constraints)
  • Temperature range: stable behavior through outdoor thermal swings and hot enclosure conditions.
  • EMI behavior: common-mode noise tolerance and predictable emissions under real cable conditions.
  • Power budget: PHY power and heat must fit a sealed RU thermal budget.
  • Cable margin: tolerance to return loss and cable quality variance (field reality, not lab cables).
Isolation and common-mode control (reasons + placement)
  • Why isolate: ground potential differences and surge common-mode currents must not enter sensitive RU grounds.
  • Placement: port → protection zone → magnetics → PHY defines a clear boundary for energy diversion vs signal integrity.
  • Shield bonding: the shield-to-chassis point must be deliberate to prevent common-mode current roaming.
ESD/surge path vs signal integrity (parasitics that “bite back”)
  • TVS capacitance: protection capacitance can reduce eye margin and increase intermittent link drops.
  • CMC behavior: poor placement or selection can convert common-mode energy into differential distortion.
  • Verification: correlate link flap/CRC counters with surge tests and check for retrain storms after events.
“Stable link” evidence checklist (RU-side)
  • After ESD/surge: link retrains and returns to steady state without repeated flapping.
  • SI margin: protection parasitics do not cause chronic CRC/PCS error growth under worst-case cabling.
  • Logs: link flap count, retrain count, CRC error count, and event timestamps are collected for field diagnosis.
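The "returns to steady state without repeated flapping" criterion can be checked mechanically against the counters in the evidence list. A sketch, assuming cumulative flap/CRC counters sampled around a surge event; the settle window and flap allowance are placeholders:

```python
# Sketch: correlate a surge-test event with link counters to decide
# whether the port returned to steady state (the evidence checklist
# above). The log format and thresholds are illustrative assumptions.

def link_stable_after_event(samples, event_idx, settle_samples=3,
                            max_flaps_after=1):
    """samples: list of dicts with cumulative 'flaps' and 'crc' counters.
    Pass if flaps stop increasing within `settle_samples` of the event
    and neither flaps nor CRC errors grow afterwards."""
    after = samples[event_idx:]
    if len(after) <= settle_samples:
        return False  # not enough post-event data to judge
    flaps_during_settle = after[settle_samples]["flaps"] - after[0]["flaps"]
    flaps_tail = after[-1]["flaps"] - after[settle_samples]["flaps"]
    crc_tail = after[-1]["crc"] - after[settle_samples]["crc"]
    return (flaps_during_settle <= max_flaps_after
            and flaps_tail == 0 and crc_tail == 0)
```

A single retrain right after the strike passes; a retrain storm or chronic CRC growth fails.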
Figure F8 — Ethernet port protection + isolation placement (TVS/CMC/shield bond)
[Figure: signal path from port (RJ45/SFP) through the protection zone and magnetics to PHY and MAC, with TVS, common-mode choke, and a deliberate shield-to-chassis bond; the ESD/surge current path is diverted to chassis, with notes that TVS capacitance plus the CMC can reduce SI margin, and verification hooks for CRC/flap and retrain counters.]
Protection must divert ESD/surge energy to chassis while keeping TVS/CMC parasitics from eroding link stability.

H2-9 · PoE PD power tree (802.3af/at/bt) — budget, sequencing, isolation, and rail ownership

In Small Cell / DAS RUs, PoE is not just “power delivery.” It sets the real ceiling for RF output, thermal headroom, and field stability. A usable design starts with worst-case available power, then builds a PD + isolated conversion chain with inrush control, multi-rail sequencing, and fault visibility.

Available power (engineering budget, not nameplate)
  • Classified vs usable: PoE class sets the upper bound, but cable loss and hot conditions reduce usable power.
  • Worst-case planning: budget with long/poor cabling and elevated ambient, then bind RF output modes to that budget.
  • Margin discipline: reserve power for inrush, transient bursts, and recovery states to avoid repeated brownouts.
PD controller capabilities (mapped to field symptoms)
  • Classification/handshake: negotiation outcomes must be readable (class, power granted, retry reason).
  • Inrush limiting: uncontrolled inrush often looks like “random reboot” during cold starts or plug-in events.
  • Thermal protection: PD/bridge temperature limiting can silently cap power and trigger cascading rail drops.
  • Power allocation: enforce rail priorities so PA bursts do not starve low-noise analog or management rails.
Isolated DC/DC boundary (layout logic)
  • Isolation boundary: keep the boundary explicit and treat it as the anchor for EMI filtering and return control.
  • EMI filtering: place filtering to control where common-mode and differential energy flows, not just to “add parts.”
  • Ground partition: high-power PA returns must not share sensitive analog return paths by accident.
Rails and sequencing (PA / low-noise analog / digital management)
  • PA rail: owns peak power, transient load steps, and derating behavior under thermal constraints.
  • Low-noise analog rails: own noise floor and spur cleanliness for LNA/AFE/clock-sensitive blocks.
  • Digital & management rails: own deterministic behavior, including watchdog, reset, PG gating, fault logging, and safe retry policies.
PoE RU evidence checklist (what must be provable)
  • Startup determinism: cold/hot starts succeed without repeated brownouts or negotiation loops.
  • Rail priority holds: PA bursts do not collapse low-noise or management rails.
  • Fault visibility: PD class, inrush events, PG faults, and latch reasons are logged with timestamps.
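The PG-driven bring-up order described above (management first, then low-noise analog, then the PA rail) can be sketched as a sequencer with a retry limit and a readable fault cause. Rail names, retry count, and the `pg_read` callback are illustrative assumptions:

```python
# Sketch: sequenced bring-up driven by power-good (PG) flags, in the rail
# order described above, with a retry limit and a latched, readable fault
# cause. Rail names and the pg_read interface are illustrative.

RAIL_ORDER = ["digital_mgmt", "low_noise_analog", "pa_high_power"]

def bring_up(pg_read, max_retries=3):
    """pg_read(rail) -> bool power-good. Returns (ok, log)."""
    log = []
    for rail in RAIL_ORDER:
        for attempt in range(1, max_retries + 1):
            if pg_read(rail):
                log.append(f"{rail}: PG ok (attempt {attempt})")
                break
            log.append(f"{rail}: PG fault (attempt {attempt})")
        else:
            # retries exhausted: latch and report instead of looping forever
            log.append(f"FAULT LATCH: {rail} never reached PG")
            return False, log
    return True, log
```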
Figure F9 — PoE → isolated DC/DC → multi-rail sequencing (inrush, PG, fault latch)
[Figure: RJ45 PoE input to PD controller (class handshake), inrush limiter, and isolated DC/DC conversion, then a sequencer (PG, reset, retry) with a fault latch feeding three power domains, each with a PG indicator: PA high power, low-noise analog, and digital & management.]
A PoE RU must enforce inrush control, explicit isolation boundaries, rail ownership, and sequenced PG-driven bring-up with readable fault causes.

H2-10 · Thermal, mechanics & environmental hardening — derating as a closed loop

RU reliability is decided by heat and environment. The goal is not a generic “thermal discussion,” but a closed-loop derating system: identify dominant heat sources, define thermal paths to the enclosure, place sensors that reflect both case and hotspots, and execute deterministic actions that protect RF performance without oscillating between states.

Heat sources (dominant blocks, RU view)
  • PA: highest power density, directly affects output capability and linearity under heat stress.
  • DC/DC: efficiency and switching losses create localized hotspots near magnetics and power stages.
  • PHY/SoC: management heat matters in sealed boxes, especially when cable conditions force higher transmit effort.
Thermal paths (what actually moves heat out)
  • Path chain: silicon → PCB copper → interface material → enclosure → ambient.
  • Bottlenecks: interface pressure, pad aging, and enclosure coupling often dominate long-term drift.
  • Design intent: thermal paths must be explicit so sensor readings can explain behavior under load.
Derating strategy (temperature → actions)
  • Tiered actions: reduce PA output power first, then apply broader service limitations if temperature continues rising.
  • Stability: use hysteresis and minimum dwell times to avoid rapid oscillation between states.
  • Recoverability: define clear exit conditions for each derate level and record the transition reason.
Sensor placement (case vs hotspot, avoiding false decisions)
  • Case sensor: tracks enclosure and environment, useful for long-term trend and site conditions.
  • Hotspot sensors: near PA and power stages to protect silicon and prevent runaway heating.
  • False readings: wrong placement causes false derating or missed overheating—both create field failures.
Thermal evidence checklist (must be repeatable)
  • Worst-case stability: RU reaches a stable operating point or a stable derated state without oscillation.
  • Explainable transitions: every derate entry/exit is explainable via sensor readings and logged reasons.
  • Field-aligned logging: log temperature peaks, dwell times, and derate levels for site diagnosis.
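The tiered strategy above (hysteresis, minimum dwell, logged transition reasons) can be sketched as a small derating controller. The temperatures, dwell time, and power steps are illustrative assumptions, not recommended limits:

```python
# Sketch: tiered PA derating with hysteresis and a minimum dwell time to
# prevent state oscillation, per the strategy above. Temperatures, dwell,
# and power steps are illustrative assumptions.

class DerateController:
    # (enter_temp_c, exit_temp_c, tx_power_limit_dbm); level 0 = full power
    LEVELS = [(None, None, 24.0), (85.0, 80.0, 21.0), (95.0, 90.0, 18.0)]
    MIN_DWELL_S = 60  # minimum time in a level before any transition

    def __init__(self):
        self.level = 0
        self.dwell_s = 0
        self.transitions = []  # (new level, logged reason)

    def step(self, hotspot_c: float, dt_s: int) -> float:
        """Feed one hotspot sample; returns the current Tx power limit."""
        self.dwell_s += dt_s
        if self.dwell_s >= self.MIN_DWELL_S:
            nxt = self.level + 1
            if nxt < len(self.LEVELS) and hotspot_c >= self.LEVELS[nxt][0]:
                self.level, self.dwell_s = nxt, 0
                self.transitions.append((nxt, f"hotspot {hotspot_c:.1f}C >= enter"))
            elif self.level > 0 and hotspot_c <= self.LEVELS[self.level][1]:
                self.level, self.dwell_s = self.level - 1, 0
                self.transitions.append((self.level, f"hotspot {hotspot_c:.1f}C <= exit"))
        return self.LEVELS[self.level][2]
```

The enter/exit gap (85/80, 95/90) is the hysteresis band; combined with the dwell timer, a temperature hovering near a threshold cannot toggle the derate level every sample.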
Figure F10 — Thermal map + derating loop (sensors → controller → PA power derate)
[Figure: RU enclosure with PA, DC/DC, and PHY/SoC heat sources coupled to the enclosure/heatsink and out to ambient; hotspot and case sensors feed a thermal controller (state, hysteresis) whose action is PA power derate; ingress, corrosion, and lightning noted as environmental stressors.]
A reliable RU treats temperature as a control loop: sensors drive deterministic derating actions that protect RF performance and avoid state oscillation.

H2-11 · Validation & troubleshooting checklist — turning field failures into repeatable lab workflows

This section defines “done” using evidence: deterministic bring-up, RF stability under temperature and VSWR stress, time/sync resilience (lock → holdover → relock), and EMC survival without hidden degradation. Each checklist block follows the same format: Setup → Procedure → Pass/Fail → Evidence.

A) Bring-up (power sequencing, PoE negotiation, PG/reset, Ethernet link-up)
Setup
  • Test at cold and hot conditions (ambient extremes + sealed enclosure steady state).
  • Use at least two cable cases: short/good cable and long/worst cable.
  • Enable RU logging for PoE states, PG/reset causes, and Ethernet link events.
Procedure
  1. Power-cycle (≥10 times) and record success/failure rate and time-to-ready.
  2. Perform fast unplug/plug events and observe negotiation retries and brownout behavior.
  3. Force peak load during bring-up (management + PHY active + RF idle then RF enable).
  4. Verify sequencing: management rails stable first, then analog/PLL rails, then PA high-power rail.
Pass/Fail
  • Pass: deterministic bring-up without repeated resets or negotiation loops.
  • Pass: PG chain is monotonic (no oscillation); no “phantom” PG drop under normal transients.
  • Fail: periodic resets, repeated PoE renegotiation, or link flapping during rail enable.
Evidence to capture
  • PoE state timeline: class/allocated power, inrush event markers, retry reason codes.
  • Key waveforms: inrush current, isolated bus droop, PG edges, reset line, rail ramp timing.
  • Ethernet: link-up time, link flap count, CRC/PCS error counters during and after bring-up.
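The repeated power-cycle step in Procedure 1 reduces to a small statistics pass over per-cycle results. A minimal sketch, assuming a hypothetical test-harness hook `power_cycle()` that returns `(ready, time_to_ready_s, reset_count, poe_retries)` for each cycle; the 30 s ready budget is an assumed value.

```python
# Sketch of checklist A's repeated power-cycle step: run N cycles, collect
# time-to-ready, and apply the pass criteria (deterministic bring-up, no
# repeated resets or PoE negotiation loops). `power_cycle` is a
# hypothetical harness hook; the ready budget is an illustrative number.

def bringup_stats(power_cycle, cycles=10, ready_budget_s=30.0):
    results = [power_cycle() for _ in range(cycles)]
    ready_times = [t for ok, t, _, _ in results if ok]
    report = {
        "cycles": cycles,
        "success_rate": len(ready_times) / cycles,
        "worst_time_to_ready_s": max(ready_times) if ready_times else None,
        "total_resets": sum(r for _, _, r, _ in results),
        "total_poe_retries": sum(p for _, _, _, p in results),
    }
    # Pass: every cycle reaches ready within budget, with no resets and
    # no PoE renegotiation loops anywhere in the run.
    report["pass"] = (
        report["success_rate"] == 1.0
        and report["worst_time_to_ready_s"] is not None
        and report["worst_time_to_ready_s"] <= ready_budget_s
        and report["total_resets"] == 0
        and report["total_poe_retries"] == 0
    )
    return report
```

Running the same function at both cable cases (short/good and long/worst) gives directly comparable evidence for the bring-up report.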
B) RF validation (output power / linearity / EVM vs temperature + VSWR fault injection)
Setup
  • Use a calibrated VSA/power meter and a controlled thermal condition (steady-state at multiple points).
  • Prepare a controlled mismatch / VSWR injection method (step-by-step, reversible).
  • Ensure PA bias/protection telemetry is readable (overtemp, overcurrent, VSWR trip/derate).
Procedure
  1. Measure output power and modulation quality at nominal temperature.
  2. Repeat after thermal soak (hot enclosure) and after cold start (if applicable).
  3. Apply VSWR steps: normal → moderate mismatch → fault injection; record RU response each step.
  4. Confirm protection policy: derate vs trip vs latch-off, and verify recovery conditions.
Pass/Fail
  • Pass: performance drifts are explainable (temperature/rail/bias) and do not cause uncontrolled oscillations.
  • Pass: VSWR events trigger the intended action (derate/trip) and are logged with clear reason codes.
  • Fail: EVM/linearity collapses with no corresponding thermal/rail evidence or protection state change.
Evidence to capture
  • RF results: power, spectrum, and modulation quality snapshots per temperature and VSWR step.
  • Bias/rails: PA rail droop, bias DAC/driver state, protection trip counters, hotspot temperatures.
  • Event correlation: timestamped link between RF degradation and thermal/PoE/sync state transitions.
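The VSWR step procedure can be driven by a short sweep loop that checks the RU's protection response at each step. A minimal sketch under stated assumptions: `set_mismatch` and `read_ru` are hypothetical harness hooks, and the VSWR-to-action policy table is illustrative, not a standard.

```python
# Sketch of checklist B's VSWR steps: walk reversible mismatch settings,
# verify the intended protection action at each one, and require a logged
# reason code for every non-normal action. Hooks and policy values are
# illustrative assumptions.

VSWR_POLICY = {
    1.5: "normal",   # expected protection state per VSWR step
    3.0: "derate",
    6.0: "trip",
}

def vswr_sweep(set_mismatch, read_ru):
    records, ok = [], True
    for vswr, expected in sorted(VSWR_POLICY.items()):
        set_mismatch(vswr)
        state = read_ru()  # e.g. {"protection": "derate", "reason": "..."}
        matched = state.get("protection") == expected
        # Every derate/trip must carry a clear reason code in the log.
        logged = expected == "normal" or bool(state.get("reason"))
        records.append((vswr, expected, state, matched and logged))
        ok = ok and matched and logged
    set_mismatch(1.0)  # restore the match: the injection must be reversible
    return ok, records
```

The per-step records double as the "RF results per VSWR step" evidence once the RF snapshots are attached to each entry.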
C) Sync validation (PTP lock, holdover entry/exit, ToD step injection, recovery time)
Setup
  • Enable RU sync status outputs: lock state, holdoverActive, reference selection state.
  • Prepare a controlled link disturbance (PTP interruption and/or link flap) and a ToD step scenario.
  • Keep RF and Ethernet traffic at representative load (not idle-only validation).
Procedure
  1. Record steady lock health under normal link conditions (baseline counters and offset behavior).
  2. Interrupt PTP and observe: lock loss detection → holdover entry → alarm behavior → RF impact.
  3. Restore PTP and measure: relock time and stability (no repeated lock oscillation).
  4. Inject a time-step event and verify that step is detected, logged, and handled as defined.
Pass/Fail
  • Pass: holdover entry/exit is deterministic and visible; relock completes within the defined window.
  • Pass: ToD step events generate counters/logs and do not create silent time jumps.
  • Fail: repeated lock oscillation, unlogged step events, or sync failures that masquerade as “RF issues.”
Evidence to capture
  • Lock state timeline: lostPTP, lostSyncE (if used), holdoverActive, reference switch events.
  • Counters: offset over-limit count, step count, relock attempts, holdover enter count.
  • Correlation: RF/Ethernet KPI changes aligned to sync state transitions.
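The pass/fail rules for checklist C can be evaluated mechanically from a timestamped lock-state timeline. A minimal sketch, assuming the timeline uses the state names from this page's log fields and that the procedure injects exactly one PTP interruption (so repeated holdover entries indicate oscillation); the 60 s relock window is an assumed value.

```python
# Sketch of evaluating checklist C evidence: from a list of
# (t_seconds, state) samples with states like "locked", "acquiring",
# "holdoverActive", compute holdover entries, relock time, and oscillation.

def analyze_sync_timeline(timeline, relock_window_s=60.0):
    holdover_entries = 0
    relock_times = []
    prev_state, holdover_t = None, None
    for t, state in timeline:
        if state == "holdoverActive" and prev_state != "holdoverActive":
            holdover_entries += 1
            holdover_t = t
        if state == "locked" and prev_state != "locked" and holdover_t is not None:
            relock_times.append(t - holdover_t)  # holdover entry -> relock
            holdover_t = None
        prev_state = state
    # One injected interruption should produce exactly one holdover entry;
    # more than one in the capture indicates lock/holdover oscillation.
    oscillating = holdover_entries > 1
    return {
        "holdover_entries": holdover_entries,
        "relock_times_s": relock_times,
        "oscillating": oscillating,
        "pass": not oscillating
                and all(rt <= relock_window_s for rt in relock_times),
    }
```

Feeding the same function the RF/Ethernet KPI timestamps makes the correlation step a simple join on time.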
D) EMC resilience (ESD/surge survival + hidden degradation checks)
Setup
  • Run “before” baselines: RF quality, Ethernet error counters, sync stability, and thermal behavior.
  • Enable persistent logs with timestamps and retain them across resets.
Procedure
  1. Apply the defined ESD/surge events at the port and chassis points per the RU test plan.
  2. Re-run the same baseline measurements and compare deltas (not just “still alive”).
  3. Check for latent issues: link flap storms, CRC growth, sync offsets becoming noisy, RF quality drift.
Pass/Fail
  • Pass: no persistent KPI regression and no growth in error counters beyond defined tolerance.
  • Fail: “works” but with higher EVM, higher packet errors, or unstable sync/holdover behavior.
Evidence to capture
  • Delta report: pre/post event RF snapshots + Ethernet/sync counter deltas.
  • Event markers: ESD/surge time stamps aligned with resets, link flaps, and protection trips.
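The "compare deltas, not just still-alive" rule in checklist D reduces to a per-metric tolerance check on pre/post snapshots. A minimal sketch: metric names mirror this page's log fields, but the tolerance values are illustrative assumptions, not limits from any standard.

```python
# Sketch of the checklist-D delta report: compare pre/post KPI and counter
# snapshots against per-metric tolerances so latent EMC degradation is
# caught even when the unit "still works". Tolerances are illustrative.

TOLERANCES = {
    "crc_errors": 0,           # allowed counter growth (absolute)
    "link_flaps": 0,
    "evm_db": 0.5,             # allowed EVM degradation in dB
    "sync_offset_ns_p95": 20,  # allowed increase in ns
}

def emc_delta_report(pre, post):
    report, ok = {}, True
    for metric, tol in TOLERANCES.items():
        delta = post[metric] - pre[metric]
        passed = delta <= tol
        report[metric] = {"pre": pre[metric], "post": post[metric],
                          "delta": delta, "pass": passed}
        ok = ok and passed
    report["pass"] = ok
    return report
```

Running this once before the ESD/surge campaign and once after each event level gives the delta report the evidence list asks for.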
E) Field observability (minimum log fields that make root cause provable)
  • PoE / power: class/granted power, inrush events, bus undervoltage events, PG fault source, retry reason.
  • Thermal: case temp, hotspot temp peaks, derate level, dwell time per derate level, thermal limit events.
  • Sync: lock state, holdoverActive, reference selection, offset over-limit count, step count, relock attempts.
  • Ethernet: link flap count, CRC/PCS errors, retrain count, timestamped link transitions.
  • RF protection: VSWR trip/derate count, PA overtemp count, PA overcurrent count, power-backoff state.
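One way to keep these minimum log fields consistent between production test and field diagnosis is a single flat record type per sample, so every domain shares one timestamp. A minimal sketch with illustrative field names (a subset of the list above), not a defined log format.

```python
# Sketch of a minimal field-observability record: one flat row per sample
# covering PoE/power, thermal, sync, Ethernet, and RF-protection state on
# a shared timestamp. Field names are illustrative assumptions.

from dataclasses import dataclass, asdict

@dataclass
class RuHealthRecord:
    ts_utc: float
    # PoE / power
    poe_class: int
    granted_power_w: float
    pg_fault_source: str
    # Thermal
    case_temp_c: float
    hotspot_peak_c: float
    derate_level: int
    # Sync
    lock_state: str
    holdover_active: bool
    relock_attempts: int
    # Ethernet
    link_flap_count: int
    crc_errors: int
    # RF protection
    vswr_trip_count: int
    pa_overtemp_count: int

    def to_row(self):
        # Flat dict so site tooling can correlate domains by timestamp.
        return asdict(self)
```

Because every domain lives in the same row, "counters + waveforms + temperature correlation" becomes a lookup rather than a log-merging exercise.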
Example IC building blocks (representative part numbers for checklist mapping)

These are examples to anchor BOM searches and validation mapping. Always verify current datasheets and availability.

PoE PD / interface
  • TI: TPS2372 / TPS2373 families
  • Analog Devices: LT4294 / LT4293 families
  • Analog Devices: LT4295 (PD + isolated conversion controller class)
Sync / clock conditioning
  • Skyworks: Si5345 family
  • Analog Devices: AD9546 family
Ethernet PHY (timestamps / robust link)
  • Marvell Alaska: 88E15xx families
  • Microchip: VSC85xx families
RF power detect (evidence + protection)
  • Analog Devices: ADL5902 (RMS detector class)
  • Analog Devices: AD8318 (log detector class)
Figure F11 — Symptom → priority checks → evidence → localization path
(Figure description) Three-column triage flow for the Small Cell / DAS RU. Symptoms: site drop / intermittent link loss; power drop / output backoff; bad EVM / linearity drift; random reboot / reset storms. Priority checks: PoE / rails (class, inrush, PG); thermal / derate (hotspot vs case); sync state (lock, holdover); port protection (ESD/surge → SI). Evidence: logs & counters (lock, PG, CRC, trips); waveforms (inrush, rails, PG); temperature points (case vs hotspot). Rule: no diagnosis without evidence (counters + waveforms + temperature correlation).
Use the map to prioritize checks and capture evidence before changing hardware or firmware policies.

H2-12 · FAQs (Small Cell / DAS RU)

These FAQs target common field symptoms (EVM drift, link errors, sync loss, intermittent resets) and map each answer to the relevant section for deeper troubleshooting steps.

1) What is the practical boundary between a Small Cell RU and a DAS RU, and their key constraints? → H2-1/2
A Small Cell RU is typically a single-sector, compact radio endpoint with tight power and thermal limits, often with Ethernet fronthaul/backhaul and local timing needs. A DAS RU prioritizes distributed coverage across many remote nodes and long cabling, so robustness, ESD/surge tolerance, and maintenance simplicity dominate. The design bottlenecks are almost always power budget, thermal headroom, and link/sync stability.
2) The same PA looks fine on the bench—why does linearity collapse in an outdoor enclosure? Check thermal or power first? → H2-4/10
Start with evidence: if EVM/ACLR drifts smoothly with hotspot temperature while PA rail remains stable, thermal stress and bias/derating behavior are likely drivers. If linearity collapses during load bursts with visible rail droop, PG glitches, or PoE renegotiation, power budget and sequencing are primary suspects. The fastest triage is “temperature trace + rail droop + protection state.”
3) What “random” Rx problems can happen when TDD switch isolation is insufficient? → H2-3
Poor isolation lets Tx leakage appear as in-band “self-jamming,” raising the noise floor and causing intermittent desense. It can also drive the LNA or IF chain into compression, triggering slow recovery that looks random under bursty traffic. Typical signatures include Rx sensitivity swings, sporadic EVM degradation, and blocker-like behavior without an external interferer. Verify by correlating Rx metrics with TDD timing and switch control states.
4) How should the IF frequency be chosen to reduce image/spurs without “overbuilding” filters? → H2-5
IF selection is a trade between image separation, practical filter slopes, LO leakage management, and spur placement. A higher IF often eases image rejection but can push filtering complexity and sensitivity to LO feedthrough or harmonics. A lower IF simplifies bandwidth but risks tighter image constraints and reciprocal mixing issues. The quickest method is to map expected spurs and images against the channel plan, then pick the IF that leaves the widest “clean window.”
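The "map spurs and images against the channel plan" step can be automated as a quick scoring loop. A minimal sketch under stated assumptions: low-side LO injection, low-order mixer products |m·fLO − n·fRF| only, and illustrative frequencies; a real spur chart would also weight products by mixer order and conversion loss.

```python
# Sketch of the clean-window method: for each candidate IF, enumerate
# low-order mixer products and score how far the closest unwanted product
# lands from the IF passband edge. Frequencies in MHz; all numbers and
# the low-side LO assumption are illustrative.

def clean_window_mhz(f_rf, f_if, if_bw, max_order=3):
    f_lo = f_rf - f_if  # low-side injection assumed
    worst = float("inf")
    for m in range(0, max_order + 1):
        for n in range(0, max_order + 1):
            if (m, n) == (1, 1):
                continue  # the wanted mixing product
            f_spur = abs(m * f_lo - n * f_rf)
            # Distance from this spur to the IF passband edge.
            dist = abs(f_spur - f_if) - if_bw / 2
            worst = min(worst, dist)
    return worst

def pick_if(f_rf, candidates, if_bw):
    # Choose the candidate IF whose nearest spur is farthest away.
    return max(candidates, key=lambda f_if: clean_window_mhz(f_rf, f_if, if_bw))
```

This is deliberately crude: it does not replace a full spur chart, but it makes the "widest clean window" comparison between candidate IFs explicit and repeatable.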
5) What symptoms appear when the VGA/AGC loop bandwidth is too fast or too slow? → H2-5
If AGC is too fast, gain “pumping” can track modulation or interferers, showing up as EVM wobble, amplitude ripple, and unstable power readings. If AGC is too slow, strong blockers cause compression and long recovery tails, producing dropouts or burst errors after interference events. A good loop is fast enough to protect headroom but slow enough to avoid chasing symbols; confirm by observing gain state and saturation recovery timing under blocker injection.
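Both failure modes are easy to reproduce with a toy loop model. This is a first-order illustrative simulation, not a model of any specific VGA: the loop gain `alpha` stands in for loop bandwidth, and the envelope sequences are assumed test stimuli.

```python
# Toy first-order AGC loop making both failure modes visible: a loop that
# is too fast "pumps" with the modulation envelope, while one that is too
# slow leaves the chain compressed long after a blocker disappears.
# Purely illustrative; no specific VGA/AGC part is modeled.

def run_agc(envelope, alpha, target=1.0, g0=1.0):
    g, gains = g0, []
    for level in envelope:
        err = target - g * level   # detector error vs target output level
        g += alpha * err           # first-order loop update (alpha ~ bandwidth)
        gains.append(g)
    return gains

def gain_ripple(gains):
    return max(gains) - min(gains)
```

With a modulated envelope, a large `alpha` produces visible gain pumping (high ripple) while a small `alpha` holds gain nearly constant; with a blocker step, the small-`alpha` loop stays far below target well after the blocker ends, which is the long recovery tail described above.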
6) What are the “visible” symptoms of phase-noise/jitter issues, and how to quickly isolate PLL vs reference? → H2-6
Jitter/phase-noise problems often look like an EVM floor that will not improve with power, plus noise skirts or wandering constellation blur. Discrete spurs may appear as repeatable spectral lines. A fast isolation approach is: swap or bypass the reference, check whether spurs track the reference frequency, and compare lock states across clock domains. If symptoms change with the reference, the source dominates; if not, the PLL/cleaner or distribution path is suspect.
7) When RU-side PTP/SyncE is lost, what strategy avoids “one hiccup kills service”? → H2-7
The most resilient approach is deterministic degradation: enter holdover with a stable local clock, raise explicit alarms, and apply a controlled RF policy (limit modes or output) instead of uncontrolled resets. Use hysteresis and minimum dwell times so the RU does not oscillate between lock and holdover during link flaps. Most importantly, make the state observable: lostPTP/lostSyncE flags, holdoverActive, step counters, and relock attempts should be logged and correlated to RF KPIs.
8) Why can adding a TVS increase Ethernet errors, and what is the correct placement logic? → H2-8
Many TVS parts add capacitance and imbalance, which can distort the differential impedance, shrink eye opening, and worsen return loss—especially at higher data rates. Correct placement is “protect the entry, preserve the channel”: keep the TVS close to the connector with a short, controlled return path (often to chassis), and avoid routing the protected energy through sensitive PHY grounds. Choose low-capacitance devices and validate with BER/CRC counters before and after ESD events.
9) Under 802.3bt, how can power budget unify Tx power, temperature rise, and cable loss? → H2-9/10
Start from worst-case available PD power at the RU input: include cable loss, connector heating, and high-ambient derating. Then allocate power by domain (PA high power, low-noise analog, digital/management) and bind RF operating modes to that allocation. Finally, close the loop with thermal steady-state measurements and a derating policy that reduces RF output before system instability appears. The result is one consistent envelope that explains why “more Tx” equals “more heat” equals “less margin.”
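The budget chain above can be shown as a short worked calculation. The 802.3bt Class 8 figures (90 W minimum PSE output, 52 V minimum PSE voltage, 12.5 Ω worst-case pairset loop resistance, 71.3 W assured at the PD) are from the standard; the conversion efficiency, thermal derating factor, and domain allocations are illustrative assumptions.

```python
# Worked sketch of the 802.3bt power-budget chain: worst-case PD input
# power, then a usable envelope after conversion and high-ambient
# derating, then a fit check for one RF mode's domain allocation.
# Efficiency/derating/allocation numbers are illustrative assumptions.

def pd_input_power_w(pse_output_w=90.0, v_pse_min=52.0, pair_loop_ohm=12.5):
    # 4-pair delivery: current splits across two pairsets, so the
    # effective loop resistance is half of one pairset's 12.5 ohm.
    i_total = pse_output_w / v_pse_min
    loss = i_total ** 2 * (pair_loop_ohm / 2.0)
    return pse_output_w - loss   # ~71.3 W for Class 8 worst case

def rf_mode_fits(pd_in_w, pa_w, analog_w, digital_w,
                 dcdc_eff=0.90, thermal_derate=0.85):
    # High-ambient derating shrinks the envelope before allocation.
    usable = pd_in_w * dcdc_eff * thermal_derate
    return (pa_w + analog_w + digital_w) <= usable
```

The fit check is exactly the "one consistent envelope" argument: raising PA power or ambient temperature shrinks the same number, so the trade-off is visible in one line.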
10) How can isolated DC/DC EMI be reduced without sacrificing efficiency? → H2-9
Efficiency and EMI are both won by controlling current loops and common-mode paths. Keep high di/dt loops tight, manage transformer and switch-node coupling, and place filtering at the isolation boundary where noise wants to escape. Use the minimum necessary damping and common-mode suppression so losses do not explode. A practical workflow is: reduce loop area, then tame common-mode emissions, then fine-tune filtering to meet margins—each step verified by repeatable measurements rather than adding parts blindly.
11) Intermittent reboots: is it usually PoE inrush, PG sequencing, or latent damage after a surge/ESD event? → H2-9/11
Evidence usually separates these quickly. PoE/inrush issues show renegotiation, undervoltage markers, and resets clustered around plug-in or load steps. PG sequencing issues show repeatable rail-order violations or PG drop timing that precedes reset. Latent EMC damage often appears as “works but degraded”: higher Ethernet errors, noisier sync offsets, or worse RF KPIs after events, even when rails look normal. The triage order is counters → waveforms → temperature correlation.
12) What is a minimal production test plan that still covers RF + sync + Ethernet + power? → H2-11
A minimal plan tests “endpoints and evidence,” not every feature. Verify PoE negotiation and inrush stability, then confirm PG/reset determinism. Measure Ethernet integrity using BER/CRC counters and controlled link transitions. Validate sync by observing lock and forcing one controlled holdover entry/exit. Finally, capture one RF output power point plus one quality KPI (EVM or ACLR) at a defined thermal condition. Record the same log fields in production that will be used for field diagnosis.