
Topology Planning for I2C, SPI, and UART Serial Buses


Topology planning turns serial-bus reliability into a deliverable: a clear topology map, a quantified budget (capacitance/fanout/length/stub), and pass/fail validation steps—so every added device, connector, or testpoint stays within controlled margins.

The goal is to avoid “uncontrolled stars” and fragile edge domains by using trunk-and-short-stubs layouts, explicit segmentation boundaries, and a repeatable bring-up plan that proves the budget has not been broken.

What “Topology Planning” really means (Scope & deliverables)

Topology planning is an engineering deliverable: a single, consistent description of how devices connect, what the bus must tolerate, and how it will be verified—before layout and before firmware tuning.

Decision this section enables

  • Choose an acceptable connection shape (bus / daisy / tree) and define where segmentation is mandatory.
  • Lock a measurable budget (capacitance, fanout, length, stubs) so later changes trigger re-checks.
  • Define verification gates so “bench OK / system fails” becomes preventable.

Must cover (and stay within this page’s scope)

Three-layer topology model

Maintain one consistent view across physical (wires/traces/connectors), logical (who talks to whom, arbitration domains), and power/reference (power domains, isolation boundaries, reference return intent).

Budget objects

  • Cbus: device input caps + trace/connector/test-point parasitics (track them as first-class items).
  • Fanout: not just node count; include load distribution and branch density.
  • Length: separate trunk length and worst-case end-point distance.
  • Stub: record each stub length; enforce a stub policy before routing begins.
  • Connector & test points: treat as “hidden loads” that frequently break margins.

Deliverables (what must exist before layout freeze)

  1. Topology master diagram: one drawing showing physical + logical + power/reference layers.
  2. Budget sheet: Cbus/fanout/length/stub/connector/test-point line items with margin placeholders (X).
  3. Acceptance checklist: design → bring-up → production pass criteria and re-check triggers.

Boundary rule (avoid cross-page overlap)

[Diagram] Three-layer topology master view: swimlanes show Physical, Logical, and Power/Reference topology with key budget objects (Cbus, fanout, length, stubs), segmentation elements (buffer/switch), and domain boundaries (isolation, return intent).
Diagram goal: keep physical wiring, logical domains, and power/reference boundaries consistent so budgets and verification stay stable under design changes.

Topology archetypes and why stars fail (Bus / Daisy / Tree / Star)

Treat topologies as repeatable patterns with stable constraints. The goal is not “can it run?” but “can it stay predictable across devices, layouts, connectors, and environmental shifts?”

Practical selection rules (topology-first)

  • Prefer one trunk + short stubs over distributed branches when multiple endpoints must share a line (common on I²C).
  • Use tree + segmentation when distance or fanout grows; keep branch points controlled and budgeted.
  • Avoid star wiring for shared buses because it creates multiple uncontrolled stubs that break repeatability.

Topology dictionary (failure modes without waveforms)

Bus

Best when many nodes share a line and stubs are kept short. Predictability comes from one defined trunk and controlled attachment points.

Daisy

Works when devices support chain semantics or when controlled hop-by-hop wiring is acceptable. Risk shifts to cumulative delay/load and serviceability.

Tree

Scales fanout and distance by creating controlled branch points. Predictability requires branch budgeting and often segmentation at boundaries.

Star

Fails by design on shared buses: multiple long stubs create inconsistent edge/phase arrival at each endpoint. The system may “work” in one build but becomes non-repeatable across changes.

[Diagram] Topology archetypes (Bus / Daisy / Tree / Star): four panels compare the patterns with constraint tags for capacitance, stubs/branches, and sampling-window consistency; the Star panel is marked “avoid.”
Use archetypes as constraints, not aesthetics: the star pattern concentrates uncontrolled stubs, which breaks repeatability for shared buses.

The universal budgeting model (Capacitance / Fanout / Length / Stub)

A “budget” is a repeatable accounting model that keeps topology decisions predictable under change. Use it to answer: how many nodes can still be added and how far the interconnect can extend—without relying on luck.

What this budget produces (deliverable)

  • A single budget sheet with line items (device / trace / connector / protection footprint / test points / margin).
  • A clear stop-go rule: when a new node, a longer route, or extra probing triggers re-checks.
  • A practical answer to “can this topology remain repeatable?” across revisions.

Budget sheet fields (track “hidden loads” as first-class items)

Cbudget (capacitance accounting)

  • Device inputs: per-pin/per-port input capacitance (each endpoint is a line item).
  • Trace capacitance: separate trunk vs stub lengths (do not merge them).
  • Connectors & cabling: board-to-board, FFC, harness, headers (often the largest hidden term).
  • Protection footprints: ESD/TVS/RC placement parasitics must be reserved as budget consumption.
  • Test points: probing pads/fixtures create repeatable parasitics and must be recorded.
  • Margin (X): reserve headroom for tolerance, variants, and future changes.
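The field list above can be kept as a small, machine-checkable sheet. A minimal sketch in Python, assuming hypothetical picofarad values and item names (real numbers come from datasheets and extracted trace parasitics):

```python
from dataclasses import dataclass, field

@dataclass
class BudgetItem:
    name: str     # e.g. "Dev A Cin", "trunk trace", "J3 connector" (illustrative)
    kind: str     # device / trace / connector / protection / testpoint
    c_pf: float   # capacitance contribution in pF

@dataclass
class BudgetSheet:
    limit_pf: float                # total Cbus the design may tolerate
    margin_pf: float               # reserved headroom (the "X" above)
    items: list = field(default_factory=list)

    def total_pf(self) -> float:
        return sum(i.c_pf for i in self.items)

    def remaining_pf(self) -> float:
        # Margin left after all line items and the reserved headroom.
        return self.limit_pf - self.margin_pf - self.total_pf()

    def ok(self) -> bool:
        return self.remaining_pf() >= 0.0
```

Adding a connector, protection footprint, or test point then becomes a one-line change to the sheet, and `ok()` turning false is the stop-go signal.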

Fanout (not node count)

Fanout is the combination of total load and how that load is distributed. Two designs with the same node count can behave differently if one concentrates branches at a hub (star-like) while the other attaches short stubs along a trunk.

Practical rule: keep branch density controlled; avoid creating a “single junction” that becomes an accidental star.

Length and stub (must be split)

  • Trunk length: the backbone distance that defines the shared path.
  • Worst-case stub: the longest branch that attaches an endpoint (a dominant driver of non-repeatability).
  • Stub count: many small stubs can still accumulate hidden loading and junction complexity.
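Keeping trunk and stubs as separate records makes the split enforceable in review. A sketch with illustrative millimetre values and a project-set limit standing in for X:

```python
def stub_report(trunk_mm: float, stubs_mm: list, max_stub_mm: float) -> dict:
    """Summarize a branch layout; trunk and stubs stay separate line items."""
    worst = max(stubs_mm) if stubs_mm else 0.0
    return {
        "trunk_mm": trunk_mm,             # backbone length, tracked on its own
        "worst_stub_mm": worst,           # dominant driver of non-repeatability
        "stub_count": len(stubs_mm),      # many short stubs still add junctions
        "stub_ok": worst <= max_stub_mm,  # enforce the stub <= X policy
    }
```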

Budget re-check triggers (change control)

  • A device is added, swapped, or moved (input load and stub distribution change).
  • Routing changes length/layer/branch points (trace capacitance and branch density change).
  • A connector/cable/harness is introduced or revised (connector parasitics dominate quickly).
  • New ESD/TVS/RC footprints or test points are added (hidden loads consume margin).
  • Segmentation boundaries move (capacitance domain and fault domain are re-partitioned).
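The trigger list above can be encoded so change control is mechanical rather than tribal knowledge. A sketch with illustrative event names:

```python
# Changes that force a budget re-check, mapped to why (event names illustrative).
RECHECK_TRIGGERS = {
    "device_added":      "input load and stub distribution change",
    "device_moved":      "input load and stub distribution change",
    "routing_changed":   "trace capacitance and branch density change",
    "connector_revised": "connector parasitics dominate quickly",
    "protection_added":  "hidden loads consume margin",
    "testpoint_added":   "hidden loads consume margin",
    "boundary_moved":    "capacitance and fault domains re-partitioned",
}

def needs_rebudget(event: str) -> bool:
    # Purely cosmetic changes (e.g. silkscreen edits) fall outside the list.
    return event in RECHECK_TRIGGERS
```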

Boundary: rise-time math and pull-up sizing belong to Open-Drain & Pull-Up Network. This section only defines the input fields and the accounting model.

[Diagram] Universal budget waterfall: device inputs, trace (trunk + stub), connector/cable parasitics, protection footprints, and test points consume the total topology budget, leaving remaining margin (X). Accounting principle: track line items consistently, keep trunk/stub separate, and let any new footprint, connector, or probing point trigger a re-check.
The waterfall is a planning tool: it forces hidden parasitics (connectors, protection footprints, test points) to be budgeted explicitly instead of discovered during bring-up.

I²C topology planning (Cbus limit, stubs, segmentation decision)

For I²C, the primary constraint is usually not the advertised mode speed—it is whether the shared wiring remains a predictable RC network. Uncontrolled branches and hidden loads break repeatability first.

Backbone + short stubs (layout-first rules)

  • Define a trunk (the backbone) before routing any endpoint attachments.
  • Enforce a stub policy: record each stub length; keep worst stub ≤ X (set X per project).
  • Avoid accidental stars: do not create a “hub” junction with many long branches.
  • Keep test points sparse: place probing where it is diagnostic, not everywhere.

Multi-device placement strategy (to keep the trunk clean)

  • Group endpoints by distance; attach each group along the trunk rather than returning every branch to a center.
  • Place “high-change” endpoints (modules, connectors, service ports) near a segmentation boundary to contain variability.
  • Treat connectors and harnesses as topology elements, not as wiring details; include them in the budget sheet.

When segmentation becomes mandatory (buffer / switch / isolator)

  • Budget exceeded: total load or worst-case stubs consume margin (as shown by the universal budget sheet).
  • Long reach: cable/harness or multiple connectors introduce variable parasitics and noise pickup.
  • Cross-domain: power domains or isolation boundaries require partitioning the capacitance domain and fault domain.
  • Noisy zones: endpoints near motors/relays/high-current loops should be isolated as a separate segment.
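The four conditions above can be folded into one decision helper so reviews ask the same questions every time. A sketch; thresholds are project placeholders (X):

```python
def segmentation_reasons(remaining_margin_pf: float, reach_mm: float,
                         max_reach_mm: float, crosses_power_domain: bool,
                         near_noise_source: bool) -> list:
    """Return the motivations that make segmentation mandatory (empty = none)."""
    reasons = []
    if remaining_margin_pf < 0:
        reasons.append("budget exceeded")   # load or worst-case stubs over budget
    if reach_mm > max_reach_mm:
        reasons.append("long reach")        # cable/harness, multiple connectors
    if crosses_power_domain:
        reasons.append("cross-domain")      # capacitance and fault domains split
    if near_noise_source:
        reasons.append("noisy zone")        # motors/relays/high-current loops
    return reasons
```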

Boundary: pull-up value selection and rise-time compliance are handled in Open-Drain & Pull-Up Network. This section provides the topology inputs (Cbus, trunk/stub policy, segmentation decision).

“It runs but is not stable” signatures (topology-first interpretation)

  • Occasional NACKs or retry bursts that correlate with certain endpoints or cable positions.
  • Cold boot differs from warm reset; behavior depends on power-up order or attached modules.
  • Adding a connector, protection footprint, or test point suddenly consumes the remaining margin.
  • Failures concentrate at the farthest endpoints; closer nodes remain “fine.”
[Diagram] I²C trunk-first topology: short stubs attached along a backbone from the host, with a segment buffer/switch boundary; danger zones marked in red are star hubs, long stubs, and dense test-point regions.
Trunk-first routing keeps the shared RC network predictable. The marked danger zones (star hub, long stub, dense probing) are frequent causes of “works sometimes” I²C behavior.

SPI topology planning (clock-centric routing, multi-slave fanout)

SPI behaves reliably when SCLK remains a controlled reference. Topology choices should prioritize a clean, short, and reviewable clock distribution path and a predictable return path for MISO back to the host.

Clock-centric rules (topology-level)

  • SCLK is the reference: route it as a trunk with controlled branch points; avoid accidental “hub” junctions.
  • MISO is receive-critical: keep the return path back to the host predictable and avoid long, uncontrolled fanout.
  • Topology must remain reviewable: every branch point should be visible on the topology master diagram and tracked as a budget item.

Multi-slave fanout patterns (CS tree vs direct vs daisy-chain)

CS direct (one CS per slave)

  • Best when the host has sufficient CS pins and the board is a single, controlled routing domain.
  • Keeps fault isolation and debugging straightforward (each slave is independently addressable by CS).

CS tree / decode (expanded fanout)

  • Useful when many slaves are needed but the host CS pins are limited.
  • Requires disciplined documentation: the decode stage becomes a topology element and a budget trigger.

Daisy-chain (chain-aware devices)

  • Appropriate only when devices/protocol explicitly support chain operation (shift-style or chained framing).
  • Expands the fault domain: one weak link can affect the entire chain; plan bring-up and production tests accordingly.
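The three patterns above reduce to a small selection rule. A hedged sketch; the preference order shown is one reasonable policy, not the only one:

```python
def spi_fanout_pattern(n_slaves: int, free_cs_pins: int,
                       chain_capable: bool, decode_allowed: bool = True) -> str:
    if n_slaves <= free_cs_pins:
        return "cs_direct"        # independent CS: best fault isolation and debug
    if decode_allowed:
        return "cs_tree_decode"   # decode stage becomes a topology/budget element
    if chain_capable:
        return "daisy_chain"      # only with chain semantics; one big fault domain
    raise ValueError("no viable fanout pattern; revisit the topology")
```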

Multi-board / backplane: redrive or retime triggers (conditions only)

  • SCLK trunk or stubs grow beyond X (length) or branch density exceeds X (junction count).
  • Connectors/harnesses become part of the link; behavior changes with insertion, routing, or vendor variance.
  • Rising edge control becomes inconsistent across slaves (one device fails first at higher speed).
  • Noisy zones or cross-domain boundaries are unavoidable; topology needs a controlled boundary element.

Boundary: CPOL/CPHA details belong to the SPI mode page; skew/eye/termination work belongs to Long-Trace SI. This section focuses on topology decisions and trigger conditions.

[Diagram] SPI topology comparison: a clock-centric layout (controlled SCLK trunk, predictable MISO return to the host, CS direct/tree) vs a data-star layout whose uncontrolled branches and connector hidden loads produce inconsistent edges across slaves.
Clock-centric SPI keeps SCLK distribution controlled and makes MISO return paths predictable. Data-star hubs concentrate parasitics and produce inconsistent behavior across slaves.

UART topology planning (point-to-point assumptions, multi-drop pitfalls)

By default, UART assumes a point-to-point physical link. When multiple devices share a UART without a control element, collisions and contention are structural—not rare events.

Point-to-point assumptions (topology-level)

  • TX/RX logic levels rely on a stable reference and a predictable return path; cross-domain wiring increases sensitivity.
  • Long cables and connectors introduce variable parasitics and common-mode disturbances that are topology-visible.
  • A topology master diagram should explicitly label UART links as P2P unless a selector/bridge is present.

Why hard-wired multi-drop fails (structural contention)

  • Multiple TX on one wire creates collisions and electrical contention (not a firmware corner case).
  • Even if “only one should talk,” real systems face resets, boot logs, and fault states that violate that assumption.
  • The result is often intermittent framing/parity errors that correlate with system states or module insertion.

Correct structures for sharing UART (Selector / MUX / Bridge)

  • Selector/MUX enforces “one active TX” as hardware, preventing contention.
  • Bridge adds buffering/queueing and isolates domains, turning shared access into a managed topology element.
  • Selection/control lines and connector nodes must be recorded as budget triggers (they change parasitics and behavior).
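The selector's contract—one active transmitter, enforced in hardware—can be stated as a tiny model. The Python below only makes the invariant explicit; channel names are illustrative:

```python
class UartSelector:
    """Models a hardware selector/MUX: at most one channel may drive TX."""

    def __init__(self, channels):
        self.channels = set(channels)
        self.active = None            # no transmitter selected at power-up

    def select(self, channel: str) -> None:
        if channel not in self.channels:
            raise ValueError(f"unknown channel: {channel}")
        self.active = channel         # selecting one implicitly deselects the rest

    def can_transmit(self, channel: str) -> bool:
        # Hard-wired sharing would let every node drive the wire; the
        # selector makes simultaneous TX structurally impossible.
        return channel == self.active
```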

When to exit UART physical assumptions (decision-only)

  • Cable length, connector count, or cross-chassis routing exceeds X and errors become environment-dependent.
  • Ground/reference uncertainty becomes unavoidable (multiple supplies, long returns, or isolation boundary needed).
  • EMC environment is harsh (motors/relays/high-current loops), and occasional errors cannot be bounded by topology alone.

Boundary: RS-232/RS-485 device selection and port protection belong to the PHY/Protection pages. This section defines topology assumptions and structural decision points.

[Diagram] UART topology comparison: hard-wired multi-drop (multiple TX can drive one shared wire, causing contention and collisions, worsened by connector variance) vs a selector/MUX that enforces one active transmitter via select lines.
Hard-wired UART sharing creates structural contention. A selector or bridge enforces one active transmitter and makes multi-device access predictable.

Segmentation patterns (buffers, switches, isolators) — topology as a system

Segmentation is a system-level topology strategy that splits an uncontrolled large domain into smaller, controllable domains. The purpose is to cut specific risks: capacitance growth, fault propagation, power-domain uncertainty, and isolation boundaries.

Four segmentation motivations (what risk gets cut)

Cap domain (Cap)

Splits a growing capacitance/load domain into smaller segments so budgets remain measurable and change triggers are localized.

Fault containment

Prevents a single bad branch (hot-plug, intermittent connector, ESD event) from stalling the entire bus domain.

Power domain boundary

Isolates uncertain power-up/down ordering and back-power risk so the core domain keeps consistent assumptions.

Isolation boundary

Creates a hard boundary for reference and common-mode disturbances; cross-domain propagation becomes structurally limited.

Segment placement: near host vs near load

Near host (central segmentation)

  • Improves host-domain cleanliness and makes observability centralized.
  • Best when uncertainty is low at the edges and the host domain must remain a “golden reference” segment.

Near load (edge segmentation)

  • Contains connector/harness/module variance close to the source of uncertainty.
  • Best when edges are hot-plugged, noisy, or variable across builds; fault domains stay small.

Cascade depth & maintainability (debug visibility)

  • Each added segmentation stage creates another domain boundary; the topology master diagram must label each segment and its failure domain.
  • A segment should remain diagnosable: record segment ID, boundary type, and the minimum observability hooks (probe/log/reset) required.
  • Avoid uncontrolled cascades: if segmentation count exceeds X, treat it as a maintainability risk and re-architect the domain plan.

Boundary: device comparisons belong to Buffers / Isolators / Switches. This section defines segmentation motivations, placement, cascade visibility, and acceptance intent.

[Diagram] Segmentation decision tree: triggers (capacitance growth hitting X, fault spread, power-domain sequencing, isolation need) flow through decision questions (primary risk, uncertainty at host or edge, cascade depth) to a boundary type (buffer = cap-domain split, switch = fault-domain control, isolator = reference boundary), with acceptance tags: Margin (X), fault contained, power sequence stable, cross-domain quiet.
Segmentation should be chosen by the primary risk being cut (capacitance, fault spread, power uncertainty, or isolation boundary), then validated with acceptance tags.

Layout-aware topology rules (length, stub, connector, testpoint)

Topology planning becomes effective only when it is enforceable in PCB layout. Trunk/stub discipline, connector accounting, and testpoint placement must preserve the budget model and avoid creating hidden star hubs.

Trunk vs stub (relative rules with threshold placeholders)

  • Trunk first: define the trunk route, then attach short stubs; avoid accidental hubs.
  • Record trunk length and worst stub length separately; enforce stub ≤ X.
  • If junction density exceeds X or a long stub appears, treat it as a topology re-budget trigger.

Connector / header / testpoint accounting (hidden loads)

  • Connectors and headers are a parasitic bundle; budget them as explicit line items.
  • Testpoints are hidden loads: they become permanent parasitics once fixtures and production probes exist.
  • Place testpoints for diagnostic value, not convenience; keep them out of trunk-critical zones.

High-cost rework hotspots (avoid creating them)

  • Star center: forces redistribution of nodes and trunk routing—typically the most expensive change.
  • Long stub: often requires placement changes, not just routing edits.
  • Cross-board connector: impacts mechanicals, harnessing, and supply chain—treat as a topology boundary item.

Boundary: return-path and split-ground specifics belong to Clock & Grounding. This section focuses on layout-enforceable topology rules and budget preservation.

[Diagram] Layout-aware topology annotation: a thick trunk line with thin stubs to devices, a shaded no-testpoint zone keeping the trunk-critical segment clean, connectors flagged as budget items, and star-center / long-stub rework hotspots marked.
Preserve budgets by enforcing trunk/stub discipline, treating connectors and testpoints as explicit parasitic items, and avoiding star centers and long stubs that cause expensive rework.

Robustness planning (fault containment, hot-plug, brown-out topology)

Robustness is strongly shaped by topology: how a single fault propagates, how hot-plug and brown-out create lockups, and whether a controlled bypass exists for service and recovery.

Fault domain (how one node can stall the whole domain)

  • Shared wiring amplifies faults: one device holding a shared signal low can freeze all peers within the same domain.
  • Domain size defines blast radius: more nodes, connectors, and stubs increase the number of victims of a single stuck condition.
  • Containment is structural: segmentation boundaries reduce fault spread and make isolation and recovery more deterministic.

Hot-plug / brown-out topology triggers (ghost powering, lockups)

  • Ghost powering is enabled by topology when an unpowered branch remains electrically tied to a powered domain through shared signal paths.
  • Brown-out lockups are more likely when a partially powered node shares lines with fully powered peers, creating inconsistent logic states across the same domain.
  • If hot-plug is expected, treat the plug edge as a separate edge domain and keep the host “core reference domain” isolated from plug transients.

Bypass & redundancy (serviceability by topology)

  • Reserve a bypass option across a segment boundary (strap/jumper footprint) for debug and emergency recovery.
  • Reserve an isolate option per risky branch (disconnect one edge domain) so a failing module does not block system bring-up.
  • Document bypass rules in deliverables: what can be bypassed safely, and what must never be bypassed (e.g., isolation boundary).

Boundary: recovery state-machine details belong to Error Handling & Recovery. This section defines fault domains, topology triggers, and bypass intent.

[Diagram] Fault propagation comparison: without containment, one node holding a shared line low stalls the entire host domain; with a segmentation boundary, the fault stays inside an edge segment while the core domain survives, and a bypass/isolate jumper provides a service option.
Containment is created by topology boundaries. Without segmentation, a single stuck node can stall the entire shared domain.

Validation & measurement plan (what to measure, where to probe)

A reproducible bring-up plan confirms that topology budgets remain intact with minimal equipment. Probe placement and a consistent pass/fail sequence provide faster confidence than advanced protocol tricks.

Validation order (minimum-to-maximum stress)

  1. Baseline: empty/near-empty core domain, establish reference behavior.
  2. Single load: add one representative peripheral; confirm margins remain.
  3. Farthest endpoint: validate longest path and worst stub case.
  4. Full fanout: all devices connected; confirm repeatability across resets.
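The four steps above form an ordered pipeline: stop at the first failure so the offending addition is localized. A minimal runner sketch (the step checks themselves are placeholders):

```python
def run_bringup(steps):
    """steps: list of (name, check) pairs in minimum-to-maximum stress order."""
    results = []
    for name, check in steps:
        ok = check()
        results.append((name, ok))
        if not ok:
            break      # first failure identifies where the budget broke
    return results
```

Running it with checks for baseline, single load, farthest endpoint, and full fanout yields a pass list that stops at the first break, pointing directly at the step that consumed the margin.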

Probe point strategy (where to probe defines confidence)

  • Host-side: defines the “core reference” baseline for the domain.
  • Farthest endpoint: reveals topology-induced degradation missed near the host.
  • Before/after boundaries: identifies which segment violates budgets and validates containment intent.

What to record (segment-tagged stats)

Reliability counters

NACK / retries / timeouts / reset events per hour (thresholds: X). Each event must include a segment ID.

Throughput & latency distribution

Throughput plus latency distribution (min/median/p99) at baseline vs full fanout (thresholds: X).

Repeatability

Repeat across resets, temperature corners, and connector insertions. “Works once” indicates topology margin is insufficient.
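Segment-tagged recording reduces to a small aggregator: every event carries a segment ID, so error counters and a p99 estimate can be compared per segment. A sketch using a nearest-rank p99; field names are illustrative:

```python
from collections import defaultdict

def segment_stats(events):
    """events: iterable of (segment_id, latency_ms, ok_flag) tuples."""
    latencies = defaultdict(list)
    errors = defaultdict(int)
    for seg, latency_ms, ok in events:
        latencies[seg].append(latency_ms)
        if not ok:
            errors[seg] += 1
    stats = {}
    for seg, xs in latencies.items():
        xs = sorted(xs)
        p99 = xs[min(len(xs) - 1, int(0.99 * len(xs)))]  # nearest-rank estimate
        stats[seg] = {"n": len(xs), "errors": errors[seg], "p99_ms": p99}
    return stats
```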

Links: logging format belongs to Bus Health & Stats. Protocol analyzer techniques belong to Logic / Protocol Analyzer. This section defines sequencing, probe locations, and pass criteria intent.

[Diagram] Validation path: bring-up steps from Step 0 (baseline, probe host, errors ≤ X) through single load, far-end, full fanout (p99 ≤ X), stress (hot-plug/brown-out containment), and recording (segment-tagged NACK/retries/latency); probe points are host, far end, before/after each boundary, and the worst segment.
A minimal, repeatable bring-up sequence reveals topology budget breaks early by probing host, far-end, and boundary points and logging segment-tagged stats.

Engineering checklist (Design → Bring-up → Production)

This page is intended to be deliverable-ready: topology map + budget sheet + boundary strategy + probe plan + gate checklists. Gates make topology decisions reviewable, repeatable, and production-friendly.

Required evidence pack (attach to design reviews)

  • Topology master map: trunk/stub/junction/connector/testpoint/segment boundary clearly labeled.
  • Universal budget sheet: C / fanout / trunk length / stub length / connector + testpoint parasitics + margin.
  • Segmentation intent: why each boundary exists (Cap / Fault / Power-domain / Isolation).
  • Probe & testpoint plan: host / far-end / before-and-after boundary probe locations (no measurement tricks here).
  • Bring-up log template: segment-tagged counters + throughput/latency distribution (p99) with thresholds X.

Gate 1 · Design

Objective: lock down topology risk before layout rework becomes expensive.

  • Budget sheet complete (all parasitics counted; margin reserved = X).
  • Topology map complete (trunk/stub + red zones: star center / long stubs / cross-board connector).
  • Segmentation strategy complete (domain boundaries explicitly named).
  • Probe plan complete (HOST / FAR END / boundary A/B probe points defined).
  • Change triggers written (new device/connector/testpoint → must rebudget).
  • Bypass/isolate options reserved (jumpers/0Ω footprints; isolation boundaries marked “no bypass”).
  • Edge domain policy for hot-plug/brown-out branches (do not mix with core reference domain).
  • Review checklist attached to schematic/layout reviews (artifact pack included).

Gate 2 · Bring-up

Objective: confirm budgets remain intact under worst topology stress with minimal instrumentation.

  • Baseline passes (empty/near-empty domain; errors ≤ X).
  • Single-load passes (representative peripheral; margin ≥ X).
  • Far-end passes (longest path + worst stub; stable ≥ X).
  • Full fanout passes (all nodes; p99 latency/timeout ≤ X).
  • Boundary A/B comparison done (identify first failing segment if any).
  • Reset repeatability verified (no “works once” behavior).
  • Corner conditions sampled (temperature/voltage edges; thresholds X).
  • Hot-plug/brown-out sanity (no persistent lockup; faults localized to a segment).

Gate 3 · Production

Objective: make topology measurable, debuggable, and reproducible on the factory floor.

  • Fixture access confirmed for required probe points (or defined alternates).
  • Testpoints counted in budget (testpoints treated as real capacitive load; margin X retained).
  • Segment-tagged logging enabled (counters and events include segment ID).
  • Minimum repro topology defined (smallest wiring that reproduces the issue).
  • Bypass/isolate procedures documented (what can be bypassed safely; what must not).
  • Variant recheck plan for connector/cable alternates (topology-sensitive).
  • Fault localization path (boundary A/B checks reduce debug time).
  • Production acceptance criteria recorded (thresholds X; pass/fail objective).

Boundary: firmware retry/state-machine details belong to Error Handling & Recovery. Protection and termination details belong to Port Protection and EMC & Edge Control.

[Diagram] Three-gates checklist: Design (topology map, budget sheet, segment intent, probe plan, change triggers, bypass reserved, red zones, review pack), Bring-up (baseline, single load, far-end, full fanout, boundary A/B, repeatability, temp/voltage corners, hot-plug sanity), and Production (fixture access, testpoints in budget, segment-ID logging, counters, p99 latency, minimum repro path, bypass/isolate, acceptance X).
Gates turn topology planning into a team deliverable: artifacts + pass criteria + reproducible validation.

Applications & IC selection notes (topology-only picks)

The picks below are not a parts catalog. Each part category is mapped to a topology goal: segmenting capacitance, containing faults, enforcing domain boundaries, managing fanout, and improving debug visibility. Verify package/suffix/grade (industrial/automotive), voltage rails, and power-off behavior.

Topology-focused selection metrics (use on every boundary)

  • Added delay: must be included in timing assumptions; do not hide it.
  • Directionality: bidirectional vs fixed-direction channels; impacts segmentation correctness.
  • Default state: power-up enable/channel state; avoid invisible reconnections.
  • Power-off behavior: partial-power-down, back-powering, clamp/leak paths (ghost-power trigger).
  • Fail-safe / idle behavior: avoid “holds low” or unintended bus drive in idle.
  • Cascade limit: more layers reduce maintainability; require probe points and bypass options.

I²C topology helpers

Use these to cut capacitance domains, control branches, and isolate fault/power/isolation boundaries.

Where they belong (placement rules)

  • Mux/switch at branch roots: makes a tree auditable and limits fault spread.
  • Buffers/repeaters to split large Cbus into smaller domains; keep probe points before/after.
  • Differential extenders at cable transitions: keep the core single-ended domain clean.
  • Isolators at domain boundaries: define “no-bypass” rules where required.

Concrete parts (examples)

  • I²C switch / mux (fanout + address conflict control): TI TCA9548A, TI TCA9546A.
  • Capacitance isolation via buffered mux: ADI LTC4306 (buffered multi-channel segmentation).
  • Level-translating I²C repeater / buffer (domain split): TI TCA9617A, TI TCA9517, TI TCA9511A, NXP PCA9515A.
  • Differential I²C extender (noisy / longer cabling): NXP PCA9615 (dI²C).
  • Isolated I²C boundary (galvanic isolation): TI ISO1540 / ISO1541, ADI ADuM1250 / ADuM1251.
  • Trigger to use them: Cbus growth, long cable, cross-domain power, repeated stuck-low events, or hot-plug edge domains.

Note: exact variants (Q1/industrial), package, and ordering suffix must match rail voltage, speed, and power-off requirements.

SPI topology helpers

SPI is clock-centric: protect SCLK quality and make multi-slave fanout auditable through buffering/selection and boundary isolation.

Where they belong (placement rules)

  • Clock buffers near the clock source (master) to control fanout and reduce skew variability.
  • Selectors / analog switches to avoid “invisible star” wiring and reduce MISO contention risk.
  • Digital isolators at isolation/power boundaries; keep the core reference domain clean.
  • Long-chain special case: isoSPI for long cable/chained nodes where clock/data integrity must survive noise/ground offset.

Concrete parts (examples)

  • Quad-channel digital isolators (often used for SPI boundaries): TI ISO7741 (ISO774x family), ADI ADuM1401 (ADuM140x family).
  • Simple buffering for fanout control (topology visibility): TI SN74LVC125A, Nexperia 74LVC125 family.
  • UART/SPI line selection via analog switch (topology safety): TI TS5A23157, TI TS5A3159, Nexperia 74LVC1G3157, ADI ADG772.
  • Clock fanout buffer (when SCLK must be replicated cleanly): TI CDCLVC1102 (clock buffer class; verify logic family/levels).
  • Long-chain robust physical layer (isoSPI): ADI LTC6820 (isoSPI transceiver class).
  • Trigger to use them: multi-board/backplane connectors, uncertain fanout skew, noisy ground offsets, or hard-to-debug contention.

Keep this page topology-only: do not expand it into CPOL/CPHA modes or termination tuning. Link those details to the dedicated pages.

UART topology helpers

UART is point-to-point by default. Multi-drop must be made explicit via selectors/muxes/bridges to avoid contention and fragile wiring.

Where they belong (placement rules)

  • Selectors/muxes at the TX driver side: ensure only one driver is ever connected.
  • Bridges when buffering/back-pressure is needed: convert physical contention into a managed queue.
  • Exit condition: if distance/noise/ground offset dominates, leave UART physical layer and move to a differential/isolated PHY (link out).

Concrete parts (examples)

  • UART sharing via analog switch / selector (structural anti-contention): TI TS3A24157, TI TS5A23157, Nexperia 74LVC1G3157, ADI ADG772.
  • I²C/SPI-to-UART expansion (adds UART ports without hard-wiring): NXP SC16IS750 (single), NXP SC16IS752 (dual).
  • USB-to-UART bridge (for debug console fanout / production fixtures): FTDI FT232R, FTDI FT231X, Silicon Labs CP2102N.
  • USB-to-UART + I²C helper bridge: Microchip MCP2221A (fixture-friendly class).
  • Trigger to use them: more than one device needs access to the same UART, or field-debug requires safe channel selection.

Keep protection and RS-232/RS-485 details on their own pages. This section is only about topology choices that prevent collisions.

Figure: Topology goals → device classes (examples). Goals: cap-domain cut, fault containment, power-domain boundary, isolation boundary, fanout expansion, service bypass/debug. Device classes — I²C: buffer / switch / mux / isolator / dI²C; SPI: clock buffer / selector / isolator / isoSPI; UART: selector / mux / bridge. Boundary checks on every arrow: delay, power-off behavior, default state, cascade limit.
Use device classes to reach topology goals. Always validate delay, default state, power-off behavior, and cascade limits at boundaries.

Boundary: protection parts and component-level RC/termination details remain on Port Protection and EMC & Edge Control. This section only lists topology-enabling categories and example part numbers.


FAQs (Topology Planning)

Scope: close out long-tail troubleshooting without expanding the main text. Each answer is intentionally “first-decision + first-action”. Detailed pull-up math, SI waveforms, and protocol analyzer tactics belong to their dedicated pages.

Adding one more I²C device makes it flaky—capacitance budget or stub distribution first?

Likely cause: Total Cbus margin is exhausted or the new node changes stub distribution (a “hidden star” junction appears).

Quick check: Compare counters/behavior with the new node disconnected; then probe HOST vs FAR-END to see if failures cluster at the farthest segment.

Fix: Rebudget C (device/trace/connector/testpoint) and shorten/relocate stubs; if edge domain keeps growing, split into segments (buffer/switch) and keep a clean core domain.

Pass criteria: NACK/retry rate ≤ X and “far-end only” sensitivity disappears across resets and worst-case fanout.
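Re-budgeting C means summing every first-class item, not just device pins. A minimal Cbus sketch: the 400 pF limit is the I²C specification's maximum bus capacitance for Standard/Fast-mode; all per-item values below are illustrative placeholders to be replaced with datasheet and stackup numbers:

```python
# I2C spec limit for Standard-/Fast-mode total bus capacitance.
CBUS_LIMIT_PF = 400.0

def cbus_total(devices_pf, trace_pf, connectors_pf, testpoints_pf):
    """Sum all first-class capacitance items on one segment."""
    return sum(devices_pf) + trace_pf + sum(connectors_pf) + sum(testpoints_pf)

def cbus_margin(total_pf, limit_pf=CBUS_LIMIT_PF):
    """Remaining headroom; a new node must fit inside this margin."""
    return limit_pf - total_pf

# Example segment (all values assumed, not measured):
total = cbus_total(
    devices_pf=[10.0] * 8,       # 8 nodes at ~10 pF pin capacitance each
    trace_pf=60.0,               # trunk trace estimate from the stackup
    connectors_pf=[15.0, 15.0],  # two board-to-board connectors
    testpoints_pf=[5.0, 5.0],    # probe pads are loads too
)
```

If adding one more node pushes `cbus_margin` near zero, that is the trigger to split the segment with a buffer/switch rather than tune pull-ups harder.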

Works at 100 kHz, fails at 400 kHz—topology issue or pull-up sizing?

Likely cause: Topology margin (Cbus + stubs + junctions) is already tight; higher speed exposes edge/rise-time sensitivity that was “masked” at 100 kHz.

Quick check: Keep pull-ups unchanged and simplify topology (remove the farthest branch / long stub / extra connector). If stability returns, topology is the first lever.

Fix: First make topology “bus trunk + short stubs” and segment edge domains; then compute pull-up sizing on the dedicated page: Open-Drain & Pull-Up Network.

Pass criteria: 400 kHz runs with error counters ≤ X and no speed-dependent “cliff” when restoring full fanout.
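The speed-dependent "cliff" can be sanity-checked numerically. Using the RC charging model between the I²C spec's 30%/70% thresholds, rise time is t_r = Rp · Cbus · ln(0.7/0.3) ≈ 0.847 · Rp · Cbus, and the spec limits are 1000 ns (Standard-mode, 100 kHz) vs 300 ns (Fast-mode, 400 kHz). A sketch with assumed Rp/Cbus values (pull-up sizing itself belongs on the dedicated page):

```python
import math

# I2C spec maximum rise times (30% -> 70%).
T_R_MAX_NS = {"standard_100k": 1000.0, "fast_400k": 300.0}

def rise_time_ns(rp_ohm: float, cbus_pf: float) -> float:
    """RC rise time between the 30% and 70% thresholds, in ns."""
    return rp_ohm * cbus_pf * 1e-3 * math.log(0.7 / 0.3)

# Assumed example: 4.7k pull-up on a 250 pF bus.
tr = rise_time_ns(rp_ohm=4700, cbus_pf=250)
passes_100k = tr <= T_R_MAX_NS["standard_100k"]
passes_400k = tr <= T_R_MAX_NS["fast_400k"]
```

With these numbers the bus squeaks past the Standard-mode limit but misses Fast-mode by over 3×, which is exactly the "works at 100 kHz, fails at 400 kHz" signature: shrinking Cbus via topology widens the margin at both speeds before any pull-up change.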

Random NACKs only on the farthest node—what probe point proves stub reflection?

Likely cause: The farthest branch behaves like an uncontrolled stub (connector/testpoint parasitics included), creating distance-dependent margin loss.

Quick check: Compare A/B probe locations: (A) at the trunk before the far branch and (B) at the far node pad/connector. A large A→B degradation points to stub/edge-domain issues.

Fix: Shorten the far stub, move the junction closer to the trunk, or segment the far edge domain (switch/buffer) so the core trunk no longer “sees” that parasitic load.

Pass criteria: Far-end NACKs ≤ X and errors no longer correlate with “farthest-only” access patterns.

Star wiring “seems fine” on bench but fails in enclosure—what topology assumption broke?

Likely cause: The bench setup unintentionally reduced parasitics (shorter leads, fewer connectors, cleaner reference), while the enclosure adds edge-domain length/connector stubs and breaks “equal branch” assumptions.

Quick check: Reproduce with the full harness/connectors installed but only one branch enabled at a time. If one branch is disproportionately fragile, the star junction is not controllable.

Fix: Replace star with a trunk + short stubs, or convert branches into a controlled tree using switches/mux at branch roots; avoid junctions at connector clusters.

Pass criteria: Full enclosure wiring passes with stability ≥ X and no “branch-dependent” failure mode.

SPI multi-slave: only one device misreads—CS fanout or SCLK routing first?

Likely cause: A topology asymmetry: that slave sees worse SCLK quality/skew, or CS routing introduces timing ambiguity at the device boundary.

Quick check: Keep SCLK the same, move that slave to a “known good” CS route (swap CS pins/wiring). If the problem follows CS, check CS fanout/junctions first; otherwise check SCLK path length/junctions.

Fix: Make SPI clock-centric: keep SCLK shortest/cleanest, avoid invisible star branches, and use selection/buffering so fanout is auditable (tree, not uncontrolled star).

Pass criteria: The “single bad slave” condition disappears and readback error rate ≤ X at target throughput.

SPI passes at low speed, fails at high speed—long trace SI or topology?

Likely cause: Topology creates uncontrolled branches/junctions; high speed makes the sampling window sensitive to path mismatch and return-path discontinuities.

Quick check: Reduce the topology first (shortest wiring, only the farthest slave, remove extra connectors/testpoints). If high speed becomes stable, topology is the first fix; then proceed to SI details.

Fix: Enforce clock-centric routing and controlled fanout; for long-trace treatment and termination strategy, link out to: Long-Trace SI.

Pass criteria: Target speed passes with retries/errors ≤ X and margin holds across full fanout.

UART shared among two devices causes garbage—why hard-OR is wrong and what topology fix?

Likely cause: UART is point-to-point by default; hard-wiring multiple TX drivers creates contention and undefined logic levels.

Quick check: Disable one transmitter (or physically disconnect one device). If garbage disappears, contention is confirmed (topology, not baud math).

Fix: Make sharing explicit using a selector/mux (only one TX connected at a time) or convert to a bridged architecture with buffering/back-pressure.

Pass criteria: No framing/parity spikes during switching; error counters ≤ X across repeated channel selections.
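The selector fix works only if switching is break-before-make: detach the current TX driver, then attach the next one, never both. A minimal control-logic sketch; the class and the transition log are hypothetical illustrations (a real implementation drives switch-enable pins via your HAL and waits for the switch's settling time between the two steps):

```python
class UartSelector:
    """Break-before-make selection so only one TX driver is ever attached."""

    def __init__(self):
        self.connected = None  # source currently driving TX, or None
        self.history = []      # (old, new) transition log for the bring-up gate

    def select(self, source):
        if self.connected is not None:
            # 1) break: detach the current driver first
            # (real hardware: de-assert switch enable, wait t_settle)
            self.history.append((self.connected, None))
            self.connected = None
        # 2) make: attach exactly one driver
        self.history.append((None, source))
        self.connected = source

sel = UartSelector()
sel.select("MCU_DEBUG")
sel.select("FIELD_TOOL")
```

Every entry in the transition log has `None` on one side, which is the invariant to verify at the bring-up gate: no direct driver-to-driver handover ever occurs.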

After adding a connector/testpoint, failures start—how to account parasitics in the budget?

Likely cause: “Hidden capacitance” and stub creation: connectors, headers, and testpoints add real load and can turn a clean trunk into a branched topology.

Quick check: Temporarily remove/disable the new connector path or testpoint usage (no fixture attached) and compare error counters and far-end sensitivity.

Fix: Treat connector/testpoint as a first-class budget item: add them to the C/length/stub table and relocate them away from the critical trunk; keep testpoints out of “no-TP zones”.

Pass criteria: With the connector/testpoints present, stability remains ≥ X and the budget retains margin ≥ X.

Intermittent failures after hot-plug—what topology change contains fault domain?

Likely cause: Hot-plug turns an edge branch into an unstable domain (power sequencing, parasitics, transient disturbance) that can propagate into the core domain.

Quick check: Compare behavior with the hot-plug branch disconnected vs connected; probe before/after the boundary to confirm whether the core trunk is being disturbed.

Fix: Define an explicit edge domain and isolate it using segmentation (switch/buffer/isolator as appropriate); provide a controlled enable path so the core domain remains stable during plug events.

Pass criteria: After hot-plug events, lockups ≤ X and faults remain localized (other segments unaffected).

Daisy-chained segments: first node OK, last node bad—what cumulative budget to check?

Likely cause: Cumulative loading and cumulative topology cost: each stage adds parasitics and reduces observable margin at the far end.

Quick check: Validate progressively: first node only → first+second → … → last node. The “first failing stage” identifies the segment where cumulative budget breaks.

Fix: Reduce per-stage cost (shorter stubs/connectors, fewer testpoints) or re-segment so the far domain is not forced to carry upstream parasitics; keep maintenance visibility at each boundary.

Pass criteria: The last node meets the same error/latency thresholds as the first node (≤ X), across resets and full fanout.
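The progressive validation above maps directly to a cumulative-budget walk: add each stage's parasitic cost in order and report where the budget first breaks. A sketch with assumed per-stage numbers (each stage's cost = node input capacitance plus its stub/connector share):

```python
def first_failing_stage(stage_costs_pf, budget_pf):
    """Return (index, cumulative_pf) of the first stage that breaks the
    budget, or (None, total_pf) if the whole chain fits."""
    total = 0.0
    for i, cost in enumerate(stage_costs_pf):
        total += cost
        if total > budget_pf:
            return i, total
    return None, total

# Example chain of six stages (illustrative values, pF per stage):
stages = [45.0, 45.0, 60.0, 45.0, 80.0, 45.0]
idx, cum = first_failing_stage(stages, budget_pf=250.0)
```

Here the walk breaks at stage index 4 (the one with the heavy connector), matching the bench procedure: the first failing stage in the progressive test identifies which segment to re-budget or split off.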

Isolation added and timing margin shrank—where to place the boundary to reduce impact?

Likely cause: The isolation boundary introduces delay and changes the domain assumptions; placing it in the wrong spot amplifies the number of nodes affected.

Quick check: Count how many nodes sit “behind” the boundary. If most fanout is behind it, the added delay/constraints affect the whole system.

Fix: Move the boundary to minimize impacted fanout (keep a clean core reference domain); isolate edge domains where noise/ground offset originates; mark isolation boundaries as “no-bypass”.

Pass criteria: Timing/latency margins return to ≥ X and stability is unchanged when restoring full fanout behind the boundary.

Production-only failures: Monday effect—what topology-related measurement is missing?

Likely cause: A topology-sensitive variable changes in production (fixture contact, harness/connector variant, probe loading, or segment boundary not being logged).

Quick check: Require segment-tagged logging and record the exact wiring/fixture state (which connector path, which branch enabled, which probe points attached) for every failure.

Fix: Make the minimum repro topology explicit (smallest wiring that fails), standardize fixture loading, and add boundary A/B probe points so failures map to a segment—not to a day of week.

Pass criteria: Failure rate becomes explainable by a recorded topology variable; after standardization, escapes ≤ X.