Data Concentrator / Collector for Smart Grid Metering

This page brings together a practical system view of data concentrators for smart metering, showing how to choose the right architecture, interfaces, security, storage and power ICs so that meter data stays reliable, secure and affordable over 10+ years of operation.

What this page solves

Data concentrators sit between dozens or hundreds of smart meters and the utility’s head-end systems. At this layer, bandwidth limits, communication retries, event storms and backhaul cost all accumulate. A design view focused only on the single meter easily underestimates how bursty traffic becomes when every meter attempts to talk at once.

A practical concentrator must balance several competing goals at the same time: near-real-time event delivery, robust operation over years of outdoor stress, controlled cellular or PLC backhaul cost, and compliance with modern security requirements. Choices in MCU/SoC, memory, HSM/SE and interface ICs directly shape these trade-offs.

This section frames the concentrator as a traffic and security hub. Later sections break that hub into concrete building blocks: how meter-side links are handled, how events and profiles are buffered, how HSM/SE protects keys and firmware, and how PLC, cellular or Ethernet backhaul are wired so that protocols, security and power budgets do not fight each other during deployment.

[Figure: Smart meters, data concentrator and backhaul — a cluster of smart meters connects through PLC, RF and RS-485 links to a data concentrator (MCU/SoC, HSM/SE security, storage for profiles and events, power and protection), which forwards data over cellular, Ethernet or PLC backhaul to utility head-end/MDMS and operations/SCADA systems.]

Where the concentrator sits in the grid system

In a typical low-voltage area, smart meters attach to LV feeders behind a distribution transformer. The data concentrator mounts near this transformer or in a nearby cabinet, acting as the first aggregation point between dozens of meters and upstream feeder or substation systems. Upwards from the concentrator, traffic flows toward substation IEDs, SCADA gateways and head-end or MDMS platforms.

In denser sites such as industrial parks or large residential complexes, concentrators can be arranged in multiple tiers. Building-level concentrators collect meters for a single block or tower, then report to a regional concentrator, which in turn connects to utility head-end systems. Hardware and firmware must reflect this hierarchy, with clear rules about what is aggregated at each level.

Through all of these topologies, two main data paths run through the concentrator: a metering path that carries profiles and billing data upstream, and a control and firmware path that carries commands and OTA images downstream. Interface choices for PLC, RS-485, Ethernet and cellular, together with buffering and scheduling, determine how well these two paths coexist during everyday operation and during large fault or storm events.

[Figure: Grid position and data paths through the concentrator — smart meters and LV panel units on LV feeders connect below a distribution transformer to a building-level and regional data concentrator (PLC, RS-485, cellular, Ethernet), and upwards to substation IEDs, SCADA gateways and the utility head-end; metering data flows upstream toward billing/MDMS while control and OTA traffic flows downstream.]

Key requirements & constraints

Before picking an MCU, SoC or communication modules, the concentrator design needs a clear envelope for how many smart meters it must serve, how quickly profiles and events must move, and how harsh the environment and power conditions will be. These constraints drive memory, interface count, storage type and power architecture across the whole board.

  • Connected smart meters: Target 64/128/256+ endpoints, sized for the worst-case burst when all meters send events or retries at once, not only for average traffic.
  • Latency split: Periodic profiles in the range of 15–60 minutes versus event alarms that should reach upstream systems within seconds to tens of seconds.
  • Environment & power: Operation from -40 to +70/+85 °C on a wide input such as 24–48 Vdc, riding through line sags, surges and short outages without corrupting logs or profile storage.
  • Security & trust: Hardware or at least strongly anchored protection for device keys, mutual authentication, and signed firmware or configuration updates to block rogue meters or counterfeit concentrators.
  • Backhaul cost & PLC bandwidth: Cellular tariffs, PLC line capacity and the allowed offline buffer time, which together define how much local compression and storage are needed.
  • Redundancy & availability: Whether dual SIM, dual Ethernet, redundant feeds and watchdog coverage are required for critical feeders or industrial users.
  • Storage endurance: Expected lifetime of 10–15 years with repeated profile, event and log writes, shaping the choice between NOR, NAND, eMMC and FRAM.
  • Maintenance & OTA: Planned firmware update frequency, local service ports and log retention time, which all influence flash sizing and partitioning.

Once these boundaries are explicit, later architecture choices for MCU versus Linux-capable SoC, HSM or SE use, storage hierarchy and backhaul interfaces can be compared against the same requirement set instead of being tuned in isolation.
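As a quick illustration of why worst-case bursts, not averages, should size the requirement envelope, here is a back-of-envelope sketch; every number in it (meter count, outstanding requests, retry count, record size) is an illustrative assumption, not a recommendation:

```python
# Hypothetical worst-case sizing: queue entries and RAM needed if every
# connected meter bursts at once and each request is retried. All figures
# are assumptions to be replaced with real deployment parameters.

def worst_case_queue(meters, outstanding_per_meter, retries, record_bytes):
    """Upper bound on queued requests and their memory footprint."""
    entries = meters * outstanding_per_meter * (1 + retries)
    return entries, entries * record_bytes

entries, ram_bytes = worst_case_queue(meters=256, outstanding_per_meter=2,
                                      retries=3, record_bytes=128)
print(entries, ram_bytes)  # 2048 entries, 262144 bytes: size buffers for this peak
```

Even this toy calculation shows the gap between average load (a handful of reads per minute) and the peak that decides RAM and queue-depth sizing.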

[Figure: Requirement bands shaping the data concentrator — meter count (64/128/256+) drives queue depth and RAM/CPU headroom; latency (minutes vs seconds) drives scheduling and prioritisation; environment (-40…+70/85 °C) drives the power tree, derating and hold-up design; security (keys, authentication, OTA) drives HSM/SE use and the secure boot chain; backhaul tariff and PLC band drive local buffer time and compression; redundancy and 10–15 year life drive extra PHYs, feeds and storage endurance.]

Architecture options: MCU vs SoC vs gateway-like platforms

Data concentrators can follow several controller styles. Compact deployments often use a single 32-bit MCU with external PLC and cellular modules, while larger or more complex installations move to Linux-capable SoCs with DDR and multiple Ethernet ports. Utilities with strict security and audit demands add dedicated HSM or SE devices and treat the concentrator as a hardened gateway.

Each architecture changes the balance between power, cold-start time, software complexity, protocol flexibility and long-term security posture. The choice determines not only the main controller, but also the type and size of external storage, the number of Ethernet and PLC interfaces, and whether a stand-alone HSM or SE is present on the board.

The following comparison focuses on three patterns: a single MCU with external modems and PHYs, a Linux SoC or MPU platform with DDR and richer networking, and a gateway-like design that pairs the controller with a dedicated HSM or SE and a modular PLC front-end for markets that prioritise compliance and strong cryptography.

[Figure: Controller architecture options for the concentrator — MCU-centric (single 32-bit MCU with internal flash/SRAM, external PLC and cellular modems, 1–2× Ethernet PHY, SPI NOR/FRAM: low power and fast start but limited protocol headroom); Linux SoC/MPU (Cortex-A class with external DDR, multi-port Ethernet or switch, PLC and RF modules, eMMC/NAND plus NOR, PMIC-managed rails: higher power and longer start but strong protocol and application flexibility); hardened gateway (MCU or SoC paired with an HSM/secure element, modular PLC front-end, secure storage and tamper-hardened power: strong compliance, PKI and audit at higher BOM cost). The requirement bands from the previous section guide which column fits a given deployment.]

Meter-side interfaces & protocol handling

On the meter side, the data concentrator must accept traffic from G3-PLC or PRIME networks, RF mesh, RS-485, M-Bus and region-specific AMI protocols, then turn all of those links into manageable sessions. Each smart meter becomes a logical endpoint with its own address, state and timers, regardless of the underlying physical interface and media conditions.

The controller inside the concentrator runs a multi-channel session manager that tracks which meters are active, which requests are outstanding and which ports are congested. This session layer sits above PLC or RF stacks and below the application logic so that retries, backoff and timeout behaviour can be tuned once and reused across protocols, instead of being duplicated in each physical interface.

When many meters are online at the same time, the main bottlenecks are not only PLC or RF bandwidth, but also queue depth, event storm handling and rate limiting on the concentrator itself. Profiles and routine reads go into lower-priority queues, while outage and tamper alarms are pushed through high-priority event queues with stricter latency targets. The PLC modem, AFE and coupling network are treated as a single interface module from the controller view; detailed matching networks and component choices remain in the PLC front-end design.
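The session-layer policies described above can be sketched in a few lines: a per-meter token bucket throttles chatty endpoints while two priority queues keep alarms ahead of routine profile reads. Class names, rates and limits here are illustrative assumptions, not a real protocol stack:

```python
# Minimal sketch of per-meter rate limiting plus priority queues.
# Alarms bypass the limiter and always enqueue; profile traffic is
# throttled per meter to contain event storms and retry floods.
import heapq

class MeterLimiter:
    """Token bucket: at most `rate` messages/second, burst of `burst`."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # drop or defer: backpressure toward the interface

EVENT, PROFILE = 0, 1   # lower number = higher priority

class QueueManager:
    def __init__(self):
        self.q, self.seq, self.limiters = [], 0, {}

    def submit(self, meter_id, prio, payload, now):
        lim = self.limiters.setdefault(meter_id, MeterLimiter(rate=1.0, burst=5))
        if prio == PROFILE and not lim.allow(now):
            return False                      # routine traffic is rate-limited
        heapq.heappush(self.q, (prio, self.seq, meter_id, payload))
        self.seq += 1
        return True                           # alarms always enqueue

    def pop(self):
        return heapq.heappop(self.q)[2:] if self.q else None

qm = QueueManager()
qm.submit("m1", PROFILE, "profile-A", now=0.0)
qm.submit("m2", EVENT, "outage!", now=0.1)
print(qm.pop())   # ('m2', 'outage!') -- the alarm jumps the queue
```

The point of the sketch is structural: limits and priorities live once in the session layer, so PLC, RF mesh and RS-485 interfaces all inherit the same storm behaviour.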

[Figure: Meter interfaces and session manager inside the concentrator — G3-PLC/PRIME (modem, AFE and coupling), RF mesh radio, RS-485 multi-drop and M-Bus/regional AMI interfaces feed a meter session and queue manager (session table with per-meter address, state and timers; scheduler with retries, timeouts, backoff and rate limits; event storm handling with per-meter limits and priority rules), which separates traffic into a high-priority event queue (outage, tamper, alarms) and a deferrable, compressible profile queue for 15–60 minute reads. Component-level PLC front-end design is handled in a dedicated topic.]

Security architecture with HSM/SE

A data concentrator is a trust anchor between large populations of smart meters and utility back-end systems. The security architecture must address threats such as fake meters or concentrators, packet capture and replay, firmware tampering and physical access to the enclosure. Long-lived keys, device identity and audit-relevant counters should reside in a hardened HSM or secure element rather than in general-purpose flash.

The main MCU or SoC runs protocol stacks, application logic and queue management, while the HSM or SE holds private keys, certificates and sensitive counters, and performs cryptographic operations. Mutual authentication with meters, TLS or VPN sessions toward head-end or MDMS platforms and verification of signed firmware images are all anchored in the secure element so that compromise of the host controller alone does not expose long-term trust material.

When selecting an HSM or SE, attention should be given to available security certifications, supported ECC curves and crypto algorithms, secure NVM capacity for keys and certificates, and the interface used to connect to the controller, typically I²C or SPI. These choices set the foundation for secure boot chains, OTA roll-out policies and compliance with utility and regulatory audit requirements.
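A toy sketch of the division of labour described above: the host hashes the firmware image, while the "secure element" holds the key and renders the verdict. The stub below uses HMAC-SHA-256 purely as a stand-in; a real design would use an ECDSA or EdDSA signature verified inside the HSM/SE, and all names here are invented for illustration:

```python
# Hedged sketch: host/SE split for signed-firmware verification.
# HMAC-SHA-256 stands in for the asymmetric signature scheme a real
# HSM/SE would implement; the key never leaves the stub object.
import hashlib
import hmac

class SecureElementStub:
    """Stand-in for an I2C/SPI secure element with a key in secure NVM."""
    def __init__(self, key):
        self._key = key                      # never exported in hardware

    def sign(self, digest):
        return hmac.new(self._key, digest, hashlib.sha256).digest()

    def verify(self, digest, tag):
        return hmac.compare_digest(self.sign(digest), tag)

def verify_firmware(se, image, tag):
    """Host hashes the image; the SE decides whether the tag is valid."""
    return se.verify(hashlib.sha256(image).digest(), tag)

se = SecureElementStub(b"factory-provisioned-key")
image = b"DCU-FW-v2.1" + b"\x00" * 64        # toy firmware blob
good_tag = se.sign(hashlib.sha256(image).digest())
print(verify_firmware(se, image, good_tag))           # True
print(verify_firmware(se, image + b"X", good_tag))    # False: tampered image
```

The structural takeaway matches the text: even if the host controller is compromised, the long-term trust material stays behind the SE boundary.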

[Figure: Security architecture with HSM/secure element — threat examples on the meter side include fake meters or DCUs, capture and replay, firmware tampering and physical access. Inside the concentrator, the MCU/SoC runs protocols, queues and OTA control while the HSM/SE holds keys, certificates and an RNG over I²C/SPI and anchors the secure boot chain and signed-firmware verification. Toward head-end/MDMS/cloud, TLS/VPN termination and mutual authentication use keys kept in the HSM; OTA images and signatures are verified by the HSM/SE. Selection focus: certifications, ECC curves, secure NVM size, I²C/SPI integration.]

Storage & power management

Storage in a data concentrator has to handle three different traffic classes: bulk profile data, event logs and long-lived security or audit records. Profile and routine meter reads push large volumes of data and are usually stored in eMMC or NAND with batch append and clean-up routines, while critical pointers and counters that change frequently are better kept in FRAM or similar high-endurance memory to avoid premature wear-out.

Event logs and security logs benefit from journal-style or double-buffered layouts so that mid-write power loss does not corrupt the structure. A small set of status flags, upload indices and fault counters can be mirrored between SPI NOR and FRAM so that recovery after a brownout is deterministic. Power-loss detection combined with a short hold-up period allows the controller to flush pending records, mark log state cleanly and record a final shutdown event before rails collapse.

On the power side, the concentrator typically accepts a wide 24–48 Vdc input with surge, lightning and reverse-polarity protection in front of DC/DC converters and LDOs. Local backup energy from supercapacitors or a small battery supports orderly shutdown and last-event reporting during outages. Staged power-up and selective wake-up of cellular, Ethernet and PLC sections keep standby consumption low while still preserving storage, HSM or SE and control logic in a safe state for many years of operation.
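The double-buffered "counters and pointers" record mentioned above can be sketched concretely: two slots, each carrying a sequence number and a CRC, so a brownout mid-write leaves the previous slot intact and boot-time recovery is deterministic. The layout and field names are illustrative; real firmware would place the slots in FRAM or SPI NOR:

```python
# Sketch of a power-loss-safe double-buffered record: the writer
# alternates slots, and recovery picks the valid slot with the highest
# sequence number, ignoring any torn (CRC-failing) write.
import struct
import zlib

REC = struct.Struct("<III")   # seq, upload_index, crc32 over first 8 bytes

def encode(seq, upload_index):
    body = struct.pack("<II", seq, upload_index)
    return body + struct.pack("<I", zlib.crc32(body))

def decode(raw):
    seq, idx, crc = REC.unpack(raw)
    return (seq, idx) if zlib.crc32(raw[:8]) == crc else None

def recover(slot_a, slot_b):
    """On boot, return the valid slot with the highest sequence number."""
    cands = [r for r in (decode(slot_a), decode(slot_b)) if r]
    return max(cands) if cands else None

a = encode(seq=41, upload_index=1000)
b = encode(seq=42, upload_index=1010)
print(recover(a, b))                       # (42, 1010): newest valid slot
print(recover(a, b[:6] + b"\xff" * 6))     # (41, 1000): torn write ignored
```

The same pattern extends to upload indices, fault counters and shutdown markers mirrored between NOR and FRAM.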

[Figure: Storage tiers and power tree for the concentrator — storage layout: SPI NOR flash for firmware, config and core logs; eMMC/NAND for bulk profiles and event history; FRAM or other high-endurance memory for counters, pointers and flags; transactional logging (journal, double buffer, recovery markers) protects structure during brownouts, with power-loss detect triggering final writes, a shutdown event and a clean state. Power tree: 24–48 Vdc input with surge, lightning and reverse protection feeding DC/DC and LDO rails (core, DDR, I/O, RF, PLC, HSM), with a supercap or small battery for graceful shutdown and last-event reporting, staged power domains (MCU first, links on demand) and low-power modes that keep storage and the HSM alive. Storage policy and power tree are co-designed so that 10–15 year life and outage behaviour stay predictable.]

Backhaul: PLC, cellular and Ethernet PHY & network design

Backhaul design defines how the concentrator reaches upstream head-end or MDMS platforms. In some deployments a single cellular path is sufficient, while others combine cellular with wired Ethernet or use PLC to hop to a higher-level node or substation. The mix of links must respect bandwidth, latency and availability targets, as well as the cost of cellular data plans and the reliability of utility or private Ethernet networks.
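To make the cellular-cost trade-off tangible, a small sketch comparing per-message uploads against one compressed daily batch; the record format and sizes are invented for illustration, and framing/protocol overhead (which further favours batching) is ignored:

```python
# Invented example: 96 quarter-hour readings sent one JSON message at a
# time versus one zlib-compressed daily batch of the same records.
import json
import zlib

readings = [{"t": 900 * i, "kwh": round(12.0 + 0.01 * i, 2)} for i in range(96)]

per_message = sum(len(json.dumps(r).encode()) for r in readings)
batched = len(zlib.compress(json.dumps(readings).encode(), 9))

print(per_message, batched, round(per_message / batched, 1))
```

The similarity between consecutive readings is exactly what compression exploits, which is why time-bucketed profiles compress so much better than event-by-event streaming.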

Cellular modules interface over UART, USB or PCIe depending on the controller platform, with SIM management, antenna routing and watchdog-based recovery treated as part of the backhaul design rather than left to firmware alone. Ethernet uplinks rely on single or multiport PHYs and, where needed, small switches that may support PTP timestamps or TSN features. Decisions about PoE, port count and ring redundancy directly influence power sizing and PCB layout inside the concentrator enclosure.

VPN or TLS tunnels secure traffic across whichever backhaul is active, with long-lived keys and certificates stored in an HSM or secure element. Failover policies define how sessions move from primary Ethernet to cellular or PLC uplinks when faults occur, keeping outage reporting and billing data moving without opening up additional attack surfaces. High-voltage substation SCADA gateway architectures are handled separately and are not part of this concentrator-level backhaul discussion.
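The failover policy described above reduces, at its simplest, to a priority-ordered link selection; link names and the shape of the health input are assumptions for illustration, and a production backhaul controller would add hold-down timers and hysteresis to avoid flapping:

```python
# Illustrative backhaul failover: try links in priority order and let the
# first healthy one carry the session; buffer locally when all are down.
PRIORITY = ["ethernet", "cellular", "plc_hop"]

def select_uplink(link_up, priority=PRIORITY):
    """Return the highest-priority link whose health check passed."""
    for link in priority:
        if link_up.get(link, False):
            return link
    return None   # fully offline: store-and-forward until a link returns

print(select_uplink({"ethernet": True, "cellular": True}))    # 'ethernet'
print(select_uplink({"ethernet": False, "cellular": True}))   # 'cellular'
print(select_uplink({}))                                      # None
```

Keeping the TLS/VPN keys in the HSM/SE means the same tunnel identity can re-establish over whichever link wins the selection.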

[Figure: Backhaul mix — inside the concentrator, a backhaul controller handles link selection, failover and QoS, with a VPN/TLS tunnel whose keys and certificates live in the HSM/SE. Uplink options: cellular (4G/5G/NB-IoT module over UART, USB or PCIe with SIM management), a PLC hop to a higher-level node (modem plus AFE), and Ethernet (1–2× PHY or a small switch, optional PTP/TSN and PoE). Ethernet typically serves as the primary link toward head-end/MDMS on the utility or cloud network, with cellular or PLC as backup, and secure tunnels stay anchored in the HSM/SE across all paths.]

Design checklist & IC mapping

Before committing a data concentrator mainboard to layout, the engineering team benefits from a clear set of requirements around meter population, backhaul strategy, logging retention, security level and environmental stress. A structured checklist helps close gaps early, while an IC category map gives purchasing and FAE teams a common language for MCU or SoC, security, communications, storage and power components.

The design checklist focuses first on scale: maximum meter count per concentrator, expected readout cycle, typical and worst-case event rates and latency targets for outage and tamper alarms. It then moves to backhaul choices between cellular, Ethernet and any PLC hop, along with redundancy needs and the required availability level. Subsequent questions address how long profile, event and security logs must be retained locally, and whether high-endurance memory is allocated for counters and recovery markers.

Security questions clarify whether a hardware HSM or secure element is mandated, which certifications or utility guidelines apply and whether secure boot, firmware signing and mutual TLS with head-end systems are compulsory. Environmental and power topics round out the list, covering surge and lightning levels, input voltage range, backup energy for orderly shutdown and operational temperature range. IC mapping then groups candidate vendors for controllers, HSM or SE, PLC modem and cellular modules, Ethernet PHY and switches, SPI NOR, eMMC, FRAM or EEPROM, as well as AC/DC, DC/DC and protection devices, at the category level without locking into specific part numbers.

[Figure: Design checklist and IC category map — checklist columns: capacity and load (max meters, read cycle, event rate, latency targets), backhaul and redundancy (cellular vs Ethernet, dual links, PLC hop, uptime target), logging and retention (profile days, event history, security logs, FRAM use), security level (HSM/SE need, certifications, secure boot, mutual TLS) and environment and power (surge level, input range, backup energy, temperature). IC categories: controller (32-bit M-class MCU, A-class SoC, metering-plus-concentrator SoC), security (generic SE, utility-grade security modules), connectivity (PLC modem and AFE, 4G/5G/NB-IoT modules, Ethernet PHY/switch with optional TSN/PoE), storage (SPI NOR for boot and config, eMMC for profiles, FRAM/MRAM for counters, EEPROM for settings) and power/protection (AC/DC or HV DC/DC, multi-rail DC/DC, TVS, eFuse). Checklist answers drive the category choices before the mainboard is released.]

FAQs about data concentrator architecture and IC choices

These twelve questions capture the main decisions engineers face when defining a data concentrator or collector: MCU versus Linux SoC, how to survive event storms, combining PLC and RF interfaces, where to anchor security, how to buffer data across outages and how to map requirements into IC categories without locking into part numbers too early.

1. When is an MCU-based concentrator not enough and a Linux-capable SoC becomes necessary?
For small meter groups, limited protocols and simple logging, a 32-bit MCU is usually sufficient. Once the design needs hundreds of meters, multiple PLC and RF stacks, rich logging, secure remote management and containers or field apps, a Linux-capable SoC becomes easier to scale and maintain than stretching an MCU to its limits. See H2-4
2. If hundreds of meters send events at once, how can the concentrator avoid being flooded?
The concentrator needs explicit protection against event storms: per-meter rate limits, priority queues for critical alarms, bulk handling for non-urgent events and backpressure toward the interfaces. A central session and queue manager keeps outages and tampers flowing while delaying or aggregating low-priority traffic instead of letting the CPU and storage overload. See H2-3, H2-5
3. How should hardware plan PLC, RF and RS-485 meter interfaces without creating conflicts?
The hardware should treat each interface as a modular channel with its own transceiver, isolation and buffering, then map them into a common session layer in the MCU or SoC. Shared resources such as DMA, interrupts and power rails are sized for worst-case concurrency, and protocol stacks are layered so PLC, RF mesh and RS-485 can coexist. See H2-5
4. If the secure element sits only in the concentrator and not in each meter, is there a security gap?
Placing the secure element in the concentrator protects uplinks, firmware and audit trails, but it does not replace security in meters. Each meter still needs its own identity and basic protection, while the concentrator SE anchors mutual authentication, VPN keys and secure boot. Both ends must be considered in a complete threat model. See H2-6
5. How can profile and event data be buffered and recovered during power loss or uplink failures?
A robust concentrator separates storage classes: eMMC or NAND for bulk profiles, FRAM or similar for counters and pointers, and SPI NOR for firmware and critical metadata. Power-loss detection triggers rapid flushing of key indices and a clean shutdown marker, so after recovery the system can resend missing data without corrupt logs or double-counting. See H2-7
6. Cellular data is expensive – how can the concentrator reduce backhaul traffic?
Concentrators can aggregate readings into time-bucketed profiles, compress payloads and batch uploads instead of streaming every event immediately. Non-critical records can be held until off-peak windows. Only essential alarms and control messages use immediate uplink, while routine data rides in compressed blocks to keep monthly cellular charges predictable. See H2-3, H2-8
7. When is it worth adding dual SIM or dual Ethernet for uplink redundancy?
Dual SIM or dual Ethernet make sense when regulatory or contractual uptime targets are strict, or when one network is significantly less stable. Redundant links let the concentrator fail over during outages or maintenance windows. Hardware must support independent paths, and the backhaul controller and VPN client must handle seamless reconnection without losing data. See H2-3, H2-8
8. During OTA firmware upgrades, how can the concentrator stay responsive and support rollback?
A safe OTA design keeps a known-good image and a candidate image in separate slots and boots through a secure chain anchored in the HSM or secure element. The concentrator validates signatures before switching and only marks the new image as permanent after passing health checks, so failures fall back automatically without bricking devices. See H2-6, H2-7
9. How does a PLC front-end in a concentrator differ from the one inside a meter?
A meter PLC front-end usually handles a single node with tight coupling to metrology, while a concentrator PLC interface must manage many nodes and higher aggregate traffic. The concentrator side often runs more complex routing, retries and diagnostics, and may require different coupling and protection because it is closer to the transformer or feeder side of the network. See H2-5
10. How can memory endurance for SPI NOR, eMMC and FRAM be estimated for 10+ years of service?
Endurance planning starts from realistic write rates for profiles, events and security logs, then maps each stream to the most suitable memory technology. eMMC write cycles, block sizes and wear-leveling are checked against the profile plan, while high-frequency counters move to FRAM. Safety margins and in-field diagnostics help confirm that lifetime assumptions remain valid. See H2-7
11. How should the concentrator’s role be scoped against the upstream SCADA gateway?
A data concentrator focuses on meter aggregation, local buffering and first-level alarms around a distribution area, while the SCADA gateway oversees wider substations, automation and protection logic. The boundary defines which protocols, logs and control loops remain local versus which are forwarded, preventing duplicated functions and overlapping security domains in the grid architecture. See H2-2, H2-8
12. What mandatory security and logging rules do different utilities and countries impose on concentrators?
Many utilities and regulators expect concentrators to keep tamper-proof audit logs, support secure boot and signed firmware, protect keys in dedicated hardware and record security-relevant events for years. Local rules may reference specific standards or certifications, so designs must leave headroom in storage, crypto performance and HSM or secure element capabilities to pass future compliance checks. See H2-3, H2-6
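The endurance planning sketched in question 10 can be put into rough numbers. Every input below (capacities, write rates, cycle counts, wear factor) is an illustrative assumption to be replaced with real datasheet and traffic figures:

```python
# Back-of-envelope endurance checks. wear_factor covers write
# amplification and imperfect wear-leveling; all other figures are
# assumptions for illustration only.
def years_until_worn(bytes_per_day, capacity_bytes, pe_cycles, wear_factor=2.0):
    """Years until rated program/erase cycles are consumed."""
    return capacity_bytes * pe_cycles / wear_factor / (bytes_per_day * 365)

def sector_life_days(writes_per_day, pe_cycles):
    """Days until one repeatedly rewritten flash sector wears out."""
    return pe_cycles / writes_per_day

# 256 meters x 96 profile records/day x 64 B into a 4 GiB eMMC (~3000 cycles)
print(years_until_worn(256 * 96 * 64, 4 * 2**30, 3000) > 15)   # True: ample margin
# one status counter rewritten every second into a single NOR sector (~100k cycles)
print(round(sector_life_days(86400, 100_000), 1))              # 1.2 days: use FRAM
```

The contrast is the design lesson: bulk profile streams are comfortable on wear-leveled eMMC, while any counter rewritten at second rates must move to FRAM or similar high-endurance memory.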