HFL

Horizontal Federated Learning

Introduced in Rel-19
Horizontal Federated Learning (HFL) is a distributed machine learning framework standardized in 3GPP for enabling collaborative model training across multiple network nodes or User Equipments (UEs) without centralizing raw data. It preserves data privacy by sharing only model updates, such as gradients or parameters, and is crucial for building intelligent, privacy-aware network functions and services.

Description

Horizontal Federated Learning (HFL) is a decentralized machine learning paradigm standardized by 3GPP to facilitate collaborative model training across distributed entities, such as User Equipments (UEs), base stations (gNBs), or network functions, without exchanging raw data. The architecture typically involves a central server, known as an aggregator or federation server, and multiple participating clients. Each client trains a local machine learning model on its own dataset and sends only the model updates (e.g., gradients, weights) to the aggregator. The aggregator then combines these updates—commonly using algorithms like Federated Averaging (FedAvg)—to produce an improved global model, which is redistributed to the clients for further training rounds. This iterative process continues until the model converges to a desired performance level, enabling the collective intelligence of the network while keeping sensitive data localized.
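The Federated Averaging step described above can be sketched in a few lines. This is a minimal illustration, not a 3GPP-specified algorithm: the function name and the representation of models as flat parameter lists are assumptions for clarity.

```python
# Minimal FedAvg sketch: the aggregator combines per-client parameter
# vectors into a global model, weighting each client by its local
# dataset size. Names and numbers are illustrative only.

def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (one list per client)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Two clients with 2-parameter models; the first holds three times the data.
updates = [[1.0, 3.0], [5.0, -1.0]]
sizes = [300, 100]
print(fed_avg(updates, sizes))  # → [2.0, 2.0]
```

The size-weighted average is what gives FedAvg its name: clients with more local data pull the global model proportionally harder, while no raw samples ever leave the client.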

The technical workflow in a 3GPP context involves specific procedures for client selection, secure update transmission, and aggregation coordination. Key components include the Federated Learning Management Function (FLMF), which orchestrates the training process, and secure communication channels, often leveraging existing 3GPP security mechanisms. The FLMF handles tasks such as participant authentication, resource allocation, and aggregation scheduling. Model updates are transmitted over standardized interfaces, with considerations for bandwidth efficiency and latency, especially in wireless environments. Privacy is enforced through techniques like differential privacy or secure multi-party computation, which may be integrated to prevent inference of raw data from the updates.
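One of the privacy techniques mentioned above, differential privacy, is commonly applied by clipping each client's update to a norm bound and adding calibrated noise before transmission. The sketch below illustrates the idea; the function and parameter names (`clip_norm`, `noise_std`) are illustrative assumptions, not identifiers from any 3GPP specification.

```python
import math
import random

# Hedged sketch of differential-privacy-style update sanitization:
# clip the update's L2 norm to a bound, then add Gaussian noise so the
# aggregator cannot infer individual raw data points from the update.

def sanitize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [x * scale + rng.gauss(0.0, noise_std) for x in update]

raw = [3.0, 4.0]            # L2 norm is 5.0, exceeding the clip bound
safe = sanitize_update(raw)  # clipped to norm <= 1.0, plus noise
print(safe)
```

Clipping bounds any single client's influence on the aggregate, and the noise level trades off privacy strength against model accuracy; in a 3GPP deployment this would complement, not replace, the secure transport channels noted above.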

HFL's role in mobile networks is to enable advanced, data-driven applications such as radio resource management, mobility prediction, and network slicing optimization without compromising user privacy or incurring massive data transfer overhead. By distributing the computational load, it also alleviates the burden on central cloud resources. The 3GPP specifications define the necessary protocols, interfaces, and security frameworks to ensure interoperability and reliable operation across different vendors and network deployments, making HFL a foundational technology for future AI-native networks.

Purpose & Motivation

Horizontal Federated Learning was introduced to address the growing need for intelligent network automation and personalized services while adhering to stringent data privacy regulations like GDPR. Traditional centralized machine learning approaches require aggregating vast amounts of user data in a central server, raising significant privacy concerns, legal compliance issues, and security risks from data breaches. HFL eliminates the need for raw data centralization, allowing models to be trained on decentralized data sources, which is particularly critical in telecommunications where user data is highly sensitive and geographically distributed.

The motivation for standardizing HFL in 3GPP stems from the industry's shift towards AI-driven networks (e.g., in 5G-Advanced and 6G) that require real-time, context-aware decision-making. Previous approaches lacked a unified framework for secure, efficient federated learning in mobile environments, leading to proprietary solutions and interoperability challenges. HFL provides a standardized method to leverage the collective data from millions of devices and network nodes to improve network performance, energy efficiency, and user experience without compromising privacy. It enables new use cases, such as collaborative intrusion detection or quality of experience prediction, that were previously infeasible due to data silos and privacy constraints.

Key Features

  • Decentralized model training without raw data exchange
  • Privacy preservation through sharing of only model updates
  • Support for iterative aggregation algorithms like Federated Averaging
  • Integration with 3GPP security frameworks for secure communication
  • Orchestration by a Federated Learning Management Function (FLMF)
  • Resource-efficient operation suitable for constrained wireless environments
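The iterative train-aggregate-redistribute cycle listed among these features can be shown end to end with a toy scalar model. Everything here (the quadratic local loss, learning rate, and round count) is an illustrative assumption, chosen only to make convergence visible.

```python
# Toy multi-round HFL loop: each client takes one gradient step on its
# local loss, the server averages the results (equal-weight FedAvg),
# and the new global value is redistributed for the next round.

def local_train(global_w, local_mean, lr=0.25):
    # One gradient step on the local loss (w - local_mean)^2.
    return global_w - lr * 2 * (global_w - local_mean)

def run_rounds(client_means, rounds=20):
    w = 0.0  # initial global model
    for _ in range(rounds):
        updates = [local_train(w, m) for m in client_means]
        w = sum(updates) / len(updates)  # aggregate, then redistribute
    return w

# Three clients whose local data centers on 1.0, 3.0, and 5.0.
print(run_rounds([1.0, 3.0, 5.0]))  # converges toward 3.0, the global mean
```

No client ever reveals its data, yet the global model settles at the value that minimizes the combined loss across all clients, which is the essence of the HFL workflow described in this article.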

Evolution Across Releases

Rel-19 Initial

Initial standardization of Horizontal Federated Learning architecture, introducing the Federated Learning Management Function (FLMF) and basic procedures for client-server interaction, model update aggregation, and privacy safeguards. Specifications defined the foundational protocols and interfaces to enable interoperable federated learning across 3GPP networks.

Defining Specifications

Specification	Title
TS 21.905	3GPP TS 21.905
TS 23.288	3GPP TS 23.288
TS 23.700	3GPP TS 23.700
TS 24.560	3GPP TS 24.560
TS 28.105	3GPP TS 28.105
TS 28.858	3GPP TS 28.858
TS 29.520	3GPP TS 29.520
TS 29.552	3GPP TS 29.552
TS 33.501	3GPP TS 33.501