AI/ML

Artificial Intelligence and Machine Learning

Other
Introduced in Rel-18
AI/ML in 3GPP refers to the standardized integration of artificial intelligence and machine learning techniques into mobile networks. It enables data-driven optimization, automation, and intelligent decision-making across the RAN, core, and management domains, moving networks from statically configured operation to adaptive, self-optimizing behavior.

Description

The 3GPP AI/ML framework establishes standardized mechanisms for incorporating artificial intelligence and machine learning into mobile network operations. The architecture follows a distributed approach, with AI/ML functions deployed at different network locations: near-real-time functions close to the radio for RAN optimization (in O-RAN deployments, hosted at the O-RAN Alliance's RAN Intelligent Controller (RIC)), non-real-time functions at the management layer, such as the O-RAN Service Management and Orchestration (SMO) framework, for network-wide optimization, and core network functions for service intelligence. The framework defines standardized interfaces for data collection, model training, inference execution, and result distribution across network elements.
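As a rough illustration of this latency-driven split, the sketch below maps an inference task's control-loop budget to a deployment domain. The function name, domain labels, and thresholds are hypothetical, not taken from any 3GPP specification:

```python
# Illustrative sketch (hypothetical names/thresholds): choosing where an
# AI/ML inference task runs based on its control-loop latency budget,
# mirroring the near-real-time / non-real-time split described above.

def select_inference_site(latency_budget_ms: float) -> str:
    """Map a latency budget to a hypothetical deployment domain."""
    if latency_budget_ms < 10:
        # Sub-10 ms loops (e.g. beam selection) must stay in the RAN node.
        return "ran-node"
    if latency_budget_ms < 1000:
        # 10 ms to 1 s loops suit near-real-time RAN control.
        return "near-rt-controller"
    # Slower, network-wide optimization runs at the management layer.
    return "non-rt-management"

print(select_inference_site(5))      # ran-node
print(select_inference_site(100))    # near-rt-controller
print(select_inference_site(60000))  # non-rt-management
```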

Key components include the AI/ML pipeline management system, which handles the complete lifecycle of ML models from training to deployment and monitoring. The NWDAF (Network Data Analytics Function) in the 5G core serves as a centralized analytics engine that can host ML models for network and service analytics. The complementary O-RAN Alliance RIC architecture supports xApps and rApps that implement ML algorithms for RAN optimization, with the O-RAN-specified A1 and E2 interfaces for data exchange and control. The framework also specifies data collection mechanisms, including standardized data sets, collection frequencies, and data formats to ensure interoperability between different vendors' AI/ML solutions.
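To make the NWDAF's role concrete, here is a hedged sketch of composing a one-off analytics query. The Nnwdaf_AnalyticsInfo service and events such as NF_LOAD come from TS 29.520, but the URL layout and query parameters below are simplified placeholders, not the normative OpenAPI schema:

```python
from urllib.parse import urlencode

def build_analytics_request(nwdaf_base: str, event_id: str, target: str) -> str:
    """Compose a GET URL for a one-off NWDAF analytics query.

    Simplified illustration only: real Nnwdaf_AnalyticsInfo requests
    carry structured JSON-encoded filters defined in TS 29.520.
    """
    query = urlencode({"event-id": event_id, "target-nf": target})
    return f"{nwdaf_base}/nnwdaf-analyticsinfo/v1/analytics?{query}"

url = build_analytics_request("https://nwdaf.example.net", "NF_LOAD", "AMF-1")
print(url)
```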

The technical implementation involves several standardized procedures: data collection and preparation using defined data models, centralized or distributed model training, model deployment to inference points, and continuous model monitoring and retraining. The framework supports various ML paradigms including supervised learning, reinforcement learning, and federated learning. For RAN optimization, ML models can predict traffic patterns, optimize beamforming, manage handovers, and allocate resources dynamically. In the core network, ML enables predictive QoS management, anomaly detection, and service experience optimization. The management system includes mechanisms for model versioning, performance monitoring, and fallback procedures to ensure network stability when ML models underperform.
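The monitoring-and-fallback portion of that lifecycle can be sketched in a few lines. Everything here (class name, threshold, callables) is illustrative; the normative AI/ML management information model lives in the management specifications:

```python
class ModelLifecycle:
    """Toy lifecycle manager: deploy, monitor, fall back.

    Hypothetical structure illustrating the monitor/fallback loop; a
    real deployment would also version models and trigger retraining.
    """

    def __init__(self, model, fallback, accuracy_floor=0.9):
        self.model = model              # active ML model (callable)
        self.fallback = fallback        # rule-based fallback (callable)
        self.accuracy_floor = accuracy_floor
        self.using_fallback = False

    def report_accuracy(self, accuracy: float) -> None:
        # Continuous monitoring: underperformance engages the fallback,
        # keeping the network stable while retraining happens offline.
        self.using_fallback = accuracy < self.accuracy_floor

    def infer(self, x):
        return self.fallback(x) if self.using_fallback else self.model(x)

mgr = ModelLifecycle(model=lambda x: x * 2, fallback=lambda x: x)
mgr.report_accuracy(0.95)
print(mgr.infer(3))  # 6  (ML model in use)
mgr.report_accuracy(0.80)
print(mgr.infer(3))  # 3  (fallback engaged)
```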

Security aspects are integral to the design, with mechanisms for model integrity verification, data privacy protection, and secure model distribution. The framework addresses the computational requirements by defining capabilities for edge computing integration and distributed inference. Performance monitoring includes both traditional KPIs and ML-specific metrics like model accuracy, inference latency, and training convergence. The standardization ensures that AI/ML capabilities can be implemented consistently across multi-vendor networks while allowing innovation through open interfaces for custom ML applications.
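One of the simplest such security mechanisms, verifying a model artifact's integrity before activation, might look like the digest comparison below. This is a minimal sketch; real model distribution would add cryptographic signatures and a trust anchor:

```python
import hashlib

def verify_model(blob: bytes, expected_digest: str) -> bool:
    """Check a distributed model artifact against a published SHA-256 digest.

    Minimal integrity check in the spirit of secure model distribution;
    signature verification and key management are deliberately omitted.
    """
    return hashlib.sha256(blob).hexdigest() == expected_digest

model_blob = b"serialized-model-weights"
published = hashlib.sha256(model_blob).hexdigest()
print(verify_model(model_blob, published))               # True
print(verify_model(model_blob + b"tampered", published)) # False
```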

Purpose & Motivation

AI/ML integration addresses the growing complexity of 5G and future 6G networks, which traditional rule-based optimization cannot manage effectively. As networks support diverse services with stringent requirements (ultra-low latency, ultra-high reliability, massive IoT), manual configuration and static optimization become impractical. The explosion of network data from connected devices, applications, and network elements creates opportunities for data-driven optimization that previous network generations couldn't fully exploit.

Historically, network optimization relied on expert knowledge, predefined rules, and periodic manual adjustments. This approach couldn't adapt quickly to changing conditions or discover complex patterns in network behavior. The limitations became particularly evident with 5G's introduction of network slicing, where each slice requires different optimization objectives that may conflict. Traditional methods also struggled with the scale of massive MIMO configurations, where beam management involves thousands of parameters that interact in complex ways.

The standardized AI/ML framework enables networks to become self-optimizing, reducing operational expenses while improving performance. It addresses specific challenges like energy efficiency optimization (reducing base station power consumption based on traffic predictions), mobility robustness (predicting and preventing handover failures), and load balancing (distributing traffic optimally across cells). By making AI/ML capabilities part of the standard, 3GPP ensures interoperability between different vendors' solutions and creates a foundation for network intelligence that will be essential for 6G's vision of truly autonomous networks.

Key Features

  • Standardized AI/ML lifecycle management (training, deployment, monitoring)
  • Distributed inference architecture across RAN, core, and management domains
  • Integration with NWDAF for network and service analytics
  • RIC-based optimization through xApps/rApps with ML capabilities
  • Support for federated learning preserving data privacy
  • Model performance monitoring and fallback mechanisms
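The federated-learning feature above can be illustrated with a minimal FedAvg aggregation step, in which clients contribute only weight vectors, never raw training data (pure-Python sketch, no ML framework assumed):

```python
# Minimal federated averaging (FedAvg) sketch: each client trains locally
# and shares only its weight vector; the server averages, weighted by the
# number of local samples, so raw data never leaves the client.

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical gNBs contribute updates weighted by sample count.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
print(global_w)  # [2.5, 3.5]
```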

Evolution Across Releases

Rel-18 Initial

Introduced the foundational AI/ML framework with standardized interfaces for data collection and model management. Established NWDAF enhancements for ML-based analytics, defined AI/ML pipeline management in management systems, and specified initial use cases for RAN optimization including traffic prediction and energy saving. Created the architecture for distributed AI/ML execution across network domains.
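The energy-saving use case can be caricatured in a few lines: a capacity cell sleeps only when both recent and predicted load are low. The moving-average "predictor" and the thresholds are illustrative stand-ins for a trained model:

```python
# Illustrative energy-saving decision: put a capacity cell to sleep when
# predicted traffic stays under a threshold. The naive moving-average
# forecast and thresholds are placeholders for a real ML predictor.

def predict_next_load(history, window=3):
    """Naive forecast: mean of the last `window` load samples (0..1 scale)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def should_sleep(history, threshold=0.2):
    # Require both recent and predicted load to be low, to avoid
    # ping-ponging the cell on transient dips.
    return max(history[-2:]) < threshold and predict_next_load(history) < threshold

print(should_sleep([0.15, 0.10, 0.12]))  # True  -> cell can sleep
print(should_sleep([0.15, 0.30, 0.12]))  # False -> keep serving
```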

Defining Specifications

Specification  Title
TS 21.905      Vocabulary for 3GPP Specifications
TS 23.288      Architecture enhancements for 5G System (5GS) to support network data analytics services
TS 23.501      System architecture for the 5G System (5GS)
TS 29.122      T8 reference point for Northbound APIs
TS 29.520      5G System; Network Data Analytics Services; Stage 3
TS 29.530      3GPP TS 29.530
TS 32.254      3GPP TS 32.254
TS 33.501      Security architecture and procedures for 5G System
TS 38.300      NR; NR and NG-RAN Overall Description; Stage 2
TS 38.306      NR; User Equipment (UE) radio access capabilities