Description
The Model Training Logical Function (MTLF) is a key architectural component introduced in 3GPP Release 17 as part of the 5G system's support for network automation and data analytics. It is defined in TS 23.288 as one of the logical functions that a Network Data Analytics Function (NWDAF) can contain, alongside the Analytics Logical Function (AnLF), and can be deployed within the 5G Core Network or co-located with a management system. The primary role of the MTLF is to ingest, process, and analyze network data in order to train, validate, and produce machine learning (ML) models. These models are then used to optimize network performance, predict failures, manage resources, or enhance user experience. The MTLF interacts with other network functions and data sources through standardized interfaces to collect training datasets, which may include performance measurements, configuration data, fault information, and user plane analytics.
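As an illustration of the data-collection step, the sketch below merges performance measurements and fault records, keyed by cell, into feature rows suitable for training. All field names are hypothetical and do not reflect the normative 3GPP data model.

```python
# Hypothetical sketch: assembling a training dataset for an MTLF-like
# trainer from heterogeneous sources. Field names are illustrative,
# not the normative 3GPP data model.

perf_measurements = [  # e.g. counters from NF event exposure / OAM
    {"cell": "c1", "prb_util": 0.72, "active_ues": 140},
    {"cell": "c2", "prb_util": 0.31, "active_ues": 45},
]
fault_records = [  # e.g. from OAM fault supervision
    {"cell": "c1", "alarms_24h": 3},
]

def build_dataset(perf, faults):
    """Join per-cell measurements with fault counts into feature rows."""
    alarms = {f["cell"]: f["alarms_24h"] for f in faults}
    rows = []
    for m in perf:
        rows.append({
            "cell": m["cell"],
            "features": [m["prb_util"], float(m["active_ues"])],
            "alarms_24h": alarms.get(m["cell"], 0),  # default: no alarms
        })
    return rows

dataset = build_dataset(perf_measurements, fault_records)
```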
The MTLF operates through a defined workflow that includes data collection, model training, and model provisioning. It collects data from various producers, such as other network functions (NFs) via their event exposure services, the Operations, Administration and Maintenance (OAM) system, or the Data Collection Coordination Function (DCCF). The data is formatted according to analytics subscriptions and can be raw or pre-processed. The MTLF then applies machine learning algorithms (implementation-specific, but they could include regression, classification, clustering, or deep learning techniques) to this data to create a trained model. The training process may involve feature extraction, model selection, hyperparameter tuning, and validation against test datasets to ensure accuracy and avoid overfitting.
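The training algorithms themselves are implementation-specific; the fragment below is a minimal, self-contained sketch (ordinary least squares on synthetic data, standard library only) of the fit-then-validate pattern described above, with a held-out set guarding against overfitting.

```python
import random

# Minimal sketch of the train/validate pattern; actual MTLF
# algorithms are implementation-specific.

random.seed(0)
# Synthetic data: a load metric y depends linearly on feature x plus noise.
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
        for x in (i / 10 for i in range(100))]
random.shuffle(data)
train, test = data[:80], data[80:]  # hold out 20% for validation

def fit_ols(points):
    """Ordinary least squares for y = a*x + b (1-D closed form)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return a, my - a * mx

def mse(model, points):
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in points) / len(points)

model = fit_ols(train)
val_error = mse(model, test)  # validation error on held-out data
```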
Once a model is trained and meets the required performance metrics, the MTLF provides it to a consumer, typically an NWDAF containing an AnLF that will perform inference, or stores it in a model repository (from Release 18, the Analytics Data Repository Function (ADRF) can hold trained ML models). The model is conveyed by reference, via a model file address from which the consumer retrieves it; 3GPP does not mandate a specific serialization format, though formats such as the Predictive Model Markup Language (PMML) or the Open Neural Network Exchange (ONNX) may be used in practice. The MTLF also manages the lifecycle of these models, including versioning, retraining triggers (e.g., based on data drift, periodic schedules, or performance degradation), and retirement. It can be configured with training policies that define objectives, data requirements, and performance thresholds.
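3GPP does not specify how this lifecycle management is implemented; the sketch below shows one plausible, entirely non-normative shape: a registry that versions published models and flags retraining when observed error degrades past a configured threshold.

```python
class ModelRegistry:
    """Illustrative (non-normative) lifecycle manager: versioning plus
    a performance-degradation retraining trigger."""

    def __init__(self, error_threshold):
        self.error_threshold = error_threshold
        self.versions = []  # list of (version, model) tuples

    def publish(self, model):
        """Store a new model under the next version number."""
        version = len(self.versions) + 1
        self.versions.append((version, model))
        return version

    def latest(self):
        return self.versions[-1]

    def needs_retraining(self, observed_error):
        # Trigger could equally be data drift or a periodic schedule;
        # here only performance degradation is modelled.
        return observed_error > self.error_threshold

registry = ModelRegistry(error_threshold=0.05)
v1 = registry.publish({"coef": [2.0, 1.0]})
v2 = registry.publish({"coef": [2.1, 0.9]})  # retrained replacement
```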
Architecturally, the MTLF is part of the broader data-driven ecosystem in 5G. It works in concert with the AnLF (which focuses on analytics inference and exposure), data and model repositories such as the ADRF, and the OAM system. The Nnwdaf_MLModelProvision and Nnwdaf_MLModelInfo services (specified at stage 3 in TS 29.520) carry trained models between the MTLF and its consumers. This separation of training (MTLF) and inference (AnLF) allows for scalable, specialized deployments in which computationally intensive training can be offloaded to dedicated platforms while lightweight inference runs closer to the network edge. The MTLF enables use cases such as predictive load balancing, anomaly detection, energy saving, slice-specific optimization, and Quality of Experience (QoE) prediction.
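The training/inference split can be pictured as a trainer that emits a serialized model artifact and an independent inference function that needs only that artifact. In the sketch below, a JSON string stands in for the model file handed between the two sides; all names are illustrative, not part of any 3GPP interface.

```python
import json

# Illustrative split: an MTLF-like trainer emits a serialized model;
# an AnLF-like consumer loads it and performs inference only.

def train():
    # Stand-in for a real training run; returns model parameters.
    return {"type": "linear", "coef": 2.0, "intercept": 1.0}

def publish(model):
    # In the real system the consumer would fetch the model from a
    # model file address; here we simply serialize it to JSON.
    return json.dumps(model)

def infer(serialized_model, x):
    # The inference side needs only the artifact, not the trainer.
    m = json.loads(serialized_model)
    return m["coef"] * x + m["intercept"]

artifact = publish(train())
prediction = infer(artifact, 3.0)
```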
Purpose & Motivation
The Model Training Logical Function was created to address the growing complexity of 5G networks and the need for intelligent, automated management. Traditional network management relied on manual configuration and rule-based automation, which could not efficiently adapt to dynamic conditions, predict issues, or optimize performance in real time. The explosion of data from network functions, devices, and services presented an opportunity to leverage machine learning, but there was no standardized way to integrate ML model training into the network architecture.
The MTLF was motivated by the vision of self-organizing networks (SON) evolving into AI-native networks. It solves the problem of how to systematically generate and update ML models using live network data within a standardized framework. Before the MTLF, ML capabilities were delivered as vendor-specific, proprietary solutions that lacked interoperability, making it difficult to ensure consistent model quality or to share models across network domains. The MTLF provides a standardized, open interface for requesting trained models, enabling analytics consumers such as an NWDAF containing an AnLF to obtain them without being tied to a specific vendor's training platform.
Its introduction in Release 17, as part of the enablers for network automation (eNA) Phase 2 work, specifically supports advanced network automation scenarios defined for the 5G System architecture (TS 23.501, TS 23.288) and the 3GPP management and orchestration framework. It allows operators to deploy closed-loop automation in which analytics insights from trained models directly drive network actions (e.g., via OAM or policy control). This is critical for managing network slicing, where each slice may require unique performance models, and for meeting the stringent latency, reliability, and efficiency demands of 5G verticals.
Key Features
- Standardized logical function for training machine learning models using 5G network data.
- Offers service-based interfaces (Nnwdaf_MLModelProvision, Nnwdaf_MLModelInfo) through which consumers such as an NWDAF containing an AnLF request and receive trained models.
- Provides trained models to consumers by reference (model file address); serialization formats such as PMML or ONNX may be used, though none is mandated by 3GPP.
- Manages full ML model lifecycle including training, validation, versioning, and triggered retraining.
- Enables data-driven network optimization, predictive maintenance, and closed-loop automation.
- Designed to work within the 5G network data analytics framework (TS 23.288) alongside the AnLF, OAM, and data repositories such as the ADRF.
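The model-provision exchange behind these features can be pictured with simplified request and notification shapes. The dicts below are hypothetical and heavily abridged; the normative data types are the OpenAPI definitions in TS 29.520.

```python
# Hypothetical, simplified shapes for a model-provision exchange.
# Illustrative only; not the normative TS 29.520 OpenAPI schema.

subscription = {
    "analyticsId": "LOAD_LEVEL",  # analytics the model should support
    "notificationUri": "https://anlf.example/notify",
    "reporting": {"period_s": 3600},
}

def handle_subscription(sub):
    """MTLF-side stub: answer with a model file address and version."""
    return {
        "analyticsId": sub["analyticsId"],
        "modelFileAddress": "https://mtlf.example/models/load_level/v1",
        "modelVersion": 1,
    }

notification = handle_subscription(subscription)
```

The consumer would then fetch the model from the returned address and load it for inference; the subscription's reporting period suggests how often refreshed models might be notified.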
Evolution Across Releases
Release 17: Initial introduction of the Model Training Logical Function as part of the NWDAF decomposition into MTLF and AnLF (TS 23.288). Defined its service-based interfaces (e.g., Nnwdaf_MLModelProvision, Nnwdaf_MLModelInfo) and established its role in training ML models to support network automation use cases.
Defining Specifications
| Specification | Title |
|---|---|
| TS 23.288 | Architecture enhancements for 5G System (5GS) to support network data analytics services |
| TS 23.501 | System architecture for the 5G System (5GS) |
| TS 23.700 | 3GPP TS 23.700 |
| TS 29.520 | 5G System; Network Data Analytics Services; Stage 3 |
| TS 29.552 | 3GPP TS 29.552 |