ECI

Edge and Cloud Interworking

Services
Introduced in Rel-16
ECI is a framework for integrating edge computing platforms with central cloud infrastructures to enable low-latency, high-bandwidth applications. It defines the architecture and procedures for seamless service deployment and user plane routing between the edge and cloud.

Description

Edge and Cloud Interworking (ECI) is a 3GPP architectural and service framework designed to bridge distributed edge computing environments with centralized cloud data centers. It addresses the challenge of deploying applications that require both the ultra-low latency and localized data processing of the network edge and the vast scalability and computational resources of the central cloud. The ECI framework, developed within the context of 5G System (5GS) and edge computing (EDGE), standardizes how application functions can be distributed across these two domains and how the network can dynamically steer user traffic to the appropriate instance.

The architecture involves several key components: the Edge Application Server (EAS), the Edge Enabler Server (EES), and the Edge Configuration Server (ECS) in the edge domain, along with their counterparts or interworking functions in the central cloud. The core network, specifically the User Plane Function (UPF), plays a pivotal role. ECI defines procedures for EAS discovery, where a UE or an application client can discover available edge application instances based on location, capability, or service requirements. It also specifies traffic steering rules, where the Session Management Function (SMF) configures the UPF with filters to route specific data flows to a local area data network (LADN) or a central data network.
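As a rough illustration of the EAS discovery procedure described above, the sketch below models an EES holding registered EAS profiles and answering a filtered discovery query. The class and field names (`EdgeEnablerServer`, `service_kpis`, `max_latency_ms`) are simplified placeholders for illustration, not the actual TS 23.558/TS 24.558 data model:

```python
from dataclasses import dataclass, field

@dataclass
class EASProfile:
    """Minimal stand-in for an EAS profile registered with an EES."""
    eas_id: str
    endpoint: str
    service_kpis: dict  # e.g. {"max_latency_ms": 15}

@dataclass
class EdgeEnablerServer:
    """Toy EES: keeps a registry of EAS profiles and serves discovery queries."""
    registry: list = field(default_factory=list)

    def register_eas(self, profile: EASProfile) -> None:
        self.registry.append(profile)

    def discover(self, eas_id: str = None, max_latency_ms: int = None) -> list:
        """Return EAS profiles matching the (optional) discovery filters."""
        matches = []
        for p in self.registry:
            if eas_id is not None and p.eas_id != eas_id:
                continue
            if (max_latency_ms is not None
                    and p.service_kpis.get("max_latency_ms", float("inf")) > max_latency_ms):
                continue
            matches.append(p)
        return matches

ees = EdgeEnablerServer()
ees.register_eas(EASProfile("game-render", "10.0.0.5:8443", {"max_latency_ms": 15}))
ees.register_eas(EASProfile("stats-sync", "10.0.0.6:8443", {"max_latency_ms": 200}))

# An EEC-style query: only EASs meeting a 20 ms latency budget are returned.
low_latency = ees.discover(max_latency_ms=20)
```

In the real procedure the discovery request travels over the EDGE-1 interface and carries richer filters (location, service features, schedule); the sketch only captures the filter-and-match pattern.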

In operation, ECI relies on close coordination between the application layer and the 5G core network. An application provider can deploy an application with components in both the edge and the cloud. The 3GPP network exposes capabilities (via the Network Exposure Function, NEF) that allow the application to influence traffic routing. For example, in an interactive gaming service, the low-latency rendering component might be hosted at the edge while the player database and matchmaking logic reside in the cloud. ECI mechanisms ensure the UE's traffic is split accordingly: real-time game control packets are routed to the local EAS, while non-latency-critical data (such as player statistics) goes to the cloud. This interworking is managed through standardized service-based interfaces and APIs, ensuring interoperability between equipment from different vendors and cloud service providers.
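The "application influences traffic routing" step can be sketched as an AF assembling a traffic-influence request for the NEF. The shape below is loosely modelled on the Nnef_TrafficInfluence service (3GPP TS 29.522), but only a small subset of fields is shown and the field names are simplified, so treat it as an illustrative payload rather than the exact API schema:

```python
import json

def build_traffic_influence_request(af_id: str, ue_ipv4: str,
                                    flow_description: str, dnai: str) -> dict:
    """Assemble an AF traffic-influence body (illustrative, simplified fields).

    The intent: packets matching `flow_description` for this UE should be
    steered to the edge data network identified by `dnai`, while all other
    traffic keeps its default route to the central data network.
    """
    return {
        "afServiceId": af_id,
        "ipv4Addr": ue_ipv4,
        "trafficFilters": [
            {"flowId": 1, "flowDescriptions": [flow_description]},
        ],
        # DNAI (Data Network Access Identifier) names the local edge site.
        "trafficRoutes": [{"dnai": dnai}],
    }

body = build_traffic_influence_request(
    af_id="gaming-af",
    ue_ipv4="198.51.100.7",
    flow_description="permit out udp from any to 10.0.0.5 8443",
    dnai="edge-site-west",
)
payload = json.dumps(body)  # would be POSTed to the NEF's traffic-influence endpoint
```

On receipt, the NEF relays the request to the PCF/SMF, which install the corresponding steering rules in the UPF; the AF itself never touches the user plane.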

Purpose & Motivation

ECI was created to solve the problem of application silos and inefficient resource utilization in early edge computing deployments. Initially, edge computing was often envisioned as an isolated platform, leading to scenarios where applications had to be entirely deployed at the edge or entirely in the cloud, with no graceful way to combine the benefits of both. This was inefficient and limited the types of applications that could be effectively supported. The motivation for ECI stemmed from the realization that most advanced use cases—such as autonomous vehicles, industrial IoT, and augmented reality—require a hybrid compute model. These applications need immediate processing at the edge for reaction-time-critical tasks but also rely on the cloud for massive data analytics, AI model training, and centralized control.

Historically, without a standardized interworking framework, operators and enterprises faced proprietary and complex integration challenges when trying to connect edge sites to central clouds, hindering scalability and multi-vendor deployment. ECI, introduced in 3GPP Release 16 and enhanced thereafter, provides a standardized 'glue' that defines the roles, responsibilities, and interfaces between the edge and cloud domains within the 5G system. It addresses the limitations of previous approaches by formally integrating edge computing as a native capability of the 5G core network, enabling dynamic, policy-driven, and seamless service continuity as users move or as application state needs to migrate between edge and cloud resources. This unlocks the full economic and technical potential of distributed computing.

Key Features

  • Standardized architecture for integrating edge and central cloud application layers
  • Procedures for Edge Application Server (EAS) discovery and registration
  • Network-assisted traffic steering and routing control between edge and cloud data networks
  • Support for application context transfer and service continuity
  • Exposure of edge capabilities to application functions via 3GPP APIs
  • Enables hybrid application deployment models (split computing)
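The traffic-steering and split-computing features above boil down to filter matching in the user plane: the SMF installs rules, and the UPF forwards matching uplink flows to the local edge data network while everything else defaults to the central cloud. The toy classifier below illustrates that pattern only; real UPF rules (PDRs/FARs) are far richer, and the rule fields here (`dst_port`, `proto`, `route`) are invented for illustration:

```python
# Rules as installed by a (hypothetical) SMF: real-time game control on
# UDP/8443 goes to the edge; anything unmatched takes the cloud default.
uplink_rules = [
    {"proto": "udp", "dst_port": 8443, "route": "edge"},
]

def classify(packet: dict) -> str:
    """Return the route ("edge" or "cloud") for a simplified packet dict."""
    for rule in uplink_rules:
        if (packet.get("proto") == rule["proto"]
                and packet.get("dst_port") == rule["dst_port"]):
            return rule["route"]
    return "cloud"  # default: central data network

edge_route = classify({"proto": "udp", "dst_port": 8443})   # real-time flow
cloud_route = classify({"proto": "tcp", "dst_port": 443})   # e.g. player stats
```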

Evolution Across Releases

Rel-16 Initial

Introduced the foundational Edge Computing (EDGE) architecture within 5GS, laying the groundwork for ECI. Defined key architectural entities like the Edge Enabler Client (EEC), Edge Enabler Server (EES), and Edge Configuration Server (ECS). Specified initial procedures for application discovery and traffic routing towards a local edge data network.

Defining Specifications

Specification    Title
TS 22.810        3GPP TS 22.810
TS 23.558        Architecture for enabling Edge Applications
TS 24.229        IP multimedia call control protocol based on Session Initiation Protocol (SIP) and Session Description Protocol (SDP); Stage 3
TS 29.558        Enabling Edge Applications; Application Programming Interface (API) specification; Stage 3
TS 36.331        Evolved Universal Terrestrial Radio Access (E-UTRA); Radio Resource Control (RRC); Protocol specification
TS 38.331        NR; Radio Resource Control (RRC); Protocol specification