LSR

Late Stage Reprojection

Other
Introduced in Rel-13
A rendering technique used in Extended Reality (XR) applications over 5G systems to compensate for motion-to-photon latency. It adjusts the final image frame just before display based on the latest head/device pose, reducing perceived latency and improving visual stability.

Description

Late Stage Reprojection (LSR) is an advanced rendering and image processing technique defined within the 3GPP framework for supporting Extended Reality (XR) services over 5G networks. It specifically addresses the challenge of motion-to-photon latency—the delay between a user's movement (e.g., turning their head in VR) and the corresponding update of the image on the display. In a cloud/edge-rendered XR scenario, where complex 3D graphics are rendered on a remote server and streamed as video to a lightweight headset, this latency can cause disorientation, nausea, and a break in immersion. LSR operates as a final correction step on the client device (the XR terminal).

The architecture involves split rendering. The application server, potentially located at the network edge, renders the primary XR scene based on an initial pose (position and orientation) of the user's device received over the 5G network. This rendered frame is encoded and transmitted to the device. During the transmission and decoding time, the user's pose will have changed slightly. Instead of discarding the frame or waiting for a completely new one from the server, the LSR module on the client device takes the decoded frame and applies a geometric transformation (reprojection) to it. This transformation is calculated using the very latest, high-frequency pose data from the device's onboard sensors (gyroscopes, accelerometers). The reprojection typically involves a warping or shifting of the image pixels to align the virtual scene with the user's new viewpoint.
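The round trip described above can be sketched as a minimal client-side loop. Everything below is illustrative: the class and function names (Sensors, EdgeServer, lsr_residual) are stand-ins for this sketch, not 3GPP-defined APIs, and the pose is reduced to a single yaw angle for brevity.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    yaw_deg: float           # single-axis pose, for brevity

@dataclass
class Frame:
    render_pose: Pose        # pose the server used when rendering
    image: str               # stands in for the decoded pixel data

class Sensors:
    """Stand-in for the headset IMU: the pose drifts while a frame is in flight."""
    def __init__(self):
        self.pose = Pose(0.0)
    def advance(self, delta_deg):
        self.pose = Pose(self.pose.yaw_deg + delta_deg)
    def read(self):
        return self.pose

class EdgeServer:
    """Stand-in for the remote renderer: tags each frame with the pose it used."""
    def render(self, pose):
        return Frame(render_pose=pose, image=f"scene@{pose.yaw_deg:.1f}deg")

def lsr_residual(frame, latest_pose):
    """Pose change the client must warp away at scan-out."""
    return latest_pose.yaw_deg - frame.render_pose.yaw_deg

sensors = Sensors()
server = EdgeServer()

frame = server.render(sensors.read())  # server renders at yaw 0.0
sensors.advance(1.5)                   # head turns during encode/stream/decode
residual = lsr_residual(frame, sensors.read())
# The client warps the decoded frame by `residual` degrees instead of
# discarding it or waiting for a fresh server render.
```

The key point the sketch captures is that the frame carries the pose it was rendered with, so the client can always compute the residual against its newest sensor reading rather than trusting the frame as-is.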

How it works technically: the client device maintains a pose prediction pipeline. When a new video frame arrives from the network, the client compares the pose timestamp the server used for rendering with its current, updated pose. The LSR algorithm then computes the transformation (rotation, and sometimes translation) needed to correct the image. This is often a purely rotational 3D reprojection, which is far less computationally intensive than full re-rendering yet effective at compensating for small, rapid head movements. The reprojected frame is then sent to the display. This process happens in the final milliseconds before the screen refreshes, hence the term 'late stage.' LSR is critical to maintaining a convincing illusion of a stable virtual world: it masks the inherent latency of the wireless transmission and remote rendering pipeline, which is a key performance indicator for quality of experience in wireless XR.
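A rotation-only correction of this kind can be written as a homography H = K (R_display^T R_render) K^-1 applied to the decoded frame's pixel coordinates. The sketch below shows the math in plain NumPy under a pinhole camera model; the intrinsics and the one-degree yaw delta are illustrative values, not parameters from any 3GPP specification.

```python
import numpy as np

def rotation_y(theta_rad):
    """Rotation about the vertical (yaw) axis."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def lsr_homography(K, R_render, R_display):
    """Homography that reprojects a frame rendered at R_render so it is
    consistent with the newer pose R_display (rotation-only correction):
        H = K @ (R_display^T @ R_render) @ K^-1
    """
    R_delta = R_display.T @ R_render
    return K @ R_delta @ np.linalg.inv(K)

def warp_pixel(H, u, v):
    """Apply the homography to one pixel coordinate."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative pinhole intrinsics for a 1280x1280 eye buffer.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 640.0],
              [  0.0,   0.0,   1.0]])

# One degree of yaw accumulates between server render and scan-out.
H = lsr_homography(K, rotation_y(0.0), rotation_y(np.radians(1.0)))
u, v = warp_pixel(H, 640.0, 640.0)  # where the old principal point lands
```

In a real pipeline the same homography would drive a full-frame GPU warp (one texture lookup per output pixel) rather than a per-pixel Python loop; the point here is only that the correction reduces to a 3x3 matrix, which is why it fits in the last milliseconds before scan-out.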

Purpose & Motivation

Late Stage Reprojection was standardized in 3GPP to solve the fundamental latency problem that threatens the feasibility of wireless, cloud-based Extended Reality. High-quality XR requires immense computational power for rendering, which is ideally offloaded to powerful edge servers. However, the round-trip time for sending pose data to the server, rendering a frame, and streaming it back can easily exceed the 20ms threshold beyond which latency becomes perceptible and causes simulator sickness. Traditional video streaming has no mechanism to correct for this.

LSR exists to decouple rendering latency from perceived motion-to-photon latency. It addresses the limits of simply making the network faster: even with ultra-reliable low-latency communication (URLLC), some latency is unavoidable. LSR provides a software-based correction that operates in the final stage of the display chain, on the device itself. It was motivated by the need to enable high-quality, untethered XR experiences over 5G networks, making headsets lighter, cheaper, and more mobile by relying on network compute resources.

Historically, similar techniques were used in standalone VR headsets. 3GPP's work in Rel-13 and beyond formalized its requirements and integration into the 5G system architecture for media streaming, ensuring that the network (e.g., through quality of service parameters) and the application server can cooperate effectively with the client's LSR capability. It solves the problem of visual jitter and instability in cloud-rendered scenes, which is essential for professional, consumer, and industrial XR applications to become mainstream over cellular connections.

Key Features

  • Compensates for motion-to-photon latency in cloud-rendered XR streams.
  • Performs geometric image warping on the client device using the latest local sensor data.
  • Operates in the final display pipeline stage just before scan-out.
  • Reduces perceived latency without requiring lower network latency.
  • Enables use of lighter, less powerful XR terminals by offloading heavy rendering.
  • Defined in 3GPP specifications for interoperable XR over 5G systems.

Evolution Across Releases

Rel-13 Initial

Introduced as part of the early study on media streaming enhancements. Defined the initial concept and requirements for Late Stage Reprojection as a client-side technique to compensate for end-to-end latency in rendered video streams, particularly targeting emerging virtual reality applications over mobile networks.

Further refined the requirements and system impacts of LSR within the 5G architecture. Work focused on understanding the interaction between the network's quality of service, application server rendering, and the client's reprojection capabilities to ensure a synchronized experience.

Integrated LSR more concretely into the 5G System architecture for supporting Extended Reality (XR). Specifications defined more detailed procedures for session negotiation, where client LSR capabilities can be signaled, and network parameters can be optimized to support the split rendering model that LSR enables.

Defining Specifications

Specification   Title
TS 26.998       3GPP TS 26.998
TS 32.855       3GPP TR 32.855