Google has informed developers of a vulnerability in WebRTC versions earlier than M102. Impacts, workarounds, and updates can be found here.
The default Pixel Streaming configuration is designed to balance image quality, latency, and resiliency of the stream under the assumption that it will be deployed on the internet and consumed by users in a variety of network conditions. If this assumption matches your use case, you should not need to modify your Pixel Streaming configuration. However, Pixel Streaming is a flexible technology suited to many other use cases. This guide explains how image quality, latency, and resiliency are achieved in Pixel Streaming, and gives guidance for cases where optimizing for one of these is more important than a balanced stream.
WebRTC
The balancing of image quality, latency, and resiliency in Pixel Streaming is largely handled by a technology called WebRTC. WebRTC is widely used in video conferencing and real-time streaming; examples include Facebook Messenger, Discord, Amazon Chime, and Google Stadia. WebRTC is designed to facilitate low-latency, fault-tolerant multimedia and data transmission between multiple participants.
In the case of Pixel Streaming, the WebRTC participants are the Unreal Engine application using the Pixel Streaming plugin and some number of WebRTC-capable clients, typically web browsers. Because WebRTC serves such a wide variety of use cases, it does not have modes or predefined configurations for specific scenarios. Instead, WebRTC tries to balance quality, latency, and resiliency against network conditions and resource constraints.
However, we have exposed a number of parameters in Pixel Streaming that allow users who have specific requirements around image quality, latency, and resiliency to favor one of those.
Note that prioritizing any one of these does impact the other two; it is up to you to decide whether these tradeoffs are acceptable in your use case. In the following sections we give guidance on how to:
- Maintain image quality despite network conditions
- Achieve the lowest possible latency
- Make the stream resilient to poor network conditions
Image Quality
The image quality of the video stream is ultimately determined by how much compression is used when the Unreal Engine imagery is encoded before being transmitted by WebRTC. This compression occurs inside the Pixel Streaming application and by default is entirely controlled by WebRTC.
WebRTC Encoder Bitrate Adaptation
WebRTC will repeatedly determine the available network bandwidth between the Pixel Streaming application and the WebRTC client, calculate a suitable bitrate value and then update the video stream encoder with that latest bitrate estimate. The video stream encoder will then use that bitrate as an upper bound and will not produce an encoded image that exceeds that bitrate.
This system produces highly compressed images (e.g. blocky and pixelated) when network conditions are poor, and less compressed images when network conditions can support a high quality video stream. In other words, this system adapts the compression of the video stream to network conditions.
Maintain Image Quality Despite Network Conditions
Answer: Use -PixelStreamingEncoderMaxQP=N
(where N is an integer between 1 and 51, inclusive).
We elaborate on the meaning of QP and the tradeoff that using this parameter introduces below.
Ultimately the only factor truly determining the image quality of the video stream is the compression performed by the video stream encoder. The compression of any given frame is measured using a metric called the "quantization parameter" (QP). In Pixel Streaming the encoders we use have a QP range of 1 - 51 (inclusive), where:
- 1 is the best quality/least compressed encoding
- 51 is the worst quality/most compressed encoding
By default we do not restrict this QP range, which means the video stream encoder can select an appropriate QP based on the target bitrate that WebRTC has passed it. However, in the case where we have an application that must never produce poor imagery (e.g. a luxury product configurator) then we may wish to restrict the QP range that the encoder can use.
If MaxQP is restricted such that the produced bitrate exceeds what the network can deliver or exceeds -PixelStreamingWebRTCMaxBitrate, then the streamed frame rate will be reduced as frames are dropped.
Pixel Streaming video encoder QP can be controlled with the following launch arguments:
- -PixelStreamingEncoderMinQP=
- -PixelStreamingEncoderMaxQP=
Typically, MaxQP is the only parameter you will need to change to bound the image quality. The MaxQP that is suitable for your application requires some experimentation as it depends on how compressible your application visuals are (particularly in the presence of movement). However, in our experience a MaxQP of 20 is acceptable for most users that wish to put an upper bound on how much compression they are willing to accept.
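For example, a hypothetical packaged application (MyApp.sh is a placeholder name, not part of Pixel Streaming) could combine an upper QP bound with the bitrate cap mentioned above:

```
# Illustrative launch command; MyApp.sh is a placeholder for your packaged build.
# Never compress frames beyond QP 20; if the resulting bitrate exceeds the
# network's capacity or the cap below, frames will be dropped instead.
# -PixelStreamingWebRTCMaxBitrate is in bits per second (20 Mbps here, a value
# chosen purely for illustration).
./MyApp.sh -PixelStreamingEncoderMaxQP=20 -PixelStreamingWebRTCMaxBitrate=20000000
```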
Transmitted bitrate and QP can be tracked using the in-page settings/stats panel that Pixel Streaming ships with, or by using chrome://webrtc-internals in Chromium-based browsers or about:webrtc in Firefox.
The following images illustrate the impact that QP has on image quality and bitrate:

[Images: QP impact on image quality and bitrate]
Note that the relationship between QP and bitrate is logarithmic: a change in QP from 4 to 5, for example, represents a much larger difference in bitrate than a change from 20 to 21.
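As a rough illustration, assume the common rule-of-thumb model for H.264-style encoders in which bitrate roughly halves for every increase of 6 in QP, i.e. B(QP) ≈ B₀ · 2^(−QP/6) (an illustrative model, not a figure from the Pixel Streaming documentation). Under that model, B(4) − B(5) ≈ 0.07 B₀, whereas B(20) − B(21) ≈ 0.01 B₀: the same one-step change in QP corresponds to roughly six times the bitrate difference at the low end of the range.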
Latency
Latency in Pixel Streaming is determined by a combination of factors, some within your control and some not. While you cannot control the end user's device hardware, the public internet, or the speed of light, there are a number of other factors you can control that impact Pixel Streaming's latency. We highlight these factors in the following section.
Achieve the Lowest Possible Latency
In order to minimize latency you may wish to adjust the following factors; however, be aware that some of these adjustments may impact stream quality and resiliency.
Latency Factors | Guidance |
---|---|
Selected video encoder. | Do not use the experimental VP8/VP9 software encoders as these introduce more latency than the hardware accelerated H.264 encoders. |
Geographic location of your application. | Host the Pixel Streaming application as geographically close to the target users as possible. This may require hosting in multiple regions. |
The hardware you use to host your application. | We recommend hardware that supports hardware accelerated H.264 encoding on the GPU. Additionally, we recommend profiling your application on your target CPU to ensure usage is not at 100% as this can stall the WebRTC transmission thread. |
Maximum bitrate and resolution. | Reducing resolution and maximum bitrate can reduce data transmission and encoding complexity, which makes encoding, packetization, and decoding faster. This latency reduction is usually not worth the tradeoff in quality. |
Synchronization of audio and video. | If you are willing to accept audio/visual desync then you can improve latency with -PixelStreamingWebRTCDisableAudioSync which will transmit audio and video in separate streams. |
Disable audio. | If you don't need audio, save bandwidth by disabling its transmission with -PixelStreamingWebRTCDisableReceiveAudio and -PixelStreamingWebRTCDisableTransmitAudio . |
Motion blur and scene complexity. | Disabling motion blur or any effect that increases visual complexity can, in some scenes, significantly decrease encoding complexity, resulting in a lower bitrate. |
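For example, if your use case does not need audio at all, the two audio flags from the table above can be combined in a single launch command (MyApp.sh is a placeholder name):

```
# Illustrative launch command; MyApp.sh is a placeholder for your packaged build.
# Disable both receiving and transmitting audio to save bandwidth.
# If you do need audio, -PixelStreamingWebRTCDisableAudioSync instead trades
# audio/video synchronization for lower latency.
./MyApp.sh -PixelStreamingWebRTCDisableReceiveAudio -PixelStreamingWebRTCDisableTransmitAudio
```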
In general, the biggest latency reductions will come from geographic proximity and the quality of the network between the Pixel Streaming application and the user.
Resiliency
In this context, resiliency means how stable the stream is in the presence of packet loss, network jitter, and data corruption. WebRTC already has a number of internal, dynamically adjusted mechanisms that it uses to manage stream resiliency. For example, WebRTC can increase the size of the "jitter buffer" it uses to store received packets, compensating for network delays and retransmissions at the cost of increased latency. While the jitter buffer is not directly controllable, there are other factors we can control during video stream encoding to increase stream resiliency through data redundancy.
Make the Stream Resilient to Poor Network Conditions
Video stream resiliency in Pixel Streaming can be increased by adjusting the following:
Resiliency Factors | Guidance |
---|---|
Encoder keyframe interval | Sending keyframes allows the stream and decoder to recover after heavy data loss. The interval at which keyframes are sent can be controlled using -PixelStreaming.Encoder.KeyframeInterval . Note that keyframes take more bandwidth than normal frames, so if packet loss is caused by network saturation, sending more keyframes may not help. |
Encoder intraframe refresh | Video stream recovery information can be encoded across multiple frame slices. This information makes the stream more resilient in the presence of data loss; however, it takes more bandwidth for the entire stream and introduces a scanline-type artifact when stream recovery occurs. Note that this option is only available on NVIDIA GPUs at this time and can be enabled in Pixel Streaming using -NVENCIntraRefreshPeriodFrames=N and -NVENCIntraRefreshCountFrames=M (where N is the number of frames between intraframe refreshes and M is the number of frames encoded with intraframe refresh data). |
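As an illustration, the flags from the table above could be combined as follows; the application name and the specific values are hypothetical, and suitable values depend on your frame rate and expected loss patterns:

```
# Illustrative launch command; MyApp.sh and all values are placeholders.
# Send a keyframe every 120 frames so the decoder can recover after heavy loss.
# On NVIDIA GPUs, re-send intra-refresh data every 60 frames (N), spreading it
# across 5 consecutive frames (M).
./MyApp.sh -PixelStreaming.Encoder.KeyframeInterval=120 -NVENCIntraRefreshPeriodFrames=60 -NVENCIntraRefreshCountFrames=5
```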
Generally, stream resiliency is mostly impacted by network quality and the amount of data being transmitted. Therefore, if it is acceptable for your Pixel Streaming application to trade decreased quality for increased resiliency, then reducing the amount of data transmitted by leaving the QP range unrestricted or by reducing your application resolution are also viable options.
Optimizing Your Application for Pixel Streaming
The following are additional suggestions for optimizing your application for Pixel Streaming:
- The presence of color banding artifacts can be greatly reduced by adding a post process that introduces film grain into your scene; however, this will increase bandwidth usage.
- If you can afford extra latency and you are not supporting multiple peers per Pixel Streaming application, you may wish to experiment with the experimental VP8/VP9 software encoders using -PixelStreamingEncoder= , as they produce better quality encodings than the H.264 encoders at the same bitrate.
- To run many Pixel Streaming applications on a single GPU (multi-tenancy), you will have to profile your application heavily, or accept a reduced frame rate/resolution.
- If you intend to run your application at scale in the cloud, it will be much simpler and cheaper if your Unreal application is built for Linux, due to the better support for Linux in technologies like Kubernetes.