AMD has released the source code of FSR 2.0 on GPUOpen, available for anyone to download and use – part of its commitment to making FSR fully open source. The download includes all the necessary APIs and libraries for integrating the upscaling algorithm into DirectX 12 and Vulkan-based titles, as well as a quick-start checklist. According to AMD, DirectX 11 support needs to be discussed with AMD representatives, suggesting that DirectX 11 is either not officially supported or more difficult to implement.
Implementation of FSR 2.0 will apparently take developers anywhere from three days to four weeks (or more), depending on the features supported in the game engine. FSR 2.0 uses temporal upscaling, which requires additional data inputs (motion vectors, depth buffers, and color buffers) to produce a quality image. Developers will need to add these inputs to their engine if they are not already available.
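To make those requirements concrete, here is a minimal sketch of a per-frame FSR 2.0 dispatch using the DirectX 12 backend of the published ffx-fsr2-api. The structure and function names follow the public FSR 2.0 headers but should be verified against the GPUOpen release; the resource handles, jitter values, and camera parameters are placeholders for an engine's own data.

```cpp
// Sketch of feeding FSR 2.0 its required per-frame inputs (color, depth, motion
// vectors) via the DirectX 12 backend of the ffx-fsr2-api. Names are taken from
// the public headers; verify against the GPUOpen download before relying on them.
#include <d3d12.h>
#include "ffx_fsr2.h"
#include "dx12/ffx_fsr2_dx12.h"

void DispatchFsr2(FfxFsr2Context* context, ID3D12GraphicsCommandList* cmdList,
                  ID3D12Resource* colorBuffer, ID3D12Resource* depthBuffer,
                  ID3D12Resource* motionVectors, ID3D12Resource* upscaledOutput,
                  uint32_t renderWidth, uint32_t renderHeight,
                  float jitterX, float jitterY, float deltaTimeMs)
{
    FfxFsr2DispatchDescription dispatch = {};
    dispatch.commandList = ffxGetCommandListDX12(cmdList);

    // The extra inputs temporal upscaling needs beyond the color buffer:
    dispatch.color         = ffxGetResourceDX12(context, colorBuffer,   L"FSR2_Color");
    dispatch.depth         = ffxGetResourceDX12(context, depthBuffer,   L"FSR2_Depth");
    dispatch.motionVectors = ffxGetResourceDX12(context, motionVectors, L"FSR2_MotionVectors");
    dispatch.output        = ffxGetResourceDX12(context, upscaledOutput, L"FSR2_Output",
                                                FFX_RESOURCE_STATE_UNORDERED_ACCESS);

    // Per-frame camera jitter and the (lower) render resolution.
    dispatch.jitterOffset.x      = jitterX;
    dispatch.jitterOffset.y      = jitterY;
    dispatch.motionVectorScale.x = (float)renderWidth;
    dispatch.motionVectorScale.y = (float)renderHeight;
    dispatch.renderSize.width    = renderWidth;
    dispatch.renderSize.height   = renderHeight;
    dispatch.frameTimeDelta      = deltaTimeMs;

    // Placeholder camera parameters; use the engine's real values.
    dispatch.cameraNear             = 0.1f;
    dispatch.cameraFar              = 1000.0f;
    dispatch.cameraFovAngleVertical = 1.0f;

    ffxFsr2ContextDispatch(context, &dispatch);
}
```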
Games that already support 2.0 versions of DLSS are the easiest to integrate, according to AMD, and usually require less than three days of development time. Next up are UE4 and UE5 titles using the new FSR 2.0 plugin. Games that support decoupled display and rendering resolutions sit in the middle of AMD’s “development timeline,” which includes most games that support Temporal Anti-Aliasing (TAA). Finally, games that lack the inputs FSR 2.0 requires take four weeks or more.
Game developers need to implement FSR 2.0 right in the middle of the frame rendering pipeline, as it completely takes over temporal anti-aliasing duties. This means any post-processing effects that depend on anti-aliasing must be handled later in the pipeline, after the FSR 2.0 upscaling pass.
At the beginning of the pipeline sit the rendered effects that will be upscaled, as well as post-processing effects that don’t require anti-aliasing. FSR 2.0 upscaling comes right in the middle, after which post-upscale effects and post-processing that relies on anti-aliasing are handled. Finally, HUD rendering takes place after everything else is done.
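As a rough illustration of that ordering, here is a simplified frame loop. Every function in it is a hypothetical engine hook, not part of the FSR 2.0 API, and the exact split of effects before and after the upscale will depend on the engine.

```cpp
// Hypothetical engine hooks (placeholders only, not FSR 2.0 API calls).
void RenderSceneAtRenderResolution();
void ApplyPreUpscalePostEffects();
void DispatchFsr2();
void ApplyPostUpscalePostEffects();
void RenderHud();

// Simplified frame loop showing where FSR 2.0 sits in the pipeline.
void RenderFrame()
{
    RenderSceneAtRenderResolution();  // scene rendered at the lower render resolution
    ApplyPreUpscalePostEffects();     // post-processing that does not need anti-aliasing
    DispatchFsr2();                   // FSR 2.0 upscale + anti-aliasing, replacing the TAA pass
    ApplyPostUpscalePostEffects();    // post-processing that relies on the anti-aliased image
    RenderHud();                      // HUD/UI drawn last, at display resolution
}
```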
AMD says machine learning is overrated
Perhaps the most controversial aspect of AMD’s GPUOpen article is its take on machine learning. According to AMD, machine learning isn’t a prerequisite for achieving good image quality; it is often just used to decide how to combine previous frames into the upscaled image, and that’s it. In other words, there is no AI algorithm actually recognizing shapes or objects in a scene, which is what we would expect from an “AI upscaler”.
This statement is a direct attack on Nvidia’s Deep Learning Super Sampling (DLSS) technology, as well as Intel’s forthcoming XeSS upscaling algorithm – both of which are AI-based upscalers. Nvidia in particular has boasted about the AI underpinnings of DLSS, suggesting they are a necessity for producing native image quality.
However, we cannot substantiate AMD’s claim that machine learning is only used to combine previous frame data and not to recognize objects in the actual scene. Nvidia has explained that the AI training for DLSS uses pairs of lower- and higher-resolution images, and that DLSS 2.0 and later combine this with depth buffers and motion vectors. Pinning down exactly what a trained neural network does and doesn’t do is not really possible with most machine learning algorithms.
Regardless, AMD has shown with FSR 2.0 that you don’t need machine learning hardware (i.e. Nvidia’s Tensor cores or Intel’s forthcoming Matrix engines) to produce native image quality. FSR 2.0 proved to be almost as good as DLSS 2.x in the tests we ran in both God of War and Deathloop, and more importantly, it can run on everything from current-gen RX 6000 and RTX 30-series GPUs to cards like the GTX 970 that launched back in 2014.
While we give Nvidia’s DLSS a slight edge in image quality, its restriction to RTX cards potentially makes it far less useful for gamers. In the future, any game that supports DLSS 2.x or FSR 2.0 will hopefully also support the other upscaling solution, giving all users access to one feature or the other.