This week Rise of the Tomb Raider released a DirectX 12 API update. It joins Hitman, Ashes of the Singularity, and Descent Underground as DX12-enabled games. Since its release, gamers’ reactions to performance have been varied, and it has become a hot topic among enthusiasts in discussion forums. Does it signify the leap in performance on existing hardware that Microsoft claimed? Well, yes and no: with PC hardware configurations so varied, the results at the moment might not be what some expected. My own experiences taught me to temper expectations. Remember, the DX12 API is about reducing driver overhead and allowing CPU usage to expand. You can reduce driver overhead by using DX12 calls, but to maximize CPU/GPU usage, game engines need to be written to do so.
Test System: Intel i7-4790K – 16GB RAM – Windows 10 Pro – 2x Gigabyte G1 GTX 970 – Nvidia Drivers 364.51
One thing to note is that single-GPU users should see an immediate boost in performance regardless of settings. If you are like me, you set quality settings to stay as close to a 60 FPS average as possible. In most cases, titles with DirectX 12 enabled tend to show an increase in FPS at existing settings. Under DX12, certain quality settings can be raised while still maintaining the desired 60 FPS average, so that’s a benefit. Descent Underground is an Unreal 4-based game in Early Access; Unreal 4 does not utilize SLI/Crossfire by default. Initial tests showed a small 5 FPS performance boost using DirectX 12, though recent builds have altered those findings. This could be due to the various lighting and effect implementations being worked into the game. For example, explosions and gunfire can tank FPS on DX11 down to 30-32 FPS, whereas DX12 hovers closer to 45-50 FPS regardless of onscreen changes. Constant build changes and content additions make it difficult to gauge performance improvements; nevertheless, it is clear DX12 can improve Descent’s advanced lighting effects to some degree.
Single-GPU configurations should notice improvements in games like Rise of the Tomb Raider and Hitman (although Hitman’s DX12 implementation is currently raw and prone to problems). Running either game at 2560×1600 with all options maxed, except AF set to x4 and Texture Quality to High, a single GTX 970 performs poorly, averaging 30-32 frames per second with drops down to 20. In Rise of the Tomb Raider, switching to the newer API raises this to around a 40 FPS average with lows of 33-35 FPS.
DirectX 12 makes Hitman not only playable but impressive to look at. Under the DirectX 11 API with ultra-level detail (textures still on High and AF x4), the game plays like a flipbook animation on my system: there are constant loading hitches, and in crowded areas performance plummets to 20 FPS. Using DirectX 12, the game maintains a fluid 45-55 FPS. That framerate may not be ideal for some purists, but I found the game at these settings gorgeous and eye-catching. I did run into severe issues switching between the two APIs for benchmarks and playtesting; at one point I needed to remove my drivers using DDU and reinstall to clear a DX11 corruption issue that arose with both Tomb Raider and Hitman.
Rise of the Tomb Raider comes with a caveat for me. Unlike Descent and Hitman, the game will fully utilize SLI configurations. However, neither Tomb Raider nor Descent Underground utilizes one of DirectX 12’s more creative features: asynchronous compute/shading. Worse yet, that feature is not currently available to Nvidia owners at all. The result is DX11 with SLI outperforming single-GPU DX12 by a large margin with no loss in image quality. These titles are updating their DX11 rendering paths to DX12 ones and offering performance gains, but they are only scratching the surface of DirectX 12’s benefits. As an Nvidia card owner, I am missing out on those gains for the moment, but I can still use SLI in certain titles.
Ashes of the Singularity is designed from the ground up to utilize DirectX 12’s features, including asynchronous compute/shading. I pushed settings beyond the defaults: glare on medium, no AA, point lights on high, textures and shadows on medium. The results have been surprising and informative. The heavy-batches test averages 35.4 FPS on DX11 versus 36.8 on DX12 with a single GPU, and only 39.7 with dual GPUs. What the benchmark charts do show, however, is that the GTX 970s are limiting potential CPU performance: CPU utilization floats between 50-80% during benchmark runs. As mentioned, async compute/shading is not enabled in Nvidia’s latest drivers (per Anandtech), and there is no indication Nvidia will have it ready when the full retail version of Ashes is released. As with my experience with SLI, I’ll have to wait on Nvidia’s development process to get the “full” experience, which is disappointing.
The most enjoyable part of games is playing them, and yet tinkering with settings to make them look their best becomes an unavoidable side distraction for me. There are limits to the budget I will allow for gaming hardware, and DirectX 12 comes with the promise of pushing past those limits without changing existing hardware. Real-world expectations have to be tempered, though: this is a new API, and its efficiency is in its infancy. The Vulkan API is another contender in the performance game, but so far only The Talos Principle has implemented a beta edition to test. It is clear that low-overhead APIs will become the mainstay; however, the next gaming “leap” may be a ways off. Asynchronous GPU load balancing holds promise, and may even provide a reason to mix and match graphics cards in a PC configuration for maximum gains.
Further, will Nvidia implement ACE support in their drivers that benefits current 700- and 900-series card owners? Is their Pascal chipset doomed to remain in ACE limbo? Some vocal users on Reddit claim that Nvidia’s hardware architecture prevents efficient asynchronous compute/shading. If that is the case, how will Nvidia maintain its advantage over the more efficient AMD GPUs? AMD APUs/GPUs have been designed for low-overhead APIs for some time now, starting with their own Mantle release years back. Could the future of DX12 gaming also represent a shift toward AMD?
Rushing out for a Team Red card based on posted performance charts may not be wise either; Nvidia’s DX12 driver support needs work, and both card giants are expected to release new technologies in the coming months that improve DX12 efficiency. I am of a mind to wait for next-gen GPUs instead of ever fussing with SLI again. I would rather have a card capable of rendering all my games in 4K on release day than wait months for SLI support to catch up. Multi-GPU may yield benefits under DX12 for VR gaming, since its requirement is 90 FPS or more per eye, but that too is uncertain. Therefore, a “wait and see” approach over the course of this year seems the wiser option.

Source: Anandtech