To say there’s a bit of excitement for DirectX 12 and other low-level APIs is probably an understatement. A big understatement. With DirectX 12 ramping up for a release later this year, Mantle 1.0 already in pseudo-release, and its successor Vulkan under active development, the world of graphics APIs is changing in a way not seen since the earliest days, when APIs such as Direct3D, OpenGL, and numerous vendor proprietary APIs were first released. From a consumer standpoint this change will still take a number of years, but from a development standpoint 2015 is going to be the year that everything changes for PC graphics programming.

So far much has been made about the benefits of these APIs, the potential performance improvements, and ultimately what can be done and what new things can be achieved with them. The true answer to those questions is that this is going to be a multi-generational effort; until games are built from the ground up for these APIs, developers won’t be able to make full use of their capabilities. Even then, the coolest tricks will take several years to develop, as developers become better acquainted with these new APIs, their idiosyncrasies, and the capabilities of the underlying hardware when interfaced with these APIs. In other words, right now we’re just scratching the surface.

The first DirectX 12 games are expected towards the end of the year, and in the meantime Microsoft and their hardware partners have been ramping up the DirectX 12 ecosystem, hammering out the API implementation in Windows 10 while the hardware vendors write and debug their WDDM 2.0 drivers. Meanwhile, we’ve seen a slow trickle of software designed to showcase DirectX 12 features in a proof of concept manner. A number of internal demos exist, and we saw the first semi-public DirectX 12 software release last month with our look at Star Swarm.

This week the benchmarking gurus over at Futuremark are releasing their own first run at a DirectX 12 test with their latest update for the 3DMark benchmark. Futuremark has been working away at DirectX 12 for some time – in fact they were the first partner to show DirectX 12 code in action at Microsoft’s 2014 DX12 unveiling – and now they are releasing their first DirectX 12 project.

In keeping with the general theme of the demos we’ve seen so far, Futuremark’s new DirectX 12 release is another proof of concept test. Dubbed the 3DMark API Overhead Feature Test, it is a purely synthetic benchmark designed to showcase the draw call benefits of the new API even more strongly than earlier tests. Whereas Star Swarm was a best-case scenario test within the confines of a realistic graphics workload, the API Overhead Feature Test is a proper synthetic benchmark designed to test one thing and one thing only: how many draw calls a system can handle. The end result, as we’ll see, showcases just how great the benefits of DirectX 12 are in this situation, allowing for an order of magnitude improvement, if not more.

To do this, Futuremark has written a relatively simple test that draws out a very simple scene with an ever-increasing number of objects in order to measure how many draw calls a system can handle before it becomes saturated. As expected for a synthetic test, the underlying rendering task is very simple – render an immense number of building-like objects at both the top and bottom of the screen – and the bottleneck is in processing the draw calls. Generally speaking, under this test you should either be limited by the number of draw calls you can generate (CPU limited) or by the number of draw calls you can consume (GPU command processor limited), and not by the GPU’s actual rendering capabilities. The end result is that the API Overhead Feature Test can push an even larger number of draw calls than Star Swarm could.
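To give a rough sense of where that overhead accrues, below is a minimal sketch of a Direct3D 12-style submission loop. This is our own illustration, not Futuremark’s code; it assumes a prebuilt pipeline state and a root signature whose first parameter is a single 32-bit root constant. The point is that each draw is just a couple of cheap command-recording calls, so the limiting factor becomes how quickly the CPU can record draws and how quickly the GPU’s command processor can consume them.

```cpp
// Minimal sketch of a draw call saturation loop (illustrative only; the
// pipeline state, root signature, and geometry bindings are assumed done).
#include <d3d12.h>
#include <cstdint>

void RecordDraws(ID3D12GraphicsCommandList* cmdList,
                 uint32_t drawCallsThisFrame,
                 uint32_t indicesPerMesh) // 112-127 triangles ~ 336-381 indices
{
    for (uint32_t i = 0; i < drawCallsThisFrame; ++i)
    {
        // A single root constant selects which procedurally generated mesh
        // instance to draw; no other state changes between draws, so the
        // test isolates per-call overhead rather than rendering cost.
        cmdList->SetGraphicsRoot32BitConstant(0 /*root parameter*/, i, 0 /*offset*/);
        cmdList->DrawIndexedInstanced(indicesPerMesh, 1, 0, 0, 0);
    }
}
```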

To showcase the difference between the various APIs, this test is available in DirectX 12 and Mantle modes, along with two different DirectX 11 modes: standard DirectX 11 single-threading, and DirectX 11 multi-threading. The latter has a checkered history – it never worked as well in the real world as initially hoped – and in practice only NVIDIA supports it to any decent degree. Regardless, as we’ll see, DirectX 12’s throughput puts even DX11MT to shame.
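For reference, DX11MT is built on Direct3D 11’s deferred context model, sketched below. This is the standard pattern rather than anything specific to Futuremark’s implementation: worker threads record commands into deferred contexts, and the immediate context replays the finished command lists. That replay still happens on a single thread, which is part of why the feature never scaled as hoped.

```cpp
// Sketch of the D3D11 deferred context pattern behind DX11MT (illustrative).
#include <d3d11.h>

// On a worker thread: record draws into a deferred context, then bake them
// into a command list that can be handed back to the main thread.
void RecordOnWorker(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);
    // ... issue Draw/DrawIndexed calls here, as on the immediate context ...
    deferred->FinishCommandList(FALSE, outList);
    deferred->Release();
}

// On the main thread: replay the baked command list. This replay is still
// serialized on the immediate context, limiting how well DX11MT scales.
void SubmitOnMain(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    immediate->ExecuteCommandList(list, TRUE);
    list->Release();
}
```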

Futuremark’s complete technical description is posted below:

The test is designed to make API overhead the performance bottleneck. The test scene contains a large number of geometries. Each geometry is a unique, procedurally-generated, indexed mesh containing 112–127 triangles.

The geometries are drawn with a simple shader, without post processing. The draw call count is increased further by drawing a mirror image of the geometry to the sky and using a shadow map for directional light.

The scene is drawn to an internal render target before being scaled to the back buffer. There is no frustum or occlusion culling to ensure that the API draw call overhead is always greater than the application side overhead generated by the rendering engine.

Starting from a small number of draw calls per frame, the test increases the number of draw calls in steps every 20 frames, following the figures in the table below.

To reduce memory usage and loading time, the test is divided into two parts. The second part starts at 98304 draw calls per frame and runs only if the first part is completed at more than 30 frames per second.

Draw calls per frame    Draw call increment per step    Accumulated duration in frames
192 – 384               12                              320
384 – 768               24                              640
768 – 1536              48                              960
1536 – 3072             96                              1280
3072 – 6144             192                             1600
6144 – 12288            384                             1920
12288 – 24576           768                             2240
24576 – 49152           1536                            2560
49152 – 98304           3072                            2880
98304 – 196608          6144                            3200
196608 – 393216         12288                           3520
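The pattern in the table is regular enough to verify in a few lines: each range doubles the draw call count over 16 steps (the increment is always 1/16 of the range’s starting value), and at 20 frames per step each range adds 320 frames to the accumulated duration. A small self-contained sketch of the ramp, based only on the figures above:

```cpp
// Reconstructs the draw call ramp from the table: start at 192 calls/frame,
// double each range, and step up by range/16 every 20 frames.
#include <cstdio>

int main()
{
    int accumulatedFrames = 0;
    for (int start = 192; start <= 196608; start *= 2)
    {
        int increment = start / 16;   // 192 -> 12, 384 -> 24, ... 196608 -> 12288
        accumulatedFrames += 16 * 20; // 16 steps per range, 20 frames per step
        std::printf("%6d - %6d  increment %5d  accumulated %4d frames\n",
                    start, start * 2, increment, accumulatedFrames);
    }
    return 0;
}
```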
Comments

  • Mannymal - Sunday, March 29, 2015 - link

    The article fails to address for the layman how exactly this will impact gameplay. Will games simply look better? Will AI get better? Will maps be larger and more complex? All of the above? And how much?
  • Ryan Smith - Sunday, March 29, 2015 - link

    It's up to the developers. Ultimately DX12 frees up resources and removes bottlenecks; it's up to the developers to decide how they want to spend that performance. They could do relatively low draw calls and get some more CPU performance for AI, or they could try to do more expansive environments, etc.
  • jabber - Monday, March 30, 2015 - link

    Yeah, seems to me that DX12 isn't so much about adding new eye-candy; it's about a long-time-coming total back-end refresh to get rid of the old DX crap and bring it up to speed with modern hardware.
  • AleXopf - Sunday, March 29, 2015 - link

    I would love to see what effect DirectX 12 has on the CPU side. All the articles so far have been about CPU scaling with different GPUs. Would be nice to see how AMD compares to Intel with a better use of their higher core count.
  • Netmsm - Monday, March 30, 2015 - link

    AMD is tech's hero ^_^. Always has been.
  • JonnyDough - Tuesday, March 31, 2015 - link

    Great! Now all we need are driver hacks to make our overpriced non-DX12 video cards worth their money!
  • loguerto - Friday, April 3, 2015 - link

    AMD masterpiece. Does this superiority have something to do with AMD Asynchronous Shaders? I know that NVIDIA's Kepler and Maxwell asynchronous pipeline engines are not as powerful as the one in the GCN architecture.
  • Clorex - Wednesday, April 22, 2015 - link

    On page 4:
    "Intel does end up seeing the smallest gains here, but again even in this sort of worst case scenario of a powerful CPU paired with a weak CPU, DX12 still improved draw call performance by over 3.2x."

    Should be "powerful CPU paired with a weak GPU".
  • akamateau - Thursday, April 30, 2015 - link

    FINALLY THE TRUTH IS REVEALED!!!

    AMD A6-7400K CRUSHES INTEL i7 IGP by better than 100%!!!

    But Anand is also guilty of a WHOPPER of a LIE!

    Anand uses an Intel i7-4960X. NOBODY uses RADEON with an Intel i7 CPU. But rather than use either an AMD FX CPU or an AMD A10 CPU they decided to degrade AMD's scores substantially by using an Intel product which is not optimized to work with Radeon. Intel i7 also is not GCN or HSA compatible, nor can it take advantage of Asynchronous Shader Pipelines either. Only an IDIOT would feed a Radeon GPU with an Intel CPU.

    In short Anand's journalistic integrity is called into question here.

    Basically RADEON WOULD HAVE DESTROYED ALL nVIDIA AND INTEL COMBINATIONS if Anand benchmarked Radeon dGPU with AMD silicon. By Itself A6 is staggeringly superior to Intel i3, i5, AND i7.

    Ryan Smith & Ian Cutress have lied.

    As it stands A10-7700k produces 4.4 MILLION drawcalls per second. At 6 cores the GTX 980 in DX11 only produces 2.2 MILLION draw calls.

    DX12 enables a $150 AMD APU to CRUSH a $1500.00 Intel/nVidia gaming setup that runs DX11.

    Here is the second lie.

    AMD Asynchronous Shader Pipelines allow for 100% multithreaded processing in the CPU feeding the GPU, whether it is an integrated APU or an 8-core FX feeding a GPU. What Anand should also show is 8-core scaling using an AMD FX processor.

    Anand will say that they are too poor to use an AMD CPU or APU set up. Somehow I think that they are being disingenuous.

    NO INTEL/nVidia combination can compete with AMD using DX12.
