Other Notes

Before jumping into our results, let’s quickly talk about testing.

For our test we are using the latest version of the Windows 10 Technical Preview – build 10041 – along with the latest drivers from AMD, Intel, and NVIDIA. In fact, for DirectX 12 testing these packages are the minimum versions the test supports. Meanwhile 3DMark does of course also run on Windows Vista and later, however on Windows Vista/7/8 only the DirectX 11 and Mantle tests are available, as those are the only APIs those operating systems support.

From a test reliability standpoint the API Overhead Feature Test (or as we’ll call it from now on, the AOFT) is generally reliable under DirectX 12 and Mantle. However we would like to note that we have found it to be somewhat unreliable under DirectX 11. DirectX 11 scores have varied widely at times, and we’ve seen one configuration flip between 1.4 million and 1.9 million draw calls per second based on factors we have been unable to pin down.

Our best guess right now is that the variability comes from the much greater overhead of DirectX 11, and consequently all of the work that the API, video drivers, and OS are undertaking in the background. The DirectX 11 results are good enough for what the AOFT has set out to do – showcase just how much faster DX12 and Mantle are – but they have a much higher degree of variability than our standard tests and should be treated accordingly.
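
One reasonable way to handle this sort of run-to-run variance is to repeat the DX11 test several times and report the median and spread rather than any single run. Here is a minimal sketch of that bookkeeping; RunDx11Trial() is a hypothetical stand-in for a real AOFT run, and the values it returns are placeholders in the 1.4M–1.9M range mentioned above, not real output:

```cpp
// Minimal sketch: take the median of several benchmark runs rather than
// trusting a single noisy DX11 result.
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for one AOFT DX11 run (draw calls/second).
// The values below are placeholders, not measured results.
double RunDx11Trial(int i)
{
    static const double kFakeScores[] = {1.42e6, 1.88e6, 1.51e6, 1.86e6, 1.47e6};
    return kFakeScores[i % 5];
}

int main()
{
    std::vector<double> scores;
    for (int i = 0; i < 5; ++i)     // multiple runs smooth out DX11 jitter
        scores.push_back(RunDx11Trial(i));

    std::sort(scores.begin(), scores.end());
    std::printf("median: %.2fM draws/s, spread: %.2fM-%.2fM\n",
                scores[scores.size() / 2] / 1e6,
                scores.front() / 1e6, scores.back() / 1e6);
    return 0;
}
```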

Meanwhile Futuremark, for their part, is looking to make it clear that this is first and foremost a test to showcase API differences, and not a hardware test designed to showcase how different components perform.

The purpose of the test is to compare API performance on a single system. It should not be used to compare component performance across different systems. Specifically, this test should not be used to compare graphics cards, since the benefit of reducing API overhead is greatest in situations where the CPU is the limiting factor.

We have of course gone and benchmarked a number of configurations to showcase how they benefit from DirectX 12 and/or Mantle. However, as per Futuremark’s guidelines, we are not looking to directly compare video cards, especially since we’re often hitting the throughput limits of the command processor, something a real-world workload would not do.
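
To put Futuremark’s guideline in concrete terms: the meaningful number is the per-system ratio between APIs, not the absolute throughput across systems. A trivial worked sketch, using hypothetical placeholder figures rather than measured results:

```cpp
// Hypothetical per-system API comparison: the ratio between two APIs on the
// same hardware is meaningful; absolute numbers across systems are not.
#include <cstdio>

int main()
{
    // Placeholder draw call throughput for one system (calls/second).
    double dx11 = 1.4e6;   // hypothetical DX11 result
    double dx12 = 14.0e6;  // hypothetical DX12 result on the same system

    std::printf("DX12 delivers %.1fx the draw call throughput of DX11 "
                "on this system\n", dx12 / dx11);
    return 0;
}
```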

The Test

Moving on, we also want to quickly point out the clearly beta state of the current WDDM 2.0 drivers. Of note, the DX11 results with NVIDIA’s 349.90 driver are notably lower than the results with their WDDM 1.3 driver, and show much greater variability. Meanwhile AMD’s drivers have stability issues, with our dGPU testbed locking up a couple of times. These drivers are clearly not at production status.

DirectX 12 Support Status
GPU Architecture                 Current Status   Supported At Launch
AMD GCN 1.2 (285)                Working          Yes
AMD GCN 1.1 (290/260 Series)     Working          Yes
AMD GCN 1.0 (7000/200 Series)    Working          Yes
NVIDIA Maxwell 2 (900 Series)    Working          Yes
NVIDIA Maxwell 1 (750 Series)    Working          Yes
NVIDIA Kepler (600/700 Series)   Working          Yes
NVIDIA Fermi (400/500 Series)    Not Active       Yes
Intel Gen 7.5 (Haswell)          Working          Yes
Intel Gen 8 (Broadwell)          Working          Yes
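
As a point of reference, probing for this kind of per-adapter DirectX 12 support is straightforward. Below is a minimal sketch (our own illustration, not Futuremark’s or the driver vendors’ code) using DXGI adapter enumeration and a trial device creation; it builds against the Windows 10 SDK and links with d3d12.lib and dxgi.lib:

```cpp
// Enumerate every GPU in the system and ask whether its driver exposes
// DirectX 12, without actually creating a device.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Passing a null device pointer asks D3D12CreateDevice to test for
        // support only; S_FALSE (a success code) means the adapter is capable.
        HRESULT hr = D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                       __uuidof(ID3D12Device), nullptr);
        std::wprintf(L"%s: DirectX 12 %s\n", desc.Description,
                     SUCCEEDED(hr) ? L"supported" : L"not supported");
    }
    return 0;
}
```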

On that note, the OS and drivers are all still in development, so performance results are subject to change as Windows 10 and the WDDM 2.0 drivers get closer to finalization.

One bit of good news is that DirectX 12 support on AMD GCN 1.0 cards is up and running here, as opposed to the issues we ran into last month with Star Swarm. So other than NVIDIA’s Fermi cards, which aren’t enabled in the current beta drivers, we are able to test all of the major x86-paired GPU architectures that support DirectX 12.

For our actual testing, we’ve broken our results down into dGPUs and iGPUs. Given the vast performance difference between the two, and the fact that the CPU and GPU are bound together in the latter case, this helps to better control for relative performance.

On the dGPU side we are largely reusing our Star Swarm test configuration, meaning we’re testing the full range of working DX12-capable GPU architectures across a range of CPU configurations.

DirectX 12 Preview dGPU Testing CPU Configurations (i7-4960X)
Configuration      Emulating
6C/12T @ 4.2GHz    Overclocked Core i7
4C/4T @ 3.8GHz     Core i5-4670K
2C/4T @ 3.8GHz     Core i3-4370
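
These configurations are set at the BIOS level by disabling cores and Hyper-Threading and adjusting clocks. Purely as an illustration of the idea, a process can also be confined to a subset of logical processors with an affinity mask. This is a hypothetical Win32 sketch, and only an approximation, since clocks and cache are unaffected:

```cpp
// Rough approximation of a lower core-count CPU: confine the current
// process to a subset of logical processors via an affinity mask.
#include <windows.h>
#include <cstdio>

int main()
{
    // Hypothetical mask: logical processors 0-3 only, roughly standing in
    // for a 4-thread CPU on a larger chip.
    DWORD_PTR mask = 0xF;

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask))
    {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    std::printf("Process confined to logical processors 0-3\n");
    // A benchmark launched from here (e.g. via CreateProcess) would
    // inherit this affinity.
    return 0;
}
```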

Meanwhile on the iGPU side we have a range of Haswell and Kaveri processors from Intel and AMD respectively.

3DMark API Overhead Feature Test Discrete GPU Testing
CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 290X
AMD Radeon R9 285
AMD Radeon HD 7970
NVIDIA GeForce GTX 980
NVIDIA GeForce GTX 750 Ti
NVIDIA GeForce GTX 680
Video Drivers: NVIDIA Release 349.90 Beta
AMD Catalyst 15.200.1012.2 Beta
OS: Windows 10 Technical Preview (Build 10041)


3DMark API Overhead Feature Test Integrated GPU Testing
CPU: AMD A10-7850K
AMD A10-7700K
AMD A8-7600
AMD A6-7400K
Intel Core i7-4790K
Intel Core i5-4690
Intel Core i3-4360
Intel Core i3-4130T
Intel Pentium G3258
Motherboard: GIGABYTE F2A88X-UP4 for AMD
ASUS Maximus VII Impact for Intel LGA-1150
Zotac ZBOX EI750 Plus for Intel BGA
Power Supply: Rosewill Silent Night 500W Platinum
Hard Disk: OCZ Vertex 3 256GB OS SSD
Memory: G.Skill 2x4GB DDR3-2133 9-11-10 for AMD
G.Skill 2x4GB DDR3-1866 9-10-9 at 1600 for Intel
Video Cards: AMD APU Integrated
Intel CPU Integrated
Video Drivers: AMD Catalyst 15.200.1012.2 Beta
Intel Driver Version 10.18.15.4124
OS: Windows 10 Technical Preview (Build 10041)
Comments

  • Laststop311 - Saturday, March 28, 2015 - link

    The benefit will come mainly to people using chips with many cores but poorer single-threaded performance – basically quad-core AMD APU users, plus owners of 8-core FX and 6-core Phenom II X6 chips – since users of those CPUs are the ones most likely to be CPU bottlenecked, with DX11 only really caring about single-threaded performance. Since Intel chips have top-tier single-threaded performance they were not as restricted under DX11; the GPU was usually the bottleneck to begin with, so not much changes there – the GPU will still be shader bound.
  • silverblue - Saturday, March 28, 2015 - link

    I'm glad somebody mentioned the Phenom II X6. I'd be very interested to see how it copes, particularly against the 8350 and 6350.
  • akamateau - Thursday, April 30, 2015 - link

    An AMD A6 APU manages 4.4 million draw calls per second running DX12. An Intel i7 4560 and GTX 980 only manage 2.2 million draw calls running DX11!

    DX12 allows a $100 AMD APU by itself to outperform a $1500 Intel/nVidia gaming system running DX11.

    That is with 4 CORES. Single-core performance is not relevant any more.

    All things being equal, DX12 will give AMD APUs and Radeon dGPUs a staggering performance advantage over Intel/nVidia.
  • FlushedBubblyJock - Tuesday, March 31, 2015 - link

    What's the mystery? It's Mantle for everyone - that's what DX12 essentially is.
    So just look at what Mantle did.
    Close enough.
  • StevoLincolnite - Friday, March 27, 2015 - link

    The consoles are limited to 6 cores for gaming, not 8, and those 6 cores are roughly equivalent to a Haswell Core i3 in terms of total performance (or a high-clocked Pentium Anniversary Edition!).
    Remember, AMD's fastest high-end chips struggle to beat Intel's 4-year-old mid-range; take AMD's low-end, low-powered chips and it's a laughable situation.
    But that's to be expected - consoles cannot afford high-end components, they are cost-sensitive, low-end devices.
    Let's just hope that Microsoft and Sony do not beat this horse for all it's worth and we get a successor out within the next 4-ish years.

    The Xbox One also uses a modified version of DirectX 11 for its high-level API.
    The Xbox One also has a low-level API which developers can target to extract more performance.

    Basically, once DirectX 12 is finalized for the PC it will be modified and ported to the Xbox One, giving developers who do not buy an already-made game engine like Unreal, CryEngine etc. a performance boost without blowing out development time significantly by being forced to target the low-level API.

    The same thing is also occurring on the PlayStation: the high-level API is getting an overhaul thanks to Vulkan, and it still has its low-level API for developers to target, of course.

    RAM is still a bit of an issue too; 5-5.5GB of RAM for the game and graphics is pretty tiny, and it may become a real limiter in the coming years, slightly offset by hard drive asset streaming.

    To compare it to a PC, the Xbox One is like a Core i3 @ 3GHz with 4GB of RAM and a Radeon 7770 1.5GB graphics card.
    Change the GPU to a Radeon 7850 for the PS4, and that's what we have for the next half decade or more.
  • Laststop311 - Saturday, March 28, 2015 - link

    Correct me if I'm wrong, but I believe the PS4 is built with a downclocked 7870 (20 CU), with 2 CUs disabled on top of the downclock. The 7850 is a 16 CU part, but I guess the 2 extra CUs combined with the downclock would make the PS4 behave like a 7850. The Radeon 7770 is only 10 CUs and the Xbone has 12 CUs, but at a lower clock. So are you basically saying that for the PS4 and Xbone the extra 2 CUs plus the lower clock speed make them equal to those desktop cards? Because they really aren't exactly those cards. In some situations the higher clock speed matters more, and in some the extra CUs matter more. In some situations the PS4 may behave more like a 7870 than a 7850, and the Xbone may be more like a 7790 than a 7770.
  • Gigaplex - Monday, March 30, 2015 - link

    The console CPUs are actually significantly slower than a Haswell i3. The Pentium chips are a closer comparison due to the lack of Hyper-Threading.
  • mr_tawan - Monday, March 30, 2015 - link

    'PC is not meant to be played' (TM)

    (Just kidding though)

    If the developers do their jobs right, a high-spec PC still gains a big advantage over consoles (especially in the frame rate area). However, PCs themselves can be a drag as well (remember those Atom/Pentium-equipped PCs).
  • JonnyDough - Tuesday, March 31, 2015 - link

    Half the time it's just that they don't even bother updating menus and controls. Skyrim is a prime example.
  • Veritex - Friday, March 27, 2015 - link

    All of the next-generation consoles are based on AMD eight-core CPUs and the GCN architecture (with Nintendo possibly opting for an ARM CPU paired with GCN), so developers will just have to optimize once for the consoles and will have an easier time porting to PCs.

    It is interesting to see the AMD R9 285 Tonga consistently outperform Nvidia's high-end GTX 980, and it makes you wonder how incredibly fast the next-generation R9 390X Fiji and 380X could be.
