SPECviewperf 12 on a GTX 980

By popular demand, we introduced SPECviewperf 12 into our testing regimen in August 2015. SPEC is the well-known purveyor of industry-standard benchmarks, probing both the fundamental architectural behavior of processors and controllers and performance in well-understood industry software and automated tools. It is this last point we pick up on: SPECviewperf 12 tests the responsiveness of graphics packages in the design, medical, automotive and energy fields. The benchmarks focus purely on responsiveness and the ability to display and rotate complex models to aid in design or interpretation, using each package's internal graphics schema (at 1080p). We run this set with a discrete graphics card, similar to the workstation environments in which these packages would be used. As a new benchmark, we are still filling the system with data.
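For readers curious how the numbers in the charts below are produced, each viewset's score is a composite of the frame rates of its individual subtests. The following is a minimal Python sketch of a weighted geometric mean, the general form SPEC uses for its composite scores; the subtest frame rates and weights here are hypothetical placeholders for illustration, not values taken from the official viewset definitions.

import math

def composite_score(fps_values, weights):
    # Weighted geometric mean of per-subtest frame rates.
    # Both inputs are illustrative; real viewsets define their own weights.
    if len(fps_values) != len(weights):
        raise ValueError("each subtest needs a weight")
    total_weight = sum(weights)
    log_sum = sum(w * math.log(fps) for fps, w in zip(fps_values, weights))
    return math.exp(log_sum / total_weight)

# Hypothetical eight-subtest run with equal weighting.
fps = [45.2, 38.7, 52.1, 60.3, 41.9, 47.5, 55.0, 39.8]
weights = [1.0] * len(fps)
print("Composite: %.2f" % composite_score(fps, weights))

Because the composite is a geometric mean, one very slow subtest drags the final score down more than an arithmetic average would, which is why a single CPU-bound subtest can still separate otherwise GPU-limited platforms.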

SPECviewperf 12: catia-04 (with GTX 980)

SPECviewperf 12: creo-01 (with GTX 980)

SPECviewperf 12: energy-01 (with GTX 980)

SPECviewperf 12: maya-04 (with GTX 980)

SPECviewperf 12: medical-01 (with GTX 980)

SPECviewperf 12: showcase-01 (with GTX 980)

SPECviewperf 12: snx-02 (with GTX 980)

SPECviewperf 12: sw-03 (with GTX 980)

At a certain point it seems that most tests are graphics card bound, though a few show that having the fastest processor makes a difference. The gains over the Haswell platforms amount to +5% at best, although a bigger difference can be seen going further back in CPU generations. With a discrete graphics card, SPECviewperf's tests behave more like our gaming tests when it comes to responsiveness.

72 Comments

  • runciterassociates - Wednesday, August 26, 2015 - link

    This is a server chip. Why are you benchmarking games?
    Furthermore, for SPEC, why are you using a dGPU when this chip has on-die graphics?
    Where are the OpenCL, OpenMP, GPGPU benchmarks, which are going to be the majority of how these will be used for green heterogeneous computing?
  • Gigaplex - Wednesday, August 26, 2015 - link

    The E3 Xeons are more likely to be used in a workstation than a server.
  • TallestJon96 - Wednesday, August 26, 2015 - link

    They benchmark games because ignorant gamers (like myself) love to see gaming benchmarks for everything, even if they will never be used for games! If it were a 20-core Xeon clocked at 2 GHz with Hyper-Threading, we would want the benchmarks, even though they would just show that everything i5 and up performs identically. We are a strange species, and you should not waste your time trying to understand us.
  • Oxford Guy - Wednesday, August 26, 2015 - link

    No benchmarks are irrelevant when they involve products people are using today. Gaming benchmarks are practical. However, that doesn't mean the charts are necessarily well-considered, such as how this site refuses to include a 4.5 GHz FX chip (or any FX chip) and instead only includes weaker APUs.
  • Ian Cutress - Thursday, August 27, 2015 - link

    As listed in a couple of sections of the review, this is because Broadwell-H on the desktop does not have a part equivalent to the 84W parts of previous generations, and this allows us, perhaps somewhat academically, to see if there ends up being a gaming difference between Broadwell and Haswell at the higher power consumption levels.
  • Jaybus - Friday, August 28, 2015 - link

    Because, as stated in the article, the Ubuntu Live CD kernel was a fail for these new processors, so they couldn't run the Linux stuff.
  • Voldenuit - Wednesday, August 26, 2015 - link

    SPECviewperf on a desktop card?

    I'd be interested to see if a Quadro or FirePro would open up the gap between the CPUs.
  • mapesdhs - Thursday, August 27, 2015 - link

    I was wondering that too; desktop cards get high numbers in Viewperf 12 because they cheat on image quality in the driver layer. SPEC testing should be done with pro cards, where the results are more relevant. The situation is worse now because both GPU makers have fiddled with their drivers to favor consumer cards. Contrast how Viewperf 12 behaves with desktop cards to the performance spread observed with Viewperf 11; the differences are enormous.

    For example, testing a 980 vs. a Quadro K5000 with Viewperf 11 and 12, the 980 is 3x faster than the K5000 in Viewperf 12, whereas the K5000 is 6x faster than the 980 in Viewperf 11. More than an order of magnitude performance shift just by using the newer test suite? I have been told by tech site people elsewhere that the reason is changes to drivers and the use of much lower image quality on consumer cards. Either way, it makes a nonsense of the usefulness of Viewperf if this is what's going on now. Otherwise, someone has to explain why the 980 compares so differently to a K5000 in Viewperf 11.
  • Ian Cutress - Thursday, August 27, 2015 - link

    Both points noted. I'll see what I can do to obtain the professional cards.
  • XZerg - Wednesday, August 26, 2015 - link

    The gaming charts are messed up: the IGP performs faster than the dGPU at the SAME settings? I think something is wrong, most likely the labels of the settings.

    Also, it would have been better to compare IGP performance against the older versions of Iris. Where is the 4770R? The point here is that, while keeping the wattage similar, what are we really getting out of 14nm?
