SPECviewperf 12 on a GTX 980

By popular demand, we introduced SPECviewperf 12 into our testing regimen in August 2015. SPEC is the well-known purveyor of industry-standard benchmarks, often probing the fundamental architectural behavior of processors and controllers as well as comparing performance in well-understood industry software and automated tools. It is this last point we pick up on here: SPECviewperf 12 tests the responsiveness of graphics packages in the design, medical, automotive and energy fields. The benchmarks focus purely on responsiveness and the ability to both display and rotate complex models to aid in design or interpretation, using each package's internal graphics schema (at 1080p). We run this set with a discrete graphics card, similar to the workstation environments in which these packages would be used. As a new benchmark, we are still filling the system with data.

SPECviewperf 12: catia-04 (with GTX 980)

SPECviewperf 12: creo-01 (with GTX 980)

SPECviewperf 12: energy-01 (with GTX 980)

SPECviewperf 12: maya-04 (with GTX 980)

SPECviewperf 12: medical-01 (with GTX 980)

SPECviewperf 12: showcase-01 (with GTX 980)

SPECviewperf 12: snx-02 (with GTX 980)

SPECviewperf 12: sw-03 (with GTX 980)

At this point it seems that most tests are graphics card bound, although a few show that having the fastest processor makes a difference. The gains over the Haswell platforms are +5% at best, though a bigger difference can be seen going further back in CPU generations. With a discrete graphics card, SPECviewperf's tests are more akin to our gaming tests when it comes to responsiveness.
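
To put such comparisons in concrete terms, the sketch below shows how per-viewset scores can be reduced to percentage deltas against a baseline CPU. The viewset names are taken from the charts above, but the scores are hypothetical placeholders rather than our measured results.

```python
# Minimal sketch: percentage deltas of SPECviewperf 12 viewset scores against a
# baseline CPU. All scores below are hypothetical placeholders, not data from
# this review.
baseline  = {"catia-04": 40.0, "creo-01": 35.0, "sw-03": 60.0}   # e.g. a Haswell system
candidate = {"catia-04": 41.5, "creo-01": 36.8, "sw-03": 60.5}   # e.g. a Broadwell system

for viewset, base_score in baseline.items():
    delta = (candidate[viewset] / base_score - 1) * 100
    print(f"{viewset}: {delta:+.1f}%")   # catia-04: +3.8%, creo-01: +5.1%, sw-03: +0.8%
```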

Comments (72)

  • ruthan - Thursday, August 27, 2015 - link

    So please add some virtualization to the benchmarking set.
  • Ian Cutress - Thursday, August 27, 2015 - link

    It's on the cards.
  • Mastadon - Thursday, August 27, 2015 - link

    No support for DDR4 RAM? C'mon, it's 2015.
  • SuperVeloce - Thursday, August 27, 2015 - link

    This is Broadwell, not Skylake... It's meant to introduce a new lithography process and an updated platform, not new architectures and memory controllers...
  • Oxford Guy - Thursday, August 27, 2015 - link

    DDR4 isn't of much benefit, except for servers (power consumption)
  • AnnonymousCoward - Thursday, August 27, 2015 - link

    Skylake FTW. Why pay more for the slower Xeon?
  • Oxford Guy - Thursday, August 27, 2015 - link

    If you read the Skylake review here you'll find that it's not really better than Broadwell, just different.
  • AnnonymousCoward - Thursday, August 27, 2015 - link

    Dude, look at the graphs on the conclusion page of this review. Skylake beats the closest Xeon by 19% in most of them.
  • Oxford Guy - Sunday, August 30, 2015 - link

    I wasn't talking about Xeon. Look at the previous desktop review. I read your post too quickly and missed that you were talking about Xeon.
  • joex4444 - Thursday, August 27, 2015 - link

    Is it even clear that the 1285 and 1285L performed differently to a statistically significant degree? I mean, if a benchmark is run three times and scores of, say, {1176, 1188, 1182} are obtained for the 1285 while the 1285L gets {1190, 1175, 1184}, then the 1285L has an average of 1183 while the 1285 has an average of 1182. But when we look at those distributions, they completely overlap and show no performance difference, whereas, given that one has an extra 100MHz, we'd expect a 1-part-in-34 advantage, i.e. a 2.9% performance gap, with the 95W 1285 outperforming the 65W 1285L.

    Further, it's important to recall the first chart showing that the 95W 1285 actually used less power in the idle -> OCCT test. The TDP is not a measure of how much power the CPU uses, plain and simple; it's a specification stating the maximum amount of power that can be dissipated in the form of heat. Therefore, when the author states "100MHz does not adequately explain 30W in the grand scheme of things" they're exactly correct about the TDP, but it comes off as suggesting one actually *uses* 30W more than the other, which is simply not true. It does sound pretty clear that either (a) Intel bins their TDPs and the 3.5GHz one bumped up past the 65W bin, or (b) Intel uses better parts for the 1285L, but that does not explain why it would cost $100-ish (~18%) less, as we would expect better parts to be scarcer, not more abundant.

    As far as binned TDPs go, we know they do this. Look at the 84W parts: they don't all use 84W, they're just all rated as capable of dissipating up to 84W. Further, we don't see arbitrary TDPs, we see a few, e.g. 35W, 65W, 84W, 95W, 125W, and if you're AMD, 220W.
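
As a rough illustration of the statistical argument in the comment above, the sketch below compares the commenter's two hypothetical score sets; it assumes SciPy is available for Welch's t-test, and none of the numbers are measured results.

```python
# Minimal sketch of the significance argument above, using the commenter's
# hypothetical score sets (not measured data). Assumes SciPy is installed.
from statistics import mean
from scipy.stats import ttest_ind

e3_1285  = [1176, 1188, 1182]   # hypothetical runs, 95W part
e3_1285l = [1190, 1175, 1184]   # hypothetical runs, 65W part

print(f"1285 mean:  {mean(e3_1285):.1f}")    # 1182.0
print(f"1285L mean: {mean(e3_1285l):.1f}")   # 1183.0

# Welch's t-test (unequal variances): a large p-value means the two sets of
# runs are statistically indistinguishable.
t, p = ttest_ind(e3_1285, e3_1285l, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2f}")           # p is far above 0.05 here

# Expected advantage from clock speed alone: 100MHz on a 3.4GHz base clock is
# 1 part in 34, or roughly 2.9%.
print(f"expected clock-speed advantage: {100 / 3400:.1%}")
```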
