Challenging the Xeon

So what caused us to investigate the IBM POWER8 as a viable alternative to the mass-market Xeon E5s, and not simply the high-end quad (and higher) socket Xeon E7 parts? A lot. IBM sold its x86 server division to Lenovo, so there is only one true server processor left at IBM: the POWER family. More importantly, the OpenPOWER Foundation has gained a lot of momentum since its birth in 2013. IBM and OpenPOWER Foundation partners like Google, NVIDIA, and Mellanox are all committed to innovating around POWER processor-based systems, from the chip level up through the whole platform. The foundation has delivered some tangible results:

  • Open firmware, which includes both the firmware to boot the hardware (similar to the BIOS) and OPAL (the OpenPOWER Abstraction Layer) to boot and launch a hypervisor kernel
  • OpenBMC
  • POWER8 chips that are cheaper and available to third parties (!)
  • CAPI over PCIe, to make it easier to link the POWER8 to GPUs (and other PCIe cards)
  • And much more third party hardware support (Mellanox IB etc.)
  • A much larger software ecosystem (more on this later)

The impact of opening up the firmware (under the Apache v2 license) and the BMC code (IBM calls it the "service processor") should not be underestimated. The big hyperscale companies - Google, Amazon, Microsoft, Facebook, Rackspace - want as much control over their software stack as they can get.

The result is that Google is supporting the effort and Rackspace has even built its own OpenPOWER server, called "Barreleye". While Google has been supportive and has shown proofs of concept, Rackspace is going all the way:

... and aim to put Barreleye in our datacenters for OpenStack services early next year.

The end result is that the complete POWER platform, once only available in expensive high-end servers, can now be found inside affordable Linux-based servers from IBM (S8xxL) and third parties like Tyan. The opinions of the usual pundits range from "too little, too late" to "trouble for Intel". Should you check out a POWER8-based server before you order your next Xeon-based Linux server? And why? We started by carefully analyzing the available benchmarks.

Comments

  • usernametaken76 - Thursday, November 12, 2015 - link

    Technically this is not true. IBM had a working version of AIX running on PS/2 systems as late as the 1.3 release. Unfortunately support was withdrawn and future releases of AIX were not compiled for x86 compatible processors. One can still find a copy of this release if one knows where to look. It's completely useless to anyone but a museum or curious hobbyist, but it's out there.
  • Steven Perron - Monday, November 23, 2015 - link

    Hello Johan,

    I was reading this article, and I found it interesting. Since I am a developer for the IBM XL compiler, the comparisons between GCC and XL were particularly interesting. I tried to reproduce the results you are seeing for the LZMA benchmark. My results were similar, but not exactly the same.

    When I compared GCC 4.9.1 (I know, a slightly different version than you) to XL 13.1.2 (I assume this is the version you used), I saw XL consistently ahead of GCC, even when I used -O3 for both compilers.

    I'm still interested in trying to reproduce your results so I can see where XL can do better, and I have a couple of questions about areas that could differ.

    1) What version of the XL compiler did you use? I assumed 13.1.2, but it is worth double checking.
    2) Which version of the 7-zip software did you use? I picked up p7zip 15.09.
    3) Also, I noticed that when the POWER8 machine was running at full capacity (for me, that was 192 threads on a 24-core machine), the results would fluctuate a bit. How many runs did you do for each configuration? Were the results stable?
    4) Did you try XL with the less aggressive and more stable options like "-O3" or "-O3 -qhot"?

    Thanks for your time.
  • Toyevo - Wednesday, November 25, 2015 - link

    Other than the ridiculous price of CDIMMs, the power efficiency just doesn't look healthy. For data centers leasing out their hardware, like Amazon AWS, Google App Engine, Azure, Rackspace, etc., clients who pay for hardware yet fail to use their allocation significantly help those companies' bottom line through reduced overheads. For everyone else, high usage is a mandatory part of the ROI equation during the hardware's life as an operating asset, so power consumption is a real cost. Even with our small cluster of 12 nodes, power efficiency is a real consideration, let alone for companies standardizing on IBM and utilising hundreds or thousands of nodes that are arguably less efficient.

    Perhaps you could devise some sort of theoretical total cost of ownership breakdown for these articles. My biggest question after all of this is: which one gets the most work done with the lowest overheads? Don't get me wrong though, I commend you and AnandTech on the detail you already provide.
  • AstroGuardian - Tuesday, December 8, 2015 - link

    It's good to have someone challenging Intel, since AMD craps its pants on a regular basis.
  • dba - Monday, July 25, 2016 - link

    Dear Johan:

    Can you extrapolate how much faster the SPARC S7 will be in your cluster benchmarking if the two on-die InfiniBand ports are activated: 5, 10, 20%?

    Thank You, dennis b.
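
Picking up on Steven Perron's third question above, run-to-run stability of the p7zip benchmark is straightforward to check by repeating the built-in benchmark at a fixed thread count and comparing the reported ratings. Below is a minimal Python sketch of such a check; the binary path, the thread count, and the assumption that p7zip prints its combined rating as the last field of a "Tot:" summary line are placeholders to adapt to the actual setup.

```python
import statistics
import subprocess

RUNS = 5                 # number of benchmark repetitions
THREADS = 192            # e.g. 24 cores x SMT8 on a POWER8 box; adjust to the machine
SEVENZIP = "./bin/7za"   # hypothetical path to the p7zip binary under test

def total_rating(output: str) -> int:
    # p7zip's "7za b" summary ends with a "Tot:" line; the last field is
    # assumed to be the combined compression+decompression rating in MIPS.
    for line in output.splitlines():
        if line.strip().startswith("Tot:"):
            return int(line.split()[-1])
    raise ValueError("no 'Tot:' summary line found in benchmark output")

ratings = []
for i in range(RUNS):
    result = subprocess.run(
        [SEVENZIP, "b", f"-mmt{THREADS}"],
        capture_output=True, text=True, check=True,
    )
    ratings.append(total_rating(result.stdout))
    print(f"run {i + 1}: {ratings[-1]} MIPS")

print(f"median: {statistics.median(ratings)} MIPS, "
      f"spread: {max(ratings) - min(ratings)} MIPS")
```

If the spread across runs is more than a few percent of the median, reporting a single run for each configuration would not be representative.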
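Toyevo's suggestion of a theoretical total cost of ownership breakdown essentially amounts to amortizing the purchase price and the electricity bill over the service life and dividing by the work done. The sketch below only illustrates that calculation; every number in it (prices, power draws, throughputs, energy price, PUE) is an invented placeholder, not a figure from this review.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    purchase_price: float   # USD, hypothetical list price
    avg_power_w: float      # average wall power under the expected load
    throughput: float       # work units per hour from your own benchmark

def cost_per_unit(node: Node, years: float = 3.0,
                  usd_per_kwh: float = 0.10, pue: float = 1.5) -> float:
    """Amortized cost (USD) per unit of work over the service life."""
    hours = years * 365 * 24
    energy_kwh = node.avg_power_w / 1000 * hours * pue   # PUE covers cooling overhead
    total_cost = node.purchase_price + energy_kwh * usd_per_kwh
    total_work = node.throughput * hours
    return total_cost / total_work

# Placeholder nodes: prices, power draws and throughputs are made up
# purely to show the calculation, not taken from the review.
xeon = Node("2S Xeon E5", purchase_price=9000, avg_power_w=450, throughput=100)
power8 = Node("2S POWER8", purchase_price=11000, avg_power_w=600, throughput=120)

for n in (xeon, power8):
    print(f"{n.name}: {cost_per_unit(n):.4f} USD per work unit")
```

Plugging in measured throughput and wall power for a given workload turns this into the "which one gets the most work done with the lowest overheads" comparison the comment asks for.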
