Reading the Benchmarks

There are a lot of benchmarks available that compare the IBM POWER8 to Intel's Xeons. One example is the Enterprise Resource Planning (ERP) software SAP. We have used the SAP Sales & Distribution 2-Tier benchmark many times because it is one of the very few benchmarks that is a good representation of real-world high-end enterprise workloads.

SAP Sales & Distribution 2 Tier benchmark

Now combine this with the benchmarks that IBM has compiled on its marketing slides and the fact that we know the POWER8 chip has a TDP of 190W at nominal speed and 247W when running at "Turbo" clock speeds.

It all seems very simple: the IBM POWER8 is a more power hungry chip, but it delivers much better performance. As always, though, you should take the time to read the benchmarks very closely. The IBM S824 is typically the system featured in those benchmarks. However, we are pretty sure that is not the system that will sway current Intel Xeon customers towards OpenPOWER. Nor are we convinced that the most widely reported benchmarks accurately predict the experience of those customers.

There are three reasons for that. First of all, most of the benchmarks are run on AIX 7, IBM's own proprietary UNIX. AIX is a high-performance, extremely robust OS, but it does not have the rich software ecosystem and support that Linux has. Furthermore, even with their common design elements, an excellent Linux administrator will have to invest some time to reach the same level of expertise in AIX. More importantly, the S824 is a pretty expensive machine, both in acquisition cost (starting at $21,000 and going up to $60,000 and more) and in energy cost. That kind of pricing lands the system in hostile and more powerful quad Xeon E7 territory.

Lastly, the S824 uses two CPU cards or Dual Chip Modules (DCMs), each containing two six-core POWER8 chips at 3.5 GHz. Now consider that the third-party OpenPOWER servers use the 10-core 3.4 GHz POWER8 CPU with its 190/247W TDP. Power consumption increases more than linearly as you add cores and raise clock speeds, so the CPU modules found inside the S824 are definitely more power hungry, probably well above 250W.
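To get a feel for the scale of that difference, here is a rough back-of-the-envelope sketch (our own simplified model, not an IBM figure) that scales the published 10-core 3.4 GHz TDP numbers up to the 12 cores and 3.5 GHz of an S824 DCM. The assumed voltage bump for the higher clock is a guess, and leakage and uncore power are ignored entirely:

```python
# Back-of-the-envelope only: a simplified dynamic-power model, assuming
# power ~ cores * frequency * voltage^2. The voltage ratio is a guess,
# and leakage/uncore power are ignored, so the real number is likely higher.

def estimate_power(ref_watts, ref_cores, ref_ghz, cores, ghz, volt_ratio=1.02):
    """Scale a reference TDP to a different core count and clock speed."""
    return ref_watts * (cores / ref_cores) * (ghz / ref_ghz) * volt_ratio ** 2

# Reference: single-chip 10-core POWER8 at 3.4 GHz, 190 W nominal / 247 W turbo.
# Target: one S824 DCM with 2 x 6 = 12 cores at 3.5 GHz.
for label, ref_tdp in (("nominal", 190.0), ("turbo", 247.0)):
    watts = estimate_power(ref_tdp, ref_cores=10, ref_ghz=3.4, cores=12, ghz=3.5)
    print(f"{label}: ~{watts:.0f} W per DCM (rough estimate)")
```

Even this simplistic model lands around 245W per DCM at nominal clocks and over 300W in turbo mode; since it ignores leakage and uncore power, the real figure is likely higher still, in line with our "well above 250W" estimate.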

There is more. Take a look at IBM's "Scale-out" servers, the more affordable range of the IBM POWER8 lineup. First, a bit of IBM server nomenclature, which is actually quite logical and easy to decipher (take note, Intel marketing).

  • S stands for "Scale-out"
  • 8 stands for POWER8
  • 1 or 2 is the number of sockets
  • 2 or 4 is the height, expressed in rack units (U)

So an S824 contains two sockets in a 4U chassis, and an S812 is a one-socket system. There is one designation left: the "L", which marks the Linux-only models.
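To make the naming scheme concrete, here is a minimal decoder sketch (our own illustration, not an IBM tool); the field positions simply follow the nomenclature listed above:

```python
def decode_power_model(name: str) -> dict:
    """Decode an IBM scale-out model name such as 'S824' or 'S812L'.

    Purely illustrative: S = Scale-out, 8 = POWER8, then the socket count,
    then the height in rack units, and an optional trailing 'L' for the
    Linux-only variants.
    """
    return {
        "family": "Scale-out" if name[0] == "S" else "unknown",
        "generation": f"POWER{name[1]}",
        "sockets": int(name[2]),
        "rack_units": int(name[3]),
        "linux_only": name.endswith("L"),
    }

print(decode_power_model("S824"))   # 2 sockets in a 4U chassis
print(decode_power_model("S812L"))  # 1 socket, 2U, Linux-only
```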

Note that the non-L versions also support Linux, but until a few months ago they supported only the Big Endian (BE) versions (the slide is from the beginning of this year). IBM told us that all POWER8 servers now support both Little Endian (LE) and BE Linux.

This is important since using an LE version (Ubuntu, SUSE) makes data migration from and data sharing (NAS, SAN) with an x86 system much easier, as x86 only supports LE.
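To illustrate why the byte order matters for data sharing, the short sketch below packs the same 32-bit value in both byte orders; raw BE data read on an LE x86 machine without conversion comes out scrambled:

```python
import struct

value = 0x0A0B0C0D

big    = struct.pack(">I", value)   # byte order used by BE Linux on POWER
little = struct.pack("<I", value)   # byte order used by x86 and LE Linux on POWER8

print(big.hex())     # 0a0b0c0d
print(little.hex())  # 0d0c0b0a

# Reading BE data on an LE system without swapping gives the wrong value:
print(hex(struct.unpack("<I", big)[0]))   # 0xd0c0b0a, not 0xa0b0c0d
```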

Comments

  • hissatsu - Friday, November 6, 2015 - link

    You might want to look more closely. Though it's a bit blurry, I'm almost certain that's the 80+ Platinum logo, which has no color.
  • DanNeely - Friday, November 6, 2015 - link

    That's possible; it looks like there's something at the bottom of the logo. Google image search shows 80+ platinum as a lighter silver/gray than 80+ silver; white is only the original standard.
  • Shezal - Friday, November 6, 2015 - link

    Just look up the part number. It's a Platinum :)
  • The12pAc - Thursday, November 19, 2015 - link

    I have a S814, it's Platinum.
  • johnnycanadian - Friday, November 6, 2015 - link

    Oh yum! THIS is what I still love about AT: non-mainstream previews / reviews. REALLY looking forward to more like this. I only wish SGI still built workstation-level machines. :-(
  • mapesdhs - Tuesday, November 10, 2015 - link


    Indeed, but it'd need a hefty change in direction at SGI to get back into workstations again, so very unlikely for the foreseeable future. They certainly have the required base tech (NUMALink6, MPI offload, etc.), namely lots of sockets/cores/RAM coupled with GPUs for really heavy tasks (big data, GIS, medical, etc.), i.e. a theoretical scalable, shared-memory workstation. But the market isn't interested in advanced performance solutions like this atm, and the margin on standard 2/4-socket systems isn't worthwhile, it'd be much cheaper to buy a generic Dell or HP (plus, it's only above this no. of sockets that their own unique tech comes into play). Pity, as the equivalent of a UV 30/300 workstation would be sweet (if expensive), though for virtually all of the tasks discussed in this article, shared memory tech isn't relevant anyway. The notion of connectable, scalable, shared memory workstations based on NV gfx, PCIe and newer multi-core MIPS CPUs was apparently brought up at SGI way back before the Rackable merger, but didn't go anywhere (not viable given the financial situation at the time). It's a neat concept, e.g. imagine being able to connect two or more separate ordinary 2/4-socket Xeon workstations together (each fitted with, say, a couple of M6000s) to form a single combined system with one OS instance and resource pool, allowing users to combine & split setups as required to match workloads, but it's a notion whose time has not yet come.

    Of course, what's missing entirely is the notion of advanced but costly custom gfx, but again there's no market for that atm either, at least not publicly. Maybe behind the scenes NV makes custom stuff the way SGI used to for relevant customers (DoD, Lockheed, etc.), but SGI's products always had some kind of commercially available equivalent from which the custom builds were derived (IRx gfx), whereas atm there's no such thing as a Quadro with 30000 cores and 100GB RAM that costs $50K and slides into more than one PCIe slot which anyone can buy if they have the moolah. :D

    Most of all though, even if the demand existed and the tech could be built, it'd never work unless SGI stopped using its pricing-is-secret reseller sales model. They should have adopted a direct sales setup long ago, order on the site, pricing configurator, etc., but that never happened, even though the lack of such an option killed a lot of sales. Less of an issue with the sort of products they sell atm, but a better sales model would be essential if they were to ever try to sell workstations again, and that'd need a huge PR/sales management clearout to be viable.

    Pity IBM couldn't pay NV to make custom gfx, that'd be interesting, but then IBM quit the workstation market as well.

    Ian.
  • mostlyharmless - Friday, November 6, 2015 - link

    "There is definitely a market for such hugely expensive and robust server systems as high end RISC machines are good for about 50.000 servers. "

    Rounding error?
  • DanNeely - Friday, November 6, 2015 - link

    50k clients would be my guess.
  • FunBunny2 - Friday, November 6, 2015 - link

    (dot) versus (comma) most likely. Euro centric versus 'Murcan centric.
  • DanNeely - Friday, November 6, 2015 - link

    If that was the case, a plain 50 would be much more appropriate.
