Assessing Cavium's ThunderX2: The Arm Server Dream Realized At Last
by Johan De Gelas on May 23, 2018 9:00 AM EST
Benchmark Configuration and Methodology
For our look at the ThunderX2, all of our testing was conducted on Ubuntu Server 17.10 (Linux kernel 4.13, 64-bit). Normally we would use an LTS release, but since the Cavium system shipped with this version, we did not want to take any unnecessary risks by changing the OS. The compiler that ships with this distribution is GCC 7.2.
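Reproducing the software environment matters as much as matching the hardware list. The short Python sketch below is our own illustration (not part of the original test harness): it prints the kernel, compiler, and distribution versions so a test machine can be checked against the setup described above.

```python
# Minimal sketch: report the kernel, compiler, and OS versions.
# Assumes a standard Ubuntu install with gcc on the PATH.
import platform
import subprocess

def report_environment():
    uname = platform.uname()
    print(f"Kernel:   {uname.release} ({uname.machine})")  # expect 4.13.x

    # universal_newlines keeps this compatible with the Python 3.6
    # that Ubuntu 17.10 ships
    gcc = subprocess.run(["gcc", "--version"],
                         stdout=subprocess.PIPE, universal_newlines=True)
    print(f"Compiler: {gcc.stdout.splitlines()[0]}")        # expect GCC 7.2

    with open("/etc/os-release") as f:                      # standard on Ubuntu
        for line in f:
            if line.startswith("PRETTY_NAME="):
                name = line.split("=", 1)[1].strip().strip('"')
                print(f"OS:       {name}")                  # expect Ubuntu 17.10

if __name__ == "__main__":
    report_environment()
```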
Unfortunately, our AMD EPYC system missed the deadline for this article. We ran into problems with that system right up until press time and are still debugging the matter. In short, the system did not perform well after a kernel upgrade.
Finally, you will notice that the DRAM capacity varies between our server configurations. The reason is simple: Intel's platform has six memory channels per socket, while Cavium's ThunderX2 has eight.
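With one 32 GB RDIMM per channel on both sockets, the capacity gap follows directly from the channel counts. A quick back-of-the-envelope calculation (a sketch of our own, for illustration only):

```python
# Back-of-the-envelope: total DRAM when every memory channel on both
# sockets gets one 32 GB RDIMM (matching the configurations below).
def total_dram_gb(sockets, channels_per_socket, dimm_gb=32):
    return sockets * channels_per_socket * dimm_gb

print(total_dram_gb(2, 8))  # ThunderX2 CN9980:   2 x 8 channels -> 512 GB
print(total_dram_gb(2, 6))  # Xeon Platinum 8176: 2 x 6 channels -> 384 GB
```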
Gigabyte - Cavium "Sabre"
CPU | Two Cavium ThunderX2 CN9980 (32 cores at 2.2-2.5 GHz)
RAM | 512 GB (16x32 GB) Micron Registered DDR4-2666
Internal Disks | SanDisk CloudSpeed Gen. II 800 GB
Motherboard | Cavium Sabre
BIOS version | 18/2/2018
PSU | Dual 1600W 80+ Platinum
Intel's Xeon "Purley" Server – S2P2SY3Q (2U Chassis)
CPU | Two Intel Xeon Platinum 8176 (28 cores at 2.1 GHz, 165W)
RAM | 384 GB (12x32 GB) Hynix DDR4-2666
Internal Disks | Samsung MZ7LM240 (boot disk), Intel DC S3710 800 GB (data)
Motherboard | Intel S2600WF (Wolf Pass baseboard)
Chipset | Intel Wellsburg
BIOS version | 9/02/2017
PSU | 1100W (80+ Platinum)
We used typical BIOS settings on both systems. I should also note that both Hyper-Threading and Intel's virtualization technology were enabled.
Other Notes
Both servers are fed by a standard European 230V (16 Amps max.) power line. The room temperature is monitored and kept at 23°C by our Airwell CRACs.
Energy Consumption
One thing that concerned us was the fact that the Gigabyte "Sabre" system consumed 500W while simply running Linux (so mostly idle). Under load, however, the system consumed around 800W, which is in line with our expectations, as we have two 180W TDP chips inside. So, as is typically the case for early test systems, we are not able to do any accurate power comparisons.
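As a rough sanity check on that 800W figure, a simple power budget can be sketched. Only the CPU TDPs come from the spec sheet; the DIMM, fan, board, and PSU-efficiency numbers below are our own illustrative assumptions, so treat the result as a ballpark, not a measurement:

```python
# Ballpark power budget for the dual-socket Sabre system under load.
# Only the CPU TDPs are specified; everything else here is assumed.
cpu_w   = 2 * 180   # two ThunderX2 CN9980 chips at 180W TDP each
dram_w  = 16 * 5    # assumption: ~5W per loaded 32 GB RDIMM
misc_w  = 150       # assumption: fans, SSD, board, NICs (the immature
                    # fan firmware likely pushes fan power up further)
psu_eff = 0.92      # assumption: 80+ Platinum efficiency at this load

wall_w = (cpu_w + dram_w + misc_w) / psu_eff
print(f"~{wall_w:.0f} W at the wall")  # ~640 W with these assumptions;
# sustained all-core load and the platform's power management issues
# can plausibly close the gap to the ~800 W we observed
```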
In fact, Cavium claims that the production systems from HP, Gigabyte, and others will be far more power efficient. The "Sabre" test system we received had several power management problems: immature fan management firmware, a BMC bug, and an oversized (1600W) PSU.
Comments
Davenreturns - Wednesday, May 23, 2018
In the spec table for the AMD EPYC 7601 you have max sockets 4 and PCIe 3.0 lanes as 64. I thought the max sockets was 2 and that the total number of PCIe 3.0 lanes was 128 (64 in a dual socket machine).
davegraham - Wednesday, May 23, 2018
max sockets is 2 and PCIe lanes is 128 (64 from each 7601 for a combined total of 128; remember, each 7601 has 128 PCIe lanes by themselves. 64 from each are ganged together for IF in a 2P system).
davegraham - Wednesday, May 23, 2018
*are not *is
Davenreturns - Wednesday, May 23, 2018
But in a single socket motherboard system, the total PCIe lanes available from one EPYC processor is 128, which I think we are both saying is correct.
Davenreturns - Wednesday, May 23, 2018
The reason I think these two corrections are important and should be addressed by the author is the way the players in the market are competing. The table should read 128 PCIe lanes and 2 sockets max for EPYC. One only needs to look at AMD's EPYC one-socket page to understand why it is important: https://www.amd.com/en/products/epyc-7000-series-1...
The page is filled with marketing trying to convince customers that you are actually getting a two socket server in just one socket. And yes 128 PCIe lanes are available to the customer in these one socket products as part of the reasoning.
The max number of sockets is also important. AMD and probably Cavium are both arguing that 90% of the market only needs 1 or 2 sockets. Intel doesn't agree and provides 4 or more socket configurations.
The one socket argument centers around the I/O and memory channels available in the AMD processor. Even though the table just might have typos, reviewers around the web had a hard time believing that a single chip offered 128 lanes of PCIe connectivity and I found a lot of misinformation. It continues today.
DanNeely - Wednesday, May 23, 2018
AFAIK even for Intel, 1/2 socket machines are around 90% of their sales. They're just selling enough server chips in total that catering to the sliver of the market that does want 4/8-way configurations is still worth their time.
Arnulf - Sunday, May 27, 2018
Profit margins in that market segment are likely to be way higher, so it's worth it for Intel as long as there is no competition forcing prices downwards.
Ryan Smith - Wednesday, May 23, 2018
You are correct. Thanks for pointing that out.
Davenreturns - Wednesday, May 23, 2018
Thanks so much, Ryan.
vanilla_gorilla - Wednesday, May 23, 2018
"This is because the customers who have invested in expensive enterprise software (Oracle, SAP) are less sensitive to cost on the hardware side, so they are much less likely to change to a new hardware platform."I don't really follow the logic here. Just because you spend a lot more money on software doesn't mean you wouldn't try to save money on hardware. You don't only focus on one related expense because it's larger.