The Competition

We don't have a very large collection of enterprise SSDs, but we have a handful of other recent high-end datacenter drives to compare the PBlaze5 C916 against. Most of these drives were included in our recent roundup of enterprise SSDs. The PBlaze5 C900 is the immediate predecessor to the C916, and the D900 is the U.2 version. The Micron 9100 MAX is an older drive that uses the same Microsemi controller but planar MLC NAND, so it represents the high-end from two generations back.

From Intel we have the top-of-the-line Optane DC P4800X and the TLC-based 8TB P4510. The P4610 would be a closer match for the C916, as both are rated for 3 DWPD, while the P4510 is better suited for comparison against the PBlaze5 C910 in the 1 DWPD segment. However, the P4510 is based on the same 64L IMFT TLC that the PBlaze5 C916 uses, so aside from steady-state write speeds, the performance differences should mostly come down to the controllers.

The two Samsung drives are both based around the 8-channel Phoenix controller that is also used in their consumer NVMe product line. The 983 DCT occupies a decidedly lower market segment than the Memblaze drives, but the 983 ZET is a high-end product with Samsung's specialized low-latency Z-NAND flash memory. Samsung's PM1725b is their current closest competitor to the PBlaze5 C916, with a PCIe x8 interface and 3 DWPD rating. However, there's no retail version of the PM1725b so samples are harder to come by.

Test System

Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.
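
As a side note on checking this topology from software: on Linux, each NVMe controller's sysfs entry exposes which NUMA node (and therefore which CPU socket) its PCIe lanes belong to. The short Python sketch below is a generic illustration using standard sysfs paths, not a script from our test suite.

    # Print each NVMe controller's PCI address and NUMA node, which on a
    # two-socket system like this one reveals which CPU the drive hangs off.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
        pci_dev = (ctrl / "device").resolve()               # PCI device directory
        numa = (pci_dev / "numa_node").read_text().strip()  # 0 or 1 on a two-socket box
        print(f"{ctrl.name}: PCI {pci_dev.name}, NUMA node {numa}")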

Enterprise SSD Test System
System Model    Intel Server R2208WFTZS
CPU             2x Intel Xeon Gold 6154 (18C, 3.0GHz)
Motherboard     Intel S2600WFT
Chipset         Intel C624
Memory          192GB total, Micron DDR4-2666 16GB modules
Software        Linux kernel 4.19.8, fio version 3.12
Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.

The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, since the rack doesn't include exhaust fans of its own. The rack is currently installed in an unheated attic with ambient temperatures that provide a reasonable approximation of a well-cooled datacenter.

The test system is running a Linux kernel from the most recent long-term support branch, which brings in about a year's worth of work on Meltdown/Spectre mitigations, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic, with most of the IO workloads generated using fio. Server workloads vary too widely for a comprehensive suite of application-level benchmarks to be practical, so we instead analyze performance across a broad variety of IO patterns.
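
For illustration, here is a minimal Python sketch of the kind of fio invocation used to generate one of these synthetic workloads and pull the headline numbers out of its JSON output. The device path, runtime, and job parameters are placeholders rather than our actual job definitions.

    # Minimal sketch: run a 4kB random read workload with fio and report
    # IOPS and mean latency. All parameters below are illustrative.
    import json
    import subprocess

    def run_fio_randread(device="/dev/nvme0n1", queue_depth=1, runtime_s=60):
        """Run a 4kB random read workload and return (IOPS, mean latency in µs)."""
        cmd = [
            "fio", "--name=randread",
            f"--filename={device}",
            "--direct=1",                  # bypass the page cache
            "--ioengine=libaio",
            "--rw=randread", "--bs=4k",
            f"--iodepth={queue_depth}",
            f"--runtime={runtime_s}", "--time_based",
            "--output-format=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
        read_stats = json.loads(out)["jobs"][0]["read"]
        return read_stats["iops"], read_stats["lat_ns"]["mean"] / 1000.0

    iops, lat_us = run_fio_randread()
    print(f"QD1 random read: {iops:.0f} IOPS, {lat_us:.1f} µs mean latency")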

Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of a benchmark run has little effect on the results as long as the drive has been thoroughly preconditioned. Except where otherwise specified, drives were preconditioned with at least two full drive writes of 4kB random writes before any test that includes random writes, and with at least two full sequential write passes before all other tests.
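
As a sketch of what that preconditioning looks like in practice, the Python snippet below issues two complete write passes over a drive, either sequential or 4kB random. The device path is a placeholder, and the block sizes and queue depth are typical choices rather than the exact settings used here.

    # Sketch of the preconditioning step: at least two full-drive write
    # passes, sequential or 4kB random. /dev/nvme0n1 is a placeholder for
    # the drive under test; fio covers every block once per pass by
    # default, so --loops=2 yields two full drive writes.
    import subprocess

    def precondition(device="/dev/nvme0n1", mode="seq", passes=2):
        rw, bs = ("write", "128k") if mode == "seq" else ("randwrite", "4k")
        subprocess.run([
            "fio", f"--name=precondition-{mode}",
            f"--filename={device}",
            "--direct=1",
            "--ioengine=libaio",
            f"--rw={rw}", f"--bs={bs}",
            "--iodepth=32",
            f"--loops={passes}",
        ], check=True)

    precondition(mode="rand")   # before tests that include random writes
    precondition(mode="seq")    # before all other tests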

Our drive power measurements are conducted with a Quarch XLC Programmable Power Module. This device supplies power to the drive under test and simultaneously logs both voltage and current. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high-resolution view of drive power consumption. For most of our automated benchmarks we only care about averages over spans of a minute or more, so we configure the power module to average its measurements and report only about eight samples per second. Internally it still samples at 4µs intervals, so short-term power spikes are not missed.
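
The downsampling scheme is simple enough to sketch: average the instantaneous power over each reporting window, so brief spikes still contribute to the reported figure even though they never appear as individual samples. The NumPy example below uses synthetic data purely for illustration.

    # Sketch of the averaging scheme: sample voltage and current every 4 µs
    # (250 kHz) but report roughly eight averaged power readings per second.
    import numpy as np

    SAMPLE_RATE_HZ = 250_000                    # one raw sample every 4 µs
    REPORT_RATE_HZ = 8                          # averaged samples per second
    WINDOW = SAMPLE_RATE_HZ // REPORT_RATE_HZ   # 31,250 raw samples per report

    def downsample_power(voltage_v, current_a):
        """Average instantaneous power (V * I) over each reporting window."""
        power_w = np.asarray(voltage_v) * np.asarray(current_a)
        n = (len(power_w) // WINDOW) * WINDOW   # drop any partial window
        return power_w[:n].reshape(-1, WINDOW).mean(axis=1)

    # One second of fake 12 V / 0.5 A data with a 1 ms spike: the spike is
    # too short to show up as its own sample, but it still raises the average.
    v = np.full(SAMPLE_RATE_HZ, 12.0)
    i = np.full(SAMPLE_RATE_HZ, 0.5)
    i[1000:1250] = 2.0
    print(downsample_power(v, i))               # eight averaged watt readings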

13 Comments

  • MrRuckus - Wednesday, March 13, 2019

    Because the PCIe lane count is dictated by the processor, and Intel has been notoriously light on the number of PCIe lanes for their mainstream products. So is AMD for that matter (Ryzen). Threadripper has a large number of PCIe lanes, though, along with EPYC. Xeon also offers more than standard desktop processors. From reading around, it looks like cost is the main reason for the limited PCIe lanes.
  • DanNeely - Thursday, March 14, 2019

    And the reason for the limited PCIe lanes is that the number of them is controlled by socket size, and socket size is constrained by cost. (And once you get up to truly enormous sockets like LGA3647 or SP3, they take up so much physical space that smaller form factors like ITX become nearly impossible and highly wasteful, because you're unable to use most of the CPU's IO.)
  • mikmod - Tuesday, April 30, 2019

    It would be great to be able to buy such a drive for a high-end workstation at home, even if they're only meant for enterprise. Such write endurance and power loss protection cap... Is there any pricing revealed anywhere?
