The Western Digital WD Black 3D NAND SSD Review: EVO Meets Its Match
by Ganesh T S & Billy Tallis on April 5, 2018 9:45 AM EST
Sequential Read Performance
Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
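To make the pacing concrete, here is a minimal Python sketch of such a burst test. This is not our actual test harness; the device path is a placeholder, and a real harness would use O_DIRECT (or a tool like fio) rather than buffered reads.

```python
import time

BURST_SIZE = 128 * 1024 * 1024   # 128MB per burst
BLOCK_SIZE = 128 * 1024          # 128kB per operation, issued one at a time (QD1)
NUM_BURSTS = 8                   # eight bursts -> 1GB transferred in total
DUTY_CYCLE = 0.20                # the drive should be busy 20% of the time

def read_burst(f):
    """Read one burst as sequential 128kB operations; return busy seconds."""
    start = time.monotonic()
    for _ in range(BURST_SIZE // BLOCK_SIZE):
        f.read(BLOCK_SIZE)
    return time.monotonic() - start

speeds = []
# /dev/nvme0n1 is a hypothetical target; a real harness would bypass the
# page cache with O_DIRECT so the reads actually hit the drive.
with open("/dev/nvme0n1", "rb", buffering=0) as f:
    for _ in range(NUM_BURSTS):
        busy = read_burst(f)
        speeds.append(BURST_SIZE / busy / 1e6)  # MB/s for this burst
        # Choose idle time so that busy / (busy + idle) = 0.20, i.e. idle = 4x busy.
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)

print(f"Burst sequential read: {sum(speeds) / len(speeds):.0f} MB/s average")
```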
The burst sequential read performance of the WD Black is several times higher than that of last year's model, but doesn't come close to setting any records.
Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
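The scoring rule is worth spelling out: although the sweep runs all the way to QD32, only the three lowest queue depths feed the reported scores. A minimal sketch, using invented per-QD numbers purely for illustration:

```python
# Hypothetical sweep results: queue depth -> (MB/s, watts). These numbers are
# made up to illustrate the scoring; they are not measured data.
results = {
    1: (950.0, 3.1), 2: (1450.0, 3.6), 4: (1900.0, 4.0),
    8: (2300.0, 4.4), 16: (2600.0, 4.7), 32: (2650.0, 4.8),
}

# Only QD1, QD2 and QD4 are averaged into the reported scores, keeping the
# emphasis on the low queue depths that dominate real client workloads.
low_qd = [results[qd] for qd in (1, 2, 4)]
perf_score = sum(mbps for mbps, _ in low_qd) / len(low_qd)
power_score = sum(watts for _, watts in low_qd) / len(low_qd)

print(f"Performance score: {perf_score:.0f} MB/s")   # 1433 MB/s
print(f"Power score: {power_score:.2f} W")           # 3.57 W
```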
On the sustained sequential read test, the Samsung NVMe drives have a clear lead over the WD Black, which is tied with Toshiba's drives.
[Charts: Power Efficiency in MB/s/W; Average Power in W]
In terms of power efficiency for sequential reads, the WD Black is much closer to the top drives, with only the Samsung 960 PRO holding a clear lead.
The sequential read performance of the WD Black starts out rather poor at QD1 but grows steadily all the way up to QD16, by which point it is outperforming everything except the Optane SSD. The Toshiba XG5 shows similar scaling behavior but can't quite keep pace with the WD Black.
Sequential Write Performance
Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.
As with the burst random write test, our two samples show surprising differences in burst sequential write speeds. Depending on the sample, the WD Black/SanDisk Extreme PRO is either tied for second place with the Samsung 960 EVO, or almost tied with the PM981 that the 960 EVO's replacement will be based on.
Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
The sustained sequential write performance of the WD Black is not quite the best, but it is well ahead of everything except the best drives from Samsung and Intel. The WD Black is almost twice as fast as the Toshiba XG5 that uses essentially the same flash.
[Charts: Power Efficiency in MB/s/W; Average Power in W]
Despite not having the best performance on the sequential write test, the WD Black is the clear winner on the efficiency metric. With power draw of just over 4W it isn't close to being the least power-hungry drive, but it gets so much done on that budget that its efficiency score beats everything else.
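The efficiency metric is simply throughput divided by power, which is why a hungrier but much faster drive can come out on top. The numbers below are hypothetical, just to illustrate the arithmetic:

```python
# Invented numbers, purely to illustrate the MB/s-per-watt arithmetic.
fast_but_hungry = 1500.0 / 4.1   # 1500 MB/s at 4.1 W -> ~366 MB/s per W
slow_but_frugal = 800.0 / 3.0    #  800 MB/s at 3.0 W -> ~267 MB/s per W
print(f"{fast_but_hungry:.0f} vs {slow_but_frugal:.0f} MB/s per W")
```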
The sequential write speed of the WD Black is quite steady across the range of queue depths, with just a small increase from QD1 to QD2 and no signs of degraded performance from excessive garbage collection after the SLC cache is full.
69 Comments
Chaitanya - Thursday, April 5, 2018
Nice to see some good competition to Samsung products in the SSD space. Would like to see durability testing on these drives.
HStewart - Thursday, April 5, 2018
Yes, it's nice to have competition in this area, and the important thing to notice here is that a long-time disk drive manufacturer is changing its technology to keep up with changes in storage technology.
Samus - Thursday, April 5, 2018
Looks like WD's purchase of SanDisk is showing some payoff. If only Toshiba had taken advantage of the in-house talent at OCZ (which purchased Indilinx). The Barefoot controller showed a lot of promise and could have easily been updated to support low power states and TLC NAND. But they shelved it. I don't really know why Toshiba bought OCZ.
haukionkannel - Friday, April 6, 2018
Indeed! Samsung had performance supremacy for too long, and that led the company to up its prices (a natural development, though). Hopefully this better situation helps us customers within a reasonable time frame. There has been too much bad news for consumers on pricing in recent years.
XabanakFanatik - Thursday, April 5, 2018
Whatever happened to performance consistency testing?
Billy Tallis - Thursday, April 5, 2018
The steady state QD32 random write test doesn't say anything meaningful about how modern SSDs will behave on real client workloads. It used to be a half-decent test before everything was TLC with SLC caching and the potential for thermal throttling on M.2 NVMe drives. Now, it's impossible to run a sustained workload for an hour and claim that it tells you something about how your drive will handle a bursty real world workload. The only purpose that benchmark can serve today is to tell you how suitable a consumer drive is for (ab)use as an enterprise drive.
iter - Thursday, April 5, 2018
Most of the tests don't say anything meaningful about "how modern SSDs will behave on real client workloads". You can spend 400% more money on storage that will only get you a 4% performance improvement in real world tasks. So why not omit synthetic tests altogether while you are at it?
Billy Tallis - Thursday, April 5, 2018
You're alluding to the difference between storage performance and whole-system/application performance. A storage benchmark doesn't necessarily give you a direct measurement of whole-system or application performance, but done properly it will tell you how the choice of an SSD will affect the portion of your workload that is storage-dependent. Much like Amdahl's law, speeding up storage doesn't affect the non-storage bottlenecks in your workload.
That's not the problem with the steady-state random write test. The problem with that test is that real world usage doesn't put the drive in steady state, and the steady-state behavior is completely different from the behavior when writing in bursts to the SLC cache. So that benchmark isn't even applicable to the 5% or 1% of your desktop usage that is spent waiting on storage.
On the other hand, I have tried to ensure that the synthetic benchmarks I include actually are representative of real-world client storage workloads: focusing primarily on low queue depths, limiting the benchmark duration to realistic quantities of data transferred, and giving the drive idle time instead of running everything back to back. Synthetic benchmarks don't have to be the misleading marketing tests designed to produce the biggest numbers possible.
MrSpadge - Thursday, April 5, 2018
Good answer, Billy. It won't please everyone here, but that's impossible anyway.
iter - Thursday, April 5, 2018
People do want to see how much time it takes before the cache gives out. Don't presume to know what all people do with their systems. As I mentioned, 99% of the tests are already useless when it comes to indicating overall system performance. 99% of people don't need anything above a mainstream SATA SSD. So your point on excluding that one test is rather moot.
All in all, it seems you are intentionally hiding the weaknesses of certain products. Not cool. Run the tests, post the numbers; that's what you get paid for, and I don't think it is unreasonable to expect that you do your job. Two people pointed out the absence of that test, which is two more than those who explicitly stated they don't care about it, much less have anything against it. Statistically speaking, the test is of interest, and I highly doubt it will kill you to include it.