Introduction and Testbed Setup

The SMB / SOHO / consumer NAS market has been experiencing rapid growth over the last few years. With PC sales declining and SSDs becoming more affordable, hard drive vendors have scrambled to make up the deficit and grow revenue by targeting the NAS market. The good news for them is that this growth is expected to accelerate in the near future, thanks to the increasing amount of user-generated data from mobile devices.

Back in July 2012, Western Digital started the trend of hard drive manufacturers bringing out dedicated units for the burgeoning SOHO / consumer NAS market with its 3.5" Red lineup, whose firmware was tuned for 24x7 operation in SOHO and consumer NAS units. 1 TB, 2 TB and 3 TB versions were available at launch. Seagate later jumped into the fray with a hard drive series carrying similar firmware features. Over the last two years, both vendors have been refining the firmware and increasing capacities. On the enterprise side, hard drive vendors supply different models for different applications, all of them quite suitable for 24x7 NAS usage. For example, the WD Re and Seagate Constellation ES are tuned for durability under heavy workloads, while the WD Se and Seagate Terascale are targeted at applications where scalability and capacity are important.

The enterprise segment is usually quite conservative when it comes to capacity, but datacenter / cloud computing requirements have made capacity per drive a primary weapon in warding off all-flash solutions. HGST, a Western Digital subsidiary, was the first vendor to bring a 6 TB hard drive to market. Its sealed, helium-filled design supports up to seven platters (instead of the five usually possible in air-filled units), bumping capacity to 6 TB within the standard 3.5" drive height. Seagate adopted a six-platter design for its Enterprise Capacity v4 6 TB drive. Today, Western Digital launched the first NAS-specific 6 TB drive for SOHO / home consumers, the WD Red 6 TB. The expansion of the Red portfolio gives us an opportunity to see how the 6 TB version stacks up against other offerings targeting the NAS market.

The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while evaluating three different hard drives targeting the NAS market:

  • Western Digital Red 6 TB [ WDC WD60EFRX-68MYMN0 ]
  • Seagate Enterprise Capacity 3.5 HDD v4 6 TB [ ST6000NM0024-1HT17Z ]
  • HGST Ultrastar He6 6 TB [ HUS726060ALA640 ]

Each of these drives targets a slightly different market. The WD Red is aimed mainly at SOHO and home consumers, the Seagate Enterprise Capacity emphasizes ruggedness under heavy workloads, and the HGST Ultrastar aims at data center and cloud storage applications with a balance of performance and power efficiency.

Testbed Setup and Testing Methodology

Unlike in our previous evaluation of 4 TB drives, we managed to obtain enough samples of the new drives to test them in a proper NAS environment. As usual, we start off with a feature set comparison of the three drives, followed by a look at raw performance when each drive is connected directly to a SATA 6 Gbps port. Using the same PC, we also evaluate each drive with selected aspects of our direct-attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configured three drives in a RAID-5 volume and ran selected benchmarks from our standard NAS review methodology.

We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.

AnandTech DAS Testbed Configuration
Motherboard: Asus Z97-PRO Wi-Fi ac (ATX)
CPU: Intel Core i7-4790
Memory: Corsair Vengeance Pro CMY32GX3M4A2133C11, 32 GB (4x 8 GB), DDR3-2133 @ 11-11-11-27
OS Drive: Seagate 600 Pro 400 GB
Optical Drive: Asus BW-16D1HT 16x Blu-ray writer (w/ M-Disc support)
Add-on Card: Asus Thunderbolt EX II
Chassis: Corsair Air 540
PSU: Corsair AX760i 760 W
OS: Windows 8.1 Pro
Thanks to Asus and Corsair for the build components.

In the above testbed, the hot-swap bays of the Corsair Air 540 deserve special mention; they made it quick and easy to cycle the drives through benchmarking. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. It is very similar to the unit we reviewed last year, except with a slightly faster CPU, more RAM, and support for both SATA and SAS drives.

The NAS setup itself was subjected to benchmarking using our standard NAS testbed.

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS (dual LGA2011, SSI-EEB)
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x 8 GB), CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128 GB
Secondary Drive: OCZ Technology Vertex 4 128 GB
Tertiary Drive: OCZ Z-Drive R4 CM88 (1.6 TB PCIe SSD)
Other Drives: 12 x OCZ Technology Vertex 4 64 GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 quad-GbE port network adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850 W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed:

 

Comments

  • jabber - Tuesday, July 22, 2014 - link

    Quality of HDDs is plummeting. The mech drive makers have lost interest; they know the writing is on the wall. Five years ago it was rare to see a HDD less than 6 months old fail, but now I regularly get in drives with bad sectors or failed mechanics that are less than 6-12 months old.

    I personally don't risk using any drives over a terabyte for my own data.
  • asmian - Tuesday, July 22, 2014 - link

    You're not seriously suggesting that WD RE drives are the same as Reds/Blacks or whatever colour but with a minor firmware change, are you? If they weren't of significantly better build quality to back up the published numbers, I'm sure we'd have seen a court case by now, and the market for them would have dried up long ago.

    On the subject of my rebuild failure calculation, I wonder whether that is exactly what happened to the failing drive in the article: an unrecoverable bit read error during an array rebuild, making the NAS software flag the drive as failed or failing, even though the drive subsequently appears to perform/test OK. Nothing to do with compatibility, just verification of their unsuitability for use in arrays, since their size increases the risk of bit read errors occurring at critical moments.
  • NonSequitor - Tuesday, July 22, 2014 - link

    It's more likely that they are binned than that they are manufactured differently. Think of it this way: you manufacture a thousand 4TB drives, then you take the 100 with the lowest power draw and vibration. Those are now RE drives. Then the rest become Reds.

    Regarding the anecdotes of users with several grouped early failures: I tend to blame some of that on low-dollar Internet shopping, and some of it on people working on hard tables. It takes very little mishandling to physically damage a hard drive, and even if the failure isn't immediate, a flat spot in a bearing will eventually lead to serious failure.
  • Iketh - Tuesday, July 22, 2014 - link

    LOL no
  • m0du1us - Friday, July 25, 2014 - link

    @NonSequitor This is exactly how enterprise drives are chosen, as well as using custom firmware.
  • LoneWolf15 - Friday, July 25, 2014 - link

    Aren't most of our drives fluid-dynamic bearing rather than ball bearing these days?
  • asmian - Wednesday, July 23, 2014 - link

    Just in case anyone is still denying the inadvisability of using these 6TB consumer-class Red drives in a home NAS, or any RAID array that's not ZFS, here's the maths:

    6TB is approx 0.5 x 10^14 bits. That means if you read the entire disk (as you have to do to rebuild a parity or mirrored array from the data held on all the remaining array disks) then there's a 50% chance of a disk read error for a consumer-class disk with 1 in 10^14 unrecoverable read error rate (check the maker's specs). Conversely, that means there's a 50% chance that there WON'T be a read error.

    Let's say you have a nice 24TB RAID6 array with 6 of these 6TB Red drives - four for data, two parity. RAID6, so good redundancy, right? Must be safe! One of your disks dies. You still have a parity (or two, if it was a data disk that died) spare, so surely you're fine? Unfortunately, the chance of rebuilding the array without ANY of the disks suffering an unrecoverable read error is: 50% (for the first disk) x 50% (for the second) x 50% (for the third) x 50% (for the fourth) x 50% (for the fifth). Yes, that's a **3.125%** chance of rebuilding safely (see the first sketch after the comments for this arithmetic). Most RAID controllers will barf and stop the rebuild on the first error from a disk and declare it failed for the array. Would you go to Vegas to play those odds of success?

    If those 6TB disks had been Enterprise-class drives (say WD RE, or the HGST and Seagates reviewed here) specifically designed and marketed for 24/7 array use, they have a 1 in 10^15 unrecoverable error rate, an order of magnitude better. How does the maths look now? Each disk now has a 5% chance of erroring during the array rebuild, or a 95% chance of not. So the rebuild success probability is 95% x 95% x 95% x 95% x 95% - that's about 77.4% FOR THE SAME SIZE OF DISKS.

    Note that this success/failure probability is NOT PROPORTIONAL to the size of the disk and the URE rate - it is a POWER function that squares, then cubes, etc. given the number of disks remaining in the array. That means that using smaller disks than these 6TB monsters is significant to the health of the array, and so is using disks with much better URE figures than consumer-class drives, to an enormous extent as shown by the probability figure above.

    For instance, suppose you'd used an eight-disk RAID6 of 6TB Red drives to get the same 24TB array in the first example. Very roughly your non-error probability per disk full read is now 65%, so the probability of no read errors over a 7-disk rebuild is roughly 5%. Better than 3%, but not by much. However, all other things being equal, using far smaller disks (but more of them) to build the same size of array IS intrinsically safer for your data.

    Before anyone rushes to say none of this is significant compared to the chance of a drive mechanically failing in other ways, sure, that's an ADDITIONAL risk of array failure to add to the pretty shocking probabilities above. Bottom line, consumer-class drives are intrinsically UNSAFE for your data at these bloated multi-terabyte sizes, however much you think you're saving by buying the biggest available, since the build quality has not increased in step with the technology cramming the bits into smaller spaces.
  • asmian - Wednesday, July 23, 2014 - link

    Apologies for proofing error: "For instance, suppose you'd used an eight-disk RAID6 of 6TB Red drives" - obviously I meant 4TB drives.
  • KAlmquist - Wednesday, July 23, 2014 - link

    "6TB is approx 0.5 x 10^14 bits. That means if you read the entire disk (as you have to do to rebuild a parity or mirrored array from the data held on all the remaining array disks) then there's a 50% chance of a disk read error for a consumer-class disk with 1 in 10^14 unrecoverable read error rate (check the maker's specs)."

    What you are overlooking is that even though each sector contains 4096 bytes, or 32768 bits, it doesn't follow that to read the contents of the entire disk you have to read the contents of each sector 32768 times. To the contrary, to read the entire disk, you only have to read each sector once.

    Taking that into account, we can recalculate the numbers. A 5.457 TiB (6 TB) drive contains 1,464,843,750 sectors. If the probability of an unrecoverable read error is 1 in 10^14 per sector read, and the probability of a read error on one sector is independent of the probability of a read error in any other sector, then the probability of getting a read error at some point when reading the entire disk is 0.00146%. I suspect that the probability of getting a read error in one sector is probably not independent of the probability of getting a read error in any other sector, meaning that the 0.00146% figure is too high. But sticking with that figure, across the five surviving disks in your example it gives us a 99.99268% probability of rebuilding safely (reproduced in the first sketch after the comments).

    I don't know of anyone who would dispute that the correct way for a RAID card to handle an unrecoverable read error is to calculate the data that should have been read, try to write it to the disk, and remove the disk from the array only if the write fails. (This assumes that the data can be computed from data on the other disks, as is the case in your example of rebuilding a RAID 6 array after one disk has been replaced; the second sketch after the comments spells out this logic.) Presumably a lot of RAID card vendors assume that unrecoverable read errors are rare enough that the benefits of doing this right, rather than just assuming that the write will fail without trying, are too small to be worth the cost.
  • asmian - Wednesday, July 23, 2014 - link

    That makes sense IF (and I don't know whether it is) the URE rate is independent of the number of bits being read. If you read a sector you are reading a LOT of bits. You are suggesting that you would get 1 single URE event on average in every 10^14 sectors read, not in every 10^14 BITS read... which is a pretty big assumption and not what the spec seems to state. I'm admittedly suggesting the opposite extreme, where the chance of a URE is proportional to the number of bits being read (which seems more logical to me). Since you raise this possibility, I suspect the truth is likely somewhere in the middle, but I don't know enough about how UREs are calculated to make a judgement. Hopefully someone else can weigh in and shed some light on this.

    Ganesh has said that previous reviews of the Red drives mention they are masking the UREs by using a trick: "the drive hopes to tackle the URE issue by silently failing / returning dummy data instead of forcing the rebuild to fail (this is supposed to keep the RAID controller happy)." That seems incredibly scary if it is throwing bad data back in rebuild situations instead of admitting it has a problem, potentially silently corrupting the array. That for me would be a total deal-breaker for any use of these Red drives in an array, yet again NOT mentioned in the review, which is apparently discussing their suitability for just that... <sigh>
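For readers who want to poke at the arithmetic being debated above, below is a minimal back-of-the-envelope sketch in Python (an editorial illustration, not anything from the article or the commenters) that evaluates both readings of a "1 in 10^14" URE spec: per bit read, as asmian assumes, and per 4 KB sector read, as KAlmquist assumes. One caveat: the comments approximate the per-disk failure chance by the expected number of errors (hence the 50% and 95% figures); treating each bit as an independent trial instead gives exp(-0.48) ≈ 62% and exp(-0.048) ≈ 95%, which softens the rebuild numbers somewhat without changing the comparison. The function names, the 4096-byte sector size and the six-drive RAID6 scenario are assumptions made for illustration.

import math

def p_disk_ok(trials, ure_rate):
    # Probability of reading one whole disk with zero UREs, treating each
    # trial (a bit, or a sector read, depending on the interpretation) as
    # an independent event at the quoted rate.
    return math.exp(trials * math.log1p(-ure_rate))

def p_rebuild_ok(per_disk_ok, surviving_disks):
    # Every surviving disk must be read end-to-end without a URE.
    return per_disk_ok ** surviving_disks

TB = 10 ** 12
SECTOR_BYTES = 4096          # assumed 4 KB sectors, as in the comments
capacity = 6 * TB            # one 6 TB drive
survivors = 5                # six-drive RAID6 rebuild after losing one drive

# Per-bit reading of the spec (asmian's model): ~4.8e13 bits per disk.
consumer = p_disk_ok(capacity * 8, 1e-14)
enterprise = p_disk_ok(capacity * 8, 1e-15)

# Per-sector-read reading of the spec (KAlmquist's model).
per_sector = p_disk_ok(capacity // SECTOR_BYTES, 1e-14)

print(f"per-bit,    1e-14: disk {consumer:.1%},    rebuild {p_rebuild_ok(consumer, survivors):.1%}")
print(f"per-bit,    1e-15: disk {enterprise:.1%},    rebuild {p_rebuild_ok(enterprise, survivors):.1%}")
print(f"per-sector, 1e-14: disk {per_sector:.5%}, rebuild {p_rebuild_ok(per_sector, survivors):.5%}")

Under the per-sector reading this reproduces KAlmquist's 99.99268% figure; under the per-bit reading the five-disk rebuild lands near 9% for a consumer-class (10^-14) drive and 79% for an enterprise-class (10^-15) drive, versus the 3.125% and 77.4% quoted above.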
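Separately, the controller behaviour KAlmquist describes (recompute the block that failed to read, try to rewrite it, and only then eject the disk) boils down to a small piece of decision logic. The sketch below is purely illustrative: no vendor's firmware or RAID stack is being quoted, and the class and function names are invented.

class WriteError(Exception):
    pass

def handle_rebuild_ure(disk, lba, reconstruct):
    # Decide what to do when `disk` reports an unrecoverable read error at
    # `lba` in the middle of a rebuild. `reconstruct(lba)` returns the block
    # recomputed from the other disks' data/parity, or None if no redundancy
    # remains for that block.
    block = reconstruct(lba)
    if block is None:
        return "rebuild failed"      # nothing left to recompute the block from
    try:
        disk.write(lba, block)       # a successful rewrite lets the drive
        return "sector repaired"     # reallocate the bad sector in place
    except WriteError:
        return "disk ejected"        # only now mark the disk as failed

# Toy usage: a disk whose rewrite succeeds, with redundancy still available.
class ToyDisk:
    def write(self, lba, data):
        pass                         # pretend the reallocation succeeds

print(handle_rebuild_ure(ToyDisk(), lba=1234, reconstruct=lambda lba: b"\x00" * 4096))
# -> sector repaired

The lazier behaviour the comment criticises would simply return "disk ejected" on the first read error, which is what drives the pessimistic rebuild odds discussed above.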
