Introduction and Testbed Setup

Hard drives remain the storage medium of choice for applications where capacity and cost outweigh performance requirements. Vendors have also realized that enterprise hard drives are overkill for some of these applications, while the recently launched NAS-targeted drives do not deliver the necessary performance for them. To cater to that market, Western Digital introduced the WD Red Pro lineup a few months back. Last week, Seagate launched its competitor, the Enterprise NAS HDD.

We have already had comprehensive coverage of a number of 4 TB NAS drives and a few 6 TB ones. In this review, we will look at what the Seagate Enterprise NAS HDD (ST6000VN0001) brings to the market and how it compares against the other 6 TB drives that have been evaluated before.

The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while comparing the Seagate Enterprise NAS HDD against other drives targeting the NAS market. The drives that we will be looking at today are listed below.

  1. Seagate Enterprise NAS HDD 6 TB [ ST6000VN0001-1SF17Z ]
  2. Western Digital Red 6 TB [ WDC WD60EFRX-68MYMN0 ]
  3. Seagate Enterprise Capacity 3.5 HDD v4 6 TB [ ST6000NM0024-1HT17Z ]
  4. HGST Ultrastar He6 6 TB [ HUS726060ALA640 ]

Prior to proceeding with the actual review, it must be made clear that the above drives do not target the same specific market. For example, the WD Red targets 1- to 8-bay NAS systems in the tower form factor. The Seagate Enterprise NAS HDD is meant for rackmount units with up to 16 bays, but it is not intended to be a replacement for drives such as the Seagate Enterprise Capacity v4 meant for higher-end enterprise use. The HGST Ultrastar He6 targets capacity-sensitive datacenter applications.

Testbed Setup and Testing Methodology

Our NAS drive evaluation methodology consists of putting the units to the test in both DAS and NAS environments. We start off with a feature set comparison of the various drives, followed by a look at raw performance when connected directly to a SATA 6 Gbps port. In the same PC, we also evaluate the performance of the drive using some aspects of our direct attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configure three drives of each model in a RAID-5 volume and process selected benchmarks from our standard NAS review methodology. Our NAS drive testbed supports both SATA and SAS drives, but our DAS testbed does not, so only SATA drives are subjected to the DAS benchmarks.
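
As an aside, the essence of the raw-drive portion of the testing can be illustrated with a short script. The sketch below is purely illustrative and is not the tooling used for this review; the file path, block sizes and sample counts are hypothetical, and a real benchmark would bypass the OS page cache (direct I/O) to avoid inflated numbers.

```python
# Illustrative sketch only: a minimal Python timing harness comparing
# sequential and small random read throughput on a large test file.
# TEST_FILE, BLOCK and SAMPLES are hypothetical values, and the OS page
# cache is NOT bypassed here, so results are only indicative.
import os
import random
import time

TEST_FILE = r"D:\testfile.bin"   # hypothetical path on the drive under test
BLOCK = 1024 * 1024              # 1 MiB reads for the sequential pass
SAMPLES = 256                    # number of 4 KiB random reads

def sequential_read_mbps(path, block=BLOCK, total=256 * 1024 * 1024):
    """Read `total` bytes from the start of the file in `block`-sized chunks."""
    start = time.perf_counter()
    read = 0
    with open(path, "rb", buffering=0) as f:
        while read < total:
            chunk = f.read(block)
            if not chunk:
                break
            read += len(chunk)
    return read / (time.perf_counter() - start) / 1e6

def random_read_mbps(path, samples=SAMPLES, block=4096):
    """Read `samples` 4 KiB blocks at random offsets (seek-dominated on HDDs)."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    read = 0
    with open(path, "rb", buffering=0) as f:
        for _ in range(samples):
            f.seek(random.randrange(0, size - block))
            read += len(f.read(block))
    return read / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    print(f"sequential: {sequential_read_mbps(TEST_FILE):.1f} MB/s")
    print(f"random 4K : {random_read_mbps(TEST_FILE):.2f} MB/s")
```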

We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.

AnandTech DAS Testbed Configuration
Motherboard Asus Z97-PRO Wi-Fi ac ATX
CPU Intel Core i7-4790
Memory Corsair Vengeance Pro CMY32GX3M4A2133C11 32 GB (4x 8GB) DDR3-2133 @ 11-11-11-27
OS Drive Seagate 600 Pro 400 GB
Optical Drive Asus BW-16D1HT 16x Blu-ray Write (w/ M-Disc Support)
Add-on Card Asus Thunderbolt EX II
Chassis Corsair Air 540
PSU Corsair AX760i 760 W
OS Windows 8.1 Pro
Thanks to Asus and Corsair for the build components

In the above testbed, the hot swap bays of the Corsair Air 540 deserve special mention: they allowed us to cycle drives through the benchmarking process quickly and efficiently. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. It is very similar to the unit we reviewed last year, except that it has a slightly faster CPU, more RAM and support for both SATA and SAS drives.

The NAS setup itself was subjected to benchmarking using our standard NAS testbed.

AnandTech NAS Testbed Configuration
Motherboard Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU 2 x Intel Xeon E5-2630L
Coolers 2 x Dynatron R17
Memory G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive OCZ Technology Vertex 4 128GB
Secondary Drive OCZ Technology Vertex 4 128GB
Tertiary Drive OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)
Other Drives 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis SilverStoneTek Raven RV03
PSU SilverStoneTek Strider Plus Gold Evolution 850W
OS Windows Server 2008 R2
Network Switch Netgear ProSafe GSM7352S-200

Thank You!

We thank the following companies for helping us out with our NAS testbed:

Comments

  • Communism - Wednesday, December 10, 2014 - link

    Seagate 1TB per platter drives have been the fastest (per RPM) ever since their introduction.

    Compare them to WD Blacks or HGST drives with 1 TB per platter, and the Seagate drives have been faster in every single sequential benchmark.

    The cache size differential between the competing drives has little to do with the sequential results.
  • Laststop311 - Thursday, December 11, 2014 - link

    The Seagate did have like 20-30 MB/sec faster sequential transfers, but the He6 has 2-3 milliseconds faster access times. Personally, I'd rather have the 2-3 milliseconds lower access time over 20-30 MB/sec higher sequential transfers. Not to mention the lower power use, less heat, less noise and Hitachi's unrivaled reliability. If you are building a dense NAS setup, the lower heat per drive really helps out. I feel like you would notice the lower latency more than, say, 160 MB/sec vs 130 MB/sec.
  • MrSpadge - Thursday, December 11, 2014 - link

    "The cache size differential between the competing drives has little to do with the sequential results."

    I know. That's exactly why I replied this to Ganesh's

    "... Seagate Enterprise Capacity v4 vs. the WD Red Pro at the 4 TB capacity point. Both of them use the same number of platters, have the same rotational speed. The only difference was the cache size."
  • romrunning - Wednesday, December 10, 2014 - link

    All of the performance test charts shown MB/sec generally in the hundreds. However, the "Real Life 60% Random 65% Reads" test shows only single digits in MB/s. Is this a chart labeling problem? If not, why isn't there any explanation about the huge difference?
  • DanNeely - Wednesday, December 10, 2014 - link

    HDDs are very fast for sequential reads/writes because as soon as the drive finishes reading/writing one sector, the next is underneath the heads. They're horribly slow for random IO because most of the time is spent moving the read/write heads into place rather than actually transferring data (see the rough worked numbers after the comments). This has been the case with every HDD for decades, possibly all the way back to the beginning, though I'm not familiar with the limitations of very old designs. The main advantage of SSDs is that, because they don't have to move drive heads around, they can be many times faster in random IO than a magnetic HDD. (They're still faster in sequential IO too; read the intro to the SSD articles on this site from a few years ago for details about their architecture.)
  • romrunning - Wednesday, December 10, 2014 - link

    I agree with you, but that is a serious drop-off. Shouldn't an intelligent NAS be able to have different drives look for different parts of those reads with some type of large LUT?
  • MrSpadge - Wednesday, December 10, 2014 - link

    You've just invented Raid 0 / 5 / whatever :)

    For small files, the typical transfer rates of HDDs are in the low single-digit range. Even if you have 4 of them and performance scales perfectly, that's still very slow. That's why a good SSD on SATA 2 can still be 10 to 100 times faster than an HDD, depending on the actual usage case, even though their maximum transfer rates are comparable.
  • romrunning - Thursday, December 11, 2014 - link

    That's what I was thinking - the test was performed on a 3-drive RAID-5 array in the QNAP, right? So why isn't its RAID controller more intelligent?
  • Supercell99 - Thursday, December 11, 2014 - link

    Honestly, most serious enterprises do not use SATA HDDs for production servers. The queue depth is only 32 vs 256 for SAS drives. SATA drives are fine for backups, they just can't provide the IOPS an enterprise server running multiple VMs or DBs needs. You will still need SAS for better IOPS in the HDD storage arena. vSphere VSAN will choke on a SATA-based disk system if a host dies.
  • cm2187 - Thursday, December 11, 2014 - link

    Most clouds use SATA drives.
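
To put rough numbers on the seek-time explanation in the thread above, here is a back-of-the-envelope model of random-IO throughput on a 7200 RPM drive. The seek time and sequential rate used below are typical assumed values, not figures measured in this review.

```python
# Rough back-of-the-envelope model (assumed, not measured, figures) of why a
# 7200 RPM drive that streams ~150 MB/s sequentially drops to single-digit
# MB/s on small random accesses: each random 4 KiB read pays a seek plus,
# on average, half a rotation before any data moves.
avg_seek_ms = 8.5                       # typical 7200 RPM average seek (assumption)
half_rotation_ms = 0.5 * 60_000 / 7200  # ~4.17 ms average rotational latency
transfer_ms = 4 / (150 * 1000) * 1000   # time to move 4 KiB at 150 MB/s, ~0.03 ms

per_io_ms = avg_seek_ms + half_rotation_ms + transfer_ms
iops = 1000 / per_io_ms
random_mbps = iops * 4 / 1024           # 4 KiB transferred per IO

print(f"~{iops:.0f} IOPS -> ~{random_mbps:.2f} MB/s at 4 KiB random")
# ~79 IOPS -> ~0.31 MB/s: orders of magnitude below sequential throughput,
# which is why the random-heavy NAS workloads land in the single digits.
```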
