Introduction

The SMB / SOHO / consumer NAS market is expected to see strong growth over the next few years. With PC sales declining and SSDs becoming increasingly affordable, hard drive vendors have scrambled to make up for the shortfall and grow revenue by targeting the NAS market. Both Western Digital and Seagate have introduced hard drive models specifically catering to 1-5 bay consumer NAS units. Seagate took the lead in the capacity segment with the launch of the 4 TB NAS HDD in June 2013; Western Digital achieved parity with the launch of the second-generation WD Red models yesterday.

The higher end SATA DAS/NAS storage segments have been served by 4 TB models for quite some time now. The WD Re (targeting applications where durability under heavy workloads is important) has been available in a 4 TB version since September 2012, while the WD Se (targeting applications where scalability and capacity are important) was introduced in May 2013.

The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while evaluating four different hard drives targeting the NAS market:

  • Western Digital Red 4 TB [ WDC WD40EFRX-68WT0N0 ]
  • Seagate 4 TB NAS HDD [ ST4000VN000-1H4168 ]
  • Western Digital Se 4 TB [ WDC WD4000F9YZ-09N20L0 ]
  • Western Digital Re 4 TB [ WDC WD4000FYYZ-01UL1B0 ]

While the WD Red and Seagate NAS HDD compete head-to-head in the same market segment (consumer / SOHO NAS units with 1-5 bays), the WD Re and WD Se are positioned as complementary offerings for higher-end NAS units. Over the course of this article, we will also try to determine how those two differ.

Western Digital provided us with at least two drives each of the WD Red, WD Se and WD Re, but Seagate came forward with only one disk. Readers of our initial WD Red 3 TB review will remember that we evaluated those disks in multiple NAS units with multiple RAID configurations. Unfortunately, Seagate's sampling forced us to rethink our review strategy for these NAS drives. We start off with a feature set comparison of the four drives, followed by a look at raw performance when connected directly to a SATA 6 Gbps port. A 2-bay Intel Atom-based NAS (LenovoEMC PX2-300D) with a single bay occupied is then used to evaluate performance in a networked environment. Power consumption numbers and other factors are addressed in the final section, with the networked configuration as the point of reference.

We used two testbeds in our evaluation, one for benchmarking the raw drive performance and the other for evaluating performance when placed in a NAS unit.

SATA Drive Benchmarking Testbed Setup
Processor Intel i7-3770K CPU - 4C/8T - 3.50GHz, 8MB Cache
Motherboard Asus P8H77-M Pro
OS Hard Drive Seagate Barracuda XT 2 TB
Secondary Drives Corsair Performance 3 Series™ P3-128 128 GB SSD
WD40EFRX / ST4000VN000 / WD4000F9YZ / WD4000FYYZ
Memory G.SKILL ECO Series 4GB (2 x 2GB) SDRAM DDR3 1333 (PC3 10666) F3-10666CL7D-4GBECO CAS 7-7-7-21
Case Antec VERIS Fusion Remote Max
Power Supply Antec TruePower New TP-550 550W
Operating System Windows 7 Ultimate x64

Our NAS testbed was built to evaluate NAS units when subjected to access from multiple clients (virtual machines). The benchmarks presented in this review were run on one of the twenty-five available Windows 7 VMs.

AnandTech NAS Testbed Configuration
Motherboard Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU 2 x Intel Xeon E5-2630L
Coolers 2 x Dynatron R17
Memory G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x8GB) CAS 10-10-10-30
OS Drive OCZ Technology Vertex 4 128GB
Secondary Drive OCZ Technology Vertex 4 128GB
Tertiary Drive OCZ RevoDrive Hybrid (1TB HDD + 100GB NAND)
Other Drives 12 x OCZ Technology Vertex 4 64GB (Offline in the Host OS)
Network Cards 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis SilverStoneTek Raven RV03
PSU SilverStoneTek Strider Plus Gold Evolution 850W
OS Windows Server 2008 R2
Network Switch Netgear ProSafe GSM7352S-200

The hard drives under the scanner were placed in a single-drive configuration in the Intel Atom D525-based LenovoEMC PX2-300D. The network links of the PX2-300D were bonded in 802.3ad (LACP) mode, but that shouldn't have any bearing on the results, since we are looking at a single-client scenario over a single GbE link.

Comments

  • dingetje - Thursday, September 5, 2013 - link

    thanks Ganesh
  • Arbie - Wednesday, September 4, 2013 - link

    Ignorant here, but I want to raise the issue. In casual research on a home NAS w/RAID I ran across a comment that regular drives are not suitable for that service because of their threshold for flagging errors. IIRC the point was that they would wait longer to do so, and in a RAID situation that could make eventual error recovery very difficult. Drives designed for RAID use would flag errors earlier. I came away mostly with the idea that you should only build a NAS / RAID setup with drives (e.g. the WD Red series) designed for that.

    Is this so?
  • fackamato - Wednesday, September 4, 2013 - link

    Arbie, good point. You're talking about SCT ERC; some consumer drives allow you to alter that timeout, some don't. (There's a sketch of querying and setting it after the comments.)
  • brshoemak - Wednesday, September 4, 2013 - link

    A VERY broad and simplistic explanation is that "RAID enabled" drives will limit the amount of time they spend attempting to correct an error. The RAID controller needs to stay in constant contact with the drives to make sure the array's integrity is intact.

    A normal consumer drive will spend much more time trying to correct an internal error. During this time, the RAID controller cannot talk to the drive because it is otherwise occupied. Because the drive is no longer responding to requests from the RAID controller (as it's now doing its own thing), the controller drops the drive out of the array, which can be a very bad thing.

    Different ERC (error recovery control) methods like TLER and CCTL limit the time a drive spends trying to correct the error so it will be able to respond to requests from the RAID controller and ensure the drive isn't dropped from the array.

    Basically a RAID controller is like "yo dawg, you still there?" - With TLER/CCTL the drive's all like "yeah I'm here" so everything is cool. Without TLER the drive might just be busy fixing the toilet and takes too long to answer so the RAID controller just assumes no one is home and ditches its friend.
  • tjoynt - Wednesday, September 4, 2013 - link

    brshoemak, that was the clearest and most concise (not to mention funniest) explanation of TLER/CCTL that I've come across. For some reason, most people tend to confuse things and make it more complicated than it is.
  • ShieTar - Wednesday, September 4, 2013 - link

    I can't really follow that reasoning; maybe I am missing something. First off, error checking should in general be done by the RAID system, not by the drive electronics. Second, you can always successfully recover the RAID after replacing one single drive. So the only way to run into a problem is not noticing damage to one drive before a second drive is also damaged. I've been using cheap drives in RAID-1 configurations for over a decade now, and while several drives have died in that period, I've never had a RAID complain about not being able to restore.
    Maybe it is only relevant on very large RAIDs seeing very heavy use? I agree, I'd love to hear somebody from AT comment on this risk.
  • DanNeely - Wednesday, September 4, 2013 - link

    "you can always successfully recover the RAID after replacing one single drive."

    This isn't true. If you get any errors during the rebuild, and you only had a single redundancy drive for the data being recovered, the RAID controller will mark the array as unrecoverable. Current drive capacities are high enough that RAID 5 has basically been dead in the enterprise for several years, because the risk of losing everything after a single drive failure is too high (the arithmetic is sketched after the comments).
  • Rick83 - Wednesday, September 4, 2013 - link

    If you have a home usage scenario though, you can schedule surface scans to run every other day, in which case this becomes essentially a non-issue (a scrub sketch follows the comments). At worst you'll lose a handful of KB or so.

    And of course you have backups to cover anything going wrong on a separate array.

    Of course, going RAID 5 beyond 6 disks is slightly reckless, but that's still 20TB.
    By the time you manage that kind of data, ZFS is there for you.
  • Dribble - Wednesday, September 4, 2013 - link

    My experience for home usage is that RAID 1, or no RAID at all plus regular backups, is best. RAID 5 is too complex for its own good and never seems to be as reliable or to repair as it's meant to. Because data is spread over several disks, if it gets upset and goes wrong it's very hard to repair, and you can lose everything. Also, because you think you are safe, you don't back up as often as you should, so you suffer the most.

    RAID 1 or no RAID means a single disk has a full copy of the data, so it is most likely to work if you run a disk repair program over it. No RAID also focuses the mind on backups, so if it goes, chances are you'll have a very recent backup and lose hardly any data.
  • tjoynt - Wednesday, September 4, 2013 - link

    ++ this too. If you *really* need volume sizes larger than 4TB (the size of a single drive or RAID-1), you should bite the bullet and get a pro-class RAID-6 or RAID-10 system, or use a software solution like ZFS or Windows Server 2012 Storage Spaces (don't know how reliable that is, though). Don't mess with consumer-level striped-parity RAID: it will fail when you most need it. Even pro-class hardware fails, but it does so more gracefully, so you can usually recover your data in the end.
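
On the SCT ERC timeout discussed above: smartmontools can query, and on drives that permit it, set, the error recovery timeout. Below is a minimal sketch (not part of our test procedure) that drives smartctl from Python; it assumes a Linux system with smartmontools installed, root privileges, and a drive at /dev/sda.

    import subprocess

    def get_scterc(device):
        # 'smartctl -l scterc <dev>' reports the current error recovery
        # timeouts in tenths of a second, or notes that the drive does
        # not support the SCT ERC command.
        result = subprocess.run(["smartctl", "-l", "scterc", device],
                                capture_output=True, text=True)
        return result.stdout

    def set_scterc(device, read_ds=70, write_ds=70):
        # Cap error recovery at 7 seconds (70 deciseconds), the value
        # TLER/CCTL drives ship with, so the drive keeps answering the
        # RAID controller instead of retrying a bad sector indefinitely.
        subprocess.run(["smartctl", "-l",
                        "scterc,%d,%d" % (read_ds, write_ds), device],
                       check=True)

    if __name__ == "__main__":
        print(get_scterc("/dev/sda"))

Note that many desktop drives which do accept the command forget the setting at power-up, so it has to be reapplied on every boot.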
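
DanNeely's rebuild argument is easy to put into numbers. Using the commonly quoted unrecoverable read error (URE) rates, 1 per 10^14 bits read for consumer drives and 1 per 10^15 for nearline drives, and a naive assumption of independent errors, the back-of-the-envelope below estimates the chance of a degraded RAID 5 rebuild completing without hitting a URE.

    import math

    def rebuild_success_probability(drive_tb, surviving_drives, ure_rate=1e-14):
        # A rebuild must read every bit on every surviving drive; model
        # P(no URE) = (1 - rate)^bits, approximated as exp(-rate * bits).
        bits_read = drive_tb * 1e12 * 8 * surviving_drives
        return math.exp(-ure_rate * bits_read)

    # Degraded 5-drive RAID 5 of 4 TB disks: the rebuild reads 4 survivors.
    print(rebuild_success_probability(4, 4))         # ~0.28 at 1 per 1e14
    print(rebuild_success_probability(4, 4, 1e-15))  # ~0.88 at 1 per 1e15

The model is crude (spec-sheet rates are worst-case bounds, and real errors are not independent), but it shows why single-parity arrays of 4 TB drives make people nervous.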
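
Rick83's suggestion of regular surface scans is easy to automate on Linux md-RAID, where writing 'check' to an array's sync_action file starts a scrub; many distributions already ship an equivalent cron job. The array name md0 below is an assumption for illustration, and the script must run as root.

    def start_scrub(array="md0"):
        # Writing 'check' to sync_action makes md read and verify every
        # sector while redundancy still exists, so latent bad blocks are
        # found (and rewritten from the redundant copy) before a rebuild
        # depends on them.
        with open("/sys/block/%s/md/sync_action" % array, "w") as f:
            f.write("check")

    def mismatch_count(array="md0"):
        # Non-zero after a completed check indicates parity/mirror
        # inconsistencies that deserve a closer look.
        with open("/sys/block/%s/md/mismatch_cnt" % array) as f:
            return int(f.read())

    if __name__ == "__main__":
        start_scrub()

NAS appliances typically expose the same idea in their management UIs under names like 'RAID scrubbing' or 'disk checking'.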
