NAS Performance - SPEC SFS 2014

Evaluation of the system as a storage node on the network can be done using multiple approaches. As a simple NAS accessed from a single client, Intel NASPT would work very well. There are other artificial benchmarking programs such as IOMeter and IOZone (all of which are used in our dedicated NAS reviews). However, when it comes to file servers used in business scenarios, business metrics make more sense. For example, a database administrator might want to know how many simultaneous databases could be sustained on a given machine. An administrator in a software company might want to know how many simultaneous software builds could be processed on the machine if it were to be used as a storage node. SPEC SFS 2014 allows us to evaluate systems based on such business metrics.

Prior to discussing the various business scenarios, let us take a look at the test setup, including details of the testbed and how the file server itself was configured.

Solution Under Test Bill of Materials

  • ASRock Rack C2750D4I in a U-NAS NSC-800 (8GB RAM)
  • AnandTech NAS Testbed (64GB RAM, 1GB to each member VM)
  • Netgear GSM7352S Ethernet Switch

Component Software

  • ASRock Rack C2750D4I system running Windows Storage Server 2012 R2
  • Load generators running on AnandTech NAS Testbed (10x Windows 7 VMs in a Windows Server 2008 R2 Hyper-V Installation)

Storage and File-Systems

  • ASRock Rack C2750D4I - 8x OCZ Vector 128GB SSDs : Storage Spaces with Parity Space
  • AnandTech NAS Testbed - NTFS partitions created at OS install time on OCZ Vertex 4 64GB SSDs

Transport Configuration

  • ASRock Rack C2750D4I - 2x 1GbE LAN Ports in 802.3ad LACP to Netgear GSM7352S
  • AnandTech NAS Testbed - 11x 1GbE LAN Ports to Netgear GSM7352S (1x management, 1x to each of 10 VMs)
  • All SMB benchmark traffic flowed through the Netgear GSM7352S network switch

The four business metrics that we will be looking at today are:

  • Database
  • Software Build
  • Video Data Acquisition (VDA)
  • Virtual Desktop Infrastructure (VDI)

The database and software build categories are self-explanatory. The VDA profile refers to usage of a storage node as a recording target for streaming video (usually from IP cameras). The VDI profile refers to the number of virtual desktops / virtual machines that can be supported using the file server as a storage node for the virtualization infrastructure.

Database

The following graphs show the requested and achieved op rates for the database workload. Note that beyond four databases, the gap between the two exceeds 10%, which means that the storage system is unable to support more than four databases concurrently. In all of the workloads, it is latency that decides suitability, not the available bandwidth.

Database Workload - Op Rates

Database Workload - Latency and Bandwidth

The SPEC SFS 2014 benchmark also provides a summary file for each workload, which contains data beyond what is graphed above. The summary for the database workload is available here.
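
To make the 10% criterion concrete, here is a minimal sketch (in Python, with entirely hypothetical op-rate numbers rather than our measured results) of how the highest supportable load point can be determined from the requested and achieved op rates:

    # Sketch of the SPEC SFS 2014 acceptance rule discussed above: a load
    # point counts as supported only if the achieved op rate stays within
    # 10% of the requested op rate. All numbers are hypothetical
    # placeholders, not the values measured in this review.

    def max_supported_load(points, tolerance=0.10):
        """Return the highest load count whose achieved op rate is within
        `tolerance` of the requested op rate."""
        supported = 0
        for count, requested, achieved in points:
            if achieved >= requested * (1.0 - tolerance):
                supported = count
            else:
                break  # later load points only fall further behind
        return supported

    # (databases, requested ops/s, achieved ops/s): illustrative only
    example_points = [
        (1,  500,  498),
        (2, 1000,  985),
        (3, 1500, 1460),
        (4, 2000, 1910),
        (5, 2500, 2100),  # more than 10% short of the request
    ]

    print(max_supported_load(example_points))  # prints 4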

Software Build

A similar analysis for the software build benchmark profile shows that the system is able to support up to 10 builds without any problems.

Software Build Workload - Op Rates

Software Build Workload - Latency and Bandwidth

The report summary for the software build workload is available here.

Video Data Acquisition

Video data acquisition for up to 10 streams is easily handled by our DIY solution.

VDA Workload - Op Rates

VDA Workload - Latency and Bandwidth

The report summary for the VDA workload is available here.

Virtual Desktop Infrastructure

VDI presents a very sorry story. The op rate achieved is not even close to the required rate, and the solution seems incapable of supporting any virtualization infrastructure.

VDI Workload - Op Rates

VDI Workload - Latency and Bandwidth

The report summary for the VDI workload is available here.

48 Comments

  • nevenstien - Monday, August 10, 2015 - link

    An excellent article on a cost-effective file server/NAS DIY build with a good choice of hardware. After struggling with the dedicated NAS vs. file server question for over a year, I decided on FreeNAS, using jails for whatever services I wanted to run. I was not a FreeNAS fan before the latest versions, which I found very opaque and confusing. My experience in the past with how painful hardware failures can be on storage systems, even at a PC level, convinced me that ZFS is the file system of choice for storage systems. The portability of the file system trumps everything else in my opinion. Whether you install FreeNAS or ZFS-based Linux, ZFS should be the file system that is used. When a disk fails it's easy, and when the hardware fails it's just a matter of moving the disks to hardware that is not vendor dependent, which means basically any hardware with enough storage ports. The software packages of the commercial NAS vendors are great, but the main priorities for me are data integrity, reliability, and portability rather than the other services like serving video, web hosting, or personal cloud services.
  • tchief - Monday, August 10, 2015 - link

    Synology uses mdadm for their arrays along with ext4 for the filesystem. It's quite simple to move the drives to any hardware that runs Linux and remount and recover the array.
  • ZeDestructor - Monday, August 10, 2015 - link

    If you virtualize, even the "hardware" becomes portable :)
  • xicaque - Monday, November 23, 2015 - link

    Are you pretty good with FreeNAS? I am not a programmer, and there are things that the FreeNAS manual does not address clearly enough for me. I have a few questions that I would like to ask offline. Thanks.
  • thewishy - Tuesday, December 1, 2015 - link

    Agreed. After data corruption following a disk failure on my Synology, it's either a filesystem with checksums or go home.

    Based on those requirements, it's ZFS or Btrfs. ZFS disk expansion isn't ideal, but I can live with it. Btrfs is "getting there" for RAID5/6, but it's not there yet.

    The choice of board for the cost comparison is about 2.5x the price of the CPU (Skylake Pentium) and motherboard (B150) I decided on. Add a PCIe SATA card and life is good.
    Granted, it doesn't support ECC, but neither do a lot of mid-range COTS NAS units.
  • Navvie - Monday, August 10, 2015 - link

    Any NAS or file server which isn't using ZFS is a non-starter for me. Likewise, a review of such a system which doesn't include some ZFS numbers is of little value.

    I appreciate that ZFS is 'new', but people not using it are missing a trick, and AnandTech not covering it is doing a disservice to its readers.

    All IMO of course.
  • tchief - Monday, August 10, 2015 - link

    Until you can expand a vdev without having to double the drive count, ZFS is a non-starter for many NAS appliance users.
  • extide - Monday, August 10, 2015 - link

    You can ... you can add drives one at a time if you really want (although I wouldn't suggest doing that...)
  • jb510 - Monday, August 10, 2015 - link

    Or one could use BtrFS. Which could stand for better pool resizing (it doesn't, that's just a joke, people).

    Check out RockStor; it's nowhere near as mature as FreeNAS, but it's catching up fast. Personally, I'd much rather deal with Linux and Docker containers than BSD and jails.
  • DanNeely - Monday, August 10, 2015 - link

    If there are major gotchas involved, it's a major regression compared to other alternatives out there.

    I'm currently running WHS2011 + StableBit DrivePool. I initially set it up with 2x 3TB drives in mirrored storage (a RAID 1-ish equivalent). About a month ago, my array was almost completely full. Not wanting to spend more than I had to at this point (I intend to have a replacement running by December so I can run in parallel for a few months before WHS is EOL), I slapped an old 1.5TB drive into the server. After adding it to the array and rebalancing, I had an extra 750GB of mirrored storage available; it's not a ton, but it should be plenty to keep the server going until I stand it down. I don't want to lose that level of flexibility of being able to add unmatched drives to my array at need with whatever I use to replace my current setup.

    If the gotcha is that by adding a single drive I end up with an array that's effectively a two-drive not-RAID1 not-RAID0'd with a single drive, it'd be a larger regression in a feature I know I've needed than I'm comfortable with, just to gain a bunch of improvements for what amount to what-if scenarios I've never yet encountered.
