NAS Performance - SPEC SFS 2014

The system's performance as a storage node on the network can be evaluated using multiple approaches. For a simple NAS accessed from a single client, Intel NASPT works very well. There are other synthetic benchmarking programs such as IOMeter and IOZone (all of which are used in our dedicated NAS reviews). However, when it comes to file servers used in business scenarios, business metrics make more sense. For example, a database administrator might want to know how many simultaneous databases a given machine can sustain, while an administrator in a software company might want to know how many simultaneous software builds the machine can process if it were used as a storage node. SPEC SFS 2014 allows us to evaluate systems based on such business metrics.

Prior to discussing the various business scenarios, let us take a look at the test setup, including details of the testbed and how the file server itself was configured.

Solution Under Test Bill of Materials

  • ASRock Rack C2750D4I in a U-NAS NSC-800 (8GB RAM)
  • AnandTech NAS Testbed (64GB RAM, 1GB to each member VM)
  • Netgear GSM7352S Ethernet Switch

Component Software

  • ASRock Rack C2750D4I system running Windows Storage Server 2012 R2
  • Load generators running on AnandTech NAS Testbed (10x Windows 7 VMs in a Windows Server 2008 R2 Hyper-V Installation)

Storage and File-Systems

  • ASRock Rack C2750D4I - 8x OCZ Vector 128GB SSDs : Storage Spaces with Parity Space (see the PowerShell sketch after this list)
  • AnandTech NAS Testbed - NTFS partitions created at OS install time on OCZ Vertex 4 64GB SSDs
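
For readers looking to reproduce a similar layout, the following is a minimal PowerShell sketch of pooling the SSDs into a parity space on Windows Storage Server 2012 R2. The pool, disk, and volume names are hypothetical placeholders, and this is not necessarily the exact sequence used to configure the review unit.

    # Minimal sketch: pool the eligible SSDs and carve out a single parity space.
    # All friendly names here are hypothetical placeholders.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SSDPool" `
                    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
                    -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "SSDPool" `
                    -FriendlyName "ParitySpace" `
                    -ResiliencySettingName Parity `
                    -UseMaximumSize
    # Initialize, partition, and format the new space as an NTFS volume.
    Get-VirtualDisk -FriendlyName "ParitySpace" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"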

Transport Configuration

  • ASRock Rack C2750D4I - 2x 1GbE LAN Ports in 802.3ad LACP to Netgear GSM7352S (see the teaming sketch after this list)
  • AnandTech NAS Testbed - 11x 1GbE LAN Ports to Netgear GSM7352S (1x management, 1x to each of 10 VMs)
  • All SMB benchmark traffic flowed through the Netgear GSM7352S network switch
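
For reference, a two-port LACP team on Windows Server 2012 R2 can be created with the built-in LBFO cmdlets, as in the minimal sketch below. The team and adapter names are hypothetical placeholders, the switch-side LAG must be configured to match, and this is not necessarily the exact procedure used here.

    # Minimal sketch: create a two-port LACP (802.3ad) team.
    # Team and adapter names are hypothetical placeholders.
    New-NetLbfoTeam -Name "LACP-Team" `
                    -TeamMembers "Ethernet 1", "Ethernet 2" `
                    -TeamingMode Lacp `
                    -LoadBalancingAlgorithm TransportPorts `
                    -Confirm:$false
    # Verify that both members come up active once the switch-side LAG is configured.
    Get-NetLbfoTeamMember | Format-Table Name, Team, OperationalStatus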

The four business metrics that we will be looking at today are:

  • Database
  • Software Build
  • Video Data Acquisition (VDA)
  • Virtual Desktop Infrastructure (VDI)

The database and software build categories are self-explanatory. The VDA profile refers to usage of a storage node as a recording target for streaming video (usually from IP cameras). The VDI profile refers to the number of virtual desktops / virtual machines that can be supported using the file server as a storage node for the virtualization infrastructure.
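
All four profiles are driven by the same benchmark harness; the profile, the starting load, and the load increments are chosen in the benchmark's sfs_rc configuration file. The fragment below is an illustrative sketch based on parameter names from the SPEC SFS 2014 user guide; the client list, paths, and load values are hypothetical placeholders, not the configuration used for this review.

    # Illustrative sfs_rc fragment (hypothetical values)
    BENCHMARK=DATABASE                # one of DATABASE, SWBUILD, VDA, VDI
    LOAD=1                            # starting load, in business-metric units
    INCR_LOAD=1                       # load increment between run points
    NUM_RUNS=10                       # number of load points to execute
    CLIENT_MONDATA=vm01:Z:\sfsdata vm02:Z:\sfsdata
    USER=DOMAIN\sfsuser
    EXEC_PATH=C:\SPECsfs2014\binaries\netmist.exe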

Database

The following graphs show the requested and achieved op rates for the database workload. Note that beyond four databases, the gap between the two is more than 10%, which automatically means that the storage system cannot support more than four databases concurrently. In all of the workloads, it is latency that decides suitability, not the available bandwidth.
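
To make the pass/fail criterion concrete, the short PowerShell sketch below applies the 10% rule to a series of load points. The op-rate numbers are made up for illustration, not the measured results graphed below; in practice they would come from the benchmark's summary file.

    # Minimal sketch of the 10% validity rule; op rates are hypothetical.
    $loadPoints = @(
        @{ Requested = 448;  Achieved = 447  },
        @{ Requested = 896;  Achieved = 890  },
        @{ Requested = 1344; Achieved = 1330 },
        @{ Requested = 1792; Achieved = 1700 },
        @{ Requested = 2240; Achieved = 1850 }   # more than 10% short
    )
    $sustained = 0
    foreach ($point in $loadPoints) {
        # A load point fails if achieved ops fall more than 10% below requested.
        if ($point.Achieved -lt 0.9 * $point.Requested) { break }
        $sustained++
    }
    "Highest sustainable load: $sustained load point(s)"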

Database Workload - Op Rates

Database Workload - Latency and Bandwidth

The SPEC SFS 2014 benchmark also provides a summary file for each workload, which contains additional data beyond what is graphed above. The summary for the database workload is available here.

Software Build

A similar analysis for the software build benchmark profile shows that the system is able to support up to 10 builds without any problems.

Software Build Workload - Op Rates

Software Build Workload - Latency and Bandwidth

The report summary for the software build workload is available here.

Video Data Acquisition

Video data acquisition for up to 10 streams is easily handled by our DIY solution.

VDA Workload - Op Rates

VDA Workload - Latency and Bandwidth

The report summary for the VDA workload is available here.

Virtual Desktop Infrastructure

VDI presents a very sorry story. The achieved op rate is not even close to the requested rate, and the solution appears incapable of supporting any virtualization infrastructure.

VDI Workload - Op Rates

VDI Workload - Latency and Bandwidth

The report summary for the VDI workload is available here.

Comments

  • xicaque - Monday, November 23, 2015

    Can you elaborate on redundant power supplies? Please? What is their purpose?
  • nxsfan - Tuesday, August 11, 2015

    I have the ASRack C2750d4i + Silverstone DS380, with 8x3.5" HDDs and one SSD (& 16GB ECC). Your CPU and MB temps seem high, particularly when (if I understand correctly) you populated the U-NAS with SSDs.

    If lm-sensors is correct my CPU cores idle around 25 C and under peak load get to 50 C. My MB sits around 41 C. My HDDs range from ~50 C (TOSHIBA MD04ACA500) to ~37 C (WDC WD40EFRX). "Peak" (logged in the last month) power consumption (obtained from the UPS - so includes a 24 port switch) was 60 W. Idle is 41 W.

    The hardware itself is great. I virtualize with KVM and the hardware handles multiple VMs plus multiple realtime 1080p H.264 transcodes with aplomb (VC-1 not so much). File transfers saturate my gigabit network, but I am not a power user (i.e. typically only 2-3 active clients).
  • bill.rookard - Tuesday, August 11, 2015

    I really like this unit. Compact. Flexible. Well thought out. Best of all, -affordable-. Putting together a budget media server just became much easier. Now to just find a good itx based mobo with enough SATA ports to handle the 8 bays...
  • KateH - Tuesday, August 11, 2015

    Another good turnkey solution from ASRock, but I still think they missed a golden opportunity by not making an "ASRack" brand for their NAS units ;)
  • e1jones - Wednesday, August 12, 2015

    Would be great for a Xeon D-15*0 board, but most of the ones I've seen so far only have 6 sata ports. A little more horsepower to virtualize and run CPU intensive programs.
  • akula2 - Monday, August 17, 2015

    >A file server can be used for multiple purposes, unlike a dedicated NAS.

    Well, I paused reading right there! What does that mean? You should improve on that sentence; it could be quite confusing to novice members who aspire to buy/build storage systems.

    Next, I don't use Windows on any servers. I never recommend that OS to anyone either, especially when the data is sensitive, be it from a business or a personal perspective.

    I use a couple of NAS servers based on OpenIndiana (Solaris-based) and BSD OSes. ZFS can be great if one understands its design goals and philosophy.

    I don't use FreeNAS or NAS boxes such as those from Synology et al. I build the hardware from scratch for greater choice and cost savings. Currently, I'm at the alpha stage of building a large NAS server (200+ TB) based on ZoL (ZFS on Linux). It will take at least two more months of effort to integrate it into my company networks; a few hundred associates based in three nations will work more closely together to improve efficiency and productivity.

    Yeah, a few more things to share:

    1) Whatever I plan, I look at the power consumption factor (green), especially for power-hungry gear such as servers, workstations, hybrid clusters, NAS servers, etc. Hence, I allocate more funds to address the power demand by deploying solar solutions wherever viable, in order to save some good money in the long run.
    2) I mostly go for Hitachi SAS drives, with SATA III making up about 20% (enterprise segment).
    3) ECC memory is mandatory. No compromise on this one to save some dough.
    4) Moved away from cloud service providers by building my private cloud (NAS-based) to protect my employees' privacy. All employee data should remain in the respective nations. Period.
  • GuizmoPhil - Friday, August 21, 2015

    I built a new server using their 4-bay model (NSC-400) last year. Extremely satisfied.

    Here's the pictures: https://picasaweb.google.com/117887570503925809876...

    Below are the specs:

    CPU: Intel Core i3-4130T
    CPU cooler: Thermolab ITX30 (not shown on the pictures, was upgraded after)
    MOBO: ASUS H87i-PLUS
    RAM: Crucial Ballistix Tactical Low Profile 1.35V XMP 8-8-8-24 (1x4GB)
    SSD: Intel 320 series 80GB SATA 2.5"
    HDD: 4x HGST 4TB CoolSpin 3.5"
    FAN: Gelid 120mm sleeve silent fan (came with the unit)
    PSU: Seasonic SS-350M1U
    CASE: U-NAS NSC-400
    OS: LinuxMint 17.1 x64 (basically ubuntu 14.04 lts, but hassle-free)
  • Iozone_guy - Wednesday, September 2, 2015

    I'm struggling to understand the test configuration. There seems to be a disconnect in the results. Almost all of the results have an average latency that looks like a physical spindle, and yet the storage is all SSDs. How can the latency be so high? Was there some problem with the setup, such that it wasn't measuring the SSD storage but something else? Could the tester post the sfs_rc file and the sfslog.* and sfsc*.log files so we can try to sort out what happened?
