Performance Metrics - Storage Subsystem

In the previous section, we looked at benchmarks for databases, web servers, and general memory and CPU performance. For a file server, storage performance is of paramount importance, since the main expectation from the system is that it can write to and read from a disk volume protected against disk failure. In this section, we use Ubuntu 14.04 and mdadm to configure the disks in the hot-swap drive bays as a RAID-5 volume. Selected benchmarks from the Phoronix Test Suite are then run with the RAID-5 volume as the target disk.
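mdadm handles parity transparently, but the distributed XOR parity that lets a RAID-5 volume survive a single disk failure is easy to illustrate. The following is a toy sketch, not how mdadm lays parity out on disk:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A toy 4-disk RAID-5 stripe: three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: its contents are rebuilt by XOR-ing the
# surviving data blocks with the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

This is why the array tolerates exactly one failed drive: any single missing block is the XOR of all the others in its stripe.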

AIO Stress

Our first test in the storage benchmark suite is the AIO Stress PTS test profile. It is an asynchronous I/O benchmark; our configuration tests random writes to a 2048MB test file using a 64KB record size, enabling apples-to-apples comparison with the other results reported to OpenBenchmarking.org.
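The real test profile drives the kernel's asynchronous I/O interface; a heavily scaled-down, synchronous sketch of the same access pattern (64KB records written at shuffled offsets, with a made-up file name and an 8MB file instead of 2048MB) looks like this:

```python
import os, random, time

RECORD_SIZE = 64 * 1024          # 64KB records, as in the test profile
FILE_SIZE = 8 * 1024 * 1024      # scaled down from the 2048MB test file
record = os.urandom(RECORD_SIZE)

fd = os.open("aio_sketch.bin", os.O_CREAT | os.O_WRONLY, 0o644)
os.ftruncate(fd, FILE_SIZE)

offsets = list(range(0, FILE_SIZE, RECORD_SIZE))
random.shuffle(offsets)          # visit records in random order

start = time.perf_counter()
for off in offsets:
    os.pwrite(fd, record, off)   # positional write (POSIX only)
os.fsync(fd)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{FILE_SIZE / elapsed / 1e6:.1f} MB/s")
os.remove("aio_sketch.bin")
```

Random writes like these are the worst case for spinning disks (every record forces a seek), which is why the record size and access pattern must match across systems for the comparison to be fair.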

AIO Stress - Random Write

FS-Mark

FS-Mark is used to evaluate the performance of a system's file system. The benchmark measures the rate at which files in a given volume can be processed. Four test profiles are used: processing 1000 files of 1MB each, processing 5000 files of 1MB each using four threads, processing 4000 files of 1MB each spread over 32 sub-directories, and finally, processing 1000 files of 1MB each without sync operations to the disk. The processing rates are recorded in the graphs below.
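fs_mark itself reports a files/sec figure; a scaled-down sketch of the quantity it measures (file-creation rate with and without a per-file sync, using made-up directory names and smaller file counts and sizes) might look like this:

```python
import os, shutil, time

def process_files(directory, count, size, sync=True):
    """Create `count` files of `size` bytes each; return files/sec."""
    os.makedirs(directory, exist_ok=True)
    payload = b"\0" * size
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"file_{i:05d}")
        with open(path, "wb") as f:
            f.write(payload)
            if sync:                      # force each file to stable storage
                f.flush()
                os.fsync(f.fileno())
    return count / (time.perf_counter() - start)

# Scaled-down stand-ins for the synced and no-sync profiles above.
synced = process_files("fsmark_sync", 50, 64 * 1024, sync=True)
nosync = process_files("fsmark_nosync", 50, 64 * 1024, sync=False)
print(f"synced: {synced:.0f} files/s, no-sync: {nosync:.0f} files/s")
shutil.rmtree("fsmark_sync"); shutil.rmtree("fsmark_nosync")
```

The no-sync variant typically runs much faster because writes can be absorbed by the page cache, which is exactly the contrast the fourth profile is meant to expose.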

FS-Mark v3.3 - Processing Efficiency - I

FS-Mark v3.3 - Processing Efficiency - II

FS-Mark v3.3 - Processing Efficiency - III

FS-Mark v3.3 - Processing Efficiency - IV

PostMark

This benchmark simulates the small-file workloads handled by web and mail servers. The test profile performs 25,000 transactions on a pool of 500 files, with file sizes ranging between 5KB and 512KB.
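PostMark's transaction mix interleaves file creations, deletions, reads and appends over a pool of small files. A scaled-down sketch of that workload shape (20 files and 200 transactions instead of 500 and 25,000, with a made-up working directory) could be written as:

```python
import os, random, shutil

random.seed(42)
WORKDIR = "postmark_sketch"
os.makedirs(WORKDIR, exist_ok=True)
LOW, HIGH = 5 * 1024, 512 * 1024     # 5KB-512KB, as in the test profile

def new_file(i):
    path = os.path.join(WORKDIR, f"f{i}")
    with open(path, "wb") as f:
        f.write(os.urandom(random.randint(LOW, HIGH)))
    return path

# Initial pool (scaled down from 500 files / 25,000 transactions).
pool = [new_file(i) for i in range(20)]
counter = len(pool)

for _ in range(200):
    op = random.choice(("create", "delete", "read", "append"))
    if op == "create":
        counter += 1
        pool.append(new_file(counter))
    elif op == "delete" and len(pool) > 1:
        os.remove(pool.pop(random.randrange(len(pool))))
    elif op == "read":
        with open(random.choice(pool), "rb") as f:
            f.read()
    elif op == "append":
        with open(random.choice(pool), "ab") as f:
            f.write(os.urandom(1024))

shutil.rmtree(WORKDIR)
```

Workloads dominated by metadata operations on many small files stress the file system and RAID layer very differently from large sequential transfers, which is why PostMark complements the throughput-oriented tests above.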

PostMark Disk Transaction Performance

Numbers from the evaluation of other systems can be found on OpenBenchmarking.org.

Comments

  • xicaque - Monday, November 23, 2015 - link

    Can you elaborate on redundant power supplies? Please? What is their purpose?
  • nxsfan - Tuesday, August 11, 2015 - link

    I have the ASRack C2750d4i + Silverstone DS380, with 8x3.5" HDDs and one SSD (& 16GB ECC). Your CPU and MB temps seem high, particularly when (if I understand correctly) you populated the U-NAS with SSDs.

    If lm-sensors is correct my CPU cores idle around 25 C and under peak load get to 50 C. My MB sits around 41 C. My HDDs range from ~50 C (TOSHIBA MD04ACA500) to ~37 C (WDC WD40EFRX). "Peak" (logged in the last month) power consumption (obtained from the UPS - so includes a 24 port switch) was 60 W. Idle is 41 W.

    The hardware itself is great. I virtualize with KVM and the hardware handles multiple VMs plus multiple realtime 1080p H.264 transcodes with aplomb (VC-1 not so much). File transfers saturate my gigabit network, but I am not a power user (i.e. typically only 2-3 active clients).
  • bill.rookard - Tuesday, August 11, 2015 - link

    I really like this unit. Compact. Flexible. Well thought out. Best of all, -affordable-. Putting together a budget media server just became much easier. Now to just find a good itx based mobo with enough SATA ports to handle the 8 bays...
  • KateH - Tuesday, August 11, 2015 - link

    Another good turnkey solution from ASRock, but I still think they missed a golden opportunity by not making an "ASRack" brand for their NAS units ;)
  • e1jones - Wednesday, August 12, 2015 - link

    Would be great for a Xeon D-15*0 board, but most of the ones I've seen so far only have 6 SATA ports. A little more horsepower to virtualize and run CPU-intensive programs.
  • akula2 - Monday, August 17, 2015 - link

    >A file server can be used for multiple purposes, unlike a dedicated NAS.

    Well, I paused reading right there! What does that mean? You should improve on that sentence; it could be quite confusing to novice members who aspire to buy/build storage systems.

    Next, I don't use Windows on any servers. I never recommend that OS to anyone either, especially when the data is sensitive, be it from a business or personal perspective.

    I use a couple of NAS servers based on OpenIndiana (Solaris-based) and BSD OSes. ZFS can be great if one understands its design goals and philosophy.

    I don't use FreeNAS or NAS boxes such as those from Synology et al. I build the hardware from scratch for greater choice and cost savings. Currently, I'm in the alpha stage of building a large NAS server (200+ TB) based on ZoL (ZFS on Linux). It will take at least two more months of effort to integrate it into my company networks; a few hundred associates based in three nations will work more closely with it to augment efficiency and productivity.

    Yeah, a few more things to share:

    1) Whatever I plan, I look at the power consumption factor (green), especially for power-hungry systems such as servers, workstations, hybrid clusters, NAS servers, etc. Hence, I allocate more funds to address the power demand by deploying solar solutions wherever viable, in order to save some good money in the long run.
    2) I mostly go for Hitachi SAS drives and SATA III about 20% (Enterprise segment).
    3) ECC memory is mandatory. No compromise on this one to save some dough.
    4) Moved away from cloud service providers by building my private cloud (NAS based) to protect my employees' privacy. All employee data should remain in the respective nations. Period.
  • GuizmoPhil - Friday, August 21, 2015 - link

    I built a new server using their 4-bay model (NSC-400) last year. Extremely satisfied.

    Here's the pictures: https://picasaweb.google.com/117887570503925809876...

    Below the specs:

    CPU: Intel Core i3-4130T
    CPU cooler: Thermolab ITX30 (not shown on the pictures, was upgraded after)
    MOBO: ASUS H87i-PLUS
    RAM: Crucial Ballistix Tactical Low Profile 1.35V XMP 8-8-8-24 (1x4GB)
    SSD: Intel 320 series 80GB SATA 2.5"
    HDD: 4x HGST 4TB CoolSpin 3.5"
    FAN: Gelid 120mm sleeve silent fan (came with the unit)
    PSU: Seasonic SS-350M1U
    CASE: U-NAS NSC-400
    OS: LinuxMint 17.1 x64 (basically ubuntu 14.04 lts, but hassle-free)
  • Iozone_guy - Wednesday, September 2, 2015 - link

    I'm struggling to understand the test configuration. There seems to be a disconnect in the results. Almost all of the results show an average latency that looks like a physical spindle, yet the storage is all SSDs. How can the latency be so high? Was there some problem with the setup, such that it wasn't measuring the SSD storage but something else? Could the tester post the sfs_rc file and the sfslog.* and sfsc*.log files, so we can try to sort out what happened?
