Performance Metrics - Storage Subsystem

In the previous section, we looked at various benchmarks for databases, web servers, general memory and CPU performance, and so on. For a file server, storage performance is of paramount importance, since the system's primary task is writing to and reading from a disk volume that is protected against disk failure. In this section, we use Ubuntu 14.04 and mdadm to configure the disks in the hot-swap drive bays as a RAID-5 volume. Selected benchmarks from the Phoronix Test Suite are then run with the RAID-5 volume as the target disk.
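
For reference, a RAID-5 array across the four hot-swap bays can be set up with mdadm along the following lines. This is a minimal sketch: the device names (/dev/sdb through /dev/sde), the ext4 file system, and the mount point are assumptions for illustration, not the exact commands used for this build.

    # Create a four-disk RAID-5 array (device names are placeholders)
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Create a file system on the new volume and mount it (ext4 assumed)
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /mnt/raid5
    sudo mount /dev/md0 /mnt/raid5
    # Record the array in mdadm.conf so it is assembled automatically at boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u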

AIO Stress

Our first storage benchmark is the AIO Stress PTS test profile. It is an asynchronous I/O benchmark, and our configuration tests random writes to a 2048MB test file using a 64KB record size, enabling an apples-to-apples comparison with the other results reported to OpenBenchmarking.org.
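
As a rough guide, the profile can be reproduced with the Phoronix Test Suite as sketched below. Installing PTS from the Ubuntu repositories and pointing its test environment at the RAID-5 mount point are assumptions on our part; the article only states that the RAID-5 volume was the target disk.

    # Install the Phoronix Test Suite and run the asynchronous I/O profile.
    sudo apt-get install phoronix-test-suite
    # Edit ~/.phoronix-test-suite/user-config.xml so that EnvironmentDirectory
    # points at the RAID-5 mount point, making the array the target disk.
    phoronix-test-suite benchmark pts/aio-stress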

AIO Stress - Random Write

FS-Mark

FS-Mark is used to evaluate the performance of a system's file system. The benchmark measures the rate at which files in a given volume can be processed. Four test configurations are used: processing 1000 files of 1MB each, processing 5000 files of 1MB each using four threads, processing 4000 files of 1MB each spread over 32 sub-directories and, finally, processing 1000 files of 1MB each without issuing sync operations to the disk. The processing efficiencies are recorded in the graphs below.
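
For those running the test outside PTS, the four configurations map onto fs_mark's command-line options roughly as follows. The target directory and the precise flag values are assumptions based on fs_mark's documented options (-s file size in bytes, -n file count, -t threads, -D sub-directories, -S sync policy), not the verbatim arguments embedded in the PTS profile.

    # Approximate fs_mark invocations for the four configurations above
    fs_mark -d /mnt/raid5/fsmark -s 1048576 -n 1000           # 1000 files of 1MB
    fs_mark -d /mnt/raid5/fsmark -s 1048576 -n 5000 -t 4      # 5000 files, 4 threads
    fs_mark -d /mnt/raid5/fsmark -s 1048576 -n 4000 -D 32     # 4000 files, 32 sub-dirs
    fs_mark -d /mnt/raid5/fsmark -s 1048576 -n 1000 -S 0      # 1000 files, no sync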

FS-Mark v3.3 - Processing Efficiency - I

FS-Mark v3.3 - Processing Efficiency - II

FS-Mark v3.3 - Processing Efficiency - III

FS-Mark v3.3 - Processing Efficiency - IV

PostMark

This benchmark simulates the small-file workloads typically endured by web and mail servers. The test profile performs 25,000 transactions on a working set of 500 files, with file sizes ranging between 5 and 512 kilobytes.
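
PostMark is driven by a short command script (passed to the binary as, for example, postmark pm.cfg), so an equivalent configuration can be expressed along these lines. The working directory and the exact byte values chosen to represent the 5KB-512KB range are assumptions for the sketch.

    set location /mnt/raid5/postmark
    set number 500
    set transactions 25000
    set size 5120 524288
    run
    quit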

PostMark Disk Transaction Performance

Numbers from the evaluation of other systems can be found on OpenBenchmarking.org.

48 Comments

  • nevenstien - Monday, August 10, 2015 - link

    An excellent article on a cost-effective DIY file server/NAS build with a good choice of hardware. After struggling with the dedicated NAS vs. file server question for over a year, I decided on FreeNAS, using jails for whatever service I wanted to run. I was not a FreeNAS fan before the latest versions; the earlier ones I found very opaque and confusing. My past experience with how painful hardware failures can be on storage systems, even at a PC level, convinced me that ZFS is the file system of choice for storage systems. The portability of the file system trumps everything else in my opinion. Whether you install FreeNAS or a ZFS-based Linux, ZFS should be the file system that is used. When a disk fails it's easy, and when the hardware fails it's just a matter of moving the disks to hardware that is not vendor dependent, which means basically any hardware with enough storage ports. The software packages of the commercial NAS vendors are great, but the main priority for me is data integrity, reliability and portability, rather than other services like serving video, web hosting or personal cloud services.
  • tchief - Monday, August 10, 2015 - link

    Synology uses mdadm for their arrays along with ext4 for the filesystem. It's quite simple to move the drives to any hardware that runs Linux and remount and recover the array.
  • ZeDestructor - Monday, August 10, 2015 - link

    If you virtualize, even the "hardware" becomes portable :)
  • xicaque - Monday, November 23, 2015 - link

    Are you pretty good with FreeNAS? I am not a programmer, and there are things that the FreeNAS manual does not address clearly enough for me. I have a few questions that I'd like to ask offline. Thanks.
  • thewishy - Tuesday, December 1, 2015 - link

    Agreed, after data corruption following a disk failure on my Synology, it's either a FS with checksums or go home.

    Based on those requirements, it's ZFS or Btrfs. ZFS disk expansion isn't ideal, but I can live with it. Btrfs is "getting there" for RAID5/6, but it's not there yet.

    The board chosen for the cost comparison is about 2.5x the price of the CPU (Skylake Pentium) and motherboard (B150) I decided on. Add a PCI-E SATA card and life is good.
    Granted, it doesn't support ECC, but neither do a lot of mid-range COTS NAS units.
  • Navvie - Monday, August 10, 2015 - link

    Any NAS or file server which isn't using ZFS is a non-starter for me. Likewise, a review of such a system which doesn't include some ZFS numbers is of little value.

    I appreciate ZFS is 'new', but people not using it are missing a trick, and AnandTech not covering it is doing a disservice to their readers.

    All IMO of course.
  • tchief - Monday, August 10, 2015 - link

    Until you can expand a vdev without having to double the drive count, ZFS is a non-starter for many NAS appliance users.
  • extide - Monday, August 10, 2015 - link

    You can ... you can add drives one at a time if you really want (although I wouldn't suggest doing that...)
  • jb510 - Monday, August 10, 2015 - link

    Or one could use Btrfs, which could stand for "better pool resizing" (it doesn't, that's just a joke, people).

    Check out RockStor; it's nowhere near as mature as FreeNAS, but it's catching up fast. Personally, I'd much rather deal with Linux and Docker containers than BSD and jails.
  • DanNeely - Monday, August 10, 2015 - link

    If there are major gotchas involved, it's a major regression compared to the other alternatives out there.

    I'm currently running WHS2011 + StableBit DrivePool. I initially set it up with 2x 3TB drives in mirrored storage (a RAID 1-ish equivalent). About a month ago, my array was almost completely full. Not wanting to spend more than I had to at this point (I intend to have a replacement running by December so I can run it in parallel for a few months before WHS is EOL), I slapped an old 1.5TB drive into the server. After adding it to the array and rebalancing, I had an extra 750GB of mirrored storage available; it's not a ton, but it should be plenty to keep the server going until I stand it down. I don't want to lose that level of flexibility, being able to add unmatched drives to my array as needed, with whatever I use to replace my current setup.

    If the gotcha is that by adding a single drive I end up with an array that's effectively a two-drive quasi-RAID1 quasi-RAID0'ed with a single drive, it'd be a larger regression in a feature I know I've needed than I'm comfortable accepting just to gain a bunch of improvements for what amount to what-if scenarios I've never yet encountered.
