Multi-Client Performance - CIFS on Windows

We put the QNAP TS-853 Pro through IOMeter tests with a CIFS share accessed by up to 25 VMs simultaneously. The following four graphs show the total available bandwidth and the average response time under different types of IOMeter workloads. The tool also reports other metrics of interest, such as maximum response time, read and write IOPS, and separate read and write bandwidth figures. Detailed listings of the IOMeter benchmark numbers (including IOPS and maximum response times) for each configuration are linked below:

QNAP TS-853 Pro - 4x 1G Multi-Client CIFS Performance - 100% Sequential Reads

QNAP TS-853 Pro - 4x 1G Multi-Client CIFS Performance - Max Throughput - 50% Reads

QNAP TS-853 Pro - 4x 1G Multi-Client CIFS Performance - Random 8K - 70% Reads

QNAP TS-853 Pro - 4x 1G Multi-Client CIFS Performance - Real Life - 65% Reads
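
IOMeter runs on each VM and reports per-client numbers; the totals graphed above are rollups across all active clients. A minimal sketch of that rollup, assuming a hypothetical per-VM CSV export with 'mbps' and 'avg_response_ms' columns (IOMeter's real result files are formatted differently):

```python
import csv
import glob

def aggregate_results(pattern="vm_*.csv"):
    """Sum bandwidth and average response time across per-VM result files.

    Assumes a hypothetical per-VM CSV export with 'mbps' and
    'avg_response_ms' columns; this is an illustration, not
    IOMeter's actual output format.
    """
    total_mbps = 0.0
    response_times = []
    for path in glob.glob(pattern):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total_mbps += float(row["mbps"])
                response_times.append(float(row["avg_response_ms"]))
    avg_rt = sum(response_times) / len(response_times) if response_times else 0.0
    return total_mbps, avg_rt

if __name__ == "__main__":
    bw, rt = aggregate_results()
    print(f"Total bandwidth: {bw:.1f} MBps, mean response time: {rt:.2f} ms")
```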

The important aspect to note here is that performance for the random workloads while the VMs are active is all over the place, following no particular pattern. This is because the PCMark 8 'Work' workload running in the background doesn't load the system resources uniformly. It is sufficient to observe that even a moderately heavy word processing session or similar task can pull down the NAS performance for certain types of workloads.

In the absence of active VMs, enabling link aggregation across all four ports allows maximum throughput numbers on the order of 400+ MBps for pure read workloads.
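
That 400+ MBps figure is close to the theoretical ceiling of four aggregated gigabit links. A back-of-envelope sketch (the overhead allowance is an assumption, not a measured value):

```python
# Back-of-envelope ceiling for four aggregated gigabit links.
links = 4
line_rate_gbps = 1.0          # per-port line rate
protocol_overhead = 0.07      # rough allowance for Ethernet/TCP/SMB framing (assumed)

raw_mbps = links * line_rate_gbps * 1000 / 8          # 500 MBps raw
usable_mbps = raw_mbps * (1 - protocol_overhead)      # ~465 MBps usable

print(f"Raw ceiling:   {raw_mbps:.0f} MBps")
print(f"Usable (est.): {usable_mbps:.0f} MBps")
# The 400+ MBps measured above therefore comes close to saturating the links.
```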

Comments

  • ap90033 - Wednesday, December 31, 2014 - link

    RAID is not a REPLACEMENT for BACKUP and BACKUP is not a REPLACEMENT for RAID.... RAID 5 can be perfectly fine... Especially if you have it backed up. ;)
  • shodanshok - Wednesday, December 31, 2014 - link

    I think you should consider RAID10: recovery is much faster (the system "only" needs to copy the contents of one disk to another) and the URE-imposed threat is way lower.

    Moreover, remember that large RAIDZ arrays have the IOPS of a single disk. While you can use a large ZIL device to transform random writes into sequential ones, the moment you hit the platters, the low IOPS performance can bite you.

    For reference: https://blogs.oracle.com/roch/entry/when_to_and_no...
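
    A minimal sketch of the recovery asymmetry described above, using an idealized model (assumed 4 TB members, single-disk failure) rather than real resilver behavior: a mirror rebuild reads one surviving disk, while a RAIDZ rebuild must read every surviving disk, which is also why the URE exposure is higher.

    ```python
    def rebuild_io_tb(layout, disks, capacity_tb=4.0):
        """Data touched when rebuilding one failed disk (idealized model,
        assumed 4 TB members; real resilvers are messier than this).

        mirror: copy the surviving half -> read 1 disk, write 1 disk
        raidz : reconstruct from parity -> read all survivors, write 1 disk
        """
        read_tb = capacity_tb if layout == "mirror" else capacity_tb * (disks - 1)
        return read_tb, capacity_tb

    for layout, disks in (("mirror", 2), ("raidz", 8)):
        read_tb, write_tb = rebuild_io_tb(layout, disks)
        print(f"{layout:6} x{disks}: read {read_tb:.0f} TB, write {write_tb:.0f} TB")
        # More data read during rebuild -> longer recovery and more URE exposure.
    ```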
  • shodanshok - Wednesday, December 31, 2014 - link

    I agree.

    The only thing to remember when using large RAIDZ systems is that, by design, a RAIDZ array has the IOPS of a single disk, no matter how many disks you throw at it (throughput will increase linearly, though). For increased IOPS capability, you should construct your ZPOOL from multiple striped RAIDZ arrays (similar to how RAID50/RAID60 work).

    For more information: https://blogs.oracle.com/roch/entry/when_to_and_no...
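
    A minimal sketch of that rule of thumb (the per-disk IOPS figure is an assumption for a 7200 RPM drive, not a measured number):

    ```python
    def pool_random_iops(vdev_count, per_disk_iops=150):
        """Rule of thumb from the comment above: each RAIDZ vdev delivers
        roughly one member disk's random IOPS, so pool IOPS scales with
        the number of striped vdevs (150 IOPS per disk is assumed)."""
        return vdev_count * per_disk_iops

    print(pool_random_iops(1))  # one wide 8-disk RAIDZ2 vdev -> ~150 IOPS
    print(pool_random_iops(2))  # two striped 4-disk RAIDZ1s  -> ~300 IOPS
    ```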
  • ap90033 - Friday, January 2, 2015 - link

    That is why RAID is not Backup and Backup is not RAID. ;)
  • cjs150 - Wednesday, January 7, 2015 - link

    Totally agree. As a home user, RAID 5 on a 4-bay NAS unit is fine, but I have had it fall over twice in 4 years: once when a disk failed and a second time when a disk worked loose (probably my fault). The failure was picked up, the disk replaced, and the RAID rebuilt. Once you have 5+ discs, RAID 5 is too risky for me.
  • jwcalla - Monday, December 29, 2014 - link

    Just doing some research and it's impossible to find out if this has ECC RAM or not, which is usually a good indication that it doesn't. (Which is kind of surprising for the price.)

    I don't know why they even bother making storage systems w/o ECC RAM. It's like saying, "Hey, let's set up this empty fire extinguisher here in the kitchen... you know... just in case."
  • Brett Howse - Monday, December 29, 2014 - link

    The J1900 doesn't support ECC:
    http://ark.intel.com/products/78867/Intel-Celeron-...
  • icrf - Monday, December 29, 2014 - link

    I thought the whole "ECC required for a reliable file system" was really only a thing for ZFS, and even then, only barely, with dangers generally over-stated.
  • shodanshok - Wednesday, December 31, 2014 - link

    It's not over-stated: any filesystem that proactively scrubs the disk/array subsystem (BTRFS and ZFS, at the moment) _needs_ ECC memory.

    While you can ignore this fact on a client system (where the value of the corrupted data is probably low), on a NAS or multi-user storage system ECC is almost mandatory.

    This is the very same reason why hardware RAID cards have ECC memory: when they scrub the disks, any memory-related corruption can wreak havoc on array (and data) integrity.

    Regards.
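
    A toy model of the scrub hazard described above (not ZFS internals): if a bit flips in RAM after a block is verified but before it is written back, a naive "repair" persists the corruption, which is exactly the class of error ECC catches.

    ```python
    import hashlib

    def flip_bit(buf: bytearray, bit: int) -> None:
        """Simulate a single-bit memory error (what ECC would detect/correct)."""
        buf[bit // 8] ^= 1 << (bit % 8)

    # On-disk block and its stored checksum.
    block = bytearray(b"important payload")
    stored_sum = hashlib.sha256(block).digest()

    # Scrub: read the block into RAM and verify it against the checksum.
    in_ram = bytearray(block)
    assert hashlib.sha256(in_ram).digest() == stored_sum  # looks clean

    flip_bit(in_ram, 3)  # bit flips after verification, before write-back

    # A naive scrubber that rewrites from the (now corrupt) buffer would
    # persist the damage to disk; with ECC the flip is caught instead.
    print(in_ram != block)  # True: the "repaired" copy no longer matches disk
    ```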
  • creed3020 - Monday, December 29, 2014 - link

    I hope that Synology is working on something similar to the QvM solution here. The day I started my Synology NAS was the day I shut down my Windows Server. I would, however, still love to have an always-on Windows machine for the use cases that my NAS cannot perform or that would be onerous to set up and get running.
