Mixed Random Performance

Real-world storage workloads usually aren't pure reads or writes but a mix of both. Testing and graphing the full range of possible mixed I/O workloads is impractical: varying the proportion of reads to writes, sequential versus random access, and block size leads to far too many configurations. Instead, we focus on the few scenarios vendors most commonly cite, when they provide a mixed I/O performance specification at all. We tested a range of 4kB random read/write mixes at queue depth 32, the maximum supported by SATA SSDs. This gives us a good picture of the maximum throughput these drives can sustain for mixed random I/O, but in many cases this queue depth is far higher than necessary, so we can't draw meaningful conclusions about latency from this test. As with our tests of pure random reads or writes, we use 32 threads, each issuing one read or write request at a time. This spreads the work over many CPU cores, and for NVMe drives it also spreads the I/O across the drive's several queues.
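For readers who want to reproduce something similar, here is a minimal sketch (not our actual test scripts) of a fio invocation that generates the workload described above; the device path, run time, and the run_mixed_random wrapper are placeholder assumptions.

```python
# Sketch of a fio run approximating the mixed random I/O test described above:
# 4kB random I/O, 32 threads each issuing one request at a time (QD32 total).
# The device path, runtime, and 70% read mix are illustrative placeholders.
import subprocess

def run_mixed_random(device="/dev/sdX", read_pct=70, runtime_s=300):
    cmd = [
        "fio",
        "--name=mixed-random",
        f"--filename={device}",
        "--ioengine=libaio",
        "--direct=1",               # bypass the page cache
        "--rw=randrw",              # mixed random reads and writes
        f"--rwmixread={read_pct}",  # percentage of the mix that is reads
        "--bs=4k",
        "--numjobs=32",             # 32 workers...
        "--thread",                 # ...run as threads
        "--iodepth=1",              # ...each with one outstanding request
        "--time_based",
        f"--runtime={runtime_s}",
        "--group_reporting",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_mixed_random()
```

Sweeping read_pct from 100 down to 0 would trace out the full curve graphed below.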

The full range of read/write mixes is graphed below, but we'll primarily focus on the 70% read, 30% write case that is a fairly common stand-in for moderately read-heavy mixed workloads.

4kB Mixed Random Read/Write

4kB Mixed Random Read/Write: Power Efficiency in MB/s/W; Average Power in W

The Kingston and Samsung SATA drives are rather evenly matched for performance with a 70% read/30% write random I/O workload: the DC500M is tied with the 883 DCT, and the DC500R is a bit slower than the 860 DCT. The two Kingston drives use the same amount of power, so the slower DC500R has a much worse efficiency score. The two Samsung drives have roughly the same great efficiency score; the slower 860 DCT also uses less power.

The Kingston DC500M and Samsung 883 DCT perform similarly for read-heavy mixes, but once the workload is more than about 30% writes, the Samsung falls behind. Their power consumption is very different: the Samsung plateaus at just over 3W while the DC500M starts at 3W and steadily climbs to over 5W as more writes are added to the mix.

Between the DC500R and the 860 DCT, the Samsung drive has better performance until the workload has shifted to be much more write-heavy than either drive is intended for. The Samsung drive's power consumption also never gets as high as the lowest power draw recorded from the DC500R during this test.

Aerospike Certification Tool

Aerospike is a high-performance NoSQL database designed for use with solid state storage. The developers of Aerospike provide the Aerospike Certification Tool (ACT), a benchmark that emulates the typical storage workload generated by the Aerospike database. This workload consists of a mix of large-block 128kB reads and writes, and small 1.5kB reads. When the ACT was initially released back in the early days of SATA SSDs, the baseline workload was defined to consist of 2000 reads per second and 1000 writes per second. A drive is considered to pass the test if it meets the following latency criteria:

  • fewer than 5% of transactions exceed 1ms
  • fewer than 1% of transactions exceed 8ms
  • fewer than 0.1% of transactions exceed 64ms

Drives are scored based on the highest throughput they can sustain while satisfying the latency QoS requirements. Scores are normalized relative to the baseline 1x workload, so a score of 50 indicates 100,000 reads per second and 50,000 writes per second. Since this test uses fixed I/O rates, the queue depths experienced by each drive depend on its latency and can fluctuate during the test run if the drive slows down temporarily for a garbage collection cycle. The test will give up early if it detects the queue depths growing excessively, or if the large-block I/O threads can't keep up with the random reads.
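As a concrete illustration of that arithmetic and the pass criteria (this is not part of ACT itself, and the percentile figures in the example calls are made up):

```python
# Illustrative only: converts an ACT score (a multiple of the 1x baseline) into
# target IOPS, and checks the latency criteria listed above against measured
# percentages of slow transactions.

BASELINE_READS_PER_SEC = 2000    # the original 1x workload
BASELINE_WRITES_PER_SEC = 1000

def target_rates(score):
    """Return (reads/sec, writes/sec) for a given ACT score multiple."""
    return score * BASELINE_READS_PER_SEC, score * BASELINE_WRITES_PER_SEC

def passes_latency_qos(pct_over_1ms, pct_over_8ms, pct_over_64ms):
    """Apply the three latency thresholds ACT uses for a pass."""
    return pct_over_1ms < 5.0 and pct_over_8ms < 1.0 and pct_over_64ms < 0.1

print(target_rates(50))                    # -> (100000, 50000)
print(passes_latency_qos(3.2, 0.4, 0.02))  # hypothetical drive: passes
print(passes_latency_qos(6.5, 0.4, 0.02))  # too many transactions over 1ms: fails
```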

We used the default settings for queue and thread counts and did not manually constrain the benchmark to a single NUMA node, so this test produced a total of 64 threads scheduled across all 72 virtual (36 physical) cores.

The usual runtime for ACT is 24 hours, which makes determining a drive's throughput limit a long process. For fast NVMe SSDs, this is far longer than necessary for drives to reach steady state. To find the maximum rate at which a drive can pass the test, we start at an unsustainably high rate (at least 150x) and incrementally reduce the rate until the test can run for a full hour, then decrease the rate further if necessary to get the drive under the latency limits.
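A simplified sketch of that search procedure follows; the run_act helper, step size, and starting multiple are assumptions for illustration, not our actual tooling.

```python
# Hypothetical outline of the rate search: start at an unsustainably high
# multiple of the baseline and step down until the drive completes a full
# one-hour run within the latency limits. run_act() is a stand-in for
# launching ACT and parsing its latency histogram output.

def run_act(score, duration_s=3600):
    """Placeholder: run ACT at `score`x the baseline for `duration_s` seconds
    and return True if it finished within the latency QoS limits."""
    raise NotImplementedError("stand-in for launching ACT and parsing its output")

def find_max_passing_score(start=150.0, step=1.0):
    """Step the load down from an unsustainably high rate until a run passes."""
    score = start
    while score > 0:
        if run_act(score):   # completed the full hour under the limits
            return score
        score -= step        # otherwise back off and try again
    return 0.0
```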

Aerospike Certification Tool Score

The Kingston drives don't handle the Aerospike test well at all. The DC500R can only pass the test at twice the throughput of a baseline standard that was set years ago, and the DC500M's score of 4x the base load is still much worse than even the Samsung 860 DCT's. The Kingston drives can provide throughput comparable to the Samsung SATA drives (as seen in the 70/30 test above), but they don't pass the strict latency QoS requirements imposed by the Aerospike test. This test is more write-intensive than the 70/30 test above and is definitely beyond what the DC500R is intended to be used for, but the DC500M should be able to do better.

Aerospike ACT: Power Efficiency; Average Power in W

For once, the Kingston drives don't draw any more power than the Samsung drives, but since they are running at much lower throughput, their efficiency scores are a fraction of what the Samsung drives earn.

Comments

  • Christopher003 - Sunday, November 24, 2019 - link

    I had an Agility 3 60GB that I used for just over 2 years in my system; my mom has now used it for over 2.5 more years. However, it was either starting to have issues, or the way mom was using it caused it to "forget" things now and then.

    I fixed it with a Crucial MX100 or 200 (forget which, LOL) that still has over 90% life. Either way, the Agility 3 was "warning" even though it still showed over 75% life left (Christmas '18-19)... definitely a massive speed-up by swapping to something more modern, as well as doing some cleaning for it.
  • Samus - Wednesday, June 26, 2019 - link

    I agree, I hated how they changed the internals without leaving any indication of a change on the label.

    But here's the thing that keeps me recommending them anyway: has anyone ever actually seen a Kingston drive fail?

    It seems their firmware and chip binning are excellent. The latter is easy for a company that makes so many God damn USB flash drives and can use the shitty NAND elsewhere...
  • jabber - Tuesday, June 25, 2019 - link

    Kingston are my go-to budget SSD brand. I bought dozens of those much-moaned-about V300 SSDs back in the day. Did I care? No, because they were light years better than any 5400rpm pile of junk in a laptop or desktop.

    The other reason? Not one of them to date has failed. Including the V400 and onwards.

    They may not be the fastest (what's 30MBps between friends) but they are solid drives.

    Nothing more boring than a top end enthusiast SSD that is bust.

    Recommended.
  • GNUminex_l_cowsay - Tuesday, June 25, 2019 - link

    This whole article raises a question for me. Why is SATA still locked at 6Gbps? I get that there is an alternative, higher-performance interface, but considering how frequently USB 3 has had its bandwidth upgraded lately, it seems like a maximum bandwidth increase should be reasonable.
  • thomasg - Tuesday, June 25, 2019 - link

    There's just no point in updating SATA.
    6 Gbps is plenty for low-performance systems; SATA works well, and it's cheap and simple.

    For all that need more performance, the market has moved to PCIe and NVMe in their various form factors, which is just a lot more expensive (especially due to the numerous and frequently changed form factors).

    USB, as not just an external port but THE external port that all users face, has a lot more pressure behind it to get updated.
    Users touch USB all the time, and there's demand for a lot of things over USB; most users never touch internal drives (in fact, most users actively buy hardware without replaceable internal drives), so there's no point in updating the standard.
    The manufacturers can just spin new ports and new connectors, since they ship only complete systems anyway.
  • Dug - Tuesday, June 25, 2019 - link

    "There's just no point in updating SATA."
    That could be said for USB, PCI, etc.
    There is a very good reason to go beyond an interface that is already saturated, and it doesn't have to be relegated to low-performance systems.
  • Samus - Wednesday, June 26, 2019 - link

    SATA is an ancient way of transferring data. Why have a host controller on the PCI bus when you can have a native PCIe device like NVMe? Further, SATA, even with AHCI, simply lacks optimization for flash storage. There doesn't seem to be an elegant way of adding NVMe features to SATA without either losing backwards compatibility with AHCI devices or adding unnecessary complexity.
  • TheUnhandledException - Saturday, June 29, 2019 - link

    SATA the protocol was built around supporting spinning disks. Making it work at all for solid state drives was a hack, and a hack with a lot of unnecessary overhead. It was useful because it provided a way to put flash drives on existing systems. Future flash will use NVMe over PCIe directly. The only reason for upgrading SATA would be if hard drives actually needed >600 MB/s, and they likely never will. So while we will have faster and faster interfaces for drives, it won't be SATA. It would be like saying that because we made HDMI/DP faster and faster, why not enhance the VGA port to support 8K? In theory we could, but using VGA to drive a digital display is a hack that largely just existed for backwards and forwards compatibility with analog displays.
  • MDD1963 - Tuesday, June 25, 2019 - link

    Why limit yourself to 550 MB/sec? I think having 6-8 ports of SATA4/SAS spec (12 Gbps) would breathe new life into local storage solutions... (certainly a NAS so equipped would be limited by even a 10 Gbps network, but you've gotta start somewhere with incremental improvements, and many SATA3-spec drives have been limited to 500-550 MB/sec for years!)
  • Spunjji - Wednesday, June 26, 2019 - link

    You kinda covered the reason right there - where the performance is really needed, SAS (or PCIe) is where it's at. There really is no call for a higher-performing SATA standard.
