QD1 Random Read Performance

Drive throughput with a queue depth of one is usually not advertised, but almost every latency or consistency metric reported on a spec sheet is measured at QD1 and usually for 4kB transfers. When the drive only has one command to work on at a time, there's nothing to get in the way of it offering its best-case access latency. Performance at such light loads is absolutely not what most of these drives are made for, but they have to make it through the easy tests before we move on to the more realistic challenges.
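For readers who want to approximate this kind of measurement themselves, a QD1 random read test can be set up with fio by pinning the queue depth at one. The sketch below is not our exact test configuration; the device path and runtime are placeholders, and it assumes a Linux box with fio installed.

    import json
    import subprocess

    # Placeholder target; point this at the drive under test, not a mounted filesystem.
    TARGET = "/dev/sdb"

    # Keep exactly one 4kB random read in flight (queue depth 1) and bypass the
    # page cache so the drive itself is what gets measured.
    cmd = [
        "fio", "--name=qd1-randread",
        f"--filename={TARGET}",
        "--rw=randread", "--bs=4k",
        "--iodepth=1", "--numjobs=1",
        "--direct=1", "--ioengine=libaio",
        "--time_based", "--runtime=60",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)

    # fio's JSON output reports per-job read statistics, including IOPS.
    read_stats = json.loads(result.stdout)["jobs"][0]["read"]
    print(f"QD1 4kB random read: {read_stats['iops']:.0f} IOPS")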

4kB Random Read QD1

4kB Random Read QD1 (Power Efficiency in kIOPS/W; Average Power in W)

The Kingston DC500 SSDs offer similar QD1 random read throughput to Samsung's current SATA SSDs, but the Kingston drives require 35-45% more power. Samsung's most recent SATA controller platform has provided a remarkable improvement to power efficiency for both client and enterprise drives, while the new Phison S12DC controller leaves the Kingston drives with a much higher baseline for power consumption. However, the Samsung entry-level NVMe drive has even higher power draw, so despite its lower latency it is only as efficient as the Kingston drives at this light workload.
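The efficiency metric in these charts is simply throughput divided by average power draw over the test. A trivial illustration of the arithmetic, using made-up numbers rather than our measured results:

    def kiops_per_watt(iops: float, avg_power_w: float) -> float:
        # Power efficiency expressed as thousands of IOPS per watt.
        return (iops / 1000.0) / avg_power_w

    # Illustrative numbers only: a drive doing 8,000 IOPS at 1.8 W is less
    # efficient than one doing 7,500 IOPS at 1.2 W.
    print(kiops_per_watt(8000, 1.8))  # ~4.4 kIOPS/W
    print(kiops_per_watt(7500, 1.2))  # ~6.3 kIOPS/W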

4kB Random Read QD1 QoS

The Kingston DC500M has slightly better QoS than the DC500R for QD1 random reads, but both of the Samsung SATA drives are better still. The NVMe drive is better in all three latency metrics, but the 99th percentile latency has the most significant improvement over the SATA drives.
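The QoS scores are percentiles of the per-command latency distribution recorded during the test. Given a trace of individual completion latencies, figures like the 99th and 99.99th percentiles can be pulled out along these lines; the lognormal sample below is purely synthetic stand-in data:

    import numpy as np

    # Synthetic stand-in for a trace of per-command read latencies (microseconds).
    rng = np.random.default_rng(0)
    latencies_us = rng.lognormal(mean=4.6, sigma=0.3, size=1_000_000)

    # Average plus tail percentiles of the sort discussed above.
    print(f"average: {latencies_us.mean():.1f} us")
    for p in (99, 99.99):
        print(f"{p}th percentile: {np.percentile(latencies_us, p):.1f} us")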

The Kingston drives offer 8-9k IOPS for QD1 random reads of 4kB or smaller blocks, but jumping up to 8kB blocks cuts IOPS in half, leaving bandwidth unchanged. After that, increasing block size does bring steady throughput improvements, but even at 1MB, reading at QD1 isn't enough to saturate the SATA link.
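That IOPS-halves-but-bandwidth-holds behavior is just arithmetic: bandwidth is IOPS times transfer size, so doubling the block size while halving the IOPS leaves the data rate unchanged. A quick check with round numbers in the same ballpark as these results:

    def bandwidth_mb_s(iops: float, block_bytes: int) -> float:
        # Data rate is simply operations per second times bytes per operation.
        return iops * block_bytes / 1e6

    # Halving IOPS while doubling the block size leaves the data rate where it was.
    print(bandwidth_mb_s(8500, 4 * 1024))   # ~34.8 MB/s at 4kB
    print(bandwidth_mb_s(4250, 8 * 1024))   # ~34.8 MB/s at 8kB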

QD1 Random Write Performance

4kB Random Write QD1

The steady-state QD1 random write throughput of the DC500s is pretty good, especially for the DC500R, which is only rated for 28k IOPS regardless of queue depth. At higher queue depths, the Samsung 883 DCT is supposed to reach the speeds the DC500s are providing here, but then the DC500M should also be much faster. The entry-level NVMe drive outpaces all the SATA drives despite having a quarter of the capacity.

4kB Random Write QD1 (Power Efficiency in kIOPS/W; Average Power in W)

The good random write throughput of the Kingston DC500s comes with a proportional cost in power consumption, leaving them with efficiency comparable to the Samsung drives. The DC500M is fractionally slower than the DC500R but uses much less power, probably because the extra spare area gives the -M a much easier time with background garbage collection.

4kB Random Write QD1 QoS

The latency statistics for the DC500R and DC500M only differ meaningfully in the 99.99th percentile score, where the -R is predictably worse off, though not by much compared to the Samsung drives. Overall, the Kingston drives offer QoS competitive with the Samsung SATA drives during this test.

The Kingston DC500R has oddly poor random write performance for 1kB blocks, but otherwise both Kingston drives do quite well across the range of block sizes, with a clear IOPS advantage over the Samsung SATA drives for small-block random writes and better throughput once the drives saturate somewhere in the 8-32kB range.

QD1 Sequential Read Performance

128kB Sequential Read QD1

128kB Sequential Read QD1 (Power Efficiency in MB/s/W; Average Power in W)

The Kingston DC500s clearly aren't saturating the SATA link when performing 128kB sequential reads at QD1, while the Samsung drives are fairly close. We've noticed in our consumer SSD reviews that Phison-based drives often require a moderately high queue depth (or block sizes above 128kB) in order to start delivering good sequential performance, and that seems to have carried over to the S12DC platform. This disappointing performance really hurts the power efficiency scores for this test, especially considering that the DC500s are drawing a bit more power than they're supposed to for reads.
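Queue depth sensitivity like this is easy to probe: rerun the same sequential read workload while sweeping the queue depth and watch where throughput levels off. A rough sketch along the lines of the earlier fio example, with the device path and runtime again placeholders:

    import json
    import subprocess

    # Sweep the queue depth for 128kB sequential reads and report throughput.
    for qd in (1, 2, 4, 8, 16, 32):
        cmd = [
            "fio", "--name=seqread",
            "--filename=/dev/sdb",
            "--rw=read", "--bs=128k",
            f"--iodepth={qd}",
            "--direct=1", "--ioengine=libaio",
            "--time_based", "--runtime=30",
            "--output-format=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        bw_kib = json.loads(out.stdout)["jobs"][0]["read"]["bw"]  # reported in KiB/s
        print(f"QD{qd}: {bw_kib / 1024:.0f} MiB/s")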

Performing sequential reads with small block operations isn't particularly useful, but the Samsung drives are much better at it. They start getting close to saturating the SATA link with block sizes around 64kB, while the Kingston drives still haven't quite caught up when the block size reaches 1MB—showing again that they really need queue depths above 1 to deliver the expected sequential read performance.

QD1 Sequential Write Performance

128kB Sequential Write QD1

128kB Sequential Write QD1 (Power Efficiency in MB/s/W; Average Power in W)

Sequential writes at QD1 aren't a problem for the Kingston drives the way reads were: the DC500M is a hair faster than the Samsung SATA SSDs and the DC500R is less than 10% slower. However, Samsung again comes out way ahead on power efficiency, and the DC500R exceeds its specified power draw for writes by 30%.

The DC500M and the Samsung 883 DCT are fairly evenly matched for sequential write performance across the range of block sizes, except that the Kingston is clearly faster for 512-byte writes (which in practice are basically never sequential). The DC500R differs from the -M by hitting a throughput limit sooner than the rest of the drives, and that limit is a bit lower in this test than in the 128kB sequential write test above.

Comments

  • KAlmquist - Tuesday, June 25, 2019

    Good points. I'd add that there is an upgrade to SATA called "SATA Express" which basically combines two PCIe lanes and traditional SATA into a single cable. It never really took off for the reasons you explained: it's simpler just to switch to PCIe.
  • MDD1963 - Tuesday, June 25, 2019

    It would be nice indeed to see a new SATA4 spec at SAS speeds, 12 Gbps....
  • TheUnhandledException - Saturday, June 29, 2019

    Why? Why not just use PCIe directly? Flash drives don't need the SATA interface, and ultimately the SATA interface becomes PCIe at the SATA controller anyway. It's just adding pointless extra translation to fit a round peg into a square hole. Connect your flash drive to PCIe and it is as slow or fast as you want it to be. With 2x PCIe 3.0 you've got ~2GB/s to work with; 4 lanes gets you 4GB/s. Upgrade to PCIe 4 and you now have 8GB/s.
  • jabber - Wednesday, June 26, 2019

    They could stay with 6Gbps just fine. I'd say work on reducing the latency.

    Bandwidth is done. Latency is more important now IMO. Ultra low latency SATA would do fine for years to come.
  • RogerAndOut - Friday, July 12, 2019

    In an enterprise environment, the 6Gbps speed is not much of an issue, as deployment does not involve individual drives. Once you have 8, 16, 32, etc. in some form of RAID configuration, the overall bandwidth increases. Such systems may also have NVMe-based modules acting as a cache to allow fast retrieval of frequently accessed blocks and to speed up the 'commit' time of writes.
  • Dug - Tuesday, June 25, 2019

    I would like to see the Intel and Micron Pro included.
    We need drives with power loss protection.
    And I don't think write-heavy is relegated to NVMe territory. That's just not in the cards for small businesses or even large businesses: 1) because of cost, 2) because of size, 3) because of scalability.
  • MDD1963 - Tuesday, June 25, 2019

    1.3 DWPD endurance (9100+ TB of writes!) for a 3.8 TB drive? Impressive! $800+... lower it to $399 and count me in! :)
  • m4063 - Tuesday, September 8, 2020

    LISTEN! The most important feature, and reason to buy these drives, is that they have a power-loss-protected (PLP) cache, not for protecting your data, BUT FOR SPEED!
    I believe the most important thing about PLP is that it should improve direct synchronous I/O (ESX and SQL), because the drive can report that the data is "written to disk" as soon as it hits the cache, whereas a non-PLP drive actually needs to write the data to the NAND before reporting "OK"!
    And for that reason it's obvious the size of the PLP-protected cache is pretty important.
    Neither of those factors is considered or tested in this review, which is a real shortcoming.
    This is the main reason you should go for these drives. I've asked Kingston about the PLP-protected cache size and I got:
    SEDC500M/480 - 1GB
    SEDC500M/960 - 2GB
    SEDC500M/1920 - 4GB
    These sizes could make a huge difference in synchronous I/O intensive systems/applications.
    AnandTech: please cover these factors in your tests/reviews!
    (admittedly, I haven't done any benchmarks myself, for lack of PLP drives)
