The ADATA XPG SX6000 Pro 1TB SSD Review: Realtek's Entry-Level NVMe Solution
by Billy Tallis on December 18, 2019 12:30 PM EST
This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does allow us to observe transitions in the drive's behavior as it fills up. This can allow us to estimate the size of any SLC write cache, and get a sense for how much performance remains on the rare occasions where real-world usage keeps writing data after filling the cache.
As with the Realtek-based ADATA SU750, the ADATA SX6000 Pro shows a large SLC cache and a sudden performance drop when it runs out. Since the SX6000 Pro's RTS5763DL controller is a 4-channel design compared to the 2-channel RTS5733, the post-SLC write speed is quite a bit faster than the SU750, but it's also less consistent. Aside from a few momentary drops during the SLC phase, the cache lasts for about 348GB of writes, which is pretty much the largest SLC cache size possible with 1TB of TLC NAND. The write speed to the SLC cache is just a hair faster than the advertised 1.5GB/s.
[Charts: Average Throughput for last 16 GB | Overall Average Throughput]
The post-SLC write speed from the SX6000 Pro is over twice as fast as the ADATA SU750 and is ahead of the Intel 660p, but is not up to the speed of the Mushkin Helix-L. And the Toshiba BG4 shows that DRAMless drives don't have to be anywhere near this slow; the BG4 is 3-4 times faster after its admittedly small SLC cache runs out.
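The fill behavior described above can be sketched with a simple two-phase model: writes land in the SLC cache at full speed until the cache is exhausted, then drop to the direct-to-TLC rate for the remainder. The cache size and SLC speed below come from the measurements in this review; the post-SLC speed and usable capacity are round illustrative figures, not exact test results.

```python
# Two-phase model of a sequential drive fill with an SLC write cache.
# Figures are approximate: cache size and SLC speed are from this review,
# post-SLC speed and usable capacity are assumed round numbers.
DRIVE_GB = 960              # usable capacity (illustrative)
SLC_CACHE_GB = 348          # observed cache size before the performance drop
SLC_SPEED_GBPS = 1.5        # write speed while the cache lasts
POST_SLC_SPEED_GBPS = 0.25  # assumed direct-to-TLC write speed

def fill_time_seconds(total_gb, cache_gb, fast_gbps, slow_gbps):
    """Time to sequentially fill the drive: fast writes until the SLC
    cache is exhausted, then slow direct-to-TLC writes for the rest."""
    cached = min(total_gb, cache_gb)
    remainder = max(0.0, total_gb - cache_gb)
    return cached / fast_gbps + remainder / slow_gbps

t = fill_time_seconds(DRIVE_GB, SLC_CACHE_GB, SLC_SPEED_GBPS, POST_SLC_SPEED_GBPS)
print(f"fill time: {t:.0f} s, overall average: {DRIVE_GB / t:.2f} GB/s")
```

The model makes the key point visible: even though a third of the fill runs at 1.5GB/s, the overall average is dominated by the slower post-cache phase, which is why the "Overall Average Throughput" chart looks much worse than the SLC burst speed.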
Working Set Size
Most mainstream SSDs have enough DRAM to store the entire mapping table that translates logical block addresses into physical flash memory addresses. DRAMless drives only have small buffers to cache a portion of this mapping information. Some NVMe SSDs (the SX6000 Pro included) support the Host Memory Buffer feature and can borrow a piece of the host system's DRAM for this cache rather than needing lots of on-controller memory.
When accessing a logical block whose mapping is not cached, the drive needs to read the mapping from the full table stored on the flash memory before it can read the user data stored at that logical block. This adds extra latency to read operations and in the worst case may double random read latency.
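The worst-case doubling can be expressed as a simple expected-latency model: a cache hit costs one flash read, a miss costs an extra flash read to fetch the mapping entry first. The 90µs figure below is an assumed ballpark for a TLC read, not a measured value from this review.

```python
def expected_read_latency_us(hit_rate, flash_read_us=90.0):
    """Expected QD1 random read latency given a mapping-cache hit rate.
    A hit costs one flash read; a miss must first fetch the mapping
    entry from flash, roughly doubling the latency. The 90us flash
    read time is an assumed illustrative figure."""
    miss_rate = 1.0 - hit_rate
    return hit_rate * flash_read_us + miss_rate * 2.0 * flash_read_us
```

With a 100% hit rate the model gives one flash read's worth of latency; with a 0% hit rate it gives twice that, matching the worst case described above.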
We can see the effects of the size of any mapping buffer by performing random reads from different sized portions of the drive. When performing random reads from a small slice of the drive, we expect the mappings to all fit in the cache, and when performing random reads from the entire drive, we expect mostly cache misses.
When performing this test on mainstream drives with a full-sized DRAM cache, we expect performance to be generally constant regardless of the working set size, or for performance to drop only slightly as the working set size increases.
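The expected shape of the working-set sweep can be sketched with a toy model: if the mapping cache covers some fixed span of the drive, uniform random reads over a working set no larger than that span always hit, and larger working sets hit in proportion to the coverage. The 64GB coverage figure is purely hypothetical, chosen only to illustrate the knee in the curve.

```python
def mapping_hit_rate(working_set_gb, cache_coverage_gb):
    """Fraction of uniform random reads whose mapping entry is cached,
    assuming the cache covers a fixed contiguous span of the drive."""
    return min(1.0, cache_coverage_gb / working_set_gb)

# Hypothetical drive whose HMB/SRAM buffer covers 64 GB of mappings:
# small working sets stay at full speed, larger ones degrade.
for ws_gb in (1, 16, 64, 256, 1024):
    print(f"{ws_gb:5d} GB working set -> hit rate {mapping_hit_rate(ws_gb, 64.0):.2f}")
```

A drive behaving like this model shows flat performance up to the knee and a gradual decline beyond it; a flat line across the entire sweep, as the SX6000 Pro produces, instead suggests the cache provides no measurable benefit at any working set size.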
The ADATA SX6000 Pro shows very consistent QD1 random read performance regardless of the test's working set size, so it appears that it is not deriving any benefit from the NVMe Host Memory Buffer feature, unlike the Toshiba BG4. Nor do we see an obvious cache size effect from on-controller SRAM as with the WD Blue SN500. The SX6000 Pro is not alone in this; the Mushkin Helix-L with Silicon Motion's DRAMless NVMe controller also leaves us largely wondering how HMB earns its keep.