Compatibility Issues

One of the major new features of Intel's Tiger Lake mobile processors is support for PCIe 4.0 lanes coming directly off the CPU. The chipset's PCIe lanes are still limited to PCIe 3.0 speeds, but SSDs or a discrete GPU can now get twice the bandwidth.

This change is relevant because of how Intel's Optane Memory caching software interacts with the system's hardware and firmware. Earlier generations of Optane Memory and Intel's NVMe RAID solutions for their consumer platforms all relied on the NVMe SSDs being attached through the chipset. They used an ugly hack to hide NVMe devices from standard NVMe driver software and make them accessible only through the chipset's SATA controller, where Intel's drivers alone knew to look for them. Using chipset-attached NVMe devices with the standard NVMe drivers included in operating systems like Windows or Linux required changing the system's BIOS settings to put the SATA controller in AHCI mode rather than RAID/RST mode. Most of the PC OEMs that didn't provide that BIOS option were eventually shamed into adding it, or into activating the NVMe remapping mode only when an Optane Memory device is installed.

For Tiger Lake and CPU-attached NVMe drives, Intel has brought over a feature from their server and workstation platforms. The Intel Volume Management Device (VMD) is a feature of the CPU's PCIe root complex. VMD leaves NVMe devices visible as proper PCIe devices, but enumerated in a separate PCI domain from all the other devices in the system. In the server space, this was a clear improvement: it made error containment and hotplug easier to handle in the driver without involving the motherboard firmware, and VMD became the foundation for Intel's Virtual RAID on CPU (VROC) NVMe software RAID on those platforms. In the client space, VMD still accomplishes Intel's goal of ensuring that the standard Windows NVMe driver can't find the NVMe drive, leaving it available for Intel's drivers to manage.
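As a sketch of what this looks like from an operating system that does ship a VMD driver (Linux's `vmd` module): the NVMe drive disappears from the ordinary domain-0000 PCI bus and reappears behind the VMD controller in a separate synthetic domain. The addresses and commands below are illustrative, not captured from the review system.

```shell
# Without a VMD driver bound, only the VMD endpoint itself shows up in the
# normal PCI domain; the NVMe drive behind it is not enumerated at all.
lspci -D | grep -i 'Volume Management Device'

# With the Linux 'vmd' driver loaded, the hidden root ports and the NVMe
# controller are enumerated in a synthetic PCI domain (0x10000 and up),
# i.e. an address like 10000:e1:00.0 instead of 0000:xx:00.0.
lspci -D -d ::0108        # PCI class 0108 = NVMe controllers

# From there the stock nvme driver attaches to the device as usual:
ls /sys/bus/pci/drivers/nvme/
```

This is why Linux needs no vendor storage driver here, while the Windows installer cannot see the drive until an Intel VMD driver is supplied.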

Unfortunately, this switch seems to mean we're going through another round of compatibility headaches with missing BIOS options to disable the new functionality. It's not currently possible to do a clean install of Windows 10 onto these machines without providing an Intel VMD driver at the beginning of the installation process. Without it, Windows simply cannot detect the NVMe SSD in the CPU-attached M.2 slot. As a result, all of the Windows-based benchmark results in this review were using the Intel RST drivers (except for the Enmotus FuzeDrive SSD, which has its own driver). Normally we don't bother with vendor-specific drivers and stick with Microsoft's NVMe driver included with Windows, but that wasn't an option for this review.

We had planned to include a direct comparison of Intel's Optane Memory H20 against the Enmotus FuzeDrive P200 SSD, but Intel's VMD+RST situation on Tiger Lake prevents the Enmotus drivers from properly detecting the FuzeDrive SSD. On most platforms, installing the FuzeDrive SSD will cause Windows Update to fetch the Enmotus drivers and associate them with that particular NVMe device. Their Fuzion application can then be downloaded from the Microsoft Store to configure the tiering. Instead, on this Tiger Lake notebook, the Fuzion application reports that no FuzeDrive SSD is installed even when the FuzeDrive SSD is the only storage device in the system. It's not entirely clear whether the Intel VMD drivers merely prevent the FuzeDrive software from correctly detecting the drive as one of their own and unlocking the tiering capability, or if there's a more fundamental conflict between the Intel VMD and Enmotus NVMe drivers that prevents them from both being active for the same device. We suspect the latter.

Ultimately, this mess is caused by a combination of Intel and Enmotus wanting to keep their storage software functionality locked to their hardware (though Enmotus also sells their software independently), and Microsoft's inability to provide a clean framework for layering storage drivers the way Linux can (while allowing for the hardware lock-in these vendors demand). Neither of these reasons is sufficient justification for shipping such convoluted "solutions" to end users. It's especially disappointing to see that Intel's new and improved method for supporting Optane Memory caching now breaks a competitor's solution even when the Optane Memory hardware is removed from the system. The various software implementations of storage caching, tiering, RAID, and encryption available in the market are powerful tools, but they're at their best when they can be used together. Intel and Microsoft need to step up and address this situation, or attempts at innovation in this space will continue to be stifled by unnecessary complexity that makes these storage systems fragile and frustrating.
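For contrast, the Linux "layering" alluded to above is the device-mapper stack, where caching, RAID, and encryption are composable targets that sit on top of any block device. A minimal lvmcache sketch, using hypothetical device names (`/dev/sda` as the large slow drive, `/dev/nvme0n1` as the small fast SSD):

```shell
# Hypothetical devices: /dev/sda = slow bulk drive, /dev/nvme0n1 = fast SSD.
# Both become physical volumes in a single volume group:
pvcreate /dev/sda /dev/nvme0n1
vgcreate vg0 /dev/sda /dev/nvme0n1

# Main (origin) volume on the slow drive, cache volume on the fast one:
lvcreate -n data -L 900G vg0 /dev/sda
lvcreate -n fast -L 100G vg0 /dev/nvme0n1

# Attach the fast volume as a dm-cache layer in front of the slow one.
# A filesystem (or a further dm-crypt layer) sits on top, unaware of the stack.
lvconvert --type cache --cachevol vg0/fast vg0/data
```

Any vendor's SSD can serve as the cache device in this stack, which is precisely the kind of mix-and-match that the Windows-side vendor drivers prevent.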

Comments

  • deil - Wednesday, May 19, 2021 - link

    I still feel this is a lazy solution.
    QLC for data storage, Optane for file metadata storage is the way.
    Instant search and big size, best of both worlds.
  • Wereweeb - Wednesday, May 19, 2021 - link

    What you're describing is inferior to current QLC SSDs. Optane is still orders of magnitude slower than RAM, and I bet it would still be slower than just using system RAM like many DRAMless drives do. Plus, it's expensive for a consumer product.

    Optane's main use is to add terabytes of low-cost low-latency storage to workstations (that's how Intel uses it, to sell said workstations), and today both RAM and SLC drives are hot on its heels.
  • jabber - Wednesday, May 19, 2021 - link

    All I want is an OS file system that can handle microfiles without grinding down to KB/s all the time. Nothing I love more than seeing my super fast storage grind to a halt when I do large user data file copies.
  • Tomatotech - Wednesday, May 19, 2021 - link

    Pay for a 100% Optane SSD then. Or review your SSD / OS choices if this aspect is key to your income.
  • haukionkannel - Wednesday, May 19, 2021 - link

    If only there were a pure Optane M.2 SSD around 500 GB to 1 TB… and I know… it would cost at least $1000 to $2000, but that would be quite useful in high-end NAS storage or even as a main PC system drive.
  • Fedor - Sunday, May 23, 2021 - link

    There are, and have been for quite a few years. See the 900p, 905p (discontinued) and enterprise equivalents like the P4800X and now the new P5800X.
  • jabber - Wednesday, May 19, 2021 - link

    They ALL grind to a halt when they hit thousands of microfiles.
  • ABR - Wednesday, May 19, 2021 - link

    As can be seen from the actual application benchmarks, these caching drives add almost nothing to (and sometimes take away from) performance. This matches my experience a few years ago with a hybrid SSD/hard drive on Windows, which also had 16 or 32 GB for the fast part – it was indistinguishable from a regular hard drive in performance. Upgrading the same machine to a full SSD, on the other hand, was night and day. Basically, software doesn't seem to be able to do a good job of determining what to cache.
  • lightningz71 - Wednesday, May 19, 2021 - link

    I see a lot of people bagging on Optane in general, both here and at other forums. I admit to not being a fan of it for many reasons, however, when it works, and when it's implemented with very specific goals, it does make a big difference. The organization I work at got a whole bunch (thousands) of PCs a few years ago that had mechanical hard drives. Over the last few years, different security and auditing software has been installed on them that has seriously impacted their performance. The organization was able to bulk buy a ton of the early 32GB Optane drives and we've been installing them in the machines as workload has permitted. The performance difference when you get the configuration right is drastically better for ordinary day to day office workers. This is NOT a solution for power users. This is a solution for machines that will be doing only a few, specific tasks that are heavily access latency bound and don't change a lot from day to day. The caching algorithms figure out the access patterns relatively quickly and it's largely indistinguishable from the newer PCs that were purchased with SSDs from the start.

    As for the H20, I understand where Intel was going with this, and as a "minimum effort" refresh on an existing product, it achieves its goals. However, I feel that Intel has seriously missed the mark with this product in furthering the product itself.

    I suggest that Intel should have invested in their own combined NVME/Optane controller chip that would do the following:
    1) Use PCIe 4.0 on the bus interface with a unified 4x setup.
    2) Instead of using regular DRAM for caching, use the Optane modules themselves in that role. Tier the caching with host-based caching like the DRAMless controller models do, then tier that down to the Optane modules. They can continue to use the same strategies that regular Optane uses for caching, but have it implemented on the on-card controller instead of the host operating system. A lot of the features that were the reason the Optane device needed to be its own PCIe device separate from the SSD were addressed in NVMe spec 1.4 (a and b), meaning that a lot of those things can be done through the unified controller. A competent controller chip should have been achievable that would have realized all of the features of the existing product, but with much better I/O capabilities.

    Maybe that's coming in the next generation, if that ever happens. This... this was a minimum effort to keep a barely relevant product... barely relevant.
  • zodiacfml - Thursday, May 20, 2021 - link

    I did not get the charts. I did not see any advantage except if the workload fits in Optane, is that correct?
