Western Digital announced this week that it has started shipments of its first products based on 3D QLC NAND memory. The initial devices to use the highly-dense flash memory are retail products (e.g., memory cards, USB flash drives, etc.) as well as external SSDs. Eventually, high-density 3D QLC NAND devices will be used to build high-capacity SSDs that will compete against nearline hard drives.

During Western Digital's quarterly earnings conference call earlier this week, Michael Cordano, president and COO of the company, said that in the third quarter of calendar 2019 (Q1 FY2020) the manufacturer “began shipping 96-layer 3D QLC-based retail products and external SSDs.” The executive did not elaborate on which product lines now use 3D QLC NAND, though higher-capacity NAND is typically introduced first in products such as high-capacity memory cards and external drives.

Western Digital and its partner Toshiba Memory (now called Kioxia) were among the first companies to develop 64-layer 768 Gb 3D QLC NAND back in mid-2017 and even started sampling these devices back then, but WD/Toshiba opted not to mass produce that NAND. Meanwhile, in mid-2018, Western Digital introduced its 96-layer 1.33 Tb 3D QLC NAND devices, which could either be used to build storage products with considerably higher capacities, or to cut the costs of drives compared to 3D TLC-based solutions.

At present, Western Digital’s 1.33 Tb 3D QLC NAND devices are the industry’s highest-capacity commercial NAND chips, so from this standpoint the company is ahead of its rivals. But while it makes great sense to use 1.33 Tb 3D QLC NAND for advanced consumer storage devices, these memory chips were developed primarily for ultra-high-capacity SSDs that could rival nearline HDDs in certain applications.

It is hard to say when Western Digital will commercialize such drives, as the company is only starting to qualify 96-layer 3D QLC NAND for SSDs, but it will definitely be interesting to see which capacity points this memory will hit.

On a related note, Western Digital also said that in Q3 2019 (Q1 FY2020), bit production of 96-layer 3D NAND exceeded bit production of 64-layer 3D NAND.


Source: Western Digital


  • Billy Tallis - Saturday, November 2, 2019 - link

    The standard for enterprise drives is 3 month retention vs 1 year for client/consumer drives. That allows enterprise write endurance limits to be set a bit higher than for client drives. If flash ever gets cheap enough to be attractive for backup use, that standard will have to be reconsidered. Reply
  • azazel1024 - Wednesday, November 6, 2019 - link

    The point though is 1 year isn't sufficient for archival purposes and not great for cold drives either. If a cold storage drive is only brought online periodically, even if more frequently than annually, it may still end up suffering bit rot eventually. The controller needs time to refresh cells that are aging. This doesn't take seconds or even minutes, especially if you are talking months of cold storage. Depends on the drive controller and what the drive is doing at the time, but is likely measured in at least tens of minutes if not hours to accomplish for a whole drive (think P/E the entire drive, or at least much of it if it has really been offline for months and only occasionally brought online to update some of the contents).

That retention is also at 20C IIRC. Room temperature. Which isn't a terribly bad metric to use, but the warmer the environment, the faster bit rot occurs. And it plummets quickly. At 40C, which is hot, but no worse than storage in a tropical/equatorial environment might be (or a heat wave where I live), that 1 year might be more like a month or two. Now I usually don't store my SSDs sitting outside, but my cold storage drive likely varies in temp from about 18C to about 24C throughout the year.

    I am sure plenty of people have even more varied temperatures.

And that is for MLC flash; the JEDEC retention standard is 1 year at max rated P/E. IIRC for TLC it is 9 months. I don't know what the standard is for QLC (I assume even worse).
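The temperature scaling described in this comment follows the Arrhenius acceleration model that JEDEC's retention methodology is based on. A minimal sketch of that scaling, assuming an illustrative activation energy of ~1.1 eV (an assumed value for floating-gate charge loss, not a figure specified for any particular NAND):

```python
# Sketch of Arrhenius temperature acceleration for data retention.
# Ea = 1.1 eV is an assumed, commonly cited value for charge loss in
# flash; real devices vary, so treat the output as illustrative only.
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def retention_scale(t_ref_c: float, t_use_c: float, ea_ev: float = 1.1) -> float:
    """Acceleration factor: how much faster retention loss occurs at
    t_use_c than at the rated reference temperature t_ref_c."""
    t_ref = t_ref_c + 273.15  # convert Celsius to Kelvin
    t_use = t_use_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_ref - 1.0 / t_use))


# A drive rated for 1 year of retention at 20C, stored at 40C instead:
factor = retention_scale(20, 40)
print(f"Retention loss is roughly {factor:.0f}x faster at 40C")
print(f"1 year at 20C shrinks to roughly {365 / factor:.0f} days at 40C")
```

With these assumed numbers a 20C-to-40C move accelerates charge loss by roughly an order of magnitude, which lines up with the "a month or two" estimate above.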

Now a QLC drive that might only ever see a handful of P/E cycles and is used for cold storage might still be okay if you are putting it online a handful of times a year for an hour or two to back things up, letting it run some refresh routines, and putting it back in storage. Still not a good choice for archival (which implies more of a write-once, read-once/many pattern where it might be offline for years).

If the write and read speeds can be improved some, I'd consider QLC to replace my RAID array, but I'd probably still want a RAID array. I don't typically need good IOPS on it, as it's file storage. Not app launching and not small files. Photos, video, music, applications (installers), etc. My current 2x3TB RAID0 setup can push around 360MB/sec max and around 210MB/sec min on the inner tracks. From what I've seen, QLC drives are in the 350MB/sec read range and about 50-70MB/sec for sustained writes. Now with SLC caching that write speed is drastically increased, but only for relatively small amounts of writes (yes, I'd consider 10-15GB a small amount of writes). If it was more like 50-60GB of SLC writes I might consider it.

That said, even then, I don't know that I would. About once a year I manage to bork something badly enough that I am restoring the RAID array in my desktop or in my server (so far I've never had to restore both from my cold storage USB drive). Slowing down to 70MB/sec would be pretty painful when right now it can chug along at 235MB/sec over my 2x1GbE link. Even with two QLC drives in RAID0, that's about 140MB/sec, a lot slower than what I have going on now (reads would be fine and would easily exceed 5GbE with a pair of QLC drives in RAID0). That said, TLC NAND is coming down in price steadily too. That does have the performance I need, so I'd probably consider a nicer TLC drive to replace my RAID0 HDD array, or at worst a pair of TLC drives in RAID0 to keep write performance beyond the SLC buffer high.
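The bottleneck math in this comment can be checked quickly; the throughput figures below are the ones quoted in the comment (assumptions, not measured benchmarks):

```python
# Quick check of the restore-speed bottleneck using the commenter's
# own figures. RAID0 striping roughly doubles sustained throughput;
# the effective restore speed is capped by the slower of the array
# and the network link.
qlc_write_mb_s = 70     # sustained QLC write past the SLC cache (assumed)
drives_in_raid0 = 2
link_mb_s = 235         # ~2x1GbE after protocol overhead (assumed)

raid0_write = qlc_write_mb_s * drives_in_raid0   # 140 MB/s
bottleneck = min(raid0_write, link_mb_s)

print(f"RAID0 sustained write: {raid0_write} MB/s")
print(f"Effective restore speed: {bottleneck} MB/s (link can carry {link_mb_s} MB/s)")
```

Even striped, the QLC array's 140MB/sec falls well short of the 235MB/sec the link can carry, so the drives, not the network, set the restore time.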
    Reply
  • npz - Friday, November 1, 2019 - link

    > even more nonsensical than the arguments i heard for why optical drives are here to stay

Optical media IS here to stay. There's nothing more reliable for archiving than good optical media, as mentioned already. SSDs inherently have a cold storage problem, and no, the controller can't deal with that once every few months, because the drive won't be plugged in or powered on!
    Reply
  • npz - Friday, November 1, 2019 - link

Even Amazon uses special hard drives, not SSDs, for their Glacier storage class. Reply
  • shompa - Thursday, November 7, 2019 - link

Add to that legal requirements. Most countries have laws requiring sensitive data to be stored for 10 years on write-once media so that the data can't be changed. That's why optical jukeboxes still exist in large corporations. Reply
  • ksec - Saturday, November 2, 2019 - link

    >in addition to being flat out wrong,

    LOL.

HDD may be dying in the consumer space, but it is doing extremely well in enterprise and datacenter. Cost amortized over unit shipments completely neglects the cost/GB shipped to these customers. MAMR, now renamed EAMR, still has life left to bring a 10x capacity improvement. NAND, meanwhile, has exhausted its die-shrinking advantage and will now require die stacking to improve its cost model, something which is not quite proven yet, as higher die stacking hurts yield.

There are also other aspects in which NAND currently does not fit backup, archive, and high-capacity volume storage requirements. So to say it will be gone in 5 years' time is premature.

    That is assuming HDD maker execute on their roadmap.
    Reply
  • 8lec - Saturday, November 2, 2019 - link

    Optical drives are great... Cuz 2 of our cars cannot play anything except CDs. Lmao Reply
  • Great_Scott - Monday, November 4, 2019 - link

    SSD endurance isn't the core issue here. Controller failure is very common, and in that case you lose all data with no chance for recovery.

    HDDs both tend to fail less often and give warning signs before they go, and when they do low-level data recovery is often possible.
    Reply
  • PeachNCream - Friday, November 1, 2019 - link

That's my concern as well. It's hard to find reliable information about how long a QLC-based storage device will retain data in an unpowered state. My backup needs are modest enough that a 1TB 2.5" HDD in an external case is good enough to keep a cold copy of everything I care about (actually using less than 200GB of it in total at the moment). While I like the durability of NAND from a shock and vibration tolerance perspective, and responsiveness without noise is nice too, speed isn't that critical in my backups since I have to push everything over USB (2.0 in the case of my oldest laptop), so a hard drive that sits in a drawer for half a year at a time seems like a better option. With that said, USB flash drives are sold as backup devices and lots of people buy and use them without problems. During back-to-school shopping at Walmart, my local store had 64GB USB drives available at less than $8 USD. I picked one up and although it's really slow, it has been working since late August with no problems and I've been using it regularly to sneakernet stuff between computers. Reply
  • Scipio Africanus - Monday, November 4, 2019 - link

For cold storage needs, just go HDD honestly. There's no point in an SSD, as you can read from the above comments. Reply
