As operators of cloud datacenters (and data hoarders alike) need ever more storage capacity, higher-capacity HDDs continue to be developed. Last week Western Digital introduced its new Ultrastar DC HC650 20 TB drives, breaking a new capacity barrier for rotating storage.

The drives feature shingled magnetic recording (SMR) technology, which overlaps data tracks on top of one another much like the shingles of a roof, and is therefore designed primarily for write once, read many (WORM) applications (e.g., content delivery services). Western Digital's SMR hard drives are host-managed, so they will be available only to customers with appropriate software.
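
Host-managed SMR means the drive refuses out-of-order writes and leaves placement entirely to the host's software. A minimal sketch of that constraint is below; the class name, zone size, and error messages are illustrative assumptions, not Western Digital's actual zone layout or command set.

```python
# Minimal sketch of the host-managed SMR constraint: each zone accepts
# writes only at its current write pointer, so host software must issue
# strictly sequential writes. Sizes and names here are illustrative.

class SMRZone:
    def __init__(self, size_blocks=65536):
        self.size = size_blocks
        self.write_pointer = 0  # next block that may be written

    def write(self, lba, num_blocks):
        """Accept a write only if it starts exactly at the write pointer."""
        if lba != self.write_pointer:
            raise IOError(f"unaligned write: lba {lba} != wp {self.write_pointer}")
        if self.write_pointer + num_blocks > self.size:
            raise IOError("write past end of zone")
        self.write_pointer += num_blocks

    def reset(self):
        """Like a zone reset (TRIM-style): the zone can be rewritten from 0."""
        self.write_pointer = 0

zone = SMRZone()
zone.write(0, 128)      # sequential write: accepted
zone.write(128, 128)    # continues at the write pointer: accepted
try:
    zone.write(0, 1)    # random rewrite: a host-managed drive rejects this
except IOError as e:
    print("rejected:", e)
zone.reset()            # the host resets the whole zone to rewrite it
zone.write(0, 1)        # accepted again from block 0
```

This is why "appropriate software" is a hard requirement: a filesystem or application that issues random writes simply does not work on a host-managed drive.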

Western Digital’s Ultrastar DC HC650 20 TB is based on the company’s all-new helium-sealed enterprise-class platform, the company’s first nine-platter design. The new 3.5-inch hard drives feature a 7200 RPM spindle speed and will be available with either a SATA 6 Gbps or SAS 12 Gbps interface, depending on the SKU. Since the product is not expected to ship immediately, the manufacturer is not disclosing its full specifications just yet, but it has stated that key customers are already in the loop.

Featuring a very high per-platter capacity of around 2.2 TB, the Ultrastar DC HC650 20 TB HDDs offer higher sequential read performance than their predecessors, but their random read IOPS-per-TB is lower than that of older, lower-capacity HDDs. As a result, Western Digital’s clients deploying the 20 TB SMR HDDs will need to mitigate two things: the physical limitations of SMR, by maximizing sequential writes (and minimizing random writes), and the lower IOPS-per-TB performance, to minimize the impact on their QoS.
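
The IOPS-per-TB concern is simple arithmetic: a single actuator delivers roughly the same random-read IOPS regardless of capacity, so every added terabyte dilutes the figure. A quick illustration with an assumed IOPS number (Western Digital has not published one for this drive):

```python
# Illustrative only: 7200 RPM drives typically sustain on the order of
# 150-200 random read IOPS per actuator; 170 is an assumed figure, not
# a published spec for the Ultrastar DC HC650.
drive_iops = 170  # assumed random-read IOPS of a single actuator

for capacity_tb in (10, 14, 20):
    print(f"{capacity_tb} TB: {drive_iops / capacity_tb:.1f} IOPS/TB")
```

The same drive mechanism spread over twice the capacity serves each terabyte half as often, which is exactly the QoS dilution datacenter operators have to plan around.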

As far as availability is concerned, the 20 TB version of the Ultrastar DC HC650 SMR drives will be available as samples by the end of the year. Actual shipments will start once the drives are qualified by customers. Because the HDDs will be available to select customers only, Western Digital does not publish per-unit pricing.

Source: Western Digital

50 Comments

  • patrickjp93 - Wednesday, September 11, 2019 - link

    Okay, but that requires the storage know the native file system (encrypted object storage like S3 can't do this because the actual host can't read your data or metadata), and then it has to be tunable for people who have off-site redundancy or backups, such as me having encrypted certified true copies of some legal papers like tax returns and others. I'd want those kept indefinitely until I die. Then companies keep HR and payroll records for 7+ years for regulatory reasons...

    It's very tough to prescribe a one size fits all solution for this, which I think is the big driver behind not seeing it yet.
  • ads295 - Thursday, September 12, 2019 - link

    I think AI and machine learning is a solid application for this. But only if it's boxed into a PC with zero access to the internet with some way to NOT leak the generated pattern data.
    Imagine if you'd use it for like 5 years on the same drive, you'd be able to know EXACTLY what you access. The other files you can look into and decide whether to keep.
    Moving a step further, the AI could even be configured to move the unused data (without your explicit knowledge of the specific data being moved), to another of your empty hard drives. If you don't access that data for say the next few years, then you can give the software the power to delete that data by itself, saving you the false anxiety that comes with seeing something and thinking "But maybe I'll need this later on, I shouldn't delete it now."
    When I format my phone I literally only backup photos (contacts and SMSes are synced) and wipe out EVERYTHING else. If I remember the app then I needed it, else I didn't. Simple.
  • PeachNCream - Thursday, September 12, 2019 - link

    I'm ashamed to admit that I'm one of those people that keeps a bunch of junk data around that I really don't need now and won't ever need again, but I save it because I can. I figure that since I don't have a meat world storage unit filled with crap I never touch or a house with closets bursting at the seams with life's leftovers, the crufty data mountain is probably not a big deal. It all does fit inside 256GB of microSD.
  • mode_13h - Saturday, September 14, 2019 - link

    The "atime" metadata is frequently disabled, because it triggers a lot of writes for very little benefit. And with random writes on SMR being so painful, you can bet it won't be used there.
  • bebimbap - Wednesday, September 11, 2019 - link

    My main concern is how long it takes to write/read to these things.
    Even at 266 MB/s (about 2.1 Gbps) it'd take the drive ~20 hours to fill/read, but in real life you should expect closer to 40-50 hours to do so.
    This is the same reason I always avoided anything bigger than 8GB for USB2 drives.
  • eastcoast_pete - Wednesday, September 11, 2019 - link

    That's one of the two key reasons why I don't like SMR drives. SMR drives are notoriously slower on rewrites, as that "upsets the apple cart", and requires adjacent data to be shuffled around - hence the need for "additional software" for data centers to minimize that. Also, shingling allows for a tighter squeeze, but can magnify data loss if it occurs.
  • bcronce - Wednesday, September 11, 2019 - link

    I remember watching a presentation on OpenZFS about supporting feeding SMR drive layout information into ZFS allocation logic in order to allow the COW (copy-on-write) pattern to minimize mid-shingle updates. You can append to a shingle all you want until it's full, but updating is expensive because of the read-modify-write (RMW) of potentially the entire shingle.

    I want to say that the HD supports knowing if the data in a shingle is free or not, so it can know when to append vs RMW. So similar to an SSD, I think the host can tell the drive to TRIM/ZERO an entire shingle to start over. With an FS like ZFS, this is not horribly difficult because of COW.
  • John_M - Friday, September 13, 2019 - link

    It's called a "band", not a "shingle".
  • PeachNCream - Monday, September 16, 2019 - link

    Guitar riffs! Big hair! The drummer getting arrested after a concert! That band!!!!
  • nandnandnand - Wednesday, September 11, 2019 - link

    USB2 drives were artificially limited by the data rate and bad NAND. With HDDs, it's not getting much better, unless they can use that multi-actuator technology for a one time speed doubling.

    Considering the size of the drive, you will happily wait 2 days if that's what it takes. Because there's no other cost effective way to store 10, 16, 20 terabytes. And you might need that capability.

    You could use RAID to speed things up, until the point that it's better for you to get SSDs.
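
The append-vs-rewrite asymmetry discussed in the comments above can be sketched with a toy cost model. This is not ZFS or any drive's actual firmware behavior, just an illustration of why a mid-band update is so much more expensive than an append; the band size and offsets are made up.

```python
# Sketch of why in-place updates hurt on SMR: overwriting one block in the
# middle of a band forces a read-modify-write of everything from that block
# up to the band's write pointer, while an append touches only the new block.
# Band fill levels and offsets below are illustrative.

def rewrite_cost_blocks(band_fill, offset):
    """Blocks that must be written to update one block at `offset` in a
    band currently filled to `band_fill` blocks."""
    if offset >= band_fill:
        return 1  # appending at/after the write pointer: just the new block
    return band_fill - offset  # everything from offset onward is rewritten

print(rewrite_cost_blocks(band_fill=50000, offset=50000))  # append
print(rewrite_cost_blocks(band_fill=50000, offset=0))      # rewrite from the head
```

A copy-on-write filesystem sidesteps the expensive case by always writing new data at the band's tail and later resetting whole bands whose contents have all been superseded, which matches the TRIM/reset behavior described in the comment.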
