Gunbuster - Wednesday, September 11, 2019 - link
"Featuring a very high per-platter capacity of around 2.2 GB" Must be a tall drive to fit 9090 platters. ;)
YB1064 - Wednesday, September 11, 2019 - link
For anybody interested, a good introductory article was published in IEEE Spectrum: https://spectrum.ieee.org/computing/hardware/laser...
Another one from 2009:
https://spectrum.ieee.org/computing/hardware/laser...
danielfranklin - Wednesday, September 11, 2019 - link
Access denied?
ads295 - Thursday, September 12, 2019 - link
The URL literally ends at "laser...", please update. Oops, I mean post another comment. You know, because these are the '90s we're living in...
mode_13h - Saturday, September 14, 2019 - link
LOL. And, of course, it's the same link twice.
mode_13h - Saturday, September 14, 2019 - link
Oh, they must've just copied-and-pasted from the post below. The links in that one are real.
MDD1963 - Wednesday, September 11, 2019 - link
Don't be silly... the platters are just very thin, and spaced close together!
mode_13h - Saturday, September 14, 2019 - link
Mmmm... baklava.
MrSpadge - Wednesday, September 11, 2019 - link
"Western Digital 20 TB HDD: Crazy Capacity for Cold Storage"
There was a time when 20 MB or 20 GB HDDs were considered crazy. Actually, 20 MB is crazy again by today's standards ;)
PeachNCream - Thursday, September 12, 2019 - link
My first IBM-compatible was a 386 that had two hard drives, a 60MB and a 40MB drive. Even given the caveats of SMR, packing 20TB into a single 3.5 inch drive is impressive even if there are over 9000 platters that each feature the article's quoted 2.2GB per platter capacity.
Soda - Wednesday, September 18, 2019 - link
I remember my Amiga 600HD with its 20MB HD. The read speed was on par with an IBM-compatible floppy-disk drive... Yup, pretty slow, but I was able to play games like Agony, The Secret of Monkey Island 1+2 and Heart of China from it, so I was happy.
Tunnah - Wednesday, September 11, 2019 - link
This drive is 1000x bigger than the first one I ever got some 20-odd years ago. That is just nuts.
Slash3 - Wednesday, September 11, 2019 - link
One million times larger than my first 20MB Seagate, which was itself was a great upgrade vs my first system, a dual floppy drive (only) Apple ][. And yet, still somehow never large enough. :)
Slash3 - Wednesday, September 11, 2019 - link
(Ignore my extraneous "was," plz k thx)
johnnycanadian - Wednesday, September 11, 2019 - link
5MB Rana here, connected to an Apple ][+ for BBS use, circa 1981? You could kill a man with that massive piece of steel.
Bulat Ziganshin - Wednesday, September 11, 2019 - link
My first had 105 bytes of memory, and I had to re-enter the program on each power-on. I won!
29a - Wednesday, September 11, 2019 - link
Your hard drive?
mode_13h - Saturday, September 14, 2019 - link
No, he means core memory, probably. Some old DEC engineers I worked with used to regale us young'uns with tales of times when they had to enter diag programs via front-panel switches. The best of them supposedly knew the programs by heart, after entering them so many times. But, I'm guessing those machines had a fair bit more than 105 bytes of RAM.
mode_13h - Saturday, September 14, 2019 - link
And that 20 MB sure beat floppies, or dare I say punch cards, eh?
Kjella - Wednesday, September 11, 2019 - link
Your first drive is 1000x bigger than my first one from the 80s. Though my dad's got us all beat: he was checking individual bits on vacuum tubes. Mom was punching punch cards. Though I think every generation has its "Wait, you don't need candles or lamp oil, you just flip a switch and have light?" or "What do you mean there's no horse, it just runs by itself?" moments to mess with our heads. Once you grow up with it, that's just the way it is.
mode_13h - Saturday, September 14, 2019 - link
Yeah, kidz in a couple more generations are gonna be like "OMG, your bits had only 2 states, and they were mutually exclusive?"
mode_13h - Saturday, September 14, 2019 - link
OMG! Technology!
Bp_968 - Thursday, September 19, 2019 - link
What fries my brain is getting a new phone a couple of years ago and spending $30-40 on a 128GB microSD card. Something the size of my pinky fingernail holding 2000 times the data my PC had when I was a teenager! (As a kid, my dad's TRS-80 used cassette tapes for "mass" storage.)
mooninite - Wednesday, September 11, 2019 - link
So... Retail in Q1 2020?
UltraWide - Wednesday, September 11, 2019 - link
"Western Digital’s SMR hard drives are host managed, so they will be available only to customers with appropriate software."
This might be a barrier to entry for the short term...
mode_13h - Saturday, September 14, 2019 - link
I expect SMR drives will mainly be used for volume-level backups.
DanNeely - Wednesday, September 11, 2019 - link
If it was Q1, they'd've said Q1. H1 normally means Q2, but with low-grade obfuscation.
Also, as noted by UltraWide, these drives offload a lot of logic to the host system and will never see general availability. What we can look forward to for building big NASes/etc. is the 18TB non-SMR model.
romrunning - Wednesday, September 11, 2019 - link
Yeah, I don't look forward to SMR drives at all. :)
Thunder 57 - Wednesday, September 11, 2019 - link
They have their place. Most ordinary users would not want them, though.
imaheadcase - Wednesday, September 11, 2019 - link
I'm surprised that research into how much dead data is around is not a thing. You know how you can see a date on a file for when it was last accessed? That should be a thing in real time on the internet, to know whether data is actually still useful.
If a cloud storage service had an option to tick, "delete files if not accessed within so many days...", it would be useful, I think, to people AND to the datacenter in the space it saves.
I mean, let's be real, a lot of the data people store is stuff "I might use later"... it's why people have crap stored in garages, storage units, etc. in real life. They won't ever use it, and after they die it's going to get tossed by family anyway. Plus, we all know you all have favorite porn or nudes you don't want left around. lol
patrickjp93 - Wednesday, September 11, 2019 - link
Okay, but that requires the storage to know the native file system (encrypted object storage like S3 can't do this, because the actual host can't read your data or metadata), and then it has to be tunable for people who have off-site redundancy or backups, such as me having encrypted certified true copies of some legal papers like tax returns and others. I'd want those kept indefinitely until I die. Then companies keep HR and payroll records for 7+ years for regulatory reasons...
It's very tough to prescribe a one-size-fits-all solution for this, which I think is the big reason we haven't seen it yet.
ads295 - Thursday, September 12, 2019 - link
I think AI and machine learning is a solid application for this. But only if it's boxed into a PC with zero access to the internet, with some way to NOT leak the generated pattern data.
Imagine if you'd use it for like 5 years on the same drive, you'd be able to know EXACTLY what you access. The other files you can look into and decide whether to keep.
Moving a step further, the AI could even be configured to move the unused data (without your explicit knowledge of the specific data being moved), to another of your empty hard drives. If you don't access that data for say the next few years, then you can give the software the power to delete that data by itself, saving you the false anxiety that comes with seeing something and thinking "But maybe I'll need this later on, I shouldn't delete it now."
When I format my phone I literally only backup photos (contacts and SMSes are synced) and wipe out EVERYTHING else. If I remember the app then I needed it, else I didn't. Simple.
PeachNCream - Thursday, September 12, 2019 - link
I'm ashamed to admit that I'm one of those people who keeps a bunch of junk data around that I really don't need now and won't ever need again, but I save it because I can. I figure that since I don't have a meat-world storage unit filled with crap I never touch, or a house with closets bursting at the seams with life's leftovers, the crufty data mountain is probably not a big deal. It all fits inside 256GB of microSD.
mode_13h - Saturday, September 14, 2019 - link
The "atime" metadata is frequently disabled, because it triggers a lot of writes for very little benefit. And with random writes on SMR being so painful, you can bet it won't be used there.bebimbap - Wednesday, September 11, 2019 - link
My main concern is how long it takes to write/read to these things. Even at 266MB/s (about 2.1Gbps), it'd take 20 hours to fill or read the drive, but in real life you should expect closer to 40-50 hours.
This is the same reason I always avoided anything bigger than 8GB for USB2 drives.
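The back-of-the-envelope math checks out. A quick sketch (the 266 MB/s figure is from the comment above; the "realistic" average rate is an assumption, since sustained throughput drops toward the inner tracks):

```python
# Rough fill-time estimate for a 20 TB drive.
capacity_bytes = 20e12   # 20 TB, decimal, as drive vendors count
rate_best = 266e6        # 266 MB/s: best-case outer-track sequential speed
rate_real = 120e6        # assumed real-world average for a mixed workload

hours_best = capacity_bytes / rate_best / 3600
hours_real = capacity_bytes / rate_real / 3600
print(f"best case: {hours_best:.1f} h, realistic: {hours_real:.1f} h")
# best case comes out near 21 hours; the assumed average lands in the 40-50 h range
```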
eastcoast_pete - Wednesday, September 11, 2019 - link
That's one of the two key reasons why I don't like SMR drives. SMR drives are notoriously slower on rewrites, as that "upsets the apple cart" and requires adjacent data to be shuffled around - hence the need for "additional software" in data centers to minimize that. Also, shingling allows for a tighter squeeze, but can magnify data loss if it occurs.
bcronce - Wednesday, September 11, 2019 - link
I remember watching a presentation on OpenZFS about feeding SMR drive layout information into the ZFS allocation logic, in order to allow the COW (copy-on-write) pattern to minimize mid-shingle updates. You can append to a shingle all you want until it's full, but updating is expensive because of the read-modify-write (RMW) of potentially the entire shingle.
I want to say that the HD supports knowing whether the data in a shingle is free or not, so it can know when to append vs. RMW. So, similar to an SSD, I think the host can tell the drive to TRIM/ZERO an entire shingle to start over. With an FS like ZFS, this is not horribly difficult because of COW.
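The append-vs-RMW asymmetry described above can be pictured with a toy model (this is an illustration, not any drive's firmware; the block counts and the write-only cost accounting are made up):

```python
# Toy model of one SMR band ("shingle"): appends at the write pointer are
# cheap, but an update inside the shingled region forces a rewrite of all
# the valid data after it, and a reset (TRIM/zero) lets the host start over.
class SmrBand:
    def __init__(self, blocks=256):
        self.blocks = blocks
        self.write_ptr = 0   # next appendable block in the band
        self.io_cost = 0     # total blocks physically written

    def append(self, n):
        """Sequential write at the pointer: cost is just n blocks."""
        if self.write_ptr + n > self.blocks:
            raise ValueError("band full, reset required")
        self.write_ptr += n
        self.io_cost += n

    def update_in_place(self, n):
        """Modify n blocks inside the shingled area: worst case, everything
        up to the write pointer must be read and re-written."""
        self.io_cost += self.write_ptr

    def reset(self):
        """TRIM/zero the whole band, like resetting an SSD erase block."""
        self.write_ptr = 0

band = SmrBand()
band.append(100)         # cheap: 100 blocks written
band.update_in_place(4)  # expensive: the 100-block valid region is rewritten
print(band.io_cost)      # 200
```

This is why a COW filesystem maps well to SMR: it rewrites changed data at the append point instead of updating in place, and only resets a band once all its data is stale.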
John_M - Friday, September 13, 2019 - link
It's called a "band", not a "shingle".PeachNCream - Monday, September 16, 2019 - link
Guitar riffs! Big hair! The drummer getting arrested after a concert! That band!!!!
nandnandnand - Wednesday, September 11, 2019 - link
USB2 drives were artificially limited by the data rate and bad NAND. With HDDs, it's not getting much better, unless they can use that multi-actuator technology for a one-time speed doubling.
Considering the size of the drive, you will happily wait 2 days if that's what it takes, because there's no other cost-effective way to store 10, 16, 20 terabytes. And you might need that capability.
You could use RAID to speed things up, until the point that it's better for you to get SSDs.
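To put rough numbers on the RAID idea (assuming sequential throughput scales linearly with stripe width, which real arrays only approximate, and reusing the 266 MB/s figure from above):

```python
# Fill time for a striped array (RAID 0/10-style) of big drives.
def fill_hours(capacity_tb, drive_mbps, stripe_width):
    """Hours to sequentially fill capacity_tb spread across stripe_width drives."""
    return capacity_tb * 1e12 / (drive_mbps * 1e6 * stripe_width) / 3600

for width in (1, 2, 4):
    print(f"{width} drive(s): {fill_hours(20, 266, width):.1f} h")
```

A 4-wide stripe cuts the best-case fill from roughly 21 hours to around 5, at which point the cost-per-TB crossover with SSDs becomes the real question.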
eastcoast_pete - Wednesday, September 11, 2019 - link
@Anton, I thought that these drives reach their density by, yes, a shingled arrangement, but especially by using MAMR (microwave-assisted magnetic recording)? I didn't see a mention of it in your article. Can you confirm or correct whether these are MAMR drives? If yes, that would be the key news: WD ready to ship MAMR drives.
DanNeely - Wednesday, September 11, 2019 - link
At this point I'm assuming not. If these were the first MAMR drives, WD would be shouting it from the rooftops.
YB1064 - Wednesday, September 11, 2019 - link
For anybody interested, a good introductory article was published in IEEE Spectrum: https://spectrum.ieee.org/computing/hardware/laser...
Another one from 2009:
https://spectrum.ieee.org/computing/hardware/laser...
DanNeely - Wednesday, September 11, 2019 - link
In some ways, I think the most interesting bit is that the 2TB 3.5" non-SMR platters mean density has finally hit the point of 1TB 2.5" platters. That means people who need more space than is affordable in NAND, but who don't want to lug a power brick, should finally be able to get 3TB 2.5" drives in standard thicknesses, instead of the long-term top-out at only 2TB.
Samus - Thursday, September 12, 2019 - link
Damn, WD made a solid investment buying HGST.
not_anton - Thursday, September 12, 2019 - link
I tried to use an SMR external drive for work... those 3 IOPS in random writes still give me shivers!
mode_13h - Saturday, September 14, 2019 - link
OMG. Did you not know? Or were you just hoping for more like 30 IOPS?
danwat1234 - Friday, September 13, 2019 - link
Host managed? What does that mean? The drive's controller lacks some functionality that only specific motherboard chips can perform? I don't see any advantage.
John_M - Friday, September 13, 2019 - link
Host-managed means that the host queries the drive about its geometry and uses that information to make intelligent decisions about where to write on its surface. It's the opposite of device-managed, where the host treats the drive as a conventional one and lets it make its own decisions.
lukx - Saturday, October 31, 2020 - link
It's almost the end of 2020 and those drives are still not available... why?
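Tying back to John_M's host-managed explanation above: a toy sketch of how a host might use the drive's geometry. The zone layout here is made up, and a real host would obtain it from the drive via the ZBC/ZAC REPORT ZONES command (or Linux's zoned block device interface) rather than a hardcoded list:

```python
# Given a zone report, a host-managed layer picks a zone whose write
# pointer leaves enough room for a sequential append, instead of issuing
# writes the drive would have to handle with expensive shuffling.
zones = [  # hypothetical report: start LBA, length, write pointer
    {"start": 0,       "len": 524288, "wp": 524288},   # full
    {"start": 524288,  "len": 524288, "wp": 530000},   # partially written
    {"start": 1048576, "len": 524288, "wp": 1048576},  # untouched
]

def pick_zone(zones, blocks_needed):
    """Return the first zone that can absorb a sequential append, else None."""
    for z in zones:
        room = (z["start"] + z["len"]) - z["wp"]
        if room >= blocks_needed:
            return z
    return None  # host must first reset (TRIM/zero) a stale zone

z = pick_zone(zones, 100000)
print(z["start"])  # 524288: the append continues at that zone's write pointer
```

A device-managed drive hides all of this behind a translation layer and does the shuffling itself, which is exactly where the notorious random-write slowdowns come from.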