Final Words

From a performance perspective, the Intel SSD 910 is an absolute beast. Once you take performance with encrypted or otherwise incompressible data into account, the 800GB 910 is easily the fastest SSD we've ever tested. The 400GB drive gives up some of that performance and looks a bit more ordinary, but it's still among the best. The only real concern with the 910 is its poor small-file write performance at low queue depths. If your workload is dominated by 2KB (or smaller) writes, the 910 isn't going to be a great performer and you'd be much better off with a standalone 2.5" drive. For all other workloads, however, the 910 is great.

Pricing is also extremely competitive with other high-end enterprise PCIe offerings. Intel comes in at $5/GB for its top-of-the-line enterprise SSD, whereas the 710 was introduced at over $6/GB. If you really want to get nostalgic, the old X25-E launched at over $15/GB. The cost per GB is much lower still if you take into account how much NAND Intel actually puts on board the 910. With 56 x 32GB 25nm MLC-HET die (1792GB of NAND) on a single 800GB 910, you're talking about around $2.23 per GB of NAND. I'd even be interested in seeing Intel offer a higher-capacity version of the 910 with less endurance for those applications that need the performance but aren't tremendously write heavy.
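For the curious, here's the quick arithmetic behind that figure as a minimal sketch; it assumes the roughly $4,000 price implied by $5/GB on the 800GB model:

```python
# Back-of-the-envelope math for the 800GB 910's cost per GB of raw NAND.
# Assumes a ~$4,000 price, implied by the ~$5 per usable GB figure above.
usable_gb = 800
price_per_usable_gb = 5.00
die_count = 56        # 25nm MLC-HET die on the 800GB card
gb_per_die = 32

price = usable_gb * price_per_usable_gb   # ~$4,000
raw_nand_gb = die_count * gb_per_die      # 1792GB of NAND actually on board

print(f"Raw NAND on board: {raw_nand_gb}GB")
print(f"Cost per raw GB: ${price / raw_nand_gb:.2f}")  # ~$2.23
```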

Of course there's Intel's famed reliability to take into account. All of the components on the 910 are either widely used already or derived from SSDs that have been shipping for years. There's bound to be some additional firmware complexity, but it's nothing compared to building a completely new drive/controller. Most of the server shops I've worked with lately tend to prefer Intel's 2.5" SSDs, even though there are higher-performing alternatives on the market today. The 910 simply gives these folks an even higher-end option should their customers or workloads demand it.

My only real complaint is about the inflexibility on the volume side. It would be nice to be able to present two larger volumes (or maybe even a single full-capacity volume) to the OS rather than four independent volumes on an 800GB 910. Some VM platforms don't support software RAID, and at only 200GB per volume, capacity could become an issue. You really need to make sure that your needs line up with the 910 before pulling the trigger.
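On platforms that do support software RAID (a bare-metal Linux box, for example), stitching the four ~200GB volumes back into a single striped volume is straightforward. Here's a minimal sketch; the device names are hypothetical and will differ on your system, so treat it as an illustration rather than a recipe:

```python
# Hypothetical sketch: stripe the 910's four LUNs into one RAID-0 md device on Linux.
# Device names are placeholders; confirm them with lsblk/lsscsi before doing anything.
import subprocess

devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # the four ~200GB volumes

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                          # RAID 0: full capacity, no redundancy
     f"--raid-devices={len(devices)}"]
    + devices,
    check=True,
)
# /dev/md0 can then be formatted and mounted as a single ~800GB volume.
```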

As a secondary issue, although I appreciate the power of Intel's SSD Data Center Tool, I would like to see something a bit easier to use. Not everyone wants to grok hexadecimal temperature values (although doing so wins you cool points).
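To give a sense of the friction: temperatures come back as hex values, so a reading has to be converted before it means anything. A trivial, purely illustrative example (the value below is made up, not actual tool output):

```python
# Illustrative only: turn a hex temperature reading into decimal degrees Celsius.
raw_reading = "0x2A"              # hypothetical value, not real Data Center Tool output
print(int(raw_reading, 16), "C")  # prints: 42 C
```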

Overall I'm pleased with the 910. It's (for the most part) a solid performer, it's competitively priced, and it should last for a good while. If you're space constrained and need a lot of local I/O performance in your server, Intel's SSD 910 is worth considering.

 

Comments

  • JellyRoll - Friday, August 10, 2012

    WOW. Low-QD testing on an enterprise PCIe storage card is ridiculous. End users of these SSDs will use them in datacenters, and the average QD will be ridiculously high. This evaluation shows absolutely nothing that will be encountered in this type of SSD's actual usage. No administrator in their right mind would purchase these for such ridiculously low workloads.
  • SanX - Friday, August 10, 2012

    If you do not need more than 16/32/64GB for your speedy needs, then consider an almost-free RAMdisk with a backup. It will be 4-8x faster than this card.
  • marcplante - Friday, August 10, 2012

    It seems that there would be a market for a consumer desktop implementation.
  • Ksman - Friday, August 10, 2012

    Given how well the 520s perform, perhaps a RAID of 520s on an LSI RAID adapter would be a very good solution, and a comparison vs. the 910 would be interesting. If RAID > 0, then one could pull drives and attach them directly for TRIM etc., which would eliminate the problem where SSDs in a RAID cannot be managed.
  • Pixelpusher6 - Friday, August 10, 2012

    I was wondering the exact same thing. What are the advantages of offering a PCIe solution like this compared to, say, just throwing in a SAS RAID card and connecting a bunch of SAS SSDs in RAID 0? Is the Intel 910 mainly targeted at 1U/2U servers that might not have space available for a 2.5" drive? Is it possible to over-provision any 2.5" drive to increase endurance and reduce write amplification (I think the desktop Samsung 830 I have allows this)? Seeing the performance charts, I wonder how two of those Toshiba 400GB SAS drives would compare against the Intel 910.

    Is the enterprise market moving towards MLC-HET NAND with tons of spare area vs. SLC NAND because of the low cost of MLC NAND now that fabs have ramped up production? I was under the impression that SLC NAND was preferable in the enterprise segment, but I might be wrong. What are some usage scenarios where SLC would be better than MLC-HET and vice versa?

    I think lorribot brought up a good point:

    "I like the idea but coming from a highly redundant arrays point of view how do you set this all up in a a safe and secure way, what are the points of failure? what happens if you lose the bridge chip, is all your data dead and buried?"

    I wonder if it is possible to just swap the first PCIe PCB (the one with all the controllers and DRAM) in case of a failure of the bridge chip or a controller, so the data remains safe. Can SSD controllers fail? Is it likely that the Intel 910 will be used in RAID 0? I didn't think RAID 0 was used much in enterprise. Sorry for all the questions. I have been visiting this site for over 10 years and I just now registered an account.
  • FunBunny2 - Saturday, August 11, 2012

    eMLC/MLC-HET/foo-MLC are all attempts to get cheaper parts into SSD chassis, even for enterprise companies such as Texas Memory. Part of the motivation is yet more sophisticated controllers, and, I suspect, the realization that enterprises understand duty life far better than consumers (who'll run an HDD forever if it survives infant mortality). The SSD survival curve (due to NAND failure) is more predictable than an HDD's, so with the very much faster operations, if 5 years remains the lifetime, the parts used don't matter. The part gets swapped out at 90% or 95% of duty life (or whatever percentage the shop decides); end of story. Five years ago, SLC was the only way to 5 years. That's not true any longer.
  • GatoRat - Sunday, August 12, 2012

    "the 800GB 910 is easily the fastest SSD we've ever tested."

    Yet the tests clearly show that it isn't. In fact, the Oracle tests show it's a dog. In other tests, it doesn't come out on top. The OCZ Z-Drive R4 CM84 600GB is clearly the faster overall drive.
  • Galcobar - Sunday, August 12, 2012

    Grok!

    I'm impressed both to see the literary reference, correctly used, and that nobody has called it a typo in the comments. Not bad for a fifty-year-old novel once dismissed by the New York Times as a puerile mishmash.
  • a50505 - Thursday, August 30, 2012

    So, has anyone heard of a workstation-class laptop with a PCIe-based SSD?
