We reviewed several types of server memory back in August 2012. You still have the same three choices: LRDIMMs, RDIMMs, and UDIMMs. However, the situation has changed significantly since then. One of those changes is the introduction of Ivy Bridge EP: the latest Intel Xeon offers better support for LRDIMMs and supports higher memory speeds (up to DDR3-1866).

But the biggest change is that the price gap between LRDIMMs and RDIMMs has shrunk considerably. Just a year ago, a 32GB LRDIMM cost $2000 or more, while a more "sensible" 16GB RDIMM cost around $300-$400. You paid about three times more per GB to get the highest capacity DIMMs in your servers. Many servers could benefit from more memory, but at that kind of pricing LRDIMMs were only an option for IT projects where hardware costs were dwarfed by other costs such as consulting and software licenses. Fifteen months in IT is like half a decade in other industries; just look at the table below.

Memory type          Low voltage   Ranks   Price ($), Q4 2013   Price per GB ($)
8GB RDIMM-1600       Yes           Dual    153                  19
16GB RDIMM-1600      Yes           Dual    243                  15
8GB RDIMM-1866       No            Dual    169                  21
16GB RDIMM-1866      No            Dual    257                  16
32GB RDIMM-1333      Yes           Quad    808                  25
32GB LRDIMM-1333     Yes           Dual*   822                  26
32GB LRDIMM-1866     No            Dual*   822                  26

(*) Quad-rank, but presents the electrical load of a dual-rank DIMM.

If you need a refresher on UDIMMs, RDIMMs, and LRDIMMs, check out our technical overview here. The price per GB of LRDIMMs is now only 60% higher than that of the best RDIMMs. Quad-rank 32GB RDIMMs used to be a lot cheaper than their load-reduced competition, but that difference is now negligible.
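To put those numbers in perspective, here is a minimal sketch (Python, using only the Q4 2013 street prices from the table above) that recomputes the price per GB and the roughly 60% LRDIMM premium; the dictionary layout and variable names are just for illustration:

    # Recompute price per GB from the Q4 2013 table above.
    # Each entry: DIMM label -> (capacity in GB, street price in USD).
    dimms = {
        "8GB RDIMM-1600":   (8, 153),
        "16GB RDIMM-1600":  (16, 243),
        "8GB RDIMM-1866":   (8, 169),
        "16GB RDIMM-1866":  (16, 257),
        "32GB RDIMM-1333":  (32, 808),
        "32GB LRDIMM-1333": (32, 822),
        "32GB LRDIMM-1866": (32, 822),
    }

    for name, (capacity_gb, price_usd) in dimms.items():
        print(f"{name:18s} ${price_usd / capacity_gb:5.2f}/GB")

    # LRDIMM premium per GB versus the best (16GB, 1866) RDIMM:
    lrdimm_per_gb = 822 / 32    # ~25.7 $/GB
    rdimm_per_gb = 257 / 16     # ~16.1 $/GB
    print(f"LRDIMM premium: {lrdimm_per_gb / rdimm_per_gb - 1:.0%}")  # ~60%

The same arithmetic against the cheapest RDIMM per GB (16GB-1600 at about $15/GB) yields a somewhat larger premium, but either way it is a far cry from the threefold difference of a year ago.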

DIMM Limitations
Comments

  • slideruler - Thursday, December 19, 2013 - link

    Am I the only one who's concerned about DDR4 in our future?

    Given that it's one DIMM per channel, we'll lose the ability to stuff our motherboards with cheap sticks to get to a "reasonable" (>=128GB) amount of RAM... :(
  • just4U - Thursday, December 19, 2013 - link

    You really shouldn't need more than 640kb.... :D
  • just4U - Thursday, December 19, 2013 - link

    Seriously though... DDR3 prices have been going up. As near as I can tell, they're approximately 2.3x the cost of what they once were. Memory makers are doing the semi-happy dance these days and are likely looking forward to the 5x pricing schemes of yesteryear.
  • MrSpadge - Friday, December 20, 2013 - link

    They have to come up with something better than "1 DIMM per channel using the same number of memory controllers" for servers.
  • theUsualBlah - Thursday, December 19, 2013 - link

    The -Ofast flag for Open64 relaxes ANSI and IEEE rules for calculations, whereas the GCC flags won't do that.

    Maybe that's the reason Open64 is faster.
  • JohanAnandtech - Friday, December 20, 2013 - link

    Interesting comment. I ran with gcc, and opencc with -O2, -O3, and -Ofast. If the gcc binary is 100%, I get 110% with opencc (-O2), 130% with -O3, and the same with -Ofast.
  • theUsualBlah - Friday, December 20, 2013 - link

    Hmm, that's very interesting.

    I'm guessing Open64 might be producing better code (at least) when it comes to memory operations. I gave up on Open64 a while back; maybe I should try it out again.

    thanks!
  • GarethMojo - Friday, December 20, 2013 - link

    The article is interesting, but alone it doesn't justify the expense for high-capacity LRDIMMs in a server. As server professionals, our goal is usually to maximise performance / cost for a specific role. In this example, I can't imagine that better performance (at a dramatically lower cost) would not be obtained by upgrading the storage pool instead. I'd love to see a comparison of increasing memory sizes vs adding more SSD caching, or combinations thereof.
  • JlHADJOE - Friday, December 20, 2013 - link

    Depends on the size of your data set as well, I'd guess, and whether or not you can fit the entire thing in memory.

    If you can, and considering RAM is still orders of magnitude faster than SSDs I imagine memory still wins out in terms of overall performance. Too large to fit in a reasonable amount of RAM and yes, SSD caching would possibly be more cost effective.
  • MrSpadge - Friday, December 20, 2013 - link

    One could argue that the storage optimization would be done for both memory configurations.
