Performance per Watt rules the datacenter, right? Wrong. Yes, you could easily be led astray by the endless "Green ICT" conferences, the many power-limited datacenters, and the flood of new technologies that all carry the "Performance/Watt" stamp. But if performance per Watt were all that counted, we would all be running Atom- and ARM-based servers. Some people do promote Atom-based servers, but outside of niche markets we don't think they will be a huge success. Why not? Think about it: what is the ultimate goal of a datacenter? The answer is of course the same as for the enterprise as a whole: serve as many (internal or external) customers as possible, with the lowest response time, at the lowest cost.

So what really matters? Attaining a certain level of performance. At that point you want the lowest power consumption possible, but first you want to reach the level of performance where your customers are satisfied. So it is power efficiency at a certain performance level that you are after, not the best performance/Watt ratio. Twenty times lower power for five times lower performance might seem an excellent choice from the performance/Watt point of view, but if your customers get frustrated with the high response times, they will quit. Case closed. And customers are easily frustrated. "Would users prefer 10 search results in 0.4 seconds or 25 results in 0.9 seconds?" That is a question Google asked [1]. They found, to their surprise, that a significant number of users got bored and moved on if they had to wait 0.9 seconds. Not everyone runs an application like Google's, but in these virtualized times we no longer waste the massive amounts of performance that we did at the beginning of this century. Extra performance and RAM capacity are turned into more virtual servers per physical server, in other words into business efficiency. So it is very important not to forget how demanding we all are as customers when we are browsing and searching.
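To make the trade-off concrete, here is a quick back-of-the-envelope sketch (purely hypothetical numbers, not measurements from any real server):

```python
# Server A is fast but power hungry; server B delivers 5x less
# performance for 20x less power (hypothetical numbers).
perf_a, power_a = 100.0, 400.0              # requests/s, watts
perf_b, power_b = perf_a / 5, power_a / 20  # 20 requests/s, 20 W

print(perf_a / power_a)   # 0.25 requests/s per watt
print(perf_b / power_b)   # 1.00 requests/s per watt: 4x "better"

# Matching server A's throughput takes five B boxes...
boxes = perf_a / perf_b   # 5.0
print(boxes * power_b)    # 100 W total: less power, same throughput
# ...but every individual request still takes ~5x longer. Throughput
# scales out across boxes; response time does not.
```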

Modern CPUs have a vast array of high-tech weapons to offer good performance at the lowest power possible: PowerNow!, SpeedStep, Cache Sizing, CoolCore, Smart Fetch, PCU, Independent Dynamic Core Technology, Deep Sleep, and even Deeper Sleep. Some of these technologies have matured and offer significant power savings with negligible performance impact. Many of them are user configurable: you can enable or disable them in the BIOS, or they get activated when you choose a certain power plan in the operating system. Those that are configurable are so for good reason: in some applications the performance hit is significant, and the power savings are not always worth it. In addition, even if such technologies are active under the hood of the CPU package, there is no guarantee that the operating system makes good use of them.
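To give an idea of how an OS exposes these knobs, here is a minimal sketch that reads the P-state and C-state information Linux publishes in sysfs. It is Linux-only and assumes the cpufreq and cpuidle drivers are loaded, so consider it purely illustrative:

```python
# Inspect CPU power management state on Linux via sysfs (sketch).
from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0")

# P-states: which governor picks the clock speed, and the current speed.
governor = (cpu0 / "cpufreq/scaling_governor").read_text().strip()
cur_khz = int((cpu0 / "cpufreq/scaling_cur_freq").read_text())
print(f"governor: {governor}, current clock: {cur_khz / 1e6:.2f} GHz")

# C-states: the sleep states (Deep Sleep, Deeper Sleep, ...) a core may enter.
for state in sorted((cpu0 / "cpuidle").glob("state*")):
    name = (state / "name").read_text().strip()
    latency_us = (state / "latency").read_text().strip()  # wakeup latency
    print(f"{state.name}: {name}, wakeup latency {latency_us} µs")
```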

How do we strike the right balance between performance and energy consumption? That is the goal of this new series of articles. But let's not get ahead of ourselves; before we can even talk about improving power efficiency at a certain performance point, we have to understand how it all works. This first article dives deep into power management, to understand what works and what only works on PowerPoint slides. There is more to it than enabling SpeedStep on your server. For example, Intel has been very creative with Turbo Boost and Hyper-Threading lately. Both should increase performance in a very efficient way, but does the performance boost come with an acceptable increase in power consumption? What is acceptable depends on your own priorities and applications, but we will try to give you a number of data points that can help you decide. And which power management technologies you enable and how you configure your OS are not the only decisions you have to make as you attempt to build more efficient servers.

Both AMD and Intel have been bringing out low-power versions of their CPUs that trade clock speed for a lower maximum power. Are they really worth the investment? A prime example of how the new generation forces you to make a lot of decisions is the Xeon L3426: a "Lynnfield" Xeon that runs at 1.86GHz and consumes 45W in the worst case, according to Intel. What makes this CPU special is that it can boost its clock to 3.2GHz if only a few threads are active. This should lower response times when relatively few users are using your application, but what about power consumption? AMD's latest Opteron offers six cores at pretty low power consumption points, and it can lower its clock from 2.6GHz all the way down to 800MHz. That should result in significant power savings, but the performance impact might be significant too. We have lots of questions, so let's start by understanding what happens under the hood, in good old AnandTech "nuts and bolts" tradition.
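Why does dropping from 2.6GHz to 800MHz promise such large savings? The classic first-order model for CMOS dynamic power is P ≈ C·V²·f: a lower clock also permits a lower voltage, and the voltage counts squared. A small sketch, where the voltages are our own illustrative guesses rather than AMD's actual values:

```python
# First-order CMOS dynamic power model: P ~ C * V^2 * f (sketch).
def relative_dynamic_power(f, v, f_ref, v_ref):
    """Dynamic power relative to a reference frequency/voltage point."""
    return (f / f_ref) * (v / v_ref) ** 2

# Hypothetical DVFS step: 2.6 GHz at 1.30 V down to 0.8 GHz at 1.05 V.
print(relative_dynamic_power(0.8, 1.05, 2.6, 1.30))  # ~0.20, an ~80% cut

# Frequency scaling alone (no voltage drop) would only reach 0.8/2.6 ~ 0.31;
# the squared voltage term is where most of the extra savings come from.
```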

Warning: This article is not suited for quick consumption. Remember, you come to AnandTech for hard-hitting analysis, and that's what this article aims to provide! Please take your time… there will be a quiz at the end. ;-)

Comments

  • UrQuan3 - Thursday, January 21, 2010 - link

    I'm trying to remember for Windows 2008, but wasn't there a way to either force or suggest thread/core affinity? It looks like the scheduler was hopping all over the place on the Opterons.
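For what it's worth, affinity can be forced from user space. A minimal sketch using the third-party psutil package (hypothetical usage; on Windows it wraps SetProcessAffinityMask, on Linux sched_setaffinity):

```python
# Pin the current process to specific cores (sketch, assumes psutil installed).
import psutil

p = psutil.Process()       # current process; pass a PID to manage another one
print(p.cpu_affinity())    # e.g. [0, 1, 2, 3] on a quad-core box
p.cpu_affinity([0, 1])     # restrict the scheduler to the first two cores
```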
  • JarredWalton - Thursday, January 21, 2010 - link

    You guys better pay attention and answer this post, or his species will try to enslave and/or wipe out the entire galaxy! ;-)
  • mino - Wednesday, January 20, 2010 - link

    To be clear, I'm not asking why you used them in this article.
    They are fine examples of low-power platforms, even if from vastly different markets.

    But,
    WHY ON EARTH DO YOU KEEP TALKING AS IF THEY WERE COMPARABLE THROUGHOUT THE ARTICLE???
  • IntelUser2000 - Wednesday, January 20, 2010 - link

    By the way, I don't know if you have the settings wrong or if that's just how it works, but Turbo Boost mode is not affected on the consumer versions of Windows. Balanced uses Turbo Boost just as well on my Windows 7 Home Premium system with a Core i5 661.

  • JarredWalton - Wednesday, January 20, 2010 - link

    I was wondering this as well, but I'm not familiar with Windows Server... what I do know is that Power Saver on consumer Windows OSes really limits the CPU frequency scaling features, and it sort of looks like Balanced on the Server OS has aspects of consumer "Power Saver" as well as some elements of "Balanced". Odd to see only two power settings available, where Win7 now has at least 3 and often 5.
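For reference, you can dump exactly what the active plan allows for the processor with the built-in powercfg utility. A Windows-only sketch:

```python
# Print the processor power settings (min/max processor state, etc.)
# of the currently active Windows power plan.
import subprocess

out = subprocess.run(
    ["powercfg", "/query", "SCHEME_CURRENT", "SUB_PROCESSOR"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
```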
  • mino - Wednesday, January 20, 2010 - link

    It seems like a classic example of the KISS strategy: choose the most sensible options and reduce decision complexity for IT people.

    Modes like "Max battery" have no reason to exist on a server box anyway.
  • RobinBee - Tuesday, January 19, 2010 - link

    If you use your pc as a music server:

    Power saving methods ruin sound quality even when using a good sound card. The problem is »electronic« sound distortion. I do not know why this happens.

    Also: the chosen number of IRQs per second on a network card can ruin sound quality too. Why, I do not know.
  • Anato - Tuesday, January 19, 2010 - link

    I'm interested to see results from different operating systems, which may be better at managing processes across CPUs: namely, no CPU hopping. And is their power management as efficient as Windows'?

    Most interested in:
    Linux and Solaris
  • JohanAnandtech - Tuesday, January 19, 2010 - link

    Excellent suggestion :-). The problem is keeping the application the same. We currently tested SQL Server 2008 on Windows 2008, and of course that cannot be done on Linux. However, I am no stranger to Linux as a server.

    I am no fan of MySQL on Windows, but maybe this has improved. Would MySQL on Windows and Linux make sense as a comparison?
  • maveric7911 - Tuesday, January 19, 2010 - link

    Why not use Oracle? ;)
