The new methodology

At Anandtech, giving you real-world measurements has always been the goal of this site. Contrary to the vast majority of IT sites out there, we don't believe in letting some consultant or analyst spell it out for you. We give you our measurements, as close to the real world as possible. We give you our opinion based on those measurements, but ultimately it is up to you to decide how to interpret the numbers. You tell us in the comments if we make a mistake in our reasoning somewhere, and we will investigate it and get back to you. It is a slow process, but we firmly believe in it. And that is what happened with our articles about "dynamic power management" and "testing low power CPUs".

The former article was written to understand how current power management techniques work. We needed a very simple, well-understood benchmark to keep the complexity down, and it allowed us to learn a lot about the Dynamic Voltage and Frequency Scaling (DVFS) techniques that AMD and Intel currently use. But as we admitted, our Fritz Chess benchmark was and is not a good choice if you want to apply these new insights to your own datacenter.

"Testing low power CPUs" went into much less depth, but it used a real-world benchmark: our vApus Mark I, which simulates a heavy consolidated virtualization load. The numbers were very interesting, but the article had one big shortcoming: it only measured at 90-100% load or at idle. The reason is that the vApus benchmark score was based on throughput, and to measure the throughput of a system you have to stress it close to its maximum. So we could not measure performance accurately unless we went for top performance. That is fine for an HPC workload, but not for a commercial virtualization/database/web workload.

Therefore we went for a different approach, based on our readers' feedback. We launched "one tile" of the vApus benchmark on each of the tested servers. Such a tile consists of an OLAP database (4 vCPUs), an OLTP database (4 vCPUs), and two web VMs (2 vCPUs each), so in total we have 12 virtual CPUs. These 12 virtual CPUs are much less than what a typical high-end dual-CPU server can offer. From the point of view of the Windows 2008, Linux, or VMware ESX scheduler, the best Xeon 5600 ("Westmere") and Opteron 6100 ("Magny-Cours") offer 24 logical or physical cores. To the hypervisor, those logical or physical cores are Hardware Execution Contexts (HECs), and the hypervisor schedules VMs onto these HECs. Typically, each of the 12 virtual CPUs needs somewhere between 50 and 90% of one core. Since we have twice as many cores or HECs as required, we expect the typical load on the complete system to hover between 25 and 45%. Although this is not perfect, it is much closer to the real world. Most virtualized servers never sit idle for long: with so many VMs, there is always something to do. System administrators also want to avoid CPU loads over 60-70%, as beyond that point response times can rise exponentially.
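
To make the arithmetic explicit, here is a minimal sketch of that load estimate. The tile layout and the 50-90% per-vCPU demand range come from the paragraph above; the function and variable names are our own, purely for illustration.

```python
# Back-of-the-envelope load estimate for one vApus Mark I tile on a 24-HEC server.
# The tile layout (4 + 4 + 2 + 2 vCPUs) and the 50-90% per-vCPU demand range
# come from the text above; everything else is illustrative.

TILE_VCPUS = 4 + 4 + 2 + 2   # OLAP + OLTP + two web VMs = 12 vCPUs
HECS = 24                    # logical/physical cores of a high-end Xeon 5600 or Opteron 6100

def expected_load(per_vcpu_demand: float, vcpus: int = TILE_VCPUS, hecs: int = HECS) -> float:
    """Fraction of the whole system that the tile keeps busy."""
    return vcpus * per_vcpu_demand / hecs

low = expected_load(0.50)    # 12 * 0.5 / 24 = 0.25 -> ~25% system load
high = expected_load(0.90)   # 12 * 0.9 / 24 = 0.45 -> ~45% system load
print(f"Expected system load: {low:.0%} to {high:.0%}")
```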

There is more. Instead of measuring throughput, we focus on response time. At the end of the day, the maximum number of pages your server can serve is nice to know, but not what matters; the response time your system offers at a certain load is much more important. Users appreciate low response times. Nobody is going to be happy that your server can serve up to 10,000 requests per second if each page takes 10 seconds to load.
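
As a rough illustration of the difference, the sketch below drives a server at a fixed, moderate request rate and reports response-time statistics instead of maximum throughput. This is not part of the vApus suite; the URL, target rate, and send_request() helper are placeholders we made up.

```python
# Minimal sketch: measure response time at a fixed, moderate request rate
# instead of pushing the server to its maximum throughput.
# The URL, TARGET_RATE, and send_request() are illustrative placeholders;
# the real vApus measurement methodology is far more elaborate.
import time
import statistics
import urllib.request

TARGET_RATE = 20                      # requests per second we want to sustain
DURATION = 30                         # seconds to measure
URL = "http://testserver.local/page"  # placeholder URL

def send_request(url: str) -> float:
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url).read()
    return time.perf_counter() - start

# Sequential, single-threaded load generator: good enough for a sketch,
# but it can only hold the target rate while responses stay fast.
latencies = []
interval = 1.0 / TARGET_RATE
end = time.perf_counter() + DURATION
while time.perf_counter() < end:
    t0 = time.perf_counter()
    latencies.append(send_request(URL))
    # sleep off whatever is left of this request's time slot
    time.sleep(max(0.0, interval - (time.perf_counter() - t0)))

latencies.sort()
print(f"average response time: {statistics.mean(latencies) * 1000:.1f} ms")
print(f"95th percentile: {latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")
```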

Comments

  • cserwin - Thursday, July 15, 2010 - link

    Some props for Johan, too, maybe... nice article.
  • JohanAnandtech - Thursday, July 15, 2010 - link

    Thanks! We have more data on "low power choices", but we decided to cut it up into several articles to keep it readable.
  • DavC - Thursday, July 15, 2010 - link

    Not sure what's going on with your electricity cost calcs on your first page. Firstly, you're converting unnecessarily from watts to amps (meaning you're unnecessarily splitting the figures into US and European ones).

    Basically, here in the UK, 1 kW (which is what the 4 PCs in your example consume) costs roughly 10p per hour. Working on an average of 720 hours in a month, that gives a grand total of £72 a month to run those 4 PCs 24/7.

    £72 to you US guys is around $110. And I can't imagine your electricity is priced any dearer than ours.

    That gives a 4-year life cycle cost of $5280.

    Have I missed something obvious here, or are you just out with the maths?
  • JohanAnandtech - Thursday, July 15, 2010 - link

    You are calculating from the POV of a datacenter. I take the POV of a datacenter client, who has to pay per amp that he/she "reserves". AFAIK, datacenters almost always count in amps, not watts.

    (Also, 10p per kWh seems low.)
  • MrSpadge - Thursday, July 15, 2010 - link

    With P = V * I at constant voltage, power and amps are really just different names for the same thing, i.e. equivalent. Personally I prefer watts, because that is what matters in the end: it's what I pay for and what heats my room. Amps by themselves don't mean much (as long as you're not melting the wires), as voltages can easily be converted.
    Maybe the datacenter guys just like to juggle smaller numbers? Maybe they should switch over to hectowatts instead? ;)

    MrS
  • JohanAnandtech - Thursday, July 15, 2010 - link

    I am surprised the electrical engineers have not jumped in yet :-). As you indicate yourself, the circuits/wires are rated for a certain number of amps, not watts. That is probably the reason datacenters specify the amount of power you get in watts.
  • JohanAnandtech - Thursday, July 15, 2010 - link

    I meant amps in that last sentence of course.
  • knedle - Thursday, July 15, 2010 - link

    Watts are universal; it doesn't matter if you're in the UK or the US, 220 W is still 220 W, but with amperes it's different. Since the voltage in Europe is higher than in the USA (EU = 220 V, US = 110 V) and P = U * I, you get twice as much power for 1 A, which means that in the USA your server will use 2 A while the same server in the UK will use only 1 A...
  • has407 - Friday, July 16, 2010 - link

    No, not all Watts are the same.

    Watts in a decent datacenter come with power distribution, cooling, UPS, etc. Those typically add 3-4x to the power your server actually consumes. Add to that the amortized cost of the infrastructure and you're looking at 6-10x the cost of the power your server consumes directly.

    Such is the fallacy of simplistic power/cost comparisons (and Johan, you should know better). Can we now dispense with the idiotic cost/kWh calculations?
  • Penti - Saturday, July 17, 2010 - link

    A high-performance server probably can't be run on 1 A at 230 V, which is the cheapest option in some datacenters. However, something like half a rack or a quarter rack would probably come with 10 A at 230 V, more than enough for a small collection of 4 moderate servers. The big cost is cooling: normal racks might handle just about 4 kW of heat/power (up to 6 kW; above that it's high density), and beyond that you need more expensive equipment. A cheap rack won't handle 40 250 W servers in other regards either. 6 kW of power/cooling and 2x 16 A at 230 V shouldn't be that expensive. Either way, you also pay for cooling (and UPS). Even cheap solutions normally charge per used kW here, though. Four 2U servers are about 1/4 of a rack anyway, and something like 15 amps would be needed in the States.
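
To tie together the two billing viewpoints discussed in the thread above, here is a minimal sketch. The 10p/kWh rate and the ~1 kW / four-server example come from the comments; the per-amp price and the exact voltage are purely illustrative assumptions, not quotes from any datacenter.

```python
# Rough sketch of the two billing viewpoints discussed in the comments above.
# The kWh price and the 1 kW example come from the thread; the per-amp price
# and the voltage figure are illustrative assumptions.

WATTS = 1000             # four ~250 W servers, as in the example above
HOURS_PER_MONTH = 720
VOLTAGE_EU = 230         # volts (roughly 110-120 V in the US)

# 1) Pay per kWh actually consumed (the energy-bill viewpoint, at 10p/kWh)
price_per_kwh = 0.10                     # GBP, from the comment above
kwh_per_month = WATTS / 1000 * HOURS_PER_MONTH
print(f"Energy bill: GBP {kwh_per_month * price_per_kwh:.0f} per month")   # ~GBP 72

# 2) Pay per amp reserved (the datacenter-client viewpoint)
price_per_amp_month = 15.0               # GBP, illustrative assumption
amps_reserved = WATTS / VOLTAGE_EU       # ~4.3 A at 230 V; roughly double at US voltage
print(f"Reserved current: {amps_reserved:.1f} A "
      f"-> GBP {amps_reserved * price_per_amp_month:.0f} per month")
```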
