System Benchmarks

Power Consumption

Power consumption is normally tested on the system in a single MSI GTX 770 Lightning GPU configuration, with a wall meter connected to the OCZ 1250W power supply; however, for this review the PCIe arrangement meant we had an R7 240 equipped instead. This power supply is Gold rated, and as I am in the UK on a 230-240 V supply, it gives ~75% efficiency below 50 W and 90%+ efficiency at 250 W, suitable for both idle and multi-GPU loading. This method of power reading allows us to compare how the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency. These are the real-world values that consumers can expect from a typical system (minus the monitor) using this motherboard.

While this method of power measurement may not be ideal, and you may feel these numbers are not representative due to the high-wattage power supply being used (we use the same PSU to remain consistent over a series of reviews, and some boards on our test bed are tested with three or four high-powered GPUs), the important point to take away is the relationship between the numbers. These boards are all tested under the same conditions, and thus the differences between them should be easy to spot.
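
As a rough illustration of how the wall reading relates to the power actually delivered to the components, here is a minimal sketch in Python; the wall figures and efficiency points are assumptions drawn from the Gold-rated curve described above, not measurements from this review.

    # Rough sketch of how a wall-meter reading relates to DC-side power.
    # The wall and efficiency figures are illustrative assumptions based on
    # the Gold-rated curve described above (~75% below 50 W, 90%+ around
    # 250 W), not measured values for this particular system.

    def dc_power_from_wall(wall_watts: float, efficiency: float) -> float:
        """DC power delivered to the components = wall draw x PSU efficiency."""
        return wall_watts * efficiency

    # Light load: a large 1250 W PSU sits in its least efficient region.
    idle_wall, idle_eff = 80.0, 0.75
    print(f"Idle: {idle_wall:.0f} W at the wall -> "
          f"~{dc_power_from_wall(idle_wall, idle_eff):.0f} W to the system")

    # Heavy load: the PSU is in its efficient band.
    load_wall, load_eff = 300.0, 0.90
    print(f"Load: {load_wall:.0f} W at the wall -> "
          f"~{dc_power_from_wall(load_wall, load_eff):.0f} W to the system")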

Power Consumption: Long Idle with GTX 770

Power Consumption: Idle with GTX 770

Power Consumption: OCCT Load with GTX 770

Having two processors installed doesn't take much more power at idle than our i7-5960X X99 counterparts, but when the CPU load starts to flow, the obvious differences arise. Interestingly, the dual 65 W E5-2650L v3 combination used less power than a single 130 W CPU.

Windows 7 POST Time

Different motherboards have different POST sequences before an operating system is initialized. A lot of this is dependent on the board itself, with POST boot time determined by the controllers on board (and the sequence in which those extras are organized). As part of our testing, we measure the POST boot time with a stopwatch: the time from pressing the ON button on the computer to when Windows 7 starts loading. (We discount Windows loading as it is highly variable given Windows-specific features.)

Windows 7 POST Time - Default

Windows 7 POST Time - Stripped

As mentioned earlier in the review, POST time on server motherboards is naturally slow due to the server management tools as well as the extra controllers. POST times are not that important for servers anyway, given that they tend to be restarted far less frequently than desktops or workstations.

USB Backup

For this benchmark, we transfer a set size of files from the SSD to the USB drive using DiskBench, which monitors the time taken to transfer. The files transferred are a 1.52 GB set of 2867 files across 320 folders. 95% of these files are small, typical website files, and the rest (which account for 90% of the total size) are small 30-second HD videos. In an update to pre-Z87 testing, we also run MaxCPU to load up one of the threads during the test, which improves general performance by up to 15% by causing all the internal pathways to run at full speed.
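
As an illustration only, a minimal Python sketch of the same idea follows; the review itself uses DiskBench, and the paths below are placeholders.

    # Minimal sketch of the USB backup test: copy a directory tree to the
    # USB drive and time it. This is an illustration only; the review uses
    # DiskBench, and the paths below are placeholders.
    import shutil
    import time
    from pathlib import Path

    SOURCE = Path("C:/testset")      # e.g. the 1.52 GB, 2867-file set
    DEST = Path("E:/testset_copy")   # a folder on the USB 3.0 drive

    size_gb = sum(f.stat().st_size for f in SOURCE.rglob("*") if f.is_file()) / 1e9

    start = time.perf_counter()
    shutil.copytree(SOURCE, DEST)    # recursive copy of the whole file set
    elapsed = time.perf_counter() - start

    print(f"Copied {size_gb:.2f} GB in {elapsed:.1f} s "
          f"({size_gb * 1000 / elapsed:.0f} MB/s)")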

USB 3.0 Copy Times

DPC Latency

Deferred Procedure Call (DPC) latency relates to the way Windows handles interrupt servicing. While waiting for a processor to acknowledge a request, the system queues all interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, this can cause delays in request scheduling and processing time. This can lead to an empty audio buffer, with the characteristic audible pauses, pops and clicks. The DPC latency checker measures how much time is taken processing DPCs from driver invocation; the lower the value, the better the audio transfer at smaller buffer sizes. Results are measured in microseconds.
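
As a back-of-the-envelope illustration of why this matters for audio, the small sketch below relates audio buffer length to DPC latency; the sample rate and buffer sizes are assumptions chosen for the example, while the latency values are the thresholds and results discussed below.

    # Back-of-the-envelope link between DPC latency and audio buffer size.
    # The sample rate and buffer sizes are illustrative assumptions; the
    # simple "latency < buffer duration" check also ignores the time needed
    # to refill the buffer, so real-world margins are tighter.

    SAMPLE_RATE = 48_000  # Hz

    def buffer_duration_us(samples: int) -> float:
        """How long a buffer of N samples lasts before it must be refilled."""
        return samples / SAMPLE_RATE * 1_000_000

    for samples in (32, 64, 128, 256):
        duration = buffer_duration_us(samples)
        for dpc_us in (100, 200, 502, 714):
            verdict = "OK" if dpc_us < duration else "risk of underrun"
            print(f"{samples:3d}-sample buffer ({duration:5.0f} us) vs "
                  f"{dpc_us:3d} us DPC latency: {verdict}")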

DPC Latency

The DPC latency for the dual E5-2697 v3 setup was not bad; our previous threshold between good and bad was 200 microseconds, although Z97 and X99 have both pushed the average well below 100. The other two CPUs caused large spikes in our DPC testing, giving results of 502 and 714 microseconds.

Comments

  • macwhiz - Wednesday, December 3, 2014 - link

    I'm not surprised that there's no temperature data in the BIOS. Server admins don't look at the BIOS after they complete initial setup (or a major overhaul). It's accessible from the BMC, where it's useful in a server environment. When a server overheats, the admin is usually not in the same room—and often not in the same building, or even the same state. The important question is how the BMC firmware does at exposing that data for out-of-band management via IPMI, SNMP, or another standard solution. Does it play well with an Avocent UMG management device, for instance? As a server admin, I could care less about seeing the temperature in the BIOS. What I care about is that my chosen monitoring solution can see if the temperature is going up—or any hardware fault is detected—and page me, even if the operating system isn't running. That's what BMCs are for!

    Don't apologize for using 240VAC power. Chances are very good that, even in a U.S. data center, it'll be on 240VAC power. Given the current needs of most servers, it's impractical to use 120VAC power in server racks—you'll run out of available amperage on your 120VAC power-distribution unit (power strip) long before you use all the outlets. Keep going down that road and you waste rack space powering PDUs with two or three cords plugged into them. It's much easier and more efficient all the way around to use 240VAC PDUs and power in the data center. Comparing a 20-amp 120V circuit to a 20-amp 240V circuit, you can plug at least twice as many of a given server model into the 240V circuit. Because the U.S. National Electrical Code restricts you to using no more than 80% of the rated circuit capacity for a constant load, you can plug in 16A of load on that 20A circuit. If the servers draw 6A at 120V or 3A at 240V, you can plug in two servers to the 120V power strip, or five servers into the 240V strip, before you overload it. So, once you get beyond a handful of computers, 240V is the way to go in the datacenter (if you're using AC power).
  • leexgx - Wednesday, December 3, 2014 - link

    Mass server racks are pure DC in some cases, or 240V (I would have thought there would be some very basic temp monitoring in the BIOS, but I guess most of this is exposed elsewhere),

    so I agree with this post
  • jhh - Thursday, December 4, 2014 - link

    208V 3-phase is probably more popular than 240V, as most electricity is generated as 3-phase, and using all 3 phases is important for efficiently using the power without being charged for a poor power factor.
  • mapesdhs - Thursday, December 4, 2014 - link


    Ian, you're still using the wrong source link for the C-ray test. The Blinkenlights site is
    a mirror over which I have no control; I keep the main c-ray page on my SGI site.
    Google for "sgidepot 'c-ray'"; the first hit will be the correct URL.

    Apart from that, thanks for the review!

    One question: will you ever be able to review any quad-socket systems or higher?
    I'd love to know how well some of the other tests scale, especially CB R15.

    Ian.
  • fackamato - Friday, December 5, 2014 - link

    No 40Gb benchmarks?
  • sor - Monday, December 8, 2014 - link

    I was excited to see the QSFP, but it seems like it's not put to use. I've been loving our mellanox switches, they have QSFP and you can run 40Gbe or 4 x 10Gbe with a breakout cable, with each port. It provides absolutely ridiculous port density and great cost. You can find SX1012s (12 port QSFP) for under $5k, and have 48 10G ports in 1/2U at about $100/port. No funny business with extra costs to license ports. The twinax cable is much cheaper than buying 10G optics, too, but you have to stay close. Usually you only need fibre on the uplinks, anyway.
  • dasco - Saturday, March 9, 2019 - link

    Does it support UDIMMs? The documentation says that it supports only RDIMM or LRDIMM.
    Is the G.Skill RAM used in this test UDIMM or RDIMM ECC RAM?
