Test Bed and Setup

Per our processor testing policy, we take a premium-category motherboard suitable for the socket and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This memory is also typically run at JEDEC subtimings where possible.

We recognize that some users are not keen on this policy, pointing out that the maximum supported frequency is sometimes quite low, that faster memory is often available at a similar price, or that JEDEC speeds can hold back performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise) because doing so requires interaction with the BIOS, and most users fall back on JEDEC-supported speeds. This includes home users as well as industry customers who want to shave a cent or two off the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.

Test Setup

Platform      | CPUs                         | Motherboard               | BIOS  | Cooling                 | Memory
Intel 9th Gen | i9-9900K, i7-9700K, i5-9600K | ASRock Z370 Gaming i7**   | P1.70 | TRUE Copper             | Crucial Ballistix 4x8GB DDR4-2666
Intel 8th Gen | i7-8086K, i7-8700K, i5-8600K | ASRock Z370 Gaming i7     | P1.70 | TRUE Copper             | Crucial Ballistix 4x8GB DDR4-2666
Intel 7th Gen | i7-7700K, i5-7600K           | GIGABYTE X170 ECC Extreme | F21e  | Silverstone* AR10-115XS | G.Skill RipjawsV 2x16GB DDR4-2400
Intel 6th Gen | i7-6700K, i5-6600K           | GIGABYTE X170 ECC Extreme | F21e  | Silverstone* AR10-115XS | G.Skill RipjawsV 2x16GB DDR4-2133
Intel HEDT    | i9-7900X, i7-7820X, i7-7800X | ASRock X299 OC Formula    | P1.40 | TRUE Copper             | Crucial Ballistix 4x8GB DDR4-2666
AMD 2000      | R7 2700X, R5 2600X, R5 2500X | ASRock X370 Gaming K4     | P4.80 | Wraith Max*             | G.Skill SniperX 2x8GB DDR4-2933
AMD 1000      | R7 1800X                     | ASRock X370 Gaming K4     | P4.80 | Wraith Max*             | G.Skill SniperX 2x8GB DDR4-2666
AMD TR4       | TR 1920X                     | ASUS ROG X399 Zenith      | 0078  | Enermax Liqtech TR4     | G.Skill FlareX 4x8GB DDR4-2666

GPU: Sapphire RX 460 2GB (CPU Tests); MSI GTX 1080 Gaming 8G (Gaming Tests)
PSU: Corsair AX860i; Corsair AX1200i
SSD: Crucial MX200 1TB
OS:  Windows 10 x64 RS3 1709, Spectre and Meltdown patched
* VRM supplemented with SST-FHP141-VF 173 CFM fans
** After initial testing with the ASRock Z370 motherboard, we noted that it had a voltage issue with the 9th Gen Core processors. As a result, we moved to the MSI MPG Z390 Gaming Edge AC for our power measurements. Benchmarking appears unaffected.

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Hardware Providers
  • Sapphire RX 460 Nitro
  • MSI GTX 1080 Gaming X OC
  • Crucial MX200 + MX500 SSDs
  • Corsair AX860i + AX1200i PSUs
  • G.Skill RipjawsV, SniperX, FlareX
  • Crucial Ballistix DDR4
  • Silverstone Coolers
  • Silverstone Fans
Comments (274)

  • vext - Friday, October 19, 2018 - link

    Very good article, but here are my beefs.

    Why is there no mention of temperatures?

    According to Techspot the 9900K runs ridiculously hot under heavy loads. At stock clocks under a heavy Blender load it reaches 85°C with a Corsair H100i Pro or Noctua NH-D15. Pushed to 5GHz, it hits 100°C. At 5.1GHz it FAILS. I suggest that Anandtech has failed by not discussing this.

    Techspot says:

    "There’s simply no way you’re going to avoid thermal throttling without spending around $100 on the cooler, at least without your PC sounding like a jet about to take off. Throw in the Corsair H100i Pro and the 9900K now costs $700 and you still can’t overclock, at least not without running at dangerously high temperatures."

    Why the focus on single threaded benchmarks? For the most part they are irrelevant. Yet they are posted in their own graph, at the front of each testing section, as though they were the most important data point. Just include them as a separate bar with the multi-thread benchmarks. Good Grief!

    Why post MSRP prices in every single benchmark? You can't even buy them for MSRP. There should be a single chart at the front of the article with a rough retail estimate for each processor, and links to the retailers. If the MSRP is necessary, then just add a column to the chart. Sheesh.

    Why no in-depth cost/benefit comparison? A Ryzen 2600 with included cooler at $160 costs only one quarter of a 9900K with an AIO cooler at $700. The $540 difference would buy a new RTX 2070 video card. Or three more Ryzen 2600s. For crying out loud.

    I like the 9900K, it's a good processor. It's intended for hobbyists who can play with custom loop cooling. But it's not realistic for most people.
  • mapesdhs - Sunday, October 21, 2018 - link

    All good questions... the silence is deafening. Thankfully, there's plenty of commentary on the value equation to be found. A small channel atm, but I like this guy's vids:

    https://www.youtube.com/watch?v=EWO5A9VMcyY
  • abufrejoval - Friday, October 19, 2018 - link

    I needed something a little bigger for my lab two or three years ago and came across an E5-2696v3 on eBay from China, a Haswell generation 18-core at $700.

    That chip didn't officially exist, but after digging a little deeper I found it's basically an E5-2699v3 which clocks a little higher (3.8 instead of 3.6GHz) with 1-2 cores active. So it's a better chip for a fraction of the going price of the lesser one (the E5-2699v3 is still listed at €4649 by my favorite e-tailer). And yes, it's a perfect chip: I Prime95'd it for hours and POV-Ray'd and Blendered it for days until I was absolutely sure it was a prime-quality chip.

    Officially it has a 145 Watt TDP, but I've only ever seen it go to 110 Watts on HWiNFO with Prime95 at its meanest settings: it must be a perfect bin. With the particle pusher it's never more than 93 Watts, while no part of the CPU exceeds 54°C with a practically inaudible Noctua 140mm fan at 1000rpm cooling it. That's because the 18 cores and 36 threads never run faster than 2.8GHz fully loaded. They also don't drop below that (except at idle, 1.855 Watts minimum, btw), so you can pretty much forget about the 2.3GHz 'nominal' speed.

    It gets 2968.245803 on that benchmark, slightly above the i9-9900K, somewhat below the Threadripper. That's 22nm Haswell against current 14nm++/12nm, and 18 cores vs 8/12.

    This is rather typical for highly-threaded workloads: It's either cores or clocks and when the power ceiling is fixed you get higher throughput and energy efficiency when you can throw cores instead of clocks at the problem.

    I think it's a data point worth highlighting in this crazy clock race somewhat reminiscent of Pentium 4 days, heat vs. efficiency: a four-year-old chip beating the newcomer in performance and almost 3:1 in efficiency at far too similar prices.

    Yet, this specific chip will clock pretty high for a server chip, easily doing 3.6 GHz with eight cores seeing action from your game engine, while the remaining ten are often ignored: Perhaps that's a Ryzen effect, it used to be 4:14 earlier.

    I've done a BCLK overclock of 1.08 to have it reach the magic 4GHz at maximum turbo, but it's not noticeable in real life next to an E3-1276v3, which also turbos to 4GHz on three of its four cores, and 3.9GHz at 4/4 with HT.
  • abufrejoval - Friday, October 19, 2018 - link

    2968.245803 on the particle pusher benchmark... need edit
  • icoreaudience - Friday, October 19, 2018 - link

    Move away from rar/lzma: the new darling of data compression is called Zstandard:
    https://www.zstd.net

    It comes with a nice integrated benchmark, which can easily ramp up with multithreading:
    zstd -b -1 -T8 fileToTest # benchmark level one on fileToTest using 8 threads

    Windows users can even download a pre-compiled binary directly from the release notes:
    https://github.com/facebook/zstd/releases/latest

    It would be great to see some numbers using this compressor on the latest Intel cores!
  • Kaihekoa - Friday, October 19, 2018 - link

    Looks like all your gaming benchmarks are GPU bound and there pointless. Why not use a 2080 Ti to eliminate/reduce GPU bottleneck?
  • Kaihekoa - Friday, October 19, 2018 - link

    therefore*
  • palladium - Friday, October 19, 2018 - link

    Can you please run some SPEC2006 benchmarks and see if Apple's SoC really has caught up to Intel's performance (per core), as mentioned by Andrei in his iPhone XS review? Thanks
  • VirpZ - Friday, October 19, 2018 - link

    Apart from Blender, your review is full of Intel-biased software for rendering.
  • Hifihedgehog - Friday, October 19, 2018 - link

    Hey Ian. I see your updated full-load power consumption results. Question: why is it that the six-core i7-8086K is drawing so little power in comparison to everything else, including the quad-cores? Is this due to its better binning, or is this simply an error that crept in?
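
Picking up the Zstandard suggestion from the comments above, here is a minimal sketch of how zstd's integrated benchmark can sweep compression levels and thread counts. The flags are standard zstd benchmark options (-b and -e set the start and end levels, -i the minimum measurement time in seconds, -T the thread count); fileToTest is a placeholder for whatever corpus you want to measure, and exact numbers will vary by build and CPU.

zstd -b1 -e19 -T8 fileToTest    # benchmark levels 1 through 19 on fileToTest using 8 threads
zstd -b3 -i6 -T1 fileToTest     # single-threaded run of level 3, with at least 6 seconds per measurement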
