CPU Performance: Office Tests

The Office test suite is built around industry-standard tests covering office workflows, video conferencing, and some synthetics, and we also bundle compiler performance into this section. For users that have to evaluate hardware in general, these are usually the benchmarks that most consider.

All of our benchmark results can also be found in our benchmark engine, Bench.

PCMark 10: Industry Standard System Profiler

Futuremark, now known as UL, has been developing benchmarks that have become industry standards for around two decades. Its latest complete system test suite is PCMark 10, which updates PCMark 8 with newer tests and more OpenCL acceleration in use cases such as video streaming.

PCMark splits its scores into about 14 different areas, including application startup, web browsing, spreadsheets, photo editing, rendering, video conferencing, and physics. We post all of these numbers in our benchmark database, Bench; however, the key metric for the review is the overall score.
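The roll-up from individual group scores to a single overall number can be sketched as a weighted geometric mean, which is how PCMark-style suites keep one strong area from masking a weak one. The weights below are illustrative placeholders, not UL's published coefficients:

```python
# Sketch of a PCMark-style overall score: a weighted geometric mean of
# the group scores. Weights here are illustrative, not UL's actual values.
from math import prod

def overall_score(groups, weights):
    """Weighted geometric mean; weights are assumed to sum to 1."""
    return prod(groups[g] ** weights[g] for g in groups)

scores  = {"essentials": 9000.0, "productivity": 7500.0, "content": 6500.0}
weights = {"essentials": 0.4,    "productivity": 0.3,    "content": 0.3}
overall = overall_score(scores, weights)  # lands between the group scores
```

Because the mean is geometric, a chip that stumbles in one group loses more overall score than an arithmetic average would suggest.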

PCMark 10 Extended Score

As a general mix of a lot of tests, the new processors from Intel take the top three spots, in order. Even the Core i5-9600K pulls ahead of the Core i7-8086K.

Chromium Compile: Windows VC++ Compile of Chrome 56

A large number of AnandTech readers are software engineers, interested in how the hardware they use performs. While compiling a Linux kernel is ‘standard’ for reviewers who compile regularly, our test is a little more varied – we are using the Windows instructions to compile Chrome, specifically a Chrome 56 build from March 2017, as that was when we built the test. Google quite handily provides instructions for compiling on Windows, along with a 400k-file download for the repo.

In our test, using Google’s instructions, we use the MSVC compiler and the ninja build tool to manage the compile. As you might expect, the benchmark is variably threaded, with a mix of DRAM requirements that benefit from faster caches. The data we record is the time taken for the compile, which we convert into compiles per day.
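The conversion from a single timed build to the chart's compiles-per-day metric is straightforward. The sketch below shows it alongside a hypothetical ninja invocation following Google's documented Windows flow; the output directory and paths are illustrative, not our actual harness:

```python
# Convert one compile's wall-clock time into the compiles-per-day rate
# used in the charts. The ninja call is a hypothetical example of the
# Google-documented Windows build flow, not AnandTech's exact script.
import subprocess
import time

def compiles_per_day(elapsed_seconds):
    """One build's duration, expressed as builds achievable in 24 hours."""
    return 86400.0 / elapsed_seconds

def time_chromium_build(src_dir):
    """Time a single ninja build of the 'chrome' target (assumed layout)."""
    start = time.perf_counter()
    subprocess.run(["ninja", "-C", "out/Default", "chrome"],
                   cwd=src_dir, check=True)
    return time.perf_counter() - start

# A 90-minute compile works out to 16 compiles per day:
# compiles_per_day(90 * 60) == 16.0
```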

Compile Chromium (Rate)

Pushing the raw frequency of the all-core turbo seems to work well in our compile test.

3DMark Physics: In-Game Physics Compute

Alongside PCMark is 3DMark, Futuremark’s (UL’s) gaming test suite. Each gaming test consists of one or two GPU-heavy scenes, along with a physics test that is indicative of when the test was written and the platform it is aimed at. The main tests, in order of complexity, are Ice Storm, Cloud Gate, Sky Diver, Fire Strike, and Time Spy.

Some of the subtests offer variants, such as Ice Storm Unlimited, which is aimed at mobile platforms with an off-screen rendering, or Fire Strike Ultra which is aimed at high-end 4K systems with lots of the added features turned on. Time Spy also currently has an AVX-512 mode (which we may be using in the future).

For our tests, we report in Bench the results from every physics test, but for the sake of the review we keep it to the most demanding of each scene: Ice Storm Unlimited, Cloud Gate, Sky Diver, Fire Strike Ultra, and Time Spy.
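These physics tests scale with core count by running independent simulation islands in parallel. The structural sketch below shows that dispatch pattern only; the per-island workload is a stand-in, since UL's engine is native code and Python's GIL would serialize real compute:

```python
# Structural sketch of a threaded physics benchmark: independent islands
# dispatched to a worker pool, scored as simulated steps per second.
# step_island is a placeholder workload, not UL's rigid-body engine.
from concurrent.futures import ThreadPoolExecutor
import time

def step_island(body_count, steps):
    """Stand-in for advancing one physics island; returns steps completed."""
    x = 0.0
    for _ in range(steps * body_count):
        x += 1.0  # placeholder for per-body integration work
    return steps

def physics_score(islands, threads, steps=100):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        done = sum(pool.map(lambda _: step_island(64, steps), range(islands)))
    elapsed = time.perf_counter() - start
    return done / elapsed  # simulated steps per second
```

A part with more threads finishes more islands per second, which is why the physics subscores track core and thread counts so closely.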

3DMark Physics - Ice Storm Unlimited
3DMark Physics - Cloud Gate
3DMark Physics - Sky Diver
3DMark Physics - Fire Strike Ultra
3DMark Physics - Time Spy

The older Ice Storm test didn't much like the Core i9-9900K, pushing it back behind the Ryzen 7 1800X. For the more modern tests focused on PCs, the 9900K wins out. The lack of HyperThreading is hurting the other two parts.

GeekBench 4: Synthetics

A common tool for cross-platform testing between mobile, PC, and Mac, GeekBench 4 is an ultimate exercise in synthetic testing across a range of algorithms looking for peak throughput. Tests include encryption, compression, fast Fourier transform, memory operations, n-body physics, matrix operations, histogram manipulation, and HTML parsing.

I’m including this test due to popular demand. The results do come across as overly synthetic, yet many users put a lot of weight behind it because it is compiled across different platforms (albeit with different compilers).

We record the main subtest scores (Crypto, Integer, Floating Point, Memory) in our benchmark database, but for the review we post the overall single and multi-threaded results.
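To give a flavour of the kernels listed above, here is a naive n-body gravity step of the sort such synthetic suites time. This is an illustrative sketch, not Primate Labs' implementation:

```python
# Naive O(n^2) n-body Euler step in 2D, the kind of compute-bound kernel
# synthetic suites measure for peak throughput. Illustrative only.
def nbody_step(pos, vel, mass, dt=0.01, G=6.674e-11):
    """Advance point masses one Euler step under mutual gravity."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + 1e-12  # softening avoids div-by-zero
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * mass[j] * dx * inv_r3
            acc[i][1] += G * mass[j] * dy * inv_r3
    for i in range(n):
        vel[i][0] += acc[i][0] * dt
        vel[i][1] += acc[i][1] * dt
        pos[i][0] += vel[i][0] * dt
        pos[i][1] += vel[i][1] * dt
    return pos, vel
```

Real implementations vectorize this inner loop heavily, which is why such tests reward wide SIMD units as much as core counts.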

Geekbench 4 - ST Overall

Geekbench 4 - MT Overall


274 Comments


  • evernessince - Saturday, October 20, 2018 - link

    I'm sure for him money is a fixed resource, he is just really bad at managing it. You'd have to be crazy to blow money on the 9900K when the 8700K is $200 cheaper and the 2700X is half the price.
  • Dug - Monday, October 22, 2018 - link

    Relative to how much you make or have. $200 isn't some life-threatening amount that makes them crazy because they spent it on a product they will enjoy. We spend more than that going out for a weekend (and usually don't have anything to show for it). If an extra $200 is threatening to your livelihood, you shouldn't be shopping for new CPUs anyway.
  • close - Saturday, October 20, 2018 - link

    @ekidhardt: "I think far too much emphasis has been placed on 'value'. I simply want the fastest, most powerful CPU that isn't priced absurdly high."

    That, my good man, is the very definition of value. It happens automatically when you decide to take price into consideration. I also don't care about value, I just want a CPU with a good performance to price ratio. See what I did there? :)
  • evernessince - Saturday, October 20, 2018 - link

    A little bit extra? It's $200 more than the 8700K; that's not a little.
  • mapesdhs - Sunday, October 21, 2018 - link


    The key point being, for gaming, use the difference to buy a better GPU, whether one gets an 8700K or 2700X (or indeed any one of a plethora of options really, right back to an old 4930K). It's only at 1080p and high refresh rates where strong CPU performance stands out, something DX12 should help more with as time goes by (the obsession with high refresh rates is amusing given NVIDIA's focus shift back to sub-60Hz being touted once more as ok). For gaming at 1440p or higher, one can get a faster system by choosing a cheaper CPU and better GPU.

    There are two exceptions: those for whom money is literally no object, and certain production workloads that still favour frequency/IPC and are not yet well optimised for more than 6 cores (Premiere is probably the best example). Someone mentioned pro tasks being irrelevant because ECC is not supported, but many solo pros can't afford XEON class hw (I mean the proper dual socket setups) even if the initial higher outlay would eventually pay for itself.

    What we're going to see with the 9900K for gaming is a small minority of people taking Intel's mantra of "the best" and running with it. Technically, they're correct, but most normal people have budgets and other expenses to consider, including wives/gfs with their own cost tolerance limits. :D

    If someone can genuinely afford it then who cares, in the end it's their money, but as a choice for gaming it really only makes sense via the same rationale if they've also then bought a 2080 Ti to go with it, though even there one could retort that two used 1080 TIs would be cheaper & faster (at least for those titles where SLI is functional).

    If anything good has come from this and the RTX launch, it's the move away from the supposed social benefit of having "the best"; the street cred is gone, now it just makes one look like a fool who was easily parted from his money.
  • Spunjji - Monday, October 22, 2018 - link

    Word.
  • Total Meltdowner - Sunday, October 21, 2018 - link

    This comment reads like shilling so hard. So hard. Please try harder to not be so obvious.
  • Spunjji - Monday, October 22, 2018 - link

    I think they placed just the right amount of emphasis on "value". Your post basically explains why it's not relevant for you in terms of you being an Intel fanboy with cash to burn. I'll elaborate.

    The MSRP is in the realm of irrational spending for a huge number of people. "Rational" here meaning "do I get out anything like what I put in", wherein the answer in all metrics is an obvious no.

    Following that, there are a HUGE number of reasons not to pre-order a high-end CPU, especially before proper results are out. Pre-ordering *anything* computer related is a dubious prospect, doubly so when the company selling it paid good money to paint a deceptive picture of their product's performance.

    Your assertion that Intel have never launched a bad CPU is false and either ignorance or wilful bias on your part. They have launched a whole bunch of terrible CPUs, from the P3 1.13GHz that never worked, through the P4 Emergency Edition and the early "dual-core" P4 processors, all the way through to this i9-9900K, which is the thirstiest "95W" CPU I've ever seen. Their notebook CPUs are now segregated in such a way that you have to read a review to find out how they will perform, because so much is left on the table in terms of achievable turbo boost limits.

    Sorry, I know I replied just to disagree, which may seem argumentative, but you posted a bunch of nonsense and half-truths passed off as common sense and/or logic. It's just bias; none of it does any harm, but you could at least be up-front that you prefer Intel. That in itself (I like Intel and am happy to spend top dollar) is a perfectly legitimate reason for everything you did. Just be open and don't actively mislead people who know less than you do.
  • chris.london - Friday, October 19, 2018 - link

    Hey Ryan. Thanks for the review.

    Would it be possible to check power consumption in a test in which the 2700x and 9900k perform similarly (maybe in a game)? POV-Ray seems like a good way to test for maximum power draw but it makes the 9900k look extremely inefficient (especially compared to the 9600k). It would be lovely to have another reference point.
  • 0ldman79 - Friday, October 19, 2018 - link

    I'm legitimately surprised.

    The 9900K is starving for bandwidth, needs more cache or something. I never expected it to *not* win the CPU benchmarks vs the 9700K. I honestly expected the 9700K to be the odd one out, more expensive than the i5 and slower than the 9900K. This isn't the case. Apparently SMT isn't enabling 100% usage of the CPU's resources; it is allowing a bottleneck due to fighting over resources. I'd love to see the 9900K run against its brethren with HT disabled.
