CPU Performance: New Tests!

As part of our ongoing march towards a more rounded view of the performance of these processors, we have a few new tests for you that we’ve been cooking up in the lab. Some of these new benchmarks provide obvious talking points, while others are just a bit of fun. Most of them are so new that we’ve only run them on a few processors so far. It will be interesting to hear your feedback!

NAMD ApoA1

One frequent request over the years has been for some form of molecular dynamics simulation. Molecular dynamics forms the basis of a lot of computational biology and chemistry when modeling specific molecules, enabling researchers to find low-energy configurations or potential active binding sites, especially when looking at larger proteins. We’re using the NAMD software here, or Nanoscale Molecular Dynamics, which is often cited for its parallel efficiency. Unfortunately, the version we’re using is limited to 64 threads on Windows, but we can still use it to analyze our processors. We’re simulating the ApoA1 protein for 10 minutes, and reporting back the ‘nanoseconds per day’ that our processor can simulate. Molecular dynamics is so complex that yes, you can spend a day simply calculating a nanosecond of molecular movement.
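For readers wondering where a ‘nanoseconds per day’ figure comes from, it is simply the wall-clock time per integration step scaled up by the simulation timestep. The short sketch below shows that arithmetic; the 2 fs timestep and the 0.05 s/step figure are illustrative assumptions for the example, not numbers taken from our runs.

```python
# A minimal sketch of the arithmetic behind the 'ns/day' metric.
# Assumptions (not from the article): a 2 fs timestep and 0.05 s of
# wall-clock time per simulation step.

SECONDS_PER_DAY = 86_400
FS_PER_NS = 1_000_000  # femtoseconds in a nanosecond

def ns_per_day(seconds_per_step: float, timestep_fs: float = 2.0) -> float:
    """Convert wall-clock seconds per MD step into simulated ns per day."""
    steps_per_day = SECONDS_PER_DAY / seconds_per_step
    return steps_per_day * timestep_fs / FS_PER_NS

if __name__ == "__main__":
    # 86,400 s/day / 0.05 s/step = 1,728,000 steps per day
    # 1,728,000 steps * 2 fs/step = 3,456,000 fs = ~3.46 ns simulated per day
    print(f"{ns_per_day(0.05):.2f} ns/day")
```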

NAMD 2.31 Molecular Dynamics (ApoA1)

Crysis CPU Render

One of the most oft-used memes in computer gaming is ‘Can It Run Crysis?’. The original 2007 game, built by Crytek on its CryEngine, was heralded as a computationally complex title for the hardware of the time and for several years afterwards, suggesting that a user needed graphics hardware from the future in order to run it. Fast forward over a decade, and the game runs fairly easily on modern GPUs, but we can also apply the same concept to pure CPU rendering – can the CPU render Crysis? Since 64-core processors entered the market, one can dream. We built a benchmark to see whether the hardware can.

For this test, we’re running Crysis’ own GPU benchmark, but in CPU render mode. This is a 2000-frame test, which we run over a series of resolutions from 800x600 up to 1920x1080.
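To put the results in perspective, it helps to convert a frame rate back into wall-clock time for the full 2000-frame pass. The sketch below does that arithmetic, using as an example the Ryzen 9 4900HS figure at 1920x1080 from the table that follows.

```python
# Rough sketch converting an FPS result into wall-clock time for the
# 2000-frame run. The frame count comes from the test description; the
# example FPS is the Ryzen 9 4900HS result at 1920x1080 from the table.

FRAMES_IN_TEST = 2000

def run_time_minutes(fps: float, frames: int = FRAMES_IN_TEST) -> float:
    """Minutes of wall-clock time needed to render the benchmark at a given FPS."""
    return frames / fps / 60

if __name__ == "__main__":
    # 2000 frames / 4.30 FPS ≈ 465 seconds, or roughly 7.8 minutes on the CPU
    print(f"{run_time_minutes(4.30):.1f} minutes at 1920x1080")
```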

Crysis CPU Render (Frames Per Second)
                  800x600   1024x768   1280x800   1366x768   1600x900   1920x1080
AMD
Ryzen 9 4900HS      11.50       8.75       7.44       6.83       5.21        4.30
Ryzen 5 3600         9.98       7.84       6.69       6.15       4.75        3.92
Ryzen 3 3300X        8.42       6.52       5.43       5.01       3.92        3.07
Ryzen 3 3100         7.50       5.78       4.87       4.50       3.54        2.77
Intel
Core i7-7700K        7.63       5.87       4.95       4.55       3.57        2.79
Core i7-9750H        6.78       5.17       4.37       3.99       3.12        2.46

Dwarf Fortress

Another long-standing request for our benchmark suite has been Dwarf Fortress, a popular management/roguelike indie video game first launched in 2006. Emulating the ASCII interfaces of old, this title is a rather complex beast, which can generate environments subject to millennia of rule, famous faces, peasants, and key historical figures and events. The further you get into the game, depending on the size of the world, the slower it becomes.

DFMark is a benchmark built by vorsgren on the Bay12 Forums that offers two different modes built on DFHack: world generation and embark. These tests can be configured, but run anywhere from 3 minutes to several hours. I’ve barely scratched the surface here, but after analyzing the test, we ended up going with three different world generation sizes.
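We can’t reproduce the DFMark scripts themselves here, but as a rough illustration of what such a run looks like, the sketch below times a scripted world-generation pass for each of the three sizes we settled on. The `dfmark` command line is purely a placeholder of our own invention, not the tool’s actual interface.

```python
# Hypothetical timing harness in the spirit of the DFMark world-generation
# test. The 'dfmark' command below is a placeholder for illustration only;
# it is NOT the real DFMark/DFHack invocation.
import subprocess
import time

WORLD_CONFIGS = {
    "small":  ("65x65", 250),
    "medium": ("125x125", 250),
    "big":    ("257x257", 550),
}

def time_world_gen(command: list[str]) -> float:
    """Run one scripted world-generation pass and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(command, check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    for name, (size, years) in WORLD_CONFIGS.items():
        # Placeholder command line: substitute the real benchmark launcher here.
        cmd = ["dfmark", "--worldgen", "--size", size, "--years", str(years)]
        print(f"{name}: {time_world_gen(cmd):.1f} s")
```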

Dwarf Fortress (Small) 65x65 World, 250 Years
Dwarf Fortress (Medium) 125x125 World, 250 Years
Dwarf Fortress (Big) 257x257 World, 550 Years

Interestingly, Intel's hardware likes Dwarf Fortress.

We also have other benchmarks waiting in the wings, such as AI Benchmark (ETH), LINPACK, and V-Ray; however, it seems they still require a bit of tweaking to get working.

249 Comments

  • lightningz71 - Friday, May 8, 2020

    Because I was typing it on mobile, didn’t proofread before I hit submit, and the spell checker didn’t flag it as being wrong because it doesn’t know context.

    It’s my fault, my mistake, and I normally strive to do a better job with my spelling in general. Thank you for pointing out my mistake so that I can be more cognizant of my future errors.
  • Holliday75 - Saturday, May 9, 2020

    Now I feel like a d*ck for pointing it out.

    In all honesty, just poking fun and genuinely curious, because I see this mistake made daily all over the place: Facebook, comments, even articles by professional journalists, and a work email or two. I find it curious that people who speak American English natively still make this mistake.
  • Spunjji - Monday, May 11, 2020

    Well, Autocorrect is one answer - and the other is the paradoxical relationship between the long "oo" sound in lose and the shorter "oo" sound in loose. It's hard to argue that the spelling shouldn't be the other way around, although I have no doubt people would still trip over it even then.
  • notb - Thursday, May 7, 2020

    Idle power draw is atrocious. How can it be this high?

    It's not even that I'm worried about the unnecessary electricity use or noise (which could make an analogous APU a lot less interesting for HTPC and NAS).

    I'm just really shocked by interconnect using 16W when the cores it joins are hardly doing anything.
    Does anyone know what the I/O die is doing? Is there a 10W ASIC mining bitcoin or what?
  • eastcoast_pete - Thursday, May 7, 2020

    Hush! You're spilling the beans here (:
    Actually, if AMD had a highly efficient ASIC mining chip with good hash rates, I'd consider buying some. Same goes for Intel.
  • notb - Friday, May 8, 2020

    Actually Intel is a major FPGA maker, so you can get one of those. It's not that hard to find an open-source coin miner (even on GitHub).

    The comment stands though. I googled a bit and there's no clear explanation for the high idle uncore.
    And 8-core mobile Zen2 chips use maybe 3W in idle. It's not like their IF is a lot slower or has less to do.

    This makes me wonder if we're even going to see desktop/server 35W chips? Not to mention it would be nice if they offered TDP down of 25W...

    Suddenly, I'm a lot less interested in an AMD-powered home server or NAS (and BTW: EPYC embedded lineup is still Zen-only).
  • kepstin - Friday, May 8, 2020

    If they do make desktop 35W chips, they'll probably be based on the integrated APU die. I suspect the increased idle power is due either to off-die IF link to the IO chiplet needing more power than IF within a die, or perhaps the (14nm) IO chiplet itself having higher power usage.
  • notb - Friday, May 8, 2020

    I'm OK with this kind of uncore under load (it's how Zen works).
    And I don't really mind high idle in workstation CPUs. It's an acceptable compromise.

    I just assumed that they'll adjust this for low-core CPUs, since these often go into home PCs used casually - spending a lot of time at idle / low load. And under a cheap, slim cooler there will be a difference between 5 and 16W.

    AMD will have to fix this in the APUs if they want to take on low-power segments (NAS, HTPC, tiny office desktops).

    AFAIK Zen2 APUs will use the chiplet layout, not the monolithic approach from the mobile lineup. Hence, OEMs will probably use mobile chips anyway. DIY customers may have a problem.
  • Holliday75 - Saturday, May 9, 2020

    We've seen updates addressing issues with previous Zen CPUs. It's possible it was a miss on their part, or they just didn't have the time to tweak it before release.
  • Namisecond - Thursday, May 7, 2020

    Thanks for detailing the two new AMD CPUs. Any news on the new desktop APUs though? I'm hearing rumors of up to 8 cores but the GPUs on them will be worse than the previous generation.
