Discrete GPU Gaming Tests

1080p Max with RTX 2080 Ti

The last-generation flagship GPU might be considered overkill for 1080p gaming, but as we crank up the settings we move out of high-refresh-rate territory and back into regular gaming, and the CPU can become the bottleneck. That makes for an interesting set of results.

A full list of results at various resolutions and settings can be found in our Benchmark Database.
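The charts that follow report two figures for most titles: the average FPS over the run and a 95th-percentile figure. As a rough illustration of how those two metrics relate, here is a minimal Python sketch (not AnandTech's actual benchmark tooling; the function name and the sample frame times are hypothetical) that derives both from a log of per-frame render times.

```python
# Minimal sketch: deriving average FPS and a 95th-percentile FPS figure
# from a list of per-frame render times (milliseconds) captured in a run.

def summarize_frametimes(frametimes_ms):
    """frametimes_ms: per-frame render times in milliseconds."""
    frametimes_ms = sorted(frametimes_ms)
    n = len(frametimes_ms)

    # Average FPS: total frames divided by total elapsed time.
    avg_fps = 1000.0 * n / sum(frametimes_ms)

    # 95th-percentile FPS: the frame rate sustained for all but the
    # slowest 5% of frames, derived from the 95th-percentile frame time.
    p95_ms = frametimes_ms[min(n - 1, int(0.95 * n))]
    p95_fps = 1000.0 / p95_ms

    return avg_fps, p95_fps

# Hypothetical frame times (ms) for a short capture:
print(summarize_frametimes([6.9, 7.1, 7.3, 7.0, 9.8, 7.2, 7.4, 12.5]))
```

The 95th-percentile number is the more demanding metric: a run can post a high average while occasional slow frames drag the percentile figure down, which is exactly where a CPU bottleneck tends to show up.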

(a-4) Chernobylite - 1080p Max - Average FPS

Generation on generation, we're getting a small bump in Chernobylite.

(b-7) Civilization VI - 1080p Max - Average FPS

(b-8) Civilization VI - 1080p Max - 95th Percentile

One of the critical elements here is that Civilization 6 likes the Zen 3 cores, but only when there's enough L3 cache to go around.

(c-7) Deus Ex MD - 1080p Max - Average FPS

(c-8) Deus Ex MD - 1080p Max - 95th Percentile

Deus Ex gets a sizeable uplift with the new APUs over the previous generation.

(d-4) Final Fantasy 14 - 1080p Max - Average FPS

These chips are ready for 144 Hz gaming in Final Fantasy 14.

(h-7) F1 2019 - 1080p Ultra - Average FPS

(h-8) F1 2019 - 1080p Ultra - 95th Percentile

F1 2019 is a game that enjoys the Zen 3 change as well.

(i-7) Far Cry 5 - 1080p Ultra - Average FPS

(i-8) Far Cry 5 - 1080p Ultra - 95th Percentile

(l-7) Red Dead 2 - 1080p Max - Average FPS

(l-8) Red Dead 2 - 1080p Max - 95th Percentile

Comments

  • abufrejoval - Thursday, August 5, 2021 - link

There are indeed so many variables and at least as many shortages these days. And it's becoming a playground for speculators, who are just looking for such fragilities in the supply chain to extort money.

I remember some Kaveri-type chips being sold by AMD, which had the GPU parts chopped off by virtue of being "borderline dies" on a round 300mm wafer. Eventually they also had enough of these chips with the CPU (and SoC) portion intact to sell them as a "GPU-less APU".

    Don't know if the general layout of the dies allows for such "halflings" on the left or right of a wafer...
  • mode_13h - Wednesday, August 4, 2021 - link

    Ian, please publish the source of 3DPM, preferably to github, gitlab, etc.
  • mode_13h - Wednesday, August 4, 2021 - link

    For me, the fact that 5600X always beats 5600G is proof that the non-APUs' lack of an on-die memory controller is no real deficiency (nor is the fact that the I/O die is fabbed on an older process node).
  • GeoffreyA - Thursday, August 5, 2021 - link

    The 5600X's bigger cache and boost could be helping it in that regard. But, yes, I don't think the on-die memory controller makes that much of a difference compared to the on-package one.
  • mode_13h - Friday, August 6, 2021 - link

    I wrote that knowing about the cache difference, but it's not going to help in all cases. If the on-die memory controller were a real benefit over having it on the I/O die, I'd expect to see at least a couple benchmarks where the 5600G outperformed the 5600X. However, they didn't switch places, even once!

    I know the 5600X has a higher boost clock, but they're both 65W and the G has a higher base frequency. So, even on well-threaded, non-graphical benchmarks, it's quite telling that the G can never pass the X.
  • GeoffreyA - Friday, August 6, 2021 - link

    Remember how the Core 2 Duo left the Athlon 64 dead on the floor? And that was without an on-die MC.
  • mode_13h - Saturday, August 7, 2021 - link

    That's not relevant, since there were incredible differences in their uArch and fab nodes.

    In this case, we get to see Zen 3 cores on the same manufacturing process. So, it should be a very well-controlled comparison. Still not perfect, but about as close as we're going to get.

    Also, the memory controller is in-package, in both cases. The main difference of concern is whether or not it's integrated into the 7 nm compute die.
  • GeoffreyA - Saturday, August 7, 2021 - link

    In agreement with what you are saying, even in my first comment. I think Cezanne shows that having the memory controller on the package gets the critical gains (vs. the old northbridge), and going onto the main die doesn't add much more.

    As for K8 and Conroe, I always felt it was notable in that C2D was able to do such damage, even without an IMC. Back when K8 was the top dog, the tech press used to make a big deal about its IMC, as if there were no other improvements besides that.
  • mode_13h - Sunday, August 8, 2021 - link

    One bad thing about moving it on-die is that this gave Intel an excuse to tie ECC memory support to the CPU, rather than just the motherboard. I had a regular Pentium 4 with ECC memory, and all it required was getting a motherboard that supported it.

    As I recall, the main reason Intel lagged in moving it on-die is that they were still flirting with RAMBUS, which eventually went pretty much nowhere. At work, we built one dual-CPU machine that required RAMBUS memory, but that was about the only time I touched the stuff.

    As for the benefits of moving it on-die, it was seen as one of the reasons Opteron was able to pull ahead of Pentium 4. Then, when Nehalem eventually did it, it was seen as one of the reasons for its dominance over Core 2.
  • GeoffreyA - Sunday, August 8, 2021 - link

    Intel has a fondness for technologies that go nowhere. RAMBUS was supposed to unlock the true power of the Pentium 4, whatever that meant. Well, the Willamette I used for a decade had plain SDRAM, not even DDR. But that was a downgrade, after my Athlon 64 3000+ gave up the ghost (cheapline PSU). That was DDR400. Incidentally, when the problems began, they were RAM related. Oh, those beeps!
