The NVIDIA GeForce RTX 2080 Super Review: Memories of the Future
by Ryan Smith on July 23, 2019 9:00 AM EST - Posted in: GPUs, GeForce, NVIDIA, Turing, GeForce RTX
Meet the GeForce RTX 2080 Super Founders Edition
Taking a closer look at the RTX 2080 Super, there aren’t too many surprises to be found. Since we’re dealing with a mid-generation kicker here, NVIDIA has opted to stick with their original RTX 2080 reference designs for the new card, rather than design wholly new boards. This has allowed them to get the new card out relatively quickly, and to be honest there’s not a whole lot NVIDIA could do here that wouldn’t be superficial. As a result, the RTX 2080 Super is more or less identical to the RTX 2080 it replaces.
GeForce RTX 20 Series Card Comparison

|             | RTX 2080 Super Founders Edition | RTX 2080 Super (Reference Specs) |
| Base Clock   | 1650MHz                  | 1650MHz          |
| Boost Clock  | 1815MHz                  | 1815MHz          |
| Memory Clock | 15.5Gbps GDDR6           | 15.5Gbps GDDR6   |
| VRAM         | 8GB                      | 8GB              |
| TDP          | 250W                     | 250W             |
| Length       | 10.5-inches              | N/A              |
| Width        | Dual Slot                | N/A              |
| Cooler Type  | Open Air (2x Axial Fans) | N/A              |
| Price        | $699                     | $699             |
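As a quick sanity check on the numbers in the table, the card's memory bandwidth can be derived from the 15.5Gbps GDDR6 data rate. Note that the 256-bit bus width used below is an assumption not stated in the table (it carries over from the regular RTX 2080), so treat this as a sketch:

```python
# Sketch: deriving memory bandwidth from the per-pin data rate.
# The 256-bit bus width is an assumption (inherited from the RTX 2080).
data_rate_gbps = 15.5   # GDDR6 per-pin data rate, Gbps
bus_width_bits = 256    # memory bus width in bits (assumed)

# Bandwidth = data rate x bus width, converted from bits to bytes
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8

print(f"{bandwidth_gb_s} GB/s")
```

This works out to 496 GB/s, which matches the figure readers cite in the comments below.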
As I noted earlier, the Founders Edition cards themselves are now purely reference cards. NVIDIA isn’t doing factory overclocks this time around – the high reference clock speeds make that process a bit harder – so the RTX 2080 Super Founders Edition is a very straightforward example of what reference-clocked RTX 2080 Super cards can deliver in terms of performance. It also means that the card no longer carries a price premium, with NVIDIA selling it at $699.
Externally then, possibly the only material change is quite literally in the materials. NVIDIA has taken the 2080 reference design and given the center segment of the shroud a reflective coating. This coating and the Super branding are the only two visually distinctive changes from the RTX 2080 reference design. For better or worse, the reflective section is every bit the fingerprint magnet you probably expect, so it’s fortunate that most people don’t handle their video cards as much as hardware reviewers do.
In terms of cooling, this means the RTX 2080 Super gets the RTX 2080’s cooler as well. At a high level this is a dual axial open air cooler, with NVIDIA sticking to this design after first introducing it last year. The open air cooler helps NVIDIA keep their load noise levels down, though idle noise levels on all of the RTX 20 series reference cards have been mediocre, and the new Super cards are no different. The fact that this reference design isn’t a blower means that the RTX 2080 Super isn’t fully self-exhausting, relying on the computer chassis itself to help move hot air away from the card. For most builders this isn’t an issue, but if you’re building a compact system or a system with limited airflow, you’ll want to make sure your system can handle the heat from a 250W video card.
Under the hood, the RTX 2080 Super inherits the RTX 2080’s heatsink design, with a large aluminum heatsink running the full length of the card. Deeper still, the heatsink is connected to the TU104 GPU with a vapor chamber, to help move heat away from the GPU more efficiently. Overall, the amount of heat that needs to be moved has increased, thanks to the higher TDP. However, as this is also the same cooler design that NVIDIA uses on the 250W RTX 2080 Ti, it's more than up to the task for a 250W RTX 2080 Super.
According to NVIDIA the PCB is the same as on the regular RTX 2080. As I need this card for further testing, I haven’t shucked it down to its PCB to take inventory of components. But as the RTX 2080 was already a "fully populated" PCB as far as VRM circuitry goes, the same will definitely be true for the RTX 2080 Super as well. I have to assume NVIDIA is just driving their VRMs a bit harder, which shouldn't be an issue given what their cooler can do. It is noteworthy, though, that as a result the card's maximum power target is just +12%, or 280W. So while the card has a good bit of TDP headroom at stock, there isn't much more that can be added to it. Factoring in pass-through power for the VirtualLink port, NVIDIA is right at the limit of what they can do over the 8-pin + 6-pin + slot power delivery configuration.
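To see why the +12% power target leaves so little room, it helps to tally the nominal PCIe power budget. The connector figures below are the standard PCIe spec limits (real boards can draw somewhat beyond them), so this is a back-of-the-envelope sketch rather than an exact board-power accounting:

```python
# Back-of-the-envelope power budget for an 8-pin + 6-pin + slot card.
# Values are nominal PCIe spec limits; actual boards have some margin.
PCIE_SLOT_W = 75    # PCIe x16 slot, nominal
SIX_PIN_W = 75      # 6-pin PCIe power connector, nominal
EIGHT_PIN_W = 150   # 8-pin PCIe power connector, nominal

total_input_w = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W nominal

tdp_w = 250
max_power_target_w = round(tdp_w * 1.12)  # +12% slider -> 280 W

virtuallink_w = 30  # USB-C pass-through power for the VirtualLink port

headroom_w = total_input_w - max_power_target_w  # only 20 W left
print(total_input_w, max_power_target_w, headroom_w)
print("VirtualLink exceeds remaining headroom:", virtuallink_w > headroom_w)
```

With only 20W of nominal headroom left above the 280W maximum power target, a VirtualLink headset drawing its full 30W would push the card past the nominal 300W budget, which is why the power target can't be raised any further.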
Finally, for display I/O, the card gets the continuing NVIDIA high-end standard of 3x DisplayPort 1.4, 1x HDMI 2.0b, and 1x VirtualLink port (DP video + USB data + 30W USB power).
Comments
willis936 - Tuesday, July 23, 2019 - link
I think there is an error on the first page comparison table: 2080 Ti memory clock. Also first I guess.
willis936 - Tuesday, July 23, 2019 - link
Also the last paragraph of the conclusion should have "barring" rather than "baring".
Ryan Smith - Tuesday, July 23, 2019 - link
This is what happens when you get overeager with copying & pasting... Thanks!
extide - Tuesday, July 23, 2019 - link
and you say ending the bundle when I think you mean extending
Ryan Smith - Tuesday, July 23, 2019 - link
The fault with that one lies solely with Word!
boozed - Tuesday, July 23, 2019 - link
You guys really need a "corrections" link so the comments section isn't full of people pointing out typos and malapropisms (I'm guilty of the latter myself, though). Cheers for the review
RSAUser - Wednesday, July 24, 2019 - link
They rather need a Grammarly subscription.
willis936 - Tuesday, July 23, 2019 - link
496 GB/s for $700. I'm curious to see a retrospective of GPU memory bandwidth vs. cost over the last ten years. It feels like it's really sat still compared to transistor count. Are GPU caches getting bigger? Even then there is little that can be done about the main memory bandwidth requirements of SIMD workloads. We have faster interconnects, yet the buses are staying the same or getting smaller.
Stuka87 - Tuesday, July 23, 2019 - link
Bandwidth matters less now than it did many years ago thanks to different types of compression being used. You can fit more data into the same amount of bandwidth now than you could years ago.
willis936 - Tuesday, July 23, 2019 - link
Lossy compression isn't free. At some point the user will say "this looks bad". If that wasn't the case then why not compress every 64x64 tile to 1 KB? It's dependent on the data's entropy, and many textures are high entropy. It's nice to have tuneable control over a soft cap, but it isn't a magic bullet that makes things better.
Lossless compression would be bad in this application. No one should make a system that imposes a maximum allowed entropy on artists.
Memory bandwidth always has been, and remains, the bottleneck of SIMD systems.