The Unlikely Legacy: AMD GFX8 Enters Its Fifth Year

With the release of Polaris 30 and the Radeon RX 590, something very interesting is taking place: AMD’s “GFX8” core graphics architecture is turning four. And in the process, AMD is setting up GFX8 to become what’s likely the single longest-lived mainstream graphics architecture we've ever seen.

AMD’s first GFX8 GPU rolled out back in the summer of 2014 with the release of the Tonga GPU and the Radeon R9 285. And while somewhat unassuming and definitely underpromoted by AMD at the time – at a high level it was little more than a modernized Tahiti, AMD’s first GCN GPU – the GFX8 graphics architecture has since become an unlikely staple of AMD’s GPU efforts. The cornerstone of both GCN 3 and GCN 4, it has been with us in the Radeon 200, 300, 400, and 500 series, and now it is clearly set up to last well into 2019 (if not beyond) as part of Polaris 30 and the Radeon RX 590.

At this point, before I go too far down the rabbit hole, I should probably stop and clarify what GFX8 is, as I’m sure more than a few of you are asking “aren’t AMD’s GPU architectures named Graphics Core Next?” You would of course be correct, but this is where there’s a degree of architectural nuance that we often don’t get into, even in most AnandTech articles. And at this juncture, that nuance is very helpful in expressing my amazement that GFX8, of all architectures, is primed to become the longest-lived graphics architecture.

Though we don’t hear about it from AMD as much these days as we did back in 2015/2016, a core part of AMD’s GPU strategy remains the idea of architectural blocks: the company separately develops the display controller, the memory controller, the core graphics processor, the geometry processor, etc., such that these parts can be mixed and matched to a degree. This allows AMD’s semi-custom arm to offer a variety of options to customers – something successfully leveraged for the likes of the current-gen game consoles and the Intel-exclusive “Vega M” GPU – while also giving AMD the ability to upgrade its GPU designs in a piecemeal fashion.

This is something we saw in spades with the launch of the Polaris GPU family, where AMD even put out a very high-level slide listing the major parts of the GPU and the various bits they changed. In fact, because it was so high level, that slide ended up overstating things in some cases; Polaris had a whole lot that was new to it, but it also borrowed a lot from earlier AMD architectures.

Each of AMD’s blocks has its own version numbering system, outside of the public eye and separate from how the company numbers the iterations of its Graphics Core Next architecture. If you dig through AMD’s developer tools and Linux kernel documentation long enough, you’ll find all the parts, but this isn’t something that is meant to matter to consumers (or even most enthusiasts). Rather, AMD periodically bundles all the different blocks together and packages them as an iteration of Graphics Core Next, with any given version of GCN essentially setting a minimum standard for what version of a given block can be used.

AMD GFX IP Generations

GFX Version   GCN Version            Major Video Cards        Year Introduced
GFX6          GCN 1                  Radeon HD 7970           2011
GFX7          GCN 2                  Radeon R9 290X           2013
GFX8          GCN 3,                 Radeon R9 285,           2014
              GCN 4 (Polaris)        Radeon R9 Fury X,
                                     Radeon RX 480/580/590
GFX9          GCN 5 (Vega)           Radeon RX Vega 64,       2017
                                     Radeon Instinct MI60

The core of any GPU is of course its graphics and compute core – you won’t see AMD launch a new graphics core without also revving GCN to match – and this is where we get to GFX8. As you can probably guess from the name, GFX8 is the 8th iteration of AMD’s core graphics architecture. Meanwhile AMD also has GFX9, which is the heart of Vega. All of which is ultimately a long-winded way of saying that AMD has multiple core graphics architectures in flight at any given time, and that consequently AMD has tended to waver on how much it’s willing to promote a new graphics core, as the company doesn’t want to undermine its existing products.

Anyhow, let’s talk about GFX8. Introduced in 2014, GFX8 is an oddity in that, in the consumer space, it’s a bit of a footnote. GFX7 already supported Direct3D feature level 12_0, so GFX8 didn’t bring anything new to the table in that regard. Instead, the biggest updates were on the compute side of matters: GFX8 introduced new compute instructions and – while it wasn’t full-on Rapid Packed Math – support for 16-bit data types and associated instructions. That it was such a small update on the graphics side is why AMD was able to slip it into the Radeon 200 product stack late in its life, and then easily carry over the Tonga GPU into the 300 series as well. Similarly, GFX8 became the heart of the late-28nm Fiji GPU, AMD’s first High Bandwidth Memory product.
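
To put those compute-side additions in concrete terms, below is a minimal HIP sketch of the sort of 16-bit math GFX8 enabled (HIP being AMD’s CUDA-style C++ API; the kernel itself is my own illustration and not anything AMD shipped, though the half-precision intrinsics are standard HIP). The key nuance is in the comment: GFX8 executes these 16-bit operations at the same rate as their 32-bit equivalents, whereas Vega’s Rapid Packed Math can pack two per instruction.

    #include <hip/hip_runtime.h>
    #include <hip/hip_fp16.h>

    // y[i] = a * x[i] + y[i], computed on 16-bit floats. GFX8 introduced the
    // native FP16 data types and instructions this compiles down to, but it
    // runs them at FP32 rate; GFX9 (Vega) can pack two per instruction.
    __global__ void axpy_fp16(int n, __half a, const __half* x, __half* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            y[i] = __hadd(__hmul(a, x[i]), y[i]);
        }
    }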


GFX8/GCN 3 Recap

Since 2014, GFX8 has gone on to live a long and productive life – at this point a much longer life than I was ever expecting. If anything, I was expecting GFX8 to be short-lived; AMD was clearly building up to what would eventually become Vega. Instead, GFX8 was carried into Polaris (GCN 4) – something that AMD didn’t make very obvious at the time – and has been a critical component of all Polaris GPUs since then, including, of course, the new Polaris 30.

The end result is that if you wrote low-level shader code against Tonga’s ISA back in 2014, today you can run it unchanged on Polaris 30. The use of GFX8 means that the entire span of products is ISA-compatible, which is a remarkable development, since GPU ISAs are prone to changing every couple of years. GFX6 and GFX7 didn’t have this kind of shelf life, and it doesn’t look like GFX9 will have quite the same lifetime either. Even within NVIDIA’s stack, Maxwell would have needed to go another couple of years to keep pace.
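
For anyone who wants to see this family grouping firsthand, it’s exposed through AMD’s compute stack. Here’s a minimal sketch using HIP’s device-properties query on a Linux/ROCm box (assuming a ROCm install; the per-chip gfx levels in the comment are drawn from AMD’s compiler documentation, so treat them as illustrative):

    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        hipDeviceProp_t props;
        if (hipGetDeviceProperties(&props, 0) != hipSuccess) {
            std::fprintf(stderr, "no HIP-capable device found\n");
            return 1;
        }
        // GFX8-family parts report 80x here: Tonga as gfx802, Fiji and the
        // Polaris chips as gfx803 -- four years of GPUs in one ISA family.
        std::printf("GCN arch: gfx%d\n", props.gcnArch);
        return 0;
    }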

Consequently, that GFX8 has lived for so long is a remarkable testament to AMD’s GPU design team; they built a solid, if unassuming architecture, and it has stood the test of time. Technically it’s now on its second die shrink, having gone from 28nm to 14nm to 12nm, and it has even shown up in the oddest of places, like the not-quite-Vega “Vega M” GPU. If you had asked me back in 2014 or 2015 what architecture I thought would live the longest, GFX8 would not have been my answer.

The flip side, however, is that GFX8’s longevity underscores AMD’s technical situation. This is a D3D feature level 12_0 part – meaning it lacks 12_1 features like conservative rasterization and raster ordered views. That was fine back in 2014, but NVIDIA has been shipping 12_1 hardware since late 2014, and AMD itself since 2017. So from one perspective, a brand-new Radeon RX 590 in 2018 is still lacking graphics features introduced by GPUs four years ago.

Ultimately, however, this is not so much a consumer concern as a developer concern. The launch of a new feature level 12_0 GPU and card series – and one I expect will sell moderately well – means that the clock has been pushed back on developers being able to use 12_1 features as a baseline in their games. RX 590 cards are going to be around for a while even after their day is done, and developers will need to include support for them. All of which is going to make things very interesting once we reach the next generation of consoles, and multi-platform simplicity butts heads with PC compatibility.
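
What “include support for them” looks like in practice is a runtime capability check followed by a fallback path. A minimal D3D12 sketch (the feature-support query and the option fields are standard D3D12 API; the wrapper function is my own illustration):

    #include <d3d12.h>

    // Returns true only on feature level 12_1-class hardware, i.e. hardware
    // with both raster ordered views and conservative rasterization. GFX8
    // parts like the RX 590 return false here, forcing a 12_0 fallback path.
    bool SupportsFL12_1Features(ID3D12Device* device) {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                               &opts, sizeof(opts)))) {
            return false;
        }
        return opts.ROVsSupported &&
               opts.ConservativeRasterizationTier !=
                   D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    }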

Still, this only goes to show that the only thing predictable about the GPU market is how unpredictable it is. We’re in uncharted territory here, and strangely enough it’s a core graphics architecture from 2014 that’s blazing the trail.

Comments

  • Diji1 - Thursday, November 15, 2018

    You know what they say about a fool and his money.

    Almost every gamer on Steam is using a GTX 1060 class GPU which is less powerful. Every single gamer on Steam made a loss on their "investment".
  • gopher1369 - Friday, November 16, 2018

    " I believe that the GTX 1070/Vega 56 ...should be considered as the minimum investment for a gamer in 2019"

    Meanwhile I'll continue to enjoy the vast majority of my games quite happily in 1080p / 60FPS on my perfectly good 1050Ti.
  • AMD#1 - Tuesday, November 20, 2018

    No, those prices are still too high. Vega is that expensive due to HBM, and the 1070 because NVIDIA is asking too damn much. The 1070/V56 are high end; compared to next gen they will be mainstream. Navi will hit early 2019, my guess is prices will get lower
  • del42sa - Thursday, November 15, 2018

    pathetic
  • Dragonstongue - Thursday, November 15, 2018

    Agree, 12nm might have helped them hit higher clocks, but it certainly has not helped much at all in regards to power consumption or temps IMO, all for the "low price" of an additional $50+ when it hits the shelf (knowing the AIBs, it likely will not be $299 but will most likely be $339 (~$445-448 CAD)).

    For me, the 570 seems "the better pick" for an overall capable 1080p-level card, or 1440p at reduced settings. At least the power use is not terribad, and pricing is much more "palatable" on the shelf compared to the 580s, and likely very much so compared to this 590 and the V56, which is over $600 where I can get them up here in the great white north.
  • WithoutWeakness - Thursday, November 15, 2018

    There is no "reference" 590 card. They are all AIB cards. The XFX card featured in the article is on sale on Amazon right now for the $279 MSRP. Sure, there will be triple-fan OC cards for $300+ and some RGB LED monstrosity models pushing closer to $400 but this is available today for the advertised price.

    At the same time, go buy a 570 or 580 (or even a used 480 8GB if you can find one that wasn't mined on) and OC the thing if you want. Nearly the same card and keep money in your pocket.
  • dazz112 - Thursday, November 15, 2018

    Seems like there's no reason to buy the gtx1060 anymore (unless it's a lot cheaper)
  • ToTTenTranz - Thursday, November 15, 2018

    Unless you're stuck with a tiny form factor or a 300W PSU.
  • silverblue - Thursday, November 15, 2018

    Except for the significantly better performance per watt, and the fact that you can put that in an SFF case. There are obvious benefits to Polaris 30, such as FreeSync compatibility and the larger frame buffer, but if you require a new PSU when you didn't with the 1060, that's an extra cost.

    With the power figures on show here, I'm immediately wondering about the benefits of undervolting, as well as where the actual frequency sweet spot is. 12nm hasn't exactly been a notable success story for AMD, and with 7nm on its way, I'm not sure what this experiment was supposed to show.
  • Cooe - Thursday, November 15, 2018

    This is a completely different 12nm process than what AMD used for Zen+ (TSMC vs GloFo; Nate's article is wrong), so any equivalencies between them are actually largely just coincidence. Though I SERIOUSLY don't know in what world you wouldn't describe Zen+/Ryzen 2nd Gen as a success story.
