Imagination's PowerVR Rogue Architecture Explored
by Ryan Smith on February 24, 2014 3:00 AM EST - Posted in
- GPUs
- Imagination Technologies
- PowerVR
- PowerVR Series6
- SoCs
When it comes to our coverage of SoCs, one aspect we’ve been trying to improve on for some time now is our coverage and understanding of the GPU portion of those SoCs. In the PC space we’re fortunate that there are just three major players – Intel, NVIDIA, and AMD – and that all three of them have over the years learned how to become very open and forthcoming about their GPU architectures. As a result we’ve had a level of access that has allowed us to better understand PC GPUs in a way that in earlier times simply wasn’t possible.
In the SoC space however we haven’t been so fortunate. Our understanding of most SoC GPU architectures has not been nearly as deep due to the fact that SoC GPU designers have been less willing to come forward with public details about their architectures and how those architectures have evolved over the years. And this has been for what’s arguably a good reason – unlike the PC GPU space, where only 2 of the 3 players compete in either the iGPU or dGPU markets, in the SoC GPU space there are no fewer than 7 players, all of whom are competing in one manner or another: NVIDIA, Imagination Technologies, Intel, ARM, Qualcomm, Broadcom, and Vivante.
Some of these players use their designs internally while others license their designs as IP for inclusion in third-party SoCs, but all of them compete in a market that is younger and far more crowded than the PC GPU space. All the while, SoC GPU development still happens at a relatively quick pace (by GPU standards), leading to similarly quick turnarounds between GPU generations, as GPU complexity has not yet stretched development into a 3-4 year process. Because SoC GPUs remain a young and highly competitive market, it's a foregone conclusion that a period of consolidation still lies ahead of us – not unlike what has happened to SoC integrators such as TI – which gives SoC GPU players all the more reason to be conservative about providing public details of their architectures.
With that said, over the years we have made some progress in getting access to these technical details, due in large part to the existing openness policies of NVIDIA and Intel. But since those two are among the smaller players in the mobile GPU space, we are still left with few details on the architectures behind the majority of SoC GPUs. We still want more.
This brings us to today. In what should prove to be an extremely eventful and important day for our coverage and understanding of SoC GPUs, we’d like to welcome Imagination Technologies to the “open architecture” table. Imagination has chosen to share more details about the inner workings of their Rogue Series 6 and Series 6XT architectures, thereby giving us our first in-depth look at an architecture that is powering a number of high-end products (not the least of which is all of Apple’s current-gen products) and is descended from some of the most widely used SoC GPU designs of all time.
Now Imagination is not going to be sharing everything with us today. The bulk of the details Imagination is making available relate to their Unified Shading Cluster (USC) shading block, the heart of the Series 6/6XT GPUs. They aren’t discussing other aspects of their designs such as their geometry processors, cache structure, or Tile Based Deferred Rendering system – the company’s secret sauce and most potent weapon for SoC efficiency – but hopefully one day we’ll get there. In the meantime we will have our hands full just taking our first look at the Series 6/6XT USCs.
Finally, before we begin we’d like to thank Imagination for giving us this opportunity to evaluate their architecture in such great detail. We’ve been pushing for this for quite some time, so we’re pleased that this is coming to pass.
Imagination is publishing a pair of blogs and pseudo whitepapers on their website today: Graphics cores: trying to compare apples to apples, and PowerVR GX6650: redefining performance in mobile with 192 cores. Along with this they have also been answering many of our deepest technical questions, so we should have a good handle on the Rogue USC. So with that in mind, let’s dive in.
95 Comments
MrPoletski - Sunday, March 9, 2014 - link
Factor of 4X, where is the edit button?
iwod - Monday, February 24, 2014 - link
So that is a pretty decent GPU, even from a desktop perspective. But why don't we see this being used in laptops or desktops? It doesn't seem hard to scale the Imagination PVR GX6650 to NVIDIA GTX 650 level.
StevoLincolnite - Monday, February 24, 2014 - link
Imagination used to build graphics processors for the desktop, but they were unable to compete with ATI, nVidia, Matrox, S3, 3dfx, NEC, etc. Instead they shifted their focus to a niche market, the low-powered market – if only the other players had known how big that market would eventually grow.
Intel has also used PowerVR graphics chips for its IGPs in the past, like the GMA 3600 in the Intel Atom.
In general they are far from ideal; they leave much to be desired in the drivers department.
One of the earlier PowerVR chips in the Intel Atom still doesn't have its decoder functioning.
Krysto - Monday, February 24, 2014 - link
Imagination is losing the war for the exact same reason they lost the last time – their tile-based rendering, which was only meant for low-end "embedded" chips. But the chips are becoming "desktop class" these days, and need to work on much more advanced content at super high resolutions – and that's why Imagination's tile-based rendering will fail. Tile-based rendering is meant for simple operations, and that's where it shows its greater efficiency. The more complex those operations (the games) get, the harder it will be for the PowerVR architecture to keep up.
It used to be that their competitors couldn't even touch them. Now every single one will match or exceed their performance and features, and I imagine next year's 16nm FinFET mobile Maxwell will leave it in the dust (it wouldn't surprise me to see higher performance than the Xbox One in it, or at least 1 TF).
michael2k - Monday, February 24, 2014 - link
You mean Imagination is still winning the war, because everyone else only just realized there was a market in SoCs? Intel is only barely in the game, AMD isn't, and Mali and Adreno are the only real competitors in terms of unit share. Unlike the PC GPU market, this market is tied to the success of your SoC, and PVR has a strong ally in Apple unless NVIDIA can convince Apple to license some of their GPU tech.
Apple ships something like 1 in 5 smartphones, 1 in 2 tablets, etc. They have a huge presence in the market right now. Qualcomm definitely ships more SoCs, but their GPUs don't all sit in the high-end performance space.
ryszu - Monday, February 24, 2014 - link
Our TBDR front-end is absolutely not designed just for simple operations. Pure FUD; it scales very well.
Scali - Monday, February 24, 2014 - link
How do you figure that TBDR is only for simple operations? It actually excels at more complex pixel operations, because it defers most of the shading and texturing until after visibility has been solved.
khanov - Monday, February 24, 2014 - link
Pure nonsense. Go back to the kiddy table.
Tile-Based Deferred Rendering eliminates overdraw, and the performance gains it achieves INCREASE with scene complexity.
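To put that argument in concrete terms, here's a minimal toy sketch of per-pixel shading cost under an immediate-mode renderer versus a TBDR-style visibility-first pass. Everything here is illustrative – the structures and function names are our own, not Imagination's actual pipeline, and real GPUs complicate the picture with early-Z and similar optimizations:

```cpp
// Toy model of per-pixel shading cost: immediate-mode vs. TBDR.
// Purely illustrative; not a model of any real GPU's pipeline.
#include <limits>
#include <vector>

struct Fragment {
    float depth;        // smaller = closer to the camera
    int   primitiveId;  // which triangle produced this fragment
};

// Immediate mode (worst case, no early-Z): every fragment is shaded
// as it arrives, then depth-tested. With N overlapping layers we pay
// up to N shading operations for a single visible pixel.
int immediateModeShadeOps(const std::vector<Fragment>& frags) {
    int shadeOps = 0;
    float nearest = std::numeric_limits<float>::max();
    for (const Fragment& f : frags) {
        ++shadeOps;                                // shade first...
        if (f.depth < nearest) nearest = f.depth;  // ...depth-test after
    }
    return shadeOps;
}

// TBDR: hidden-surface removal runs across the whole tile first, so
// only the one visible fragment per pixel is ever shaded. Shading cost
// stays flat regardless of depth complexity, which is why the relative
// win grows as scenes get more complex.
int tbdrShadeOps(const std::vector<Fragment>& frags) {
    bool covered = false;
    float nearest = std::numeric_limits<float>::max();
    for (const Fragment& f : frags) {  // visibility pass, no shading yet
        if (f.depth < nearest) {
            nearest = f.depth;
            covered = true;
        }
    }
    return covered ? 1 : 0;            // at most one shade op per pixel
}
```

With ten overlapping fragments, the immediate-mode path pays up to ten shade operations per pixel (fewer with early-Z and favorable front-to-back submission order), while the deferred path always pays one, which is the gap khanov is describing.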
phoenix_rizzen - Friday, February 28, 2014 - link
Haven't you been saying the same thing for the past two years, with the release of Tegra 3 and Tegra 4? And nVidia is still way behind.
Series 6 is out now in products you can actually buy. Tegra K1 isn't.
Series 6 XT will be out in the next year-ish. Tegra K1 will probably be out by then.
The follow-up to 6XT will probably be out in two years. Who knows when the next Tegra after K1 will actually be out?
Until there are actual, physical devices out there with an nVidia chipset that better the other actual, physical devices out there, you're just blowing smoke.
MrPoletski - Sunday, March 9, 2014 - link
You are completely wrong!