Small Performance Improvements - Uncertain Projections

Summing up all the different microarchitectural advancements, Arm presents us with the performance improvements we can expect of the Mali-G78:

As for the asynchronous top-level feature, which lets the GPU run its geometry front-end at a higher clock than the shader cores, Arm projects a roughly 8% boost in benchmarks, with a larger ~14% boost in some game titles.

These improvements are quite small, but from a SoC vendor's perspective they shouldn't be too complicated to implement, as they would only cost an additional PLL or just a frequency divider to achieve the extra performance.
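
The gap between the ~8% benchmark figure and the higher clock of the top level can be understood with a simple Amdahl-style model: only the geometry-bound portion of frame time benefits from the faster clock. The sketch below is my own back-of-envelope illustration, not Arm's model; the 30% geometry-bound fraction and 40% clock ratio are invented numbers chosen to land near the quoted figures.

```python
# Amdahl-style estimate: only the top-level (geometry/tiler) fraction of
# frame time speeds up when that domain runs at a higher asynchronous clock.
def async_speedup(top_level_fraction: float, clock_ratio: float) -> float:
    """Whole-frame speedup when `top_level_fraction` of the frame time is
    accelerated by `clock_ratio` (e.g. 1.40 = 40% higher clock)."""
    return 1.0 / ((1.0 - top_level_fraction) + top_level_fraction / clock_ratio)

# Hypothetical: ~30% of frame time geometry-bound, top level clocked 40% higher.
print(f"{async_speedup(0.30, 1.40):.3f}")  # prints 1.094 - roughly the quoted ~8%
```

This also explains why some game titles see a larger ~14% gain: titles with heavier geometry workloads have a bigger accelerable fraction.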

The generational power efficiency improvements of the G78 over the G77 in a similar configuration are 10%, likely attributed to the FMA and cache improvements of the core. It’s small, but we take what we can get.

From an energy efficiency perspective, the async feature is proclaimed to bring around 6-13% gains depending on the workload. This is a more complex figure in my view. The main problem is that to gain the most benefit from the asynchronous frequencies, the SoC vendor needs to actually go ahead and employ a second voltage rail for the GPU. The efficiency benefit here is small enough that it raises the question of whether it wouldn't just be cheaper to add a few extra cores and clock them lower, rather than incurring the cost of the extra PMIC rail, inductors and capacitors. It's an easy efficiency gain for flagship SoCs, but I'm really wondering what vendors will be deploying in the mid-range and lower.
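
The "more cores, clocked lower" trade-off follows from the usual first-order dynamic power relation, P ∝ N·C·V²·f, where voltage drops along with frequency. The sketch below uses entirely invented core counts, frequencies and voltages (not Arm's data) just to show the shape of the argument.

```python
# First-order CMOS dynamic power model: P = cores * C * V^2 * f.
def dynamic_power(cores: int, freq_ghz: float, volts: float, c: float = 1.0) -> float:
    return cores * c * volts ** 2 * freq_ghz

def throughput(cores: int, freq_ghz: float) -> float:
    # Optimistically assume perfect scaling across shader cores.
    return cores * freq_ghz

# Hypothetical iso-throughput comparison:
# 7 cores at 850 MHz / 0.80 V  vs  9 cores at ~661 MHz / 0.70 V.
narrow = dynamic_power(7, 0.850, 0.80)
wide = dynamic_power(9, 0.850 * 7 / 9, 0.70)
assert throughput(7, 0.850) == throughput(9, 0.850 * 7 / 9)
print(narrow > wide)  # the wider, slower config draws less dynamic power
```

Of course the wider configuration costs more silicon area and leakage, which is exactly the cost-versus-BoM trade-off vendors have to weigh against the extra PMIC rail.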

Mali-G68 GPU: It's the same

Alongside the Mali-G78, Arm is today also announcing the new Mali-G68 GPU:

You might be wondering why I'm including this as a footnote at the end of the article rather than covering it in more detail. The truth is that this is the exact same IP as the Mali-G78, with the only difference being that this GPU configuration only scales up to 6 cores. In essence, if the microarchitecture is implemented with up to 6 cores it's branded as a G68, and if it uses 7 or more cores it's branded as a G78.
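
The branding rule really is that simple. As a trivial restatement (illustrative only; `brand_for` is my own naming, not anything from Arm, and the 24-core ceiling is the G78's advertised maximum configuration):

```python
# The G68/G78 split as described above: same IP, branded by core count.
def brand_for(core_count: int) -> str:
    if not 1 <= core_count <= 24:  # the G78 scales up to 24 cores
        raise ValueError("core count out of range for this IP")
    return "Mali-G68" if core_count <= 6 else "Mali-G78"

assert brand_for(6) == "Mali-G68"
assert brand_for(7) == "Mali-G78"
```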

Arm had already used this marketing approach with the G57, which ended up being the same IP as the G77, leading to some confusion around the MediaTek Dimensity 800 SoC that was announced earlier this year. We had called that GPU a derivative of the G77 until MediaTek reached out to us to point out that it's actually the same GPU.

It's pretty disappointing to see Arm engage in such marketing exercises, as they can be technically misleading. We asked what their rationale was, and they explained that customers actually demand it as a way to better differentiate their products. It's a somewhat credible argument, but on the other hand we've had MediaTek outright want to point out this misleading branding to us, so it seems not everybody is on the same page on the matter.

Arm does say they envision that future iterations in this series might see real microarchitectural differentiation compared to the bigger implementations. In that scenario, the branding would at least make more sense.

Mali-G78: Meagre improvements, or just bad vendor implementations?

If you haven't already caught on by now, I'm feeling quite pessimistic about the Mali-G78. First of all, it's just not that big of a generational upgrade over the Mali-G77, even by Arm's own standards and advertised figures.

You could forgive the smaller upgrades if we had started from an excellent performance baseline. The Mali-G77 promised a whole ton of improvements in both performance and efficiency, yet the actual results we've seen out of the Exynos 990 and the MediaTek Dimensity 1000 were anything but stellar. On one hand we had an SoC with a seemingly bad implementation on a seemingly immature process node, and on the other hand we had very mid-range performance even though it was an MP9 GPU configuration. The truth is, we still don't know whether the Mali-G77 is a good GPU, as we simply haven't seen a good implementation out there. And if we don't know whether the G77 is good, it's also impossible to project whether the G78 will be.

I see Arm facing the exact same problem in the GPU space that they faced in the CPU space until the just-announced Cortex-X1: they're stuck designing a scalable GPU that fits all target markets and has to please every customer design point. Technically, that's never the best option, as you end up with something that always carries compromises.

As for potential implementers of the G78, amongst the biggest vendors HiSilicon is likely to be the first adopter, if they can manage to bring the new Kirin chipsets to market amidst the current political situation. Whether Samsung and AMD will manage to bring out an RDNA-based mobile Exynos next year is also still unclear, though I'm sure that's what they're striving for. The biggest issue on the competitive landscape is Apple: even if the G77 had managed to live up to its projections, the G78's improvements are certainly too meagre to catch up to the Apple GPUs. We're also supposed to see the first Imagination A-Series GPU SoC designs later this year, which is a whole other wildcard. That's a very tough competitive landscape for Mali; let's hope the G78 will see more positive success in the future.

36 Comments

  • HardwareDufus - Thursday, May 28, 2020 - link

    Thank you for taking the time to teach the author by example. Concise vocabulary and brief statements are a must in tech journalism as the subject matter at hand is quite complex.

    While your intent was good, you might have refrained from using the phrase 'like pulling teeth'. A phrase like that will immediately put people on the defensive, as is evident by several responses to your post.

    You write very well and your constructive criticism and suggestions are helpful. Strive to serve it in a manner where the recipient is more likely to accept. The best way to offer constructive criticism is the 3C rule: Commend, Counsel, Commend.
  • Oldiewan - Saturday, September 11, 2021 - link

    Anal much?
  • Oldiewan - Saturday, September 11, 2021 - link

    Reading that was like pulling teeth.
  • brucethemoose - Tuesday, May 26, 2020 - link

    TBH, does ARM really need to design a top-end GPU?

    Apple has definitely-not-PowerVR, and Imagination is hungry for any customer they can get. Samsung's going AMD. Broadcom, Qcom, Nvidia and HiSilicon are all firmly in-house.

    That leaves MediaTek and small fish, which is not a very demanding market.
  • SarahKerrigan - Tuesday, May 26, 2020 - link

    HiSilicon uses Mali; they aren't in-house.
  • brucethemoose - Tuesday, May 26, 2020 - link

    Ah, yeah, don't know what I was thinking. And HiSilicon is a big fish.

    Still, there's more demand from big customers for licensed CPUs than GPUs.
  • dotjaz - Tuesday, May 26, 2020 - link

    True for now, but there's no guarantee they won't switch to the PowerVR B-Series soon, given the current political environment cutting them off from all other parts of Arm (bar the perpetual architecture license).
    There's really no economy of scale left once they inevitably switch to an in-house CPU core in the next year or two.
  • s.yu - Sunday, May 31, 2020 - link

    In-house based on what? RISC-V? Know that if they're banned, they can't license any further updates to ARMv8 either.
  • Kamen Rider Blade - Tuesday, May 26, 2020 - link

    Apple has kicked PowerVR to the curb and gone in-house for their GPUs.
    Samsung said screw it, AMD, take over the GPU side.
    Qualcomm has their own Adreno brand that they bought from AMD's Imageon line.
    Nvidia is obviously using their own GPUs.
    MediaTek switches between Mali and PowerVR depending on product line / generation.
    HiSilicon has generally stuck with Mali.
    Broadcom has their own VideoCore line of GPUs.
  • regsEx - Tuesday, May 26, 2020 - link

    Samsung still only uses Mali. AMD is just a rumor so far.
