Arm Cortex A520: Same 2023 Core Optimized For 3 nm

The Arm Cortex-A520 isn't architecturally different from last year's TCS23 introduction; its microarchitecture is unchanged. Instead, it has been optimized for the latest 3 nm process technology, improving its efficiency and performance. This core, part of the second-generation Armv9.2 lineup, delivers a little additional compute for everyday tasks in mobile and embedded devices while maintaining the high energy efficiency and low power consumption expected from Arm's smallest core.

These process-level optimizations ensure that the Cortex-A520 can exploit the full potential of the 3 nm node, achieving higher transistor density and better overall performance without any significant changes to its fundamental design.

The new Cortex-A520 delivers a significant 15% energy saving compared to the previous Cortex-A520 (TCS23). This improvement matters most for devices where battery life is critical, such as smartphones and Internet of Things (IoT) devices. By trimming power consumption, the Cortex-A520 offers the same performance while using less energy.

The graph on the slide above illustrates the power-versus-performance curves of the new Cortex-A520 compared to its predecessor, the Cortex-A55, and the previous Cortex-A520 (TCS23). The latest Cortex-A520, tuned specifically for the 3 nm node, notably improves power efficiency across the performance range, consuming significantly less power at any given performance point. This underscores Arm's approach for 2024's core cluster: deliver performance gains where possible, while refining the smallest of the three Cortex cores primarily from a power standpoint.
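
As a rough back-of-the-envelope illustration of what an iso-performance energy saving means in practice, here is a small Python sketch; the power figure and workload duration are hypothetical values chosen for illustration, not Arm's numbers.

```python
# Hypothetical little-core power at a fixed performance point (mW).
old_core_power_mw = 100.0                       # assumed value for the TCS23 Cortex-A520
new_core_power_mw = old_core_power_mw * 0.85    # ~15% lower energy at the same performance

# For a background task that keeps the little cores busy for one hour,
# energy consumed is simply power multiplied by time.
task_hours = 1.0
old_energy_mwh = old_core_power_mw * task_hours
new_energy_mwh = new_core_power_mw * task_hours

print(f"old core: {old_energy_mwh:.1f} mWh, new core: {new_energy_mwh:.1f} mWh")
print(f"energy saved: {100 * (1 - new_energy_mwh / old_energy_mwh):.0f}%")   # ~15%
```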

 


Comments


  • ET - Thursday, May 30, 2024 - link

    I'm not sure why you're attributing this to insecurity and desperation when it's all about money. I can understand why end users would prefer companies to invest in things they feel are more relevant, but jumping on bandwagons (and driving them forward) is exactly what companies that want to keep their market healthy should do. Reply
  • GeoffreyA - Thursday, May 30, 2024 - link

    Agreed; it is all about money. Generally, it is not to the benefit of the consumer or the world. An AI PC might be good for Jensen, Pat, Satya, Tim, Lisa, and co. but does not help most people. Reply
  • mode_13h - Thursday, May 30, 2024 - link

    Ooh, you just got "named!"

    Seriously, your comment does indeed sound snarky and your reply sounds defensive and even a bit insecure. I don't think name99 was suggesting that you should want to be a genius, but rather pointing out that it pays to think beyond a single track.

    > when one see Microsoft and Intel making an "AI PC," or AMD calling their
    > CPU "Ryzen AI," and so on, it is little about true AI and more about money,
    > checklists, and the bandwagon.

    I'm reminded of when 3D-capable GPUs went so mainstream you could scarcely buy a PC without one. Yet, the killer app for the average PC user had yet to be invented. To some extent, the hardware needs to lead the way before mainstream apps can fully exploit the technology, because software companies aren't going to invest the time & effort in making features & functionality that only a tiny number of users can take advantage of.

    Also, you say you want AI models to use little power, but progress happens incrementally and having hardware assist indeed improves the efficiency of inferencing on models that aren't all as big or demanding as LLMs.
    Reply
  • GeoffreyA - Thursday, May 30, 2024 - link

    Fair enough. I apologise to everyone for the negative connotations in my comment and replies, but the companies are fair game and we ought to poke fun at them. I'm fed up with the lies, marketing, double standards, doublespeak, and nonsense. These companies are only after money, and we are the fools at the end of the day. The last few years it was cloud; now, it's AI. What's next? Reply
  • GeoffreyA - Thursday, May 30, 2024 - link

    As I've said, both here and in several comments elsewhere, AI and LLMs are of immense interest to me. I believe they're the Stone Age version of the stuff in our brains. What I'm trying to criticise is not LLMs or the technology, but the marketing ripoff that is bombarding us everywhere, this so-called AI PC, Copilot PC, or whatever Apple calls theirs. It's laughable the way they're plastering the term AI all over products. Reply
  • SydneyBlue120d - Thursday, May 30, 2024 - link

    Can we expect Samsung's S25 3nm Exynos 2500 SoC to be based on these cores? Reply
  • eastcoast_pete - Sunday, June 2, 2024 - link

    After their rather poor showing with their Mongoose custom cores, I'd be very surprised if Samsung doesn't stick with ARM's designs for the CPU side of the Exynos 2500. What's (IMHO) really interesting right now is what Samsung will use for the GPU in the 2500. Rumors abound, many saying that they'll walk away from AMD's RDNA and use an in-house designed GPU, or come back to the ARM Mali mothership. The latter would put them in an awkward position, as Mediatek is likely the first out of the gate with their new 9400 featuring both the newest ARM cores and whatever the new version of Immortalis will be called. And Mediatek's Dimensity 9400 is (will be?) fabbed on TSMC's newest 3 nm node, so Samsung will want to have maximum differentiation here. Reply
  • James5mith - Thursday, May 30, 2024 - link

    "The enhanced AI capabilities ensure these applications run efficiently and effectively, delivering faster and more accurate results."

    ARM hardware will magically fix AI algorithms to be better than they otherwise would be? Really?!?
    Reply
  • mode_13h - Thursday, May 30, 2024 - link

    They're probably referring to the fact that it can deliver good inferencing performance without having to resort to the sorts of extreme quantization behind some companies' TOPS claims. Quantization often comes at the expense of accuracy, especially if it's done after training, rather than the model being designed and trained to utilize some amount of quantized weights (a minimal sketch of this trade-off follows the comment thread). Reply
  • James5mith - Thursday, May 30, 2024 - link

    Also, amazing increases in performance per watt don't mean less power draw. If it draws 3x the power to do 4x the work, then it has increased efficiency 1.33x. But it's still drawing 3x the power. That means a battery will be drained 3x faster.

    Saying the 30w SoC does work more efficiently than the 10w SoC doesn't make it draw less power.
    Reply
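
To put concrete numbers on the performance-per-watt arithmetic in the comment above, here is a small Python sketch. The 3x-power / 4x-work figures are the commenter's hypothetical values, and the battery capacity is an arbitrary illustrative number, not a measurement of any real SoC.

```python
# Baseline SoC: normalized power draw and work rate.
base_power = 1.0   # arbitrary power units
base_work = 1.0    # arbitrary work per unit time

# Hypothetical faster SoC: 3x the power draw for 4x the throughput.
new_power = 3.0 * base_power
new_work = 4.0 * base_work

# Efficiency = work per unit of power (performance per watt).
base_efficiency = base_work / base_power
new_efficiency = new_work / new_power
print(f"efficiency gain: {new_efficiency / base_efficiency:.2f}x")   # ~1.33x

# Battery runtime at full load depends on power draw, not efficiency:
# the faster SoC empties the same battery three times as quickly.
battery_capacity = 30.0  # arbitrary energy units
print(f"baseline runtime: {battery_capacity / base_power:.1f}")      # 30.0
print(f"faster SoC runtime: {battery_capacity / new_power:.1f}")     # 10.0
```

The flip side of the same arithmetic is energy per unit of work: finishing 4x the work on 3x the power uses less total energy for a fixed task, which is what the 1.33x efficiency figure captures; the comment's point is about sustained power draw and runtime under full load.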
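
Similarly, to illustrate the earlier point about post-training quantization costing accuracy, here is a minimal Python sketch of naive symmetric int8 quantization. The weight values and the single-scale-factor scheme are made up for illustration and aren't taken from any real model or toolchain.

```python
import numpy as np

# Hypothetical float32 "weights" standing in for a trained layer.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=1000).astype(np.float32)

# Naive symmetric post-training quantization to int8:
# map the observed float range onto [-127, 127] with one scale factor.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# Dequantize and measure the rounding error quantization introduced.
deq_weights = q_weights.astype(np.float32) * scale
mean_abs_error = np.abs(weights - deq_weights).mean()

print(f"scale factor:        {scale:.6f}")
print(f"mean absolute error: {mean_abs_error:.6f}")
```

Quantization-aware training mitigates this by letting the model adapt to the rounding error while it is still being trained, which is why quantizing purely after training tends to cost more accuracy at the same bit width.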
