AMD on Tuesday formally announced its next-generation EPYC processor, code-named Rome. The new server CPU will feature up to 64 cores based on the Zen 2 microarchitecture, promising at least twice the per-socket performance of existing EPYC chips.

As discussed in a separate story covering AMD’s new ‘chiplet’ design approach, the AMD EPYC ‘Rome’ processor will carry multiple CPU chiplets manufactured using TSMC’s 7 nm fabrication process, as well as an I/O die produced on a 14 nm node. High-performance ‘Rome’ processors will use eight CPU chiplets offering 64 x86 cores in total, along with an eight-channel DDR4 memory controller supporting up to 4 TB of DRAM per socket. In addition, the new processor supports 128 PCIe 4.0 lanes to connect next-generation accelerators, such as the Radeon Instinct MI60 based on the 7 nm Vega GPU.

Since the Zen 2 microarchitecture is expected to increase per-core performance across the board (especially floating point performance, which AMD expects to double), the Rome processors should boost server performance quite dramatically compared with existing machines. In particular, AMD expects performance per socket to double as a result of the higher core count, and predicts that floating point performance per socket will quadruple thanks to the combination of architectural IPC improvements and the increased core count.
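The claimed gains follow from simple multiplication. A back-of-the-envelope sketch, with core counts and the per-core FP factor taken from AMD's public claims and the baseline (a 32-core EPYC 'Naples' socket) normalized to 1.0:

```python
# Back-of-the-envelope per-socket scaling from AMD's public claims.
# Baseline: 32-core EPYC 'Naples' socket, normalized to 1.0.
naples_cores = 32
rome_cores = 64  # up to 64 Zen 2 cores per socket

core_scaling = rome_cores / naples_cores   # 2x cores per socket
fp_per_core_scaling = 2.0                  # AMD expects Zen 2 to double FP per core

# Integer throughput scales with core count alone; FP throughput
# compounds the core-count gain with the per-core FP doubling.
integer_perf_per_socket = core_scaling                   # ~2x
fp_perf_per_socket = core_scaling * fp_per_core_scaling  # ~4x

print(f"Integer throughput per socket: ~{integer_perf_per_socket:.0f}x")
print(f"FP throughput per socket:      ~{fp_perf_per_socket:.0f}x")
```

This ignores clock-speed and IPC differences beyond the FP doubling, so it is an upper bound on throughput scaling, not a benchmark prediction.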

One important aspect of AMD’s EPYC ‘Rome’ processor is that it is socket-compatible with the existing EPYC ‘Naples’ platform and will be forward-compatible with AMD’s future ‘Milan’ platform, featuring CPUs powered by the Zen 3 microarchitecture. This will greatly simplify development of AMD-based servers and enable server makers to reuse their existing designs for future machines, which matters for AMD as it works to capture market share from Intel: to do that, it has to make the job of server builders easier by keeping its platforms simple.

AMD is currently sampling its EPYC ‘Rome’ processor with server makers and customers. The company plans to launch ‘Rome’ products sometime in 2019, but it is not disclosing its launch schedule at this time.

This is breaking news; we are updating the story with more details.


  • abufrejoval - Wednesday, November 7, 2018 - link

    The way these modern CPUs work, that's what you get automatically if you *don't* use every other core: Faster clock rates on the remaining ones.

    Intel doesn't even sell these high-clock/low-core chips any cheaper, so here you get the same behavior.

    Just hope you're not on Oracle style licensing... But perhaps BIOS de-activation would help there.
  • Kazu-kun - Wednesday, November 7, 2018 - link

    "So if I want a 16-24 core ‘Rome’ processor it will be low cost, due to 5 or 6 dead cores on each chiplet ... ?"

    No, they would just use fewer chiplets. For 32 cores, they would use 4 chiplets; for 16 cores, 2 chiplets. And so on.

    The reason they couldn't do this with the first-generation Epyc is that the IO and memory controller were on the chiplets, so in order to keep the full IO and memory bandwidth they needed to keep all the chiplets and disable cores instead. This isn't a problem anymore thanks to moving all the uncore to the IO die. Now instead of disabling cores, they can just use fewer chiplets.
  • jospoortvliet - Monday, November 12, 2018 - link

    "I'd rather have 32 cores at double the clocks, running under 200W - more useful than 64 cores at half the speed (not in _every_ case, I know!)."

    That is faster in nearly every case, but 32 powerful cores at 8 GHz will not be possible under 2000 watts anytime soon, let alone 200...
  • jospoortvliet - Monday, November 12, 2018 - link

    (Obviously right now it isn't possible at all, period. 7 nm might allow 5-6 GHz, at crazy power draw. Maybe. Clock-speed doubling is just not possible anymore; if it were, it would be done, as it's much nicer than doubling cores, which costs far more money in terms of die space!)
  • Samus - Thursday, November 8, 2018 - link

    128 PCIe 4.0 lanes per SOCKET.

    Your move, Intel.
  • RogerAndOut - Thursday, November 8, 2018 - link

    Well, 128 PCIe 4.0 lanes per SYSTEM, as both 1-processor and 2-processor systems will have 128 PCIe lanes free. On a 2-processor system, 64 lanes from each processor are used as the interconnect.
  • monglerbongler - Thursday, January 24, 2019 - link

    The question is whether this will support persistent memory/storage (e.g. Optane).

    Since that is going to be significant in the near term evolution of server/data center/cluster hardware design.

    *especially* for computational clusters.

    No hard drives. Period. Just memory and persistent storage, with maybe a storage server somewhere back in the corner of the room to store the output data sets of whatever scientific or engineering computation is being performed.
