The first major release of the Gen-Z systems interconnect specification is now available. The Gen-Z Consortium was publicly announced in late 2016 and has been developing the technology as an open standard, with several drafts released in 2017 for public comment.

Gen-Z is one of several standards that emerged from the long stagnation of the PCI Express standard after the PCIe 3.0 release. Technologies like Gen-Z, CAPI, CCIX and NVLink seek to offer higher throughput, lower latency and the option of cache coherency, in order to enable much higher-performance connections between processors, co-processors/accelerators, and fast storage. Gen-Z in particular has very broad ambitions: it blurs the lines between a memory bus, a processor interconnect and a peripheral bus, and even strays into networking territory.

The Core Specification released today primarily addresses connecting processors to memory, with the goal of making the memory controllers in processors media-agnostic: the details of whether the memory is some type of DRAM (e.g. DDR4, GDDR6) or a persistent memory like 3D XPoint are handled by a media controller at the memory end of a Gen-Z link, while the processor itself issues simple, generic read and write commands over the link. In this use case, Gen-Z doesn't completely remove the need for traditional on-die memory controllers or for the highest-performance solutions like HBM2, but it does enable more scalability and flexibility: new memory types can be supported without altering the processor, and a processor can reach more banks of memory than can be directly attached to its own memory controller.
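
That division of labour can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual Gen-Z command set or packet format; all of the class and field names below are invented for the sketch:

```python
# Illustrative sketch only: the classes and field names here are invented and
# are not taken from the Gen-Z Core Specification.
from dataclasses import dataclass

@dataclass
class GenericRequest:
    op: str            # "read" or "write" -- the requester side stays this generic
    address: int       # offset within the responder's address space
    length: int = 0    # bytes to read
    payload: bytes = b""

class MediaController:
    """Sits at the memory end of the link and hides the media details."""
    def __init__(self, media_type: str, size: int):
        self.media_type = media_type   # e.g. "DDR4", "GDDR6", "3D XPoint"
        self.cells = bytearray(size)   # stand-in for the actual media

    def handle(self, req: GenericRequest) -> bytes:
        # Media-specific concerns (timing, refresh, wear levelling, ...) live
        # here, invisible to the processor that issued the request.
        if req.op == "read":
            return bytes(self.cells[req.address:req.address + req.length])
        self.cells[req.address:req.address + len(req.payload)] = req.payload
        return b""

# The requester-side code is identical regardless of the media behind the link.
for ctrl in (MediaController("DDR4", 4096), MediaController("3D XPoint", 4096)):
    ctrl.handle(GenericRequest("write", 0x100, payload=b"hello"))
    assert ctrl.handle(GenericRequest("read", 0x100, length=5)) == b"hello"
```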

At the lowest level, Gen-Z connections look a lot like most other modern high-speed data links: fast serial lanes bonded together to increase throughput, carrying a packet-oriented protocol. Gen-Z borrows from both the PCI Express and IEEE 802.3 Ethernet physical layer (PHY) standards to offer per-lane speeds up to the 56Gb/s raw rate of 50GBASE-KR, and it will track the speed increases of future versions of those underlying standards. The PCIe PHY is incorporated more or less as-is, while the Ethernet PHY standards have been modified to allow lower-power operation on shorter links within a single system, such as communication between dies on a multi-chip module. Gen-Z also allows asymmetric links, with more lanes (and thus more bandwidth) in one direction than the other.

The Gen-Z protocol supports various connection topologies: basic point-to-point links, daisy chains, and switched fabrics, including multiple paths between endpoints. Daisy-chain links are estimated to add about 5ns of latency per hop, and switch latencies are expected to range from roughly 10ns for a small 8-port switch up to 50-60ns for a 64-port switch, so using Gen-Z for memory access is reasonable, especially for the somewhat slower persistent memory technologies. The Gen-Z protocol expresses almost everything in memory terms, but each endpoint performs its own memory mapping and translation; there is no attempt to impose a single unified address space across a Gen-Z fabric, which could scale beyond a single rack in a data center.
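
Those figures make it straightforward to estimate how much latency a particular topology adds to a memory access. A minimal back-of-the-envelope sketch, using only the per-hop and per-switch estimates quoted above (the helper function and the 55ns midpoint for the 64-port case are simplifications for illustration):

```python
# Rough model of added fabric latency, using the estimates quoted above:
# ~5 ns per daisy-chain hop, ~10 ns for a small 8-port switch and 50-60 ns
# (taken as 55 ns) for a 64-port switch. These are estimates, not measurements.
SWITCH_LATENCY_NS = {8: 10, 64: 55}

def added_fabric_latency_ns(daisy_chain_hops=0, switch_ports=()):
    """Extra latency a request picks up crossing the Gen-Z fabric."""
    return 5 * daisy_chain_hops + sum(SWITCH_LATENCY_NS[p] for p in switch_ports)

# A request that crosses one small switch and one daisy-chained device adds
# roughly 15 ns on top of the media's own access time:
print(added_fabric_latency_ns(daisy_chain_hops=1, switch_ports=[8]))  # 15
```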

Wide Industry Participation

The Gen-Z Consortium launched with the support of a dozen major technology companies, but its membership has since grown to the point that it is easier to list the big hardware companies that aren't currently involved: Intel and NVIDIA. Gen-Z has members from every segment necessary to build a viable product ecosystem: semiconductor design and IP (Mentor, Cadence, PLDA), connectors (Molex, Foxconn, Amphenol, TE), processors and accelerators (AMD, ARM, IBM, Cavium, Xilinx), switches and controllers (IDT, Microsemi, Broadcom, Mellanox), every DRAM and NAND flash memory manufacturer except Intel, software vendors (Red Hat, VMware), and system vendors (Lenovo, HPE, Dell EMC). It is clear that most of the industry is paying attention to Gen-Z, even if most of them haven't yet committed to bringing Gen-Z products to market.

At the SC17 supercomputing conference in November, Gen-Z had a multi-vendor demo of four servers sharing access to two pools of memory through a Gen-Z switch. That demo was implemented with heavy use of FPGAs, but with the Core Specification 1.0 release we will start seeing Gen-Z show up in ASICs. The focus for now is on datacenter use cases, with products potentially hitting the market in 2019.

In the meantime, it will be interesting to see where industry support concentrates between Gen-Z and competing standards. Many companies are members or supporters of more than one of the new interconnect standards, and there's no clear winner at this time. Nobody is abandoning PCI Express, and it isn't clear which new interconnect will offer the most compelling advantages over the existing ubiquitous standards or over proprietary interconnects. Gen-Z seems to have one of the widest membership bases and the widest target market, but it could still easily be doomed to niche status if it only receives half-hearted support from most of its members.

Source: Gen-Z Consortium

Comments

  • peevee - Tuesday, February 13, 2018

    Without Intel, it is dead in the water in the mainstream for now.
    Better memory interfaces are necessary, of course; the current memory interfaces look straight out of the '80s.
    But for device interfaces... PCIe 4 is around the corner, probably in the very next Intel and AMD architectures.
  • Pork@III - Tuesday, February 13, 2018

    PCI-SIG... They spent too long asleep, leaving us with the impoverished relic that is PCIe 3.0.
  • rahvin - Tuesday, February 13, 2018

    Though Intel's absence will slow down adoption, Intel has been dragged kicking and screaming into standards before.
  • beginner99 - Wednesday, February 14, 2018

    "PCIe 4 is around the corner, probably in the very next Intel and AMD architectures."

    The issue is that PCIe 5 is just 1-2 years behind it, and the current indication is that we will go directly to PCIe 5. The only thing that would benefit from PCIe 4 is the CPU-chipset connection. On the other hand, Intel/AMD could just offer 8 or 16 lanes to the chipset and that issue would be gone as well.

    GPUs themselves aren't even limited by x8 PCIe 3.0, at least not for gaming or other consumer tasks. The thing is, hardware has been mostly good enough for the average user for at least five years now. The market will separate much more sharply than before into consumer and professional/server parts. The latter wants powerful GPUs and accelerators that need fast links to the CPU, fast and/or persistent memory, and so forth. All this bleeding edge stuff has 0 benefits for the consumer.
  • Pork@III - Wednesday, February 14, 2018

    Forget gamers; the GPU is not the only device connected over PCIe.
  • peevee - Tuesday, February 20, 2018

    Exactly. M.2 SSDs have been pushing the limits of four lanes for some time now, and it's not like anybody is going to give them more.
  • willis936 - Wednesday, February 14, 2018

    "0 benefits for the consumer"
    Yeah, I really like my $500 SSD upgrade in my laptop being limited by an interconnect.
  • Santoval - Saturday, February 17, 2018

    "All this bleeding edge stuff has 0 benefits for the consumer."
    Have you ever heard of M.2 NVMe SSDs, particularly when you use 3 or 4 of them in RAID? What about networking, or dual/triple GPUs for gaming, rendering or video editing work? Or using 8 to 10 GPUs for mining, hooked up through adapters with just x4 or x2 PCIe 3.0 links each?
    All of the above are starved for PCIe 3.0 lanes, and with PCIe 4.0 you can use half the lanes for the same I/O speed, the same number of lanes for double the I/O speed, or any combination in between.
    You can go even further with PCIe 5.0 (a quarter of the lanes for the same speed, etc.), but it is still unclear when that will be commercially available. Fewer lanes mean a simpler motherboard design (fewer traces) and potentially simpler CPU PCIe controllers. If we used PCIe 4.0 or even 5.0 *today* strictly with half and a quarter of the lanes respectively, nobody would call them "bleeding edge", since the speed they provide would be the same. So what counts as "bleeding edge" is largely a matter of perspective, and is meaningless without context.
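
For reference, the lane arithmetic is simple: the per-lane transfer rates for PCIe 3.0/4.0/5.0 are 8/16/32 GT/s, all with 128b/130b encoding, so usable bandwidth per lane roughly doubles each generation. A quick sketch of that math (the helper function is only illustrative):

```python
# Per-lane transfer rates for PCIe 3.0/4.0/5.0 (GT/s), all using 128b/130b
# encoding; the helper below is just a back-of-the-envelope illustration.
GT_PER_S = {"3.0": 8, "4.0": 16, "5.0": 32}

def lane_bandwidth_gb_s(gen):
    """Approximate usable GB/s per lane after 128b/130b encoding overhead."""
    return GT_PER_S[gen] * (128 / 130) / 8

for gen in GT_PER_S:
    per_lane = lane_bandwidth_gb_s(gen)
    print(f"PCIe {gen}: ~{per_lane:.2f} GB/s per lane, ~{4 * per_lane:.1f} GB/s for x4")
# An x4 PCIe 4.0 link (~7.9 GB/s) matches an x8 PCIe 3.0 link: half the lanes
# for the same throughput, as the comment above points out.
```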
  • peevee - Tuesday, February 20, 2018

    "current indication is that we will go directly to PCIe 5"

    Which indication is that? And why? PCIe 4 is standardized, they can produce stuff now.

    Intel needs PCIe 4 ASAP, given how few PCIe lanes their mainstream chips support...
  • mode_13h - Wednesday, February 21, 2018

    AMD needs PCIe 4 ASAP, in order to answer NVLink. The Infinity Fabric underpinning their multi-die CPU setups would also benefit.
