Taking place this week is DARPA’s 2018 Electronics Resurgence Initiative (ERI) Summit, the defense research agency's first gathering to address the direction of US technology manufacturing as Moore's Law slows down, and how the agency can jumpstart alternative efforts to ensure continued performance scaling. To that end, in an interesting and unexpected move at the summit, Intel has announced that it will offer a royalty-free license for its Advanced Interface Bus (AIB) to participants in the agency’s Common Heterogeneous Integration and Intellectual Property (IP) Reuse Strategies (CHIPS) program. The move will help interested parties design SoCs utilising heterogeneous components made using different process technologies.

Modern, complex chips are hard to manufacture and equally hard to design, as they frequently contain a number of different application-specific functional blocks (e.g., general-purpose cores, graphics cores, etc.). In a bid to reduce the costs of chip design, numerous companies have come up with modular development approaches that allow them to design SoCs using more or less standard components. This methodology is used to develop modern SoCs for mobiles, game consoles, and other consumer electronics. However, it is largely believed that the approach is still not cost-effective enough for future SoCs, as the entire chip must still be manufactured as a single, monolithic die. As a result, Intel has proposed using so-called chiplets, which are heterogeneous components of a chip produced using different process technologies and interconnected using its AIB interface.

Intel’s Advanced Interface Bus is a low-power die-to-die interface based on a forwarded-clock parallel I/O architecture. AIB features a 1 Gbps per lane SDR transfer rate for control signals and a 2 Gbps per lane DDR transfer rate for data. AIB is agnostic to the particular packaging technology used: it can rely on Intel’s own EMIB, or it can use other packaging technologies (e.g., CoWoS, InFO, etc.) as well. Intel itself of course uses AIB in its Stratix 10 FPGAs, but since the tech is universal, participants in the CHIPS program will be able to use the interface for whatever projects they are working on.
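The per-lane rates above make rough link-bandwidth math straightforward. As a minimal sketch: the 1 Gbps SDR and 2 Gbps DDR figures come from the article, while the lane counts in the example are purely illustrative assumptions, not figures from Intel's AIB specification.

```python
# Back-of-the-envelope bandwidth calculator for an AIB-style link.
# Per-lane rates are from the article; lane counts below are assumed
# for illustration only.

DATA_RATE_GBPS = 2.0  # DDR data lanes: 2 Gbps per lane
CTRL_RATE_GBPS = 1.0  # SDR control lanes: 1 Gbps per lane

def aib_raw_bandwidth_gbps(data_lanes: int, ctrl_lanes: int = 0) -> float:
    """Aggregate raw bandwidth of one die-to-die channel, in Gbps."""
    return data_lanes * DATA_RATE_GBPS + ctrl_lanes * CTRL_RATE_GBPS

# Example: a hypothetical 20-lane data channel.
print(aib_raw_bandwidth_gbps(20))      # 40.0 (Gbps, raw)
print(aib_raw_bandwidth_gbps(20) / 8)  # 5.0  (GB/s, raw)
```

Real designs gang many such channels side by side, so aggregate die-to-die bandwidth scales with shoreline rather than with per-lane speed.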

The whole idea behind the CHIPS initiative is to enable developers of SoCs to re-use existing semiconductor blocks for future projects, and to build each block only on the most advanced (read: most expensive) manufacturing process that the block actually requires. Intel itself proposes to use 10 nm CPU and GPU cores along with communication blocks produced using a 14 nm fabrication process, as well as other components made using a 22 nm manufacturing tech. For developers who do not have access to Intel’s technologies and who have considerably lower R&D budgets, it will make sense to re-use some IP produced using a planar fabrication process alongside processing cores made using a leading-edge FinFET technology.

It remains to be seen when participants in DARPA’s CHIPS program will actually start to use Intel’s AIB, but the fact that they now have a technology that could wed leading-edge IP with proven blocks looks rather promising.

Source: Fuse.Wikichip.Org


  • Arnulf - Thursday, July 26, 2018 - link

    Newer memory interfaces you mention (DDR4, DDR5) don't have higher latency than their predecessors - it's just that as their clock frequency goes up and clock cycle time goes down, the latency expressed as the number of clock cycles goes up, yielding similar latency when calculated in time units.
  • edzieba - Wednesday, July 25, 2018 - link

    I wonder how this ties into IDEA and POSH.
  • iwod - Thursday, July 26, 2018 - link

    Talking about Royalty Free, where is the Thunderbolt 3 license?
  • mode_13h - Saturday, July 28, 2018 - link

    I'm skeptical how well this would *really* suit CPU and GPU. CPU wants low-latency and medium memory bandwidth (assuming the sort of lower core-count chips that would include a GPU), while GPUs crave bandwidth. Adding a generic bus would seem to add some latency vs. a purpose-built, tightly integrated bus, and probably not scale as well.

    As for other blocks, sure. But I still foresee monolithic dies with CPU, GPU, and memory controller. Something like this can be used to tie in everything else.
