Intel this week announced that its processors, compute accelerators, and Optane DC persistent memory modules will power Aurora, the first supercomputer in the US projected to deliver one exaFLOP of performance. The system is expected to be delivered in about two years and goes well beyond the machine's original Xeon Phi-based specification from 2014.

The US Department of Energy, Intel, and Cray have signed a contract under which the two companies and DOE’s Argonne National Laboratory will develop and build the Aurora supercomputer, a machine capable of a quintillion (10^18) floating-point operations per second. The deal is valued at more than $500 million, and the system is expected to be delivered sometime in 2021.

The Aurora machine will be based on Intel’s Xeon Scalable processors, the company’s upcoming compute accelerators built on the Xe architecture for datacenters, as well as next-generation Optane DC persistent memory. The supercomputer will rely on Cray’s 'Shasta' architecture featuring Cray’s Slingshot interconnect, which was announced at Supercomputing back in November. The system will be programmed using Intel’s oneAPI and will also use the Shasta software stack tailored for Intel.
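
Intel has not yet published details of how developers will target Aurora, but the company has described oneAPI as a single, cross-architecture programming model built around a SYCL-style data-parallel C++. Purely as an illustrative sketch, and assuming a generic SYCL 1.2.1 toolchain rather than anything Aurora-specific (the kernel name and sizes below are arbitrary), offloading a simple vector addition to whatever device the runtime selects looks roughly like this:

    #include <CL/sycl.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        constexpr size_t N = 1024;
        std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

        // The queue targets whatever device the default selector finds
        // (a CPU, a GPU, or another accelerator).
        cl::sycl::queue q;

        {
            // Buffers hand the host data to the SYCL runtime for the
            // duration of this scope.
            cl::sycl::buffer<float, 1> bufA(a.data(), cl::sycl::range<1>(N));
            cl::sycl::buffer<float, 1> bufB(b.data(), cl::sycl::range<1>(N));
            cl::sycl::buffer<float, 1> bufC(c.data(), cl::sycl::range<1>(N));

            q.submit([&](cl::sycl::handler& cgh) {
                auto A = bufA.get_access<cl::sycl::access::mode::read>(cgh);
                auto B = bufB.get_access<cl::sycl::access::mode::read>(cgh);
                auto C = bufC.get_access<cl::sycl::access::mode::write>(cgh);
                // The same kernel source runs on any supported device.
                cgh.parallel_for<class vector_add>(
                    cl::sycl::range<1>(N),
                    [=](cl::sycl::id<1> i) { C[i] = A[i] + B[i]; });
            });
        } // Buffers write results back to the host vectors here.

        std::cout << "c[0] = " << c[0] << std::endl; // expected: 3
        return 0;
    }

The appeal of such a model for a machine mixing Xeon CPUs and Xe accelerators is that, in principle, the same source can be compiled for either target; how closely the shipping oneAPI tools will match this sketch remains to be seen.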

Around two years ago the DOE started its Exascale Computing Project to spur development of hardware, software, and applications for exaFLOP-class supercomputers. The organization awarded $258 million in research contracts to six technology companies: AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA. As it turns out, Intel’s approach was considered the most efficient one for the country’s first exascale supercomputer.

It is noteworthy that ANL’s Aurora supercomputer, as specified back in 2014, was supposed to be based on Intel’s Xeon Phi processors codenamed Knights Hill, produced using the company’s 10 nm process technology. The plan changed in 2017, when Intel canned Knights Hill in favor of a more advanced architecture (and because its mainstream Xeon processors were approaching a Xeon Phi-like implementation anyway). Apparently, Intel and its partners are now confident enough in the new chips to proceed with the project.

The Aurora supercomputer will be able to handle both AI and traditional HPC workloads. At present, Argonne National Laboratory says that, among other things, the machine will be used for cancer research, cosmological simulations, climate modeling, drug response prediction, and exploring new materials.

“There is tremendous scientific benefit to our nation that comes from collaborations like this one with the Department of Energy, Argonne National Laboratory, industry partners Intel and Cray and our close association with the University of Chicago,” said Argonne National Laboratory Director Paul Kearns. “Argonne’s Aurora system is built for next-generation artificial intelligence and will accelerate scientific discovery by combining high-performance computing and artificial intelligence to address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials and further understanding the universe — and those are just the beginning.”

Sources: Intel, Argonne National Laboratory

Comments

  • Kevin G - Monday, March 25, 2019 - link

    Depends on the generation of Xeon Phi. The first wave did have to leverage specialized compilers to get any sort of acceleration: the 512-bit vector instructions were only found alongside those Pentium 1-based cores.

    The second-generation Knights Landing was far superior, with normal SSE and AVX implementations plus AVX-512. So as long as you were not running original Xeon Phi code on Knights Landing, you had backwards compatibility with ordinary x86 software. Not a terrible thing for legacy code, but Intel did miss their mark by only getting the vision of backwards compatibility right on the second generation of products.

    The kicker for Knights Landing was that it had 16 GB of HMC memory which required tuning code around its unique NUMA model. Otherwise it was bandwidth-starved, with only six DDR4 memory channels feeding up to 72 Airmont cores.
  • blu42 - Wednesday, March 27, 2019 - link

    True. KNC should have been what KNL was, but then again there's a causality link MIC->AVX512, so it seems Intel had to figure out what they wanted first, and that cost them the product line.
  • mode_13h - Friday, March 22, 2019 - link

    Phi got killed off because it couldn't compete with GPUs in perf/watt or perf/mm^2 (and thereby probably also perf/$). The only reason anybody ever had for justifying Xeon Phi was to run legacy multi-threaded code. If you were using modern libraries/software, it couldn't compete with GPUs.

    Intel had to fail at an x86-based GPU-compute competitor before they could move beyond it. I think the internal politics of x86 were too strong at Intel.
  • mode_13h - Friday, March 22, 2019 - link

    Look at the expected deployment date. It is surely using yet-to-be-released/announced products.

    Actually, I think you have it backwards. Phi going away is what finally made room for their compute-oriented GPU product. As long as Phi continued to look viable, they were probably reluctant to put resources behind a more purist GPU approach.
  • Kevin G - Monday, March 25, 2019 - link

    Xeon Phi died because of Intel's 10 nm delays. Even now, Intel is only pumping out a handful of small 10 nm parts (71 mm^2) which have a good portion of their die disabled (graphics). On 14 nm, Knights Landing was 683 mm^2, with its successor Knights Hill being a similarly large chip but on a 10 nm process. By the time Intel would have been able to ship Knights Hill, it could have ended up arriving after the successor to nVidia's Volta architecture. Had Intel shipped Knights Hill last year as originally envisioned, they would be far more competitive.
  • HStewart - Tuesday, March 26, 2019 - link

    Or maybe Phi got killed off in favor of the replacement Xe series, which is more efficient and can also help in the graphics market, which Phi was never really designed for.
  • Yojimbo - Thursday, March 21, 2019 - link

    "As it turns out, Intel’s approach was considered as the most efficient one for the country’s first Exascale supercomputer."

    Not sure about that. It's more a matter of politics. Intel and Cray were awarded the original Aurora contract, but the DOE was apparently not happy with the way the system was shaping up, probably because the deep learning performance of the Xeon Phi Knights Hill chips that were supposed to go into the system was poor compared with GPUs. The DOE wanted an accelerated supercomputer but in general wants to spread out its purchases between at least two architectures. That throws out an IBM/NVIDIA system, and my guess is it also throws out anything relying on NVIDIA accelerators. Intel was already developing a discrete GPU and said they could get it out the door by 2021. There is a competition among countries to get to exascale first, and Intel was now in a position to negotiate for the money set aside for Aurora, plus additional funds, to obtain a contract for a delayed and expanded system that would be the first American system to reach exascale.

    I imagine if they weren't Intel they wouldn't have been able to get the DOE to commit to something like that. After all, Intel isn't known for their GPUs. Intel is really on the hot seat to deliver here, I'd imagine.
  • TeXWiller - Thursday, March 21, 2019 - link

    "The DOE wanted an accelerated supercomputer but in general wants to spread out its purchases between at least two architectures. That throws out an IBM/NVIDIA system, and my guess is it also throws out anything relying on NVIDIA accelerators." I'm sure you meant to say that throws in the IBM/NVidia (and maybe AMD/NVidia) systems? ;)

    Summit and such, along with Aurora, were part of the same CORAL multi-laboratory pre-exascale contract (the A is for Argonne), which was then reformulated for the later delivery of the Intel/Cray system, at which point the performance target was adjusted.
  • Ktracho - Thursday, March 21, 2019 - link

    Certainly the government, for national security reasons, doesn't want there to be just one company that can supply the parts for a supercomputer. However, I wonder if Cray will make products similar to Aurora available to other customers, and give them the choice between Intel and NVIDIA GPUs. I can't imagine all customers being happy with spending millions and being limited to one choice for GPUs.
  • TeXWiller - Thursday, March 21, 2019 - link

    Very likely. Cray wouldn't develop an interconnect for just one customer, and the Xeons used will be the next step for Intel in general. So the components must be able to serve the various software stacks customers have. Intel probably makes the case for their coherent chip-to-chip interconnect with their own accelerators, however, just like NVidia currently does with NVLink combined with Power chips.
