Cerebras Completes Series F Funding, Another $250M for $4B Valuation
by Dr. Ian Cutress on November 10, 2021 9:00 AM EST - Posted in
- AI
- Machine Learning
- ML
- Cerebras
- Wafer Scale
- WSE2
- CS-2
Every once in a while, a startup comes along with something out of left field. In the current generation of AI hardware, Cerebras holds that title with its Wafer Scale Engine. The second-generation product, built on TSMC 7nm, is a full wafer packed to the brim with cores, memory, and performance. By using patented manufacturing and packaging techniques, a Cerebras CS-2 features a single chip, bigger than your head, with 2.6 trillion transistors. The cost for a CS-2, with appropriate cooling, power, and connectivity, is ‘a few million’ we are told, and Cerebras has customers in research, oil and gas, pharmaceuticals, and defense – all after the unique proposition that a wafer scale AI engine provides. Today’s news is that Cerebras is still in full startup mode, finishing a Series F funding round.
The new Series F funding round nets the company another $250m in capital, bringing the total raised through venture capital up to $720 million. In speaking to Cerebras ahead of this announcement, we were told that this $250 million was for effectively 6% of the company, bringing the valuation of Cerebras to $4 billion. Compared to Cerebras’ last Series E funding round in 2019, where the company was valued at $2.4 billion, we’re looking at about $800m extra value year on year. This round of funding was led by Alpha Wave Ventures, a partnership between Falcon Edge and Chimera, who are joining Cerebras’ other investors such as Altimeter, Benchmark, Coatue, Eclipse, Moore, and VY.
Cerebras explained to me that it’s best to get a funding round out of the way before you actually need it: we were told that they already had the next 2-3 years funded and planned, and this additional funding round provides some more on top of that, allowing the company to also grow as required. This encompasses not only the next generations of wafer scale (apparently a 5nm tape-out is around $20m), but also the new memory scale-out systems Cerebras announced earlier this year. Currently Cerebras has around 400 employees across four sites (Sunnyvale, Toronto, Tokyo, San Diego), and is looking to expand to 600 by the end of 2022, focusing a lot on engineers and full stack development.
Cerebras Wafer Scale
| AnandTech | Wafer Scale Engine Gen1 | Wafer Scale Engine Gen2 | Increase |
|---|---|---|---|
| AI Cores | 400,000 | 850,000 | 2.13x |
| Manufacturing | TSMC 16nm | TSMC 7nm | - |
| Launch Date | August 2019 | Q3 2021 | - |
| Die Size | 46225 mm2 | 46225 mm2 | - |
| Transistors | 1200 billion | 2600 billion | 2.17x |
| (Density) | 25.96 mTr/mm2 | 56.246 mTr/mm2 | 2.17x |
| On-board SRAM | 18 GB | 40 GB | 2.22x |
| Memory Bandwidth | 9 PB/s | 20 PB/s | 2.22x |
| Fabric Bandwidth | 100 Pb/s | 220 Pb/s | 2.22x |
| Cost | $2 million+ | arm+leg | ‽ |
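The derived rows in the table above follow directly from the headline numbers: density is simply the transistor count divided by the (unchanged) die area. A quick sanity check in Python, using only figures from the table:

```python
# Sanity-check the derived rows of the spec table.
# All inputs are taken from the table itself; nothing else is assumed.
die_area_mm2 = 46225            # same full-wafer die size for both generations
gen1_transistors = 1.2e12       # 1200 billion
gen2_transistors = 2.6e12       # 2600 billion

# Density in millions of transistors per mm^2
gen1_density = gen1_transistors / die_area_mm2 / 1e6
gen2_density = gen2_transistors / die_area_mm2 / 1e6

print(f"Gen1 density: {gen1_density:.2f} MTr/mm2")    # ~25.96
print(f"Gen2 density: {gen2_density:.3f} MTr/mm2")    # ~56.246
print(f"Increase: {gen2_transistors / gen1_transistors:.2f}x")  # ~2.17x
```

Because the die area is identical between generations, the density ratio necessarily equals the transistor-count ratio, which is why both rows show the same 2.17x increase.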
To date Cerebras’ customers have been, in the company’s own words, from markets that have traditionally understood HPC and are looking into the boundary between HPC and AI. This means traditional supercomputer sites, such as Argonne, Lawrence Livermore, and PSC, but also commercial enterprises that have traditionally relied on heavy compute, such as pharmaceuticals (AstraZeneca, GSK), medical, and oil and gas. Part of Cerebras’ roadmap is to expand beyond those ‘traditional’ HPC customers and introduce the technology in other areas, such as the cloud – Cirrascale recently announced a cloud offering based on the latest CS-2.
Coming up soon is the annual Supercomputing conference, where more customers and deployments are likely to be announced.
Related Reading
- Cerebras In The Cloud: Get Your Wafer Scale in an Instance
- Cerebras Unveils Wafer Scale Engine Two (WSE2): 2.6 Trillion Transistors, 100% Yield
- Cerebras Wafer Scale Engine News: DoE Supercomputer Gets 400,000 AI Cores
- 342 Transistors for Every Person In the World: Cerebras 2nd Gen Wafer Scale Engine Teased
- Cerebras’ Wafer Scale Engine Scores a Sale: $5m Buys Two for the Pittsburgh Supercomputing Center
- Hot Chips 2019 Live Blog: Cerebras' 1.2 Trillion Transistor Deep Learning Processor
- Hot Chips 2020 Live Blog: Cerebras WSE Programming
- Hot Chips 2021 Live Blog: Machine Learning (Graphcore, Cerebras, SambaNova, Anton)
24 Comments
Oxford Guy - Saturday, November 13, 2021 - link
Speaking of minimum cost... One question I’ve had from the start concerning wafer scale is what its minimum cost is. What node, what wafer size, etc.
Just how cheap can it be, taking the margin of the wafer chip seller out of the equation?
And, beyond some awful ancient node, how much would it cost for the venerable 28nm?
Wereweeb - Monday, November 15, 2021 - link
"Moore's Law is not dead. It's just that *says Moore's Law is dead*"

eSyr - Wednesday, November 10, 2021 - link
“arm+leg” sounds incredibly chip for such a beast.

Oxford Guy - Saturday, November 13, 2021 - link
The arm of Tom Brady and the leg of Naomi Osaka?

Wrs - Wednesday, November 10, 2021 - link
Hmm, yields are almost 100% but they throw away 36% of the wafer. If only we could process round chips...

SteinFG - Wednesday, November 10, 2021 - link
I think at best they can recover about 11 of those 36 percentage points, since rectangular masks that are projected partially onto the wafer will be unusable.

evanh - Wednesday, November 10, 2021 - link
I'm gonna guess Tr, as in mTr/mm2, means Transistor. Using a capital-M for Mega-Transistors, rather than milli-Transistors, would be a good idea me thinks: 56.246 MTr/mm2.

mode_13h - Wednesday, November 10, 2021 - link
I'm a bit fuzzy on this stuff, but I seem to recall that valuations are traditionally about 10x expected annual revenue. So, if a single system costs $2M, then they're expecting to sustain delivery of about 200 per year? That seems like simultaneously a lot, and also not very much.

nandnandnand - Thursday, November 11, 2021 - link
Compared to other startups you hear about, this one really made a splash. Keep up the good work.

mode_13h - Thursday, November 11, 2021 - link
It's certainly neat tech & quite an achievement.

Sets a bad precedent, though. We certainly don't need lots of copycats following in their footsteps, sopping up whatever tidbits of fab capacity remain.