Starting off the week bright and early, NVIDIA this morning announced that they’re acquiring datacenter networking and interconnect firm Mellanox. With a price tag of $6.9 billion, the acquisition will vault NVIDIA deep into the datacenter networking market, making them one of the leading vendors virtually overnight.

Mellanox is not a name we normally see much here at AnandTech, as it’s often a company working in the background of bigger projects. Mellanox specializes in datacenter connectivity, particularly high-bandwidth Ethernet and InfiniBand products for use in high-performance systems. Overall, their technology is used in over half of the TOP500-listed supercomputers in the world, as well as countless datacenters. So depending on which metrics you use and how widely you define the market, they’re generally a top-tier competitor or a market leader in the datacenter networking space.

Meanwhile, with NVIDIA’s own datacenter and HPC revenues growing by leaps and bounds over the last few years – thanks in big part to the machine learning boom – NVIDIA has decided to expand their datacenter product portfolio by picking up Mellanox. According to NVIDIA, acquiring the company will not only give them leading-edge networking products and IP, but it will also let them develop in-house the high-performance interconnects needed to help their own high-performance compute products scale better.

Like many other companies in the datacenter space, NVIDIA already has significant dealings with Mellanox. The company’s DGX-2 systems incorporate Mellanox’s controllers for multi-node scaling, and on an even bigger scale, Mellanox’s hardware is used in both the Summit and Sierra supercomputers, both of which are also powered by NVIDIA GPUs. So acquiring the company gives NVIDIA some vertical integration to leverage for future system sales, as well as a way to further broaden their overall product offerings beyond GPUs.

In fact, this will be about the least GPU-like product line in NVIDIA’s portfolio once the deal closes, as all of NVIDIA’s other active product lines are ultimately compute products of some sort. To put the size of these businesses in perspective, however, Mellanox is a fraction of NVIDIA’s size, and so is its revenue. Similarly, by 2023 NVIDIA is expecting a $61B total addressable market for compute plus high-speed networking – but only $11B of that is networking. So Mellanox’s networking hardware will remain one small piece of a much bigger NVIDIA.

As for the deal itself, NVIDIA will be paying $125/share for Mellanox, a 14% premium over Mellanox’s previous closing price. Notably, this is going to be an all-cash transaction: rather than buying out Mellanox’s shareholders with equity in NVIDIA, the company will instead pay for Mellanox outright via their ample (and growing) cash reserves. Though if reports are to be believed, the timing of this deal was spurred by Mellanox more than NVIDIA – Mellanox had put itself on the market and was supposedly fielding several bidders, so NVIDIA needed to spend the cash now if they didn’t want to miss the chance to buy a high-end networking company.

Finally, along with their own vertical integration plans, it sounds like NVIDIA intends to keep the rest of the Mellanox networking business largely status quo, including retaining the company’s offices in Israel as well as its existing sales & support infrastructure. Mellanox was already a profitable company – which helps NVIDIA’s own bottom line – so NVIDIA doesn’t necessarily need to change the company’s direction to profit from its new acquisition.

Source: NVIDIA

34 Comments

  • londedoganet - Monday, March 11, 2019 - link

    “paratactically”?
  • ramdas2m - Monday, March 11, 2019 - link

    What happens to Nvlink now ?
  • Yojimbo - Monday, March 11, 2019 - link

    NVLink is currently for intranode communication. Infiniband and ethernet are for internode communication. Though I can imagine NVIDIA wanting to work on some sort of hybrid that allows multiple nodes to be linked together more closely than Infiniband can do. Surely that would be several years out, though, unless they are already well on their way developing it.
  • magreen - Monday, March 11, 2019 - link

    I had no idea that was a word until I looked it up just now.

    However, it's definitely used wrongly in this article.
  • The Chill Blueberry - Monday, March 11, 2019 - link

    It's my new favorite word.
  • Ryan Smith - Monday, March 11, 2019 - link

    Apparently MS Word knew it was a word as well. How it got there from "practically", I have no idea.
  • FreckledTrout - Monday, March 11, 2019 - link

    I came, I saw, I bought Mellanox!
  • atomt - Monday, March 11, 2019 - link

    I fear for the future of Mellanox open source engagement. They were getting very good at it, but NVIDIA is a very closed company. Hopefully the NVIDIA lawyers won't meddle too much.
  • nathanddrews - Monday, March 11, 2019 - link

    NVIDIA NetWorks.
  • inighthawki - Monday, March 11, 2019 - link

    This is true for some stuff like their drivers, but not true for others. PhysX, for example, is completely open source on GitHub. You just need to sign up for access, which doesn't require anything more than a dev account (which is free to sign up for).
