There are multiple reasons to need a PCIe switch: to expand PCIe connectivity to more devices than the CPU can support, to extend a PCIe fabric across multiple hosts, to provide failover support, or to increase device-to-device communication bandwidth in limited scenarios. With the advent of PCIe 4.0 processors and devices such as graphics cards, SSDs, and FPGAs, an upgrade from the range of PCIe 3.0 switches to PCIe 4.0 was needed. Microchip has recently announced its new Switchtec PAX line of PCIe switches, offering variants with up to 100 lanes, support for 52 devices, and 174 GBps of switching capability.

For readers not embedded in the enterprise world, you may remember that a number of PCIe switches have entered the consumer market in the past. Initially we saw devices like NVIDIA's NF200 appear on high-end motherboards like the EVGA SR-2, and then the PLX PEX switches on Z77 motherboards, allowing 16-lane CPUs to offer 32 lanes of connectivity. Some vendors even went a bit overboard, offering dual switches, up to 22 SATA ports via an add-in LSI RAID controller, and four-way SLI connectivity, all through a 16-lane CPU.

Recently, we haven’t seen much consumer use of these big PCIe switches. This is down to a couple of main factors: PLX was acquired by Avago in 2014 in a deal that valued the company at $300M, and seemingly overnight the cost of these switches increased three-fold (according to my sources at the time), making them unpalatable for consumer use. The next generation of PEX 9000 switches was, in contrast to the PEX 8000 series we saw in the consumer space, laden with features such as switch-to-switch fabric connectivity and failover support. Avago then purchased Broadcom and renamed itself Broadcom, but the situation remains the same: the switches are focused on the server space, leaving the market ripe for competition. Enter Microchip.

Microchip has been on my radar for a while, and I met with them at Supercomputing 2019. At the time, when asked about PCIe 4.0 switches, I was told ‘soon’. The new Switchtec PAX switches are that line.

There will be six products, varying from 28-lane to 100-lane support, with bifurcation down to x1. These switches operate in an all-to-all capacity, meaning any lane can be configured as upstream or downstream. Thus if a customer wanted a 1-to-99 conversion, despite the potential bottleneck, it would be possible. The new switches support per-port hot-plug, use low-power SerDes connections, support OCuLink, and can be used with passive, managed, or optical cabling.
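To make the all-to-all idea concrete, below is a minimal sketch in Python of how a 100-lane switch could be partitioned into upstream and downstream ports with bifurcation down to x1. This is purely illustrative; the port names, widths, and validation rules are assumptions for the example, not Microchip's actual Switchtec PAX configuration interface.

```python
# Hypothetical model of partitioning a 100-lane, all-to-all PCIe switch.
# Names and rules are assumptions for illustration only; this is not
# Microchip's Switchtec PAX configuration API.

TOTAL_LANES = 100
VALID_WIDTHS = {1, 2, 4, 8, 16}  # bifurcation steps down to x1

def validate_ports(ports):
    """ports: list of (name, width, direction) tuples, direction in {'up', 'down'}."""
    used = 0
    for name, width, direction in ports:
        if width not in VALID_WIDTHS:
            raise ValueError(f"{name}: x{width} is not a valid port width")
        if direction not in ("up", "down"):
            raise ValueError(f"{name}: direction must be 'up' or 'down'")
        used += width
    if used > TOTAL_LANES:
        raise ValueError(f"{used} lanes requested, only {TOTAL_LANES} available")
    if not any(d == "up" for _, _, d in ports):
        raise ValueError("at least one upstream port is required")
    return used

# The extreme 1-to-99 case mentioned above: one x1 upstream link feeding
# ninety-nine x1 downstream devices (a severe bottleneck, but allowed).
config = [("host", 1, "up")] + [(f"dev{i}", 1, "down") for i in range(99)]
print(validate_ports(config), "lanes in use")
```

The point of the 1-to-99 example is that the hardware does not force a fixed upstream/downstream split; the bottleneck is purely a bandwidth trade-off left to the system designer.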

Customers for these switches will have access to real-time diagnostics for signaling, as well as fabric management software for the end-point systems. The all-to-all connectivity supports partial chip failure and bypass, along with partial reset features. This makes building a fabric across multiple hosts and devices fairly straightforward, with a variety of topologies supported.
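As a rough illustration of what a fabric manager has to reason about, the sketch below models a two-switch fabric as a graph and checks whether each endpoint still has a path to a host when one switch is bypassed, i.e. the kind of partial-failure scenario the all-to-all design is meant to tolerate. The topology, node names, and logic are hypothetical and are not taken from Microchip's fabric management software.

```python
from collections import deque

# Hypothetical fabric: hosts and endpoints hang off two cross-linked switches.
# Topology and names are illustrative only.
fabric = {
    "hostA": {"sw0"},
    "hostB": {"sw1"},
    "sw0":   {"hostA", "sw1", "ssd0", "gpu0"},
    "sw1":   {"hostB", "sw0", "ssd1", "gpu1"},
    "ssd0":  {"sw0"}, "gpu0": {"sw0"},
    "ssd1":  {"sw1"}, "gpu1": {"sw1"},
}

def reachable(graph, src, dst, failed=frozenset()):
    """Breadth-first search that skips failed nodes (e.g. a bypassed switch)."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph[node] - seen:
            if nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

# With sw0 bypassed, hostB can still reach its own devices, but hostA is cut
# off; a redundant link from hostA to sw1 would remove that single point of failure.
print(reachable(fabric, "hostB", "ssd1", failed={"sw0"}))  # True
print(reachable(fabric, "hostA", "gpu0", failed={"sw0"}))  # False
```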

When asked, pricing was not given, which means it will depend on the customer and volume. We can imagine a vendor like Dell or Supermicro, if they don't have fixed contracts for Broadcom switches, looking into these solutions for distributed implementations or storage devices. Some of the second/third tier server vendors I spoke to at Computex were only just deploying PEX 9000-series switches, so deployment of Gen 4 switches might be more of a 2021/2022 target.

Those interested in Microchip are advised to contact their local representative.

Users looking for a PCIe switch-enabled consumer motherboard should look at Supermicro’s new Z490 motherboards, which use PEX 8747 chips to expand the PCIe offering on Intel’s Comet Lake from 16 lanes to 32 lanes.

Source: Microchip

Comments

  • blaktron - Monday, June 1, 2020 - link

    And run your GPU through your chipset? Why? It's a more complex architecture with only downsides for like 99% of users.
  • sftech - Monday, June 1, 2020 - link

    My R9 295X2 shipped with a PLX switch and was quick. That said, a PCIe switch is a high-end option, not a low-cost venture.
  • bananaforscale - Tuesday, June 2, 2020 - link

    The 99% already have options. Nobody's saying all boards should have a PCIe switch.
  • Pewzor - Monday, June 1, 2020 - link

    You don't want to run PCIe 4.0 through chipsets.
    What does Intel have to offer currently anyway?
  • dotjaz - Tuesday, June 2, 2020 - link

    That's not what switches do; it's a SWITCH, not a converter. PCIe 4.0 x8 would be provided as-is.
  • eek2121 - Tuesday, June 2, 2020 - link

    I don’t think you understand what a switch is.
  • Valantar - Tuesday, June 2, 2020 - link

    I sincerely hope you're talking about an HEDT or server platform. Otherwise, you would be increasing base motherboard costs by at least a hundred dollars, though more likely 2-3x that. Traces for 64 PCIe lanes would require more PCB layers, the switch itself would be expensive, etc.
  • Arsenica - Monday, June 1, 2020 - link

    The consumer market for PCIe switches is dead, and these Microchip switches are not for consumers (making the lengthy introduction to this article redundant).

    Multi-GPU systems for consumers are a thing of the past, and very few people need more than one NVMe drive.

    PCIe 4.0 switches currently would only make sense for an AMD B550 system wanting more than one PCIe 4.0 M.2 slot, and for that niche using the X570 chipset makes more sense. Future platforms won't even allow for this marginal case, as all PCIe lanes will be upgraded to 4.0.

    People needing more than
  • Tomatotech - Monday, June 1, 2020 - link

    I very much want more than one NVMe drive to be standard on all mobos. SATA SSDs are a dying technology, and even cheap SSDs max the interface out (for bulk transfer at least; there's still a way to go on random).

    I've run the same daily-use computer on a fast SATA SSD and a medium-speed NVMe drive (2 GB/sec max), and there was a significant performance uplift when moving to NVMe. Everything 'felt' far faster, not just app opening.

    Also, NVMe is easier to install for newbies: slot it in, do up one screw, and that's it. Compare that to SATA, where you fiddle around with up to four screws and two different cables per SSD, and one of those cables often goes to other drives as well, or might not have any spare connectors available.
  • supdawgwtfd - Monday, June 1, 2020 - link

    I call bullshit.

    There is no "significant performance uplift" moving from a decent SATA SSD to NVMe.

    You are talking complete rubbish.

    Stop lying.

    There is a significant performance uplift moving from HDD to any SSD. Going from SATA to NVMe there are only small benefits which you wouldn't notice in daily use.

    High-end use? Sure, if you are moving large amounts of data around or need high-speed reads.

    Daily usage is neither of these.
