Now that TR4 motherboards with IPMI are appearing, I guess I can aim at (or at least hope for) a new Threadripper line-up. 64 to 128 lanes of PCIe 4.0 with ECC and an unlocked multiplier, all for a lower price, sounds quite sweet.
I wouldn't expect a massive IO bump but a small one is certainly within reason: AMD increased the number of PCIe lanes from 128 to 130 on Epyc. Why? Vendor feedback regarding storage and IPMI IO. Most IPMI and associated IO take up two PCIe lanes, which left 126 for use. Epyc was great for lots of NVMe drives but was limited to 31.5 drives at full 4x PCIe 3.0 bandwidth. Those extra two lanes bring that figure up to a nice clean 32 drives without sacrificing IPMI.
For Threadripper, something similar exists: 60 usable PCIe lanes. I can see this being bumped up to 64 or 66 so that four full 16x PCIe 4.0 slots are possible.
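The lane arithmetic in the comment above can be sketched quickly (figures as quoted in the comment, not checked against official spec sheets):

```python
# Lane math from the comment above: Epyc's bump from 128 to 130 PCIe lanes
# yields a clean 32 NVMe drives at full x4 once IPMI takes its two lanes.

LANES_PER_NVME = 4  # full-bandwidth x4 link per drive

def max_x4_drives(total_lanes: int, ipmi_lanes: int) -> float:
    """Drives that fit at full x4 after reserving lanes for IPMI."""
    return (total_lanes - ipmi_lanes) / LANES_PER_NVME

# Original Epyc: 128 lanes, 2 eaten by IPMI -> 31.5 (one drive stuck below x4)
print(max_x4_drives(128, 2))  # 31.5
# Rome: 130 lanes, 2 for IPMI -> a clean 32 drives
print(max_x4_drives(130, 2))  # 32.0
```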
If you want 128 lanes of PCIe, I think you'll need to buy EPYC. They need to keep some product differentiation between these two, and I don't see why PCIe lane count wouldn't continue to be part of it.
Apple interestingly sums up L2 and L3 amounts in the stated "cache size" on their website. The SKUs used in the Mac Pro are most likely the following:
28-core: Xeon W-3275M, $7453 (compared to $4449 for the non-M version with max 1 TB RAM)
24-core: Xeon W-3265M, $6353 (vs $3349 for the non-M version)
16-core: Xeon W-3245, $1999
12-core: Xeon W-3235, $1398
8-core: Xeon W-3223, $749
With the choices of 24 and 28-core models, Apple is assuming people who need the higher CPU performance also need more than 1 TB of RAM. There are extremely few use cases that need that much, so I think there should be options for non-M versions too, in order to save those $3000.
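The "$3000 to save" above falls straight out of the list prices quoted in the comment (prices as quoted there, not independently verified):

```python
# List prices as quoted in the comment above. The "M" suffix buys only the
# >1 TB memory limit; the silicon is otherwise the same, so the difference
# is pure segmentation pricing.

prices = {
    "W-3275":  4449, "W-3275M": 7453,   # 28-core
    "W-3265":  3349, "W-3265M": 6353,   # 24-core
}

premium_28 = prices["W-3275M"] - prices["W-3275"]
premium_24 = prices["W-3265M"] - prices["W-3265"]
print(premium_28, premium_24)  # 3004 3004 -- a flat ~$3000 for the M bit
```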
This makes the $20k+ estimates for a topped-out Mac Pro seem a little more reasonable, since the highest CPU you can get from Apple might cost up to $7453 list price alone. The $6k base price for the 8-core, 32 GB of RAM, a single Radeon Pro 580X GPU, and a 256 GB SSD seems crazier, though, when the CPU lists at $749 and a few hundred each for the RAM, SSD, and GPU puts you at best around $2k... I guess, the "Apple Tax" aside, Apple is pricing this machine so the upgraded versions make more sense, which I assume is what professionals will want anyway if they can afford this price bracket :)
Intel is smoking something if they think they can compete against Threadripper at those prices. Intel still wants people to believe that an 8-core processor is the bees-knees. Oh yeah... and yet another socket change. The only thing decent about these CPUs is the memory support... but still, by taking away the socket upgrade option, I wonder how Intel could possibly believe that anyone would want to build completely new systems around this new form factor.
I think most workstation users don't upgrade their CPU from one generation to the next. The only case for this would be if you buy-in at a low core count and go for more cores + newer gen. But, honestly, most workstation users probably never even open their case. A big upgrade for them would be the SSD or the GPU.
The bigger issue is for OEMs, who get only one CPU generation to reap the investment they put into system design & qualification. It might help that Intel is reusing a server socket for this, but of course with some pin-out changes to handle the extra PCIe lanes (probably re-purposing the UPI pins, since these are single-socket systems).
Hmmm... the 3000-series' Max Turbo only goes up to 4.4 GHz, while some members of the 2000-series could reach 4.5 GHz.
I don't know what I'd do with 48 lanes. All I really wanted was 20 CPU-connected lanes + DMI, like the original E3-series had. Then, I could plug my GPU in a x16 slot and my SSD in a x4 CPU-direct slot.
And I'll stick with a ring-bus, thank you. Mesh scales better, but I don't currently fancy more than about 8 cores.
With all this said, it should come as no surprise that I'm keenly interested in Ryzen 3000. I just wish they made an 8-core SKU with turbos as high as their 16-core can reach. I'd pay extra for that, but I'm sure they're binning their best dies for EPYC and the 16-core SKU.
Yes, something like a 3850X with 4.7 GHz turbo, maybe even 120W TDP or something to give it room to stretch its legs a bit. Maybe $449, right between the 3800X and 3900X, but basically an 8/16 with the maximum possible bin/clocks out of the box.
Apple 2013 "trash can" Mac Pro was based on this "most workstation users probably never even open their case" line of thinking. It wasn't very well received and certainly didn't age well.
56 Comments
brakdoo - Monday, June 10, 2019 - link
Who gives a damn about Optane DIMMs? Only SAP Hana and similar software that basically just need faster SSDs. That's not workstation stuff.
Why don't you do some real research on Optane instead of believing all that marketing? You even had a system to test, if I'm not mistaken.
Mikewind Dale - Wednesday, June 12, 2019 - link
I've got a friend who needs a few hundred GB of RAM to do her work. (She's an economics professor.) We're tentatively planning on my building her an EPYC with 16 modules of 32 GB, because that's the most RAM she can afford. (64 GB modules are just too expensive.) I'm looking forward to the day when Optane DIMMs make it more affordable to have large amounts of RAM.
mode_13h - Friday, June 21, 2019 - link
Is the data access pattern truly random? If not, you could get a fast NVMe drive and use it as swap.
For this, you could even use Intel's Optane-based 900P, which is now available at some discount.
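The idea above — parking data that doesn't fit in RAM on a fast drive and letting the OS page it in on demand — can also be done explicitly with a memory-mapped file. A minimal sketch (a temp file stands in for a path on the NVMe drive; sizes are illustrative):

```python
# Keep a large buffer in a file on the fast drive and map it into memory,
# letting the OS page it in and out on demand. In the real use case you'd
# open a file on the NVMe drive's filesystem instead of a temp file.
import mmap
import tempfile

SIZE = 1 << 20  # 1 MiB for the sketch; hundreds of GiB in the real use case

with tempfile.TemporaryFile() as f:
    f.truncate(SIZE)                      # sparse file: nothing written yet
    with mmap.mmap(f.fileno(), SIZE) as buf:
        buf[0:5] = b"hello"               # touching a page faults it in
        data = bytes(buf[0:5])

print(data)  # b'hello'
```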
twtech - Wednesday, June 12, 2019 - link
There are a lot of different uses for a workstation. For IO-bound workloads, some Optane memory could be an effective alternative to setting up a RAM drive for very frequently accessed files.
mode_13h - Friday, June 21, 2019 - link
With fast NVMe SSDs and the way operating systems automatically use free RAM as a disk cache, there's almost no use case for RAM disks anymore. Maybe in some specialized server applications, but that's about it.
azfacea - Monday, June 10, 2019 - link
W-3275 28C / 56T 205 W 2.5 GHz 4.4 GHz 4.6 GHz 1 TiB $4449
W-3275M 28C / 56T 205 W 2.5 GHz 4.4 GHz 4.6 GHz 2 TiB $7453
extra 3000$ for not disabling the extra memory address space. why are these ppl not in jail at this point ??
shabby - Monday, June 10, 2019 - link
Welcome to business 101 @ intel.
dgingeri - Monday, June 10, 2019 - link
It's their business and their decision. It's not government's place in such matters. If you don't like it, don't buy it. Buy the alternative. I believe there is a nice new Epyc coming out soon that will address these matters.
azfacea - Monday, June 10, 2019 - link
"It's their business and their decision" did u hear about qualcom case recently. i guess not.
also its a monopoly not a competitive market. that sort of nonsense is how you get a 65 000 $ pill from the drug companies. its also not just usa. there are 200 countries in which intel does business, that nonsense is not legal in many of those. if you or intel doesnt like their laws don't do business there.
mode_13h - Tuesday, June 11, 2019 - link
That was about forcing customers to license patents.radar86 - Wednesday, June 12, 2019 - link
You know not what you speak about. Qualcomm was about licensing patents, not product pricing.
Mikewind Dale - Wednesday, June 12, 2019 - link
"also its a monopoly not a competitive market"
TIL that AMD doesn't exist.
alysdexia - Thursday, June 13, 2019 - link
That's not a question or the name.
"its": is your ' key broken?
65 000 = 0.
alysdexia - Thursday, June 13, 2019 - link
The Epyc isn't nice < niais < nescius := not-skilled; you are.
mode_13h - Tuesday, June 11, 2019 - link
Are you serious? It's standard business practice to disable chip features in lower-end products. Everybody does it - not just Intel, and not just in CPUs.
Mikewind Dale - Wednesday, June 12, 2019 - link
Also, disabling features in lower-end products is what makes the lower-end products affordable.
It is well known (among economists, at least) that price discrimination - in which different kinds of customers are charged different prices - chiefly benefits the less wealthy, at the expense of the more wealthy. It's basically a voluntary, free-market progressive tax.
In many cases, the less wealthy only have to pay the marginal cost, while the more wealthy pay the fixed costs of production too. For example, the purchasers of low-end, castrated chips might only pay the marginal cost of that particular piece of silicon. The more wealthy purchasers of high-end chips pay for the fixed costs of the fabrication plant as a whole.
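The marginal-vs-fixed-cost argument above can be made concrete with toy numbers (mine, purely illustrative):

```python
# Toy illustration of the argument above: if high-end buyers cover the
# fixed costs, the low-end part can sell near marginal cost and the
# producer still breaks even. All numbers are invented for the sketch.

FIXED_COST = 1_000_000      # e.g. amortized fab/design cost
MARGINAL   = 50             # cost to produce one more die
HIGH_UNITS, LOW_UNITS = 10_000, 40_000

high_price = MARGINAL + FIXED_COST / HIGH_UNITS   # high end carries fixed costs
low_price  = MARGINAL                             # low end pays marginal only

revenue = high_price * HIGH_UNITS + low_price * LOW_UNITS
cost    = FIXED_COST + MARGINAL * (HIGH_UNITS + LOW_UNITS)
print(high_price, low_price, revenue - cost)  # 150.0 50 0.0 (break-even)
```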
twtech - Wednesday, June 12, 2019 - link
Same reason why this Xeon W is limited to a single-socket configuration. They want to stay competitive with AMD by offering single-socket workstation CPUs with a competitive number of cores, without cannibalizing sales of the high-end server CPUs that they want to be able to sell for $10k. And the 28-core is the same underlying chip that they're selling for $10k, and that they're going to double up on the same package to achieve their 56-core.
We're only even seeing it here as a workstation product for ~$4k because they have no other choice to remain competitive with AMD given what they have available to sell.
mode_13h - Friday, June 21, 2019 - link
You're a bit loose on some details.
First, the Xeon W-3275 actually has 64 lanes, which is not true of the Xeon Platinum 8280. Now, it's certainly the same die, but they allegedly withheld some lanes for things like in-package OmniPath and maybe FPGA dies. So, socket LGA 3647 has no pins to spare for the extra 16 lanes... unless you kill the UPI connections (which are needed for multi-CPU configs not supported by the Xeon W products).
Now, Intel being Intel, they probably used the extra PCIe lanes as a way to introduce some incompatibility between the new Xeon W processors and server boards featuring the same socket.
Second, the Xeon W-3275 turbo-boosts up to 4.4 GHz (although the page on ark.intel.com lists the "Turbo Boost Max 3.0" speed as 4.6 GHz), while the Platinum 8280 turbos only up to 4.0 GHz and has no "Turbo Boost Max 3.0". Yet both claim a TDP of 205 W. So, maybe there's some cherry-picking for the Xeon W dies?
Third, on pricing, the Xeon W-3275 is listed at $4.5k vs the Platinum 8280 at $10k. The Xeon W-3275M is listed at $7.5k vs the Platinum 8280M at $13k.
Kevin G - Monday, June 10, 2019 - link
I suspect that Intel will also move the LGA 3647 socket into the HEDT market this fall. The current line-up is a tough sell against Threadripper, which offers more cores and more PCIe lanes than Intel's current i9 line-up. While Intel may end up behind AMD in raw core count, they can reach parity in PCIe lane count and offer more memory channels.
As for Xeon W on socket LGA 2066, what other upgrades for it are there, really? Cascade Lake is mainly an errata fix and a small bump in clock speed, which isn't much to offer in this segment. If Intel changed their packaging a bit, they could potentially offer a 28 core part for socket LGA 2066, but it probably wouldn't be worth it for the low volume that would ship. For reference, Intel's high end socket LGA 2066 parts leverage a BGA package on top of another PCB which provides the socket 2066 interface. Intel does this because it re-uses the initial BGA part on socket LGA 3647. There are three versions of the LGA 3647 PCB depending on what else resides in the processor package: OmniPath, an FPGA, or nothing. This is where Intel was able to get 16 additional PCIe lanes: they were always on-die but would be routed to these additional package options.
azfacea - Monday, June 10, 2019 - link
"parity in PCIe lane count ..."
how is 128 lanes of pcie4 at parity with 64 lanes pcie3
Kevin G - Monday, June 10, 2019 - link
Last I checked, Threadripper is only at 64 PCIe 3.0 lanes (four of which go to the chipset) and quad memory channel support.
MFinn3333 - Monday, June 10, 2019 - link
/me Checks the PCIe count of EPYC...
“It is still 128, people!”
mode_13h - Tuesday, June 11, 2019 - link
ThreadRipper offers higher clocks than EPYC, because workstation users still care about single-thread performance, while server is mostly about aggregate throughput and power-efficiency.
So, I would put ThreadRipper up against Xeon W - not EPYC.
azfacea - Monday, June 10, 2019 - link
you are comparing intel CPU from mac pro last week to thread ripper from 2 years ago, intel marketing might want to hire you.
rome is 128 lanes pcie4
Kevin G - Monday, June 10, 2019 - link
AMD updated Threadripper roughly a year ago to bring it up to 32 cores. AMD has plans to update it later this year. And yes, Intel is playing catch up here and will likely match what AMD has had on the market for the better part of a year, at least until AMD's impending update at the end of this year.
Rome is targeted as a server part in particular due to its support of two sockets and a few other RAS features. Both Threadripper and these Xeon W's are single socket only. And yes, workstation Epyc boards exist, just like workstation Xeon SP boards that serve the high end of the workstation market.
PS: Rome is actually 130 PCIe lanes. AMD added two more by vendor request for IPMI so they don't have to sacrifice lane count for ordinary peripherals.
damianrobertjones - Thursday, June 13, 2019 - link
Logitech and Microsoft sell a range of affordable keyboards with working 'shift' keys. Capitals can be your friend.
mode_13h - Tuesday, June 11, 2019 - link
I don't feel like they really need more than 2066, in the HEDT space. For customers going beyond that, they can move into Intel's Xeon line. ThreadRipper competes with both, because unlike Intel, AMD didn't disable ECC support to make a lower-priced model line.
abufrejoval - Monday, June 10, 2019 - link
Looks like this could have been shorter:
1. Intel is now selling server SKUs configured for higher TDP as workstations
2. Trying to offer a $7500 alternative to the current 32 core ThreadRipper at $1800 vis-à-vis the Rome competition to come in a month or so
Should be interesting to see if AMD responds by also offering some higher-clocked Epycs to deliver 8 memory channels to those who need more bandwidth or capacity than TR can provide.
NV-RAM would be really nice to have, preferably at lower prices, higher density, and DRAM-like endurance. Wish I knew whether Epyc & cousins include all the instruction set modifications and memory controller logic required to support NV-DIMMs like Optane.
Kevin G - Monday, June 10, 2019 - link
1) This has been common practice in the market historically. The main difference has been the IO selection, which has favored more graphics and audio where there traditionally has been little need on the server side.
2a) This is Intel market segmentation at its finest. Artificially limiting memory capacity is just a stupid move, especially at this time given their competition. Intel can't uncap the Xeon W's, as the Xeon SP's all have the memory limitation and this would severely undercut them in price. If anything, they should have segmented on Optane support and let ordinary DRAM capacity be unchecked.
2b) AMD's update to Threadripper is expected in Q4 this year. It looks like AMD has significant orders placed for Epyc, so it appears that all usable 8 core dies are headed that way. There are widespread rumors that AMD has a 16 core desktop chip waiting for release, but they can't spare any 8 core dies for it yet.
2c) AMD might be holding off on a Threadripper update due to producing yet another IO die. They have an embedded market to fulfill with quad channel memory parts, but using the eight channel Epyc IO die does not seem appropriate for it. If AMD is going this route, holding off on Threadripper updates for it would make sense.
jamescox - Monday, June 10, 2019 - link
The 8 core desktop part is assumed to be a single cpu chiplet. It could be one with bad power consumption characteristics, though. The best power consumption dies will go to Epyc. They can use all kinds of different chiplets in Epyc, with 1, 2, 3, or 4 cores active per CCX. A 1 core per CCX part (16 cores) would be a strange chip, but the massive per-core cache could be useful in some applications. They could also disable entire CCXs for a 32 core part. The main concern is power consumption, so Epyc will get the best bins for those. The 16 core desktop part will need a relatively good power consumption bin though, so that may be in short supply and/or expensive.
I have wondered if they put some extra links on the desktop IO die to allow them to use two small IO dies and 4 cpu chiplets for ThreadRipper. That would allow relatively cheap ThreadRipper processors up to 32 cores before switching to the expensive Epyc IO die for higher core counts.
Kevin G - Tuesday, June 11, 2019 - link
AMD hasn't released anything on specific models: they've just indicated that they'll go up to 64 cores. How they do the lesser parts is speculative and, well, AMD has LOTS of options. I would fathom that they'll lower the number of cores per CCX to maximize cache per core where possible.
What would be nice would be a server motherboard that could identify which chiplet and CCX a core is found on, plus the ability to selectively disable it. Take a 32 core part: off hand there are three main ways of doing it: 8 chiplets each with one CCX disabled, 8 chiplets with two cores in each CCX disabled, or just four fully functional chiplets. I wouldn't expect large performance differences, and they may only appear under certain workloads. Good research project for those doing a deep dive into the design.
Regardless, AMD has lots of options and I'm pretty sure they can use every chiplet manufactured in some fashion, except the total duds. Very cost effective from a manufacturing standpoint.
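The ways of harvesting a 32 core part described above can be enumerated mechanically, under the assumed Rome topology of 8 chiplets x 2 CCX x 4 cores (a sketch, not AMD's actual binning policy):

```python
# Enumerate (chiplets, active CCX per chiplet, active cores per CCX) combos
# that yield a target core count, assuming the Rome-style topology of up to
# 8 chiplets, 2 CCX per chiplet, and 4 cores per CCX.

def configs(target_cores, max_chiplets=8, ccx_per_chiplet=2, cores_per_ccx=4):
    out = []
    for n in range(1, max_chiplets + 1):
        for ccx in range(1, ccx_per_chiplet + 1):
            for cores in range(1, cores_per_ccx + 1):
                if n * ccx * cores == target_cores:
                    out.append((n, ccx, cores))
    return out

for combo in configs(32):
    print(combo)
# The three ways from the comment should appear:
#   8 chiplets, one CCX disabled each      -> (8, 1, 4)
#   8 chiplets, 2 cores disabled per CCX   -> (8, 2, 2)
#   4 fully functional chiplets            -> (4, 2, 4)
```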
techguymaxc - Monday, June 10, 2019 - link
The beauty of computers is that they can be used to run just about any workload one can dream up. For many customers, this necessitates virtualization. Most virtual workloads are memory-constrained rather than CPU-constrained. If you need to host databases or multiple environments (dev/test), you're going to need a lot of RAM.
jamescox - Monday, June 10, 2019 - link
The full 8 chiplet Epyc 2 will have 256 MB of on-package L3 cache. The largest Intel goes up to right now is 38.5 MB. The massive cache will reduce the load on the memory controllers significantly. Intel cannot compete with that amount of cache with 14 nm parts.
MattZN - Monday, June 10, 2019 - link
That is a mind-boggling amount of L3 cache. Even the 'consumer' Ryzen chips have mind-boggling amounts of L3. The 12-core will have two chiplets... 64MB of L3.
-Matt
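The cache totals in the two comments above both follow from one figure — 32 MB of L3 per Zen 2 chiplet (a figure implied by the comments, not quoted from AMD):

```python
# Cache arithmetic behind the figures above, assuming 32 MB of L3
# per Zen 2 chiplet.

L3_PER_CHIPLET_MB = 32

print(8 * L3_PER_CHIPLET_MB)  # 256 -> full 8-chiplet Epyc 2
print(2 * L3_PER_CHIPLET_MB)  # 64  -> 12-core Ryzen with two chiplets
```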
PVG - Monday, June 10, 2019 - link
But the new Apple Mac Pro is NOT currently available...
melgross - Monday, June 10, 2019 - link
Yes, Apple just announced it will be September. But as far as I know, it’s the first announced system to use this, and likely the first to ship. So we’re possibly talking 2.5 months, or so.niva - Monday, June 10, 2019 - link
Dear AnandTech, why is your website inundated with ads? We understand having some ads, but now one must scroll multiple pages down before they can get to the comments. It's getting ridiculous. Please consider the user experience. I've been a reader since 2001 and while I still love your detailed reviews, the website is definitely frustrating now.
alpha754293 - Monday, June 10, 2019 - link
"The current Xeon W product line, based on Skylake-SP..."
Sorry, but that's incorrect.
The current Xeon W product line is based on Skylake-W. (Source: https://en.wikipedia.org/wiki/List_of_Intel_Xeon_m...
The difference between Skylake-W and Skylake-SP is that Skylake-W only supports up to 8 DIMMs of memory, for a maximum of 512 GB, while Skylake-SP supports up to 12 DIMMs, for up to 768 GB on non-M SKUs and 1536 GB on M SKUs.
You cannot run Skylake-W in any configuration besides single socket, whereas Skylake-SP is for dual-socket systems and up (although I suppose that, in theory, you should still be able to run Skylake-SP in a single-socket configuration if you so desire, but I don't know if that's necessarily 100% true).
However, both Skylake-W and Skylake-SP have a maximum of 48 PCIe 3.0 lanes.
Kevin G - Monday, June 10, 2019 - link
LGA 2066 Xeon W is indeed based upon Skylake-SP. Intel simply changed the packaging so that two of the memory channels are not present on LGA 2066 vs the bigger LGA 3647. Intel was rather clever here since LGA 3647 also has several on-package options. Take a look at a delidded LGA 2066 part:
https://www.gamersnexus.net/news-pc/2943-intel-i9-...
Notice that there is one PCB on top of another? Now take a look at a delidded LGA 3647 part:
https://www.overclock3d.net/news/cpu_mainboard/pro...
Notice that it is the same PCB-on-PCB style setup? The lower PCB dictates the socket and any on-package extras, whereas the upper one houses the CPU.
Similarly, the Skylake-SP dies have always had 64 PCIe lanes, but 16 of them never made it to the motherboard: they were for Omni-Path or FPGA options in the package.
ZoZo - Monday, June 10, 2019 - link
It would have been nice to have a W-3235X or W-3245X. Or if they could add ECC support to the HEDT platform, that would do the trick for me too.
ZoZo - Monday, June 10, 2019 - link
Now that TR4 motherboards with IPMI are appearing, I guess I can aim at/hope for any new Threadripper line-up. 64 to 128 lanes of PCIe 4.0 with ECC and an unlocked multiplier, all for a lower price, sounds quite sweet.
Vlad_Da_Great - Monday, June 10, 2019 - link
NEXT year.
Kevin G - Wednesday, June 12, 2019 - link
I wouldn't expect a massive IO bump, but a small one is certainly within reason: AMD increased the number of PCIe lanes from 128 to 130 on Epyc. Why? Vendor feedback regarding storage and IPMI IO. IPMI and associated IO typically take up two PCIe lanes, which left 126 for usage. Epyc was great for lots of NVMe drives but was limited to 31.5 drives at full 4x PCIe 3.0 bandwidth. Those extra two lanes bring that figure up to a nice clean 32 drives without sacrificing IPMI.
For Threadripper, something similar exists: 60 usable PCIe lanes. I can see this being bumped up to 64 or 66 so that four full 16x PCIe 4.0 slots are possible.
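The lane arithmetic above can be checked back-of-the-envelope; the two-lane IPMI overhead and x4-per-drive link are the figures assumed in the comment:

```python
# Assumed figures from the comment: 2 lanes consumed by IPMI/BMC,
# and a full-bandwidth x4 link per NVMe drive.
IPMI_LANES = 2
LANES_PER_NVME = 4

# First-gen Epyc: 128 total lanes.
old_usable = 128 - IPMI_LANES            # 126 usable
old_drives = old_usable / LANES_PER_NVME  # 31.5 drives -- awkward

# Epyc with the bump to 130 lanes.
new_usable = 130 - IPMI_LANES            # 128 usable
new_drives = new_usable / LANES_PER_NVME  # a clean 32 drives

print(old_drives, "->", new_drives)
```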
mode_13h - Friday, June 21, 2019 - link
If you want 128 lanes of PCIe, I think you'll need to buy EPYC. They need to keep some product differentiation between these two, and I don't see why PCIe lane count wouldn't continue to be part of it.
AdditionalPylons - Monday, June 10, 2019 - link
Apple interestingly sums up L2 and L3 amounts in the stated "cache size" on their website. The SKUs used in the Mac Pro are most likely the following:
28-core: Xeon W-3275M, $7453 (compared to $4449 for the non-M version with max 1 TB RAM)
24-core: Xeon W-3265M, $6353 (for $3349 non-M version)
16-core: Xeon W-3245, $1999
12-core: Xeon W-3235, $1398
8-core: Xeon W-3223, $749
With the choices of 24- and 28-core models, Apple is assuming people who need the higher CPU performance also need more than 1 TB of RAM. There are extremely few use cases that need that much, so I think there should be options for the non-M versions too, in order to save those ~$3000.
Skeptical123 - Monday, June 10, 2019 - link
This makes the $20k+ estimates for the new topped-out Mac Pros seem a little more reasonable, since the highest CPU you can get from Apple might cost up to $7453 list price alone. Though the $6k base price for the 8-core, 32 GB of RAM, a single Radeon Pro 580X GPU, and a 256 GB SSD seems more crazy if the CPU list cost is $749; with a few hundred each for the RAM, SSD, and GPU you're at best around $2k... I guess, regardless of the "Apple Tax", Apple is pricing this machine to make more sense for the upgraded versions, which I assume is what professionals will want anyway if they are able to afford this price bracket :)
MattZN - Monday, June 10, 2019 - link
Intel is smoking something if they think they can compete against Threadripper at those prices. Intel still wants people to believe that an 8-core processor is the bees-knees. Oh yeah... and yet another socket change. The only thing decent about these CPUs is the memory support... but still, by taking away the socket upgrade option, I wonder how Intel could possibly believe that anyone would want to build completely new systems around this new form factor.
-Matt
mode_13h - Tuesday, June 11, 2019 - link
I think most workstation users don't upgrade their CPU from one generation to the next. The only case for this would be if you buy in at a low core count and go for more cores + a newer gen. But, honestly, most workstation users probably never even open their case. A big upgrade for them would be the SSD or the GPU.
The bigger issue is for OEMs, who get only one CPU generation to reap the investment they put into system design & qualification. It might help that Intel is reusing a server socket for this, but of course with some pin-out changes to handle the extra PCIe lanes (probably re-purposing the UPI pins, since these are single-socket systems).
mode_13h - Tuesday, June 11, 2019 - link
Hmmm... the 3000-series' max turbo only goes up to 4.4 GHz, while some members of the 2000-series could reach 4.5 GHz.
I don't know what I'd do with 48 lanes. All I really wanted was 20 CPU-connected lanes + DMI, like the original E3 series had. Then I could plug my GPU into a x16 slot and my SSD into a x4 CPU-direct slot.
And I'll stick with a ring-bus, thank you. Mesh scales better, but I don't currently fancy more than about 8 cores.
With all this said, it should come as no surprise that I'm keenly interested in Ryzen 3000. I just wish they made an 8-core SKU with turbos as high as their 16-core can reach. I'd pay extra for that, but I'm sure they're binning their best dies for EPYC and the 16-core SKU.
AshlayW - Tuesday, June 11, 2019 - link
Yes, something like a 3850X with a 4.7 GHz turbo, maybe even a 120W TDP or something to give it room to stretch its legs a bit. Maybe $449, right between the 3800X and 3900X, but basically an 8/16 with the maximum possible bin/clocks out of the box.
PaulStoffregen - Tuesday, June 11, 2019 - link
Apple's 2013 "trash can" Mac Pro was based on this "most workstation users probably never even open their case" line of thinking. It wasn't very well received and certainly didn't age well.
mode_13h - Friday, June 21, 2019 - link
It didn't age well because they didn't update it for too long.
It's a fair point that workstation users want to do things like upgrade video cards.
AshlayW - Tuesday, June 11, 2019 - link
These are DOA.
John_M - Sunday, June 16, 2019 - link
I have to say that compared with AMD's Rome EPYC offering, these Xeons look decidedly meh. And they still have the Zombieload vulnerability.mode_13h - Friday, June 21, 2019 - link
Though I'm critical of AVX-512, if you had an application that needed it, these should out-perform even 7 nm 32-core Threadrippers.
With regard to Rome, I think it will lack the performance at low thread counts, making it less competitive for a number of workstation applications.