AMD’s Epyc holds anywhere from a 50-200% performance win over Intel’s Xeon lineup, as we covered in detail earlier. Factor in CPU pricing and AMD’s advantage grows to a high single-digit multiple or more over Intel, depending on the benchmark. There are some circumstances where Intel can still win, but we consider these pathological corner cases, not mainstream markets. That is at MSRP though, what about the real world?
Once again SemiAccurate told you how Intel is discounting its top-end Xeons, and there is more happening on this front, but that is a topic for another day. Some analysts contend that if Intel discounts enough, it can still make a tidy profit and sell lots of chips. They are wrong. If you look at cost/CoGS and actual volume selling prices, things get very ugly very fast for Intel. Expand that out to TCO with real-world prices and Intel can’t actually sell Xeons at a profit against Rome. For the record, Rome is very profitable at price levels where Intel is under water.
Let’s look at the details, including some costs and Tier 1 pricing data, but we will start with a disclaimer: we won’t attempt to explain how we came to the numbers presented or why, just that we have the utmost confidence in them. SemiAccurate said much the same thing about our performance numbers a year ago, so make of this caveat what you will. The most specific guidance we can give is that these figures came from multiple OEMs, ODMs, customers, and other industry sources. Feel free to substitute your own cost and price estimates; the conclusions are unlikely to change.
Before we get to those numbers, a few definitions. There are three Intel Cascade Lake dies; from smallest to largest they are called LCC, HCC, and XCC. Core counts are 10, 18, and 28 respectively, and the die sizes are 298.08, 427.68, and 601.92mm^2 respectively, at least for the Skylake-SP variants. Cascade Lake is likely a bit bigger, but Intel refuses to release die sizes and we haven’t bothered to measure the ‘new’ chips.
AMD’s Rome is a different beast. It is a multi-chip module (MCM) with eight 7nm dies called CCDs, each with up to eight cores, plus a 14nm I/O die (IOD) to connect them all. The CCDs are 74mm^2 each and the IOD is 416.2mm^2, for a total of 1008.2mm^2 of silicon, 592mm^2 on 7nm and 416.2mm^2 on 14nm. Because the CCDs are relatively tiny, significantly smaller than an iPhone CPU on the same process, yields should be excellent. On top of that, dies with defects can be salvaged for lower core count variants, and the massive caches are effectively defect tolerant. The IOD is in a similar situation for different reasons.
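The area arithmetic above, and the yield gap it implies, can be sketched with a textbook Poisson yield model. This is a minimal illustration; the defect density used is a hypothetical value for the sake of the example, not a disclosed TSMC or Intel figure:

```python
import math

# Die areas in mm^2: Skylake-SP XCC for Intel, Rome CCD/IOD for AMD
XCC_AREA = 601.92
CCD_AREA = 74.0
IOD_AREA = 416.2

# Total Rome silicon: eight CCDs plus one IOD
rome_total = 8 * CCD_AREA + IOD_AREA  # 592 + 416.2 = 1008.2 mm^2

def poisson_yield(area_mm2, defects_per_mm2):
    """Simple Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-area_mm2 * defects_per_mm2)

# Assumed defect density of 0.1 defects/cm^2, purely illustrative
D = 0.001

print(f"Rome total silicon: {rome_total:.1f} mm^2")
print(f"Yield of one 74mm^2 CCD:  {poisson_yield(CCD_AREA, D):.1%}")
print(f"Yield of one 602mm^2 XCC: {poisson_yield(XCC_AREA, D):.1%}")
```

At any plausible defect density the tiny CCD yields dramatically better than a monolithic 602mm^2 XCC, which is the structural point of the MCM approach.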
In short, AMD should have stunningly high yields on the CCDs, which are the expensive bit. If you carefully parse the SKU list you will see that there are Epyc models that can use CCDs with only two cores active, so you can assume much better functional yields than most pundits are claiming. (Note: We strongly believe that AMD’s Rome yields are fabulous but can’t get into details.) Intel likely has very strong yields on Cascade Lake because it is on an ancient 14nm process, but effective yields are still below Intel’s rosy but number-free public statements. (Same caveat here as above, folks, we have good reason to say this.)
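The salvage effect can be illustrated with a hedged back-of-envelope model. Both numbers here, the defect density and the fraction of CCD area that can be salvaged by disabling cores or cache, are illustrative assumptions, not AMD data:

```python
import math

# Hypothetical salvage model for a 74mm^2 Rome CCD. A defect landing
# in core or cache area can be fused off and the die sold as a
# lower-core-count SKU; a defect elsewhere kills the die.
CCD_AREA = 74.0
SALVAGEABLE_FRACTION = 0.6  # assumed share of area that is core/cache
D = 0.002                   # assumed defect density, defects/mm^2

perfect = math.exp(-CCD_AREA * D)  # no defects: all eight cores work
fatal = 1 - math.exp(-CCD_AREA * (1 - SALVAGEABLE_FRACTION) * D)
salvageable = 1 - perfect - fatal  # sellable at a reduced core count

print(f"Perfect CCDs:     {perfect:.1%}")
print(f"Salvageable CCDs: {salvageable:.1%}")
print(f"Effective yield:  {perfect + salvageable:.1%}")
```

The effective (sellable) yield is everything without a fatal defect, which is why SKUs that accept CCDs with as few as two live cores push functional yields well above the naive perfect-die number.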
Note: The following analysis is for professional-level subscribers only.
Disclosures: Charlie Demerjian and Stone Arch Networking Services, Inc. have no consulting or investment relationships with, and hold no investment positions in, any of the companies mentioned in this report.
Can Intel sell Xeons at a profit vs AMD’s Rome? - Aug 16, 2019