As we exclusively reported, Nvidia (NASDAQ:NVDA) has the contract for GPUs in the upcoming MacBooks, but the open question is, “Can they supply?” Given their track record, and the current information available, you have to wonder, and wonder a lot.
Let's look at a bit of history to shed some light on the subject, and hopefully glean a little insight into 28nm chips, Kepler, and Apple. If you recall, 40nm was long and hard on Nvidia, starting with <2% yields on the initial Fermis, then climbing to a high of almost 10x that in the end. Not that the GPU ever worked right, but you can work magic with retroactively lowered targets. In the end, Nvidia blamed everyone under the sun but themselves, and there never was a high end 400 series part with all the shaders enabled, or anywhere near the promised specs.
Step forward two or so years, and you have a repeat of the same broken record for 28nm: it's not our fault. No less than Jen-Hsun himself said that Nvidia was well prepared for 28nm production, and would he fib to the analyst community? Then again, they did have to respin the 28nm shrink of Fermi, something that should be somewhere between a cakewalk and a no-brainer. Things do not appear rosy on 28nm in Santa Clara.
The kicker was at the end of our report, and not many people seemed to notice: Nvidia dumped most if not all of their early 28nm wafer starts. If things are going well on 28nm, why dump your wafer starts? If it is a shrink of Fermi, it isn't going to be logic bugs, so that leaves…? If you ask officially, all is indeed rosy, but the moves behind the scenes don't back up that confidence. On top of that, Nvidia has recently warned AIBs that the Kepler launch will be delayed "past March". I wonder who has options expiring soon?
That brings us to the latest roadmaps from 4Gamer. If you recall, SemiAccurate exclusively told you over two months ago that Nvidia has two Keplers in house, GK107 and GK117, and that they were small chips. Without rewriting the article, we can say that putting out the small chip first is an act of desperation and shows a complete lack of confidence in their ability to make a chip, much less production quantities of chips, on 28nm. When Fermi was delayed, delayed more, and respun until Nvidia changed their corporate logo to a green skein of yarn, we pointed out that this would have some serious knock-on effects for the next generation. Kepler is now horribly late, the real product, GK112, is looking unlikely for 2012, and Nvidia has snapped up 17 more JHH500 Industrial Strength Blamethrowers that BP recently idled.
Back to the 4Gamer roadmap covering the desktop parts. GK106 and GK107 are the same chip being marketed at the lower end: GK106 will have 256-bit memory and PCIe3, while GK107 will have 128-bit memory and PCIe2. The two are the same ASIC, just fused differently for different markets. GK106 is launching several months after GK107, and that is the key bit. Nvidia is likely bracing for some severe yield problems; there really isn't a reason to separate the two by that much otherwise. Also note that no shader counts have been given to AIBs, something else that strongly points to massive yield problems. If you don't know what you can make, don't promise a set number. It worked last time, right?
GK104 comes later, and that is the mainstream part sporting 384-bit memory and PCIe3. Given the memory bandwidth, it should have a healthy performance boost over the current GF104, if the power envelope allows, something that 4Gamer hints may be a problem. That is in spite of Nvidia proudly shouting about the efficiency of Kepler, something SemiAccurate's sources strongly questioned last summer.
Nvidia's methods for arriving at TDP figures have always been a bit questionable, with independent testing consistently showing notably higher numbers than officially stated. When it came to the dual 400/500 cards, aka the GTX590, this bit Nvidia in the behind, hard. Like we said years ago, Nvidia was at the power wall, and screaming at the laws of physics wasn't going to change them. GTX590s were slower than the AMD HD6990 by wide margins, and then the GTX590s literally blew up. There wasn't a second run of cards, just like we said; it quite directly could not be done with the chips they had and the specs they promised. Physics is a bitch at times. [Gravity too.]
So what does that mean for the GK110, the higher end part? If the GK104 is indeed 250W like 4Gamer says, and power management isn't at least on par with the AMD HD6000 line, it probably can't be done. Again. Or it can't be done at speeds where it makes sense to productize. Then again, Nvidia is pushing hard to raise the PCIe3 power limit to 400W per slot, something that once again casts severe doubt on their power efficiency claims. AMD, on the other hand, is not asking for the limit to be raised.
Moving on, we have the GK112, aka the real Kepler, filling the silly expensive spot. While SemiAccurate moles, the brown-spotted ones, have yet to deliver the specs of the line to our orbiting email server, it is very likely that the split between GK1x2 and the rest is the same as it was with GF1xx, i.e. GPU compute orientation for the big ones, gaming for the small. The 'high end' GK110 being made from GK104s points to this, and also backs up what our moles, the white and brown striped ones, said about Kepler losing mainstream performance efficiency to boost compute numbers. This again points to a two-SM architecture for the GK104/106/107 and more than two for the GK112.
Once again, this is a big, big problem for Nvidia. Why? If you are familiar with Nvidia naming schemes, they go GABCD, with G = GeForce, A = family name (Tesla, Fermi, Kepler etc.), B, usually a 1, being the generation of that architecture, C = the rev of that architecture, and D being the relative size, smaller numbers being larger GPUs. GK106 = first generation Kepler architecture, first rev, smallest chip so far. GK106/7 are the smallest, GK104 is larger, and the big boy should be GK102. If you remember, Fermi in GTX4xx guise was GF100/104/106/108, and GF110/114/116/118 when called GTX5xx. The latter chips came a year later.
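To make the scheme above concrete, here is a minimal sketch of a decoder for those code names. The function, field names, and family table are our own illustration, not anything Nvidia publishes:

```python
# Toy decoder for the GABCD code names described above (illustrative only).
FAMILIES = {"T": "Tesla", "F": "Fermi", "K": "Kepler"}

def decode(name):
    """Split e.g. 'GK104' into family, generation, rev, and size digit."""
    assert len(name) == 5 and name[0] == "G" and name[2:].isdigit()
    return {
        "family": FAMILIES.get(name[1], name[1]),
        "generation": int(name[2]),   # usually 1
        "rev": int(name[3]),          # GF100 -> rev 0, GF110 -> rev 1
        "size": int(name[4]),         # smaller digit = bigger GPU
    }

print(decode("GK104"))
```

By this logic a GK102 would outrank GK104: same family, generation, and rev, just a smaller size digit.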
Back to GK112, and more importantly, GK102, or the lack thereof. It looks like Nvidia could not pull off GK102, and that part was canceled. The important and difficult variant, GK112, is now due about a year after GK102 should have come out. This explains why the smaller GK10x chips launch first, but it also indicates exactly how screwed up Kepler is. Launching the small chips first will bring another host of problems, but we have covered that in previous articles.
The culprit is once again the interconnect, something even Jen-Hsun backhandedly admitted last year. Sources tell SemiAccurate that the same interconnect issues are casting doubt on the viability of Denver/T50/Tegra 6 (Note: This was Tegra 5 until the recent slips and roadmap readjustments), and the roadmaps show Kepler in GK1x2 guise is currently suffering a lot too; one of the two variants has already failed. The main effects here are runaway power use, clock drops, and the attendant downward performance spiral. Then again, Nvidia wisely delayed Kepler a year to fix that, but as we have repeatedly stated, the architecture is fundamentally wrong, and that is not really fixable. We will see what the company is able to do when GK112 comes out, if it does.
One last thing to think about is performance. Nvidia was claiming ~2.5x the DP performance of Fermi for Kepler. According to 4Gamer, that is now down to 1.5-2x. This once again points to yield and/or performance per watt issues with the chip, both of which are very possible. Then again, Fermi was delivered with <50% of the promised performance per watt, so a mere, to be charitable, 20% backpedaling on Kepler can be considered great progress, right?
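For the curious, the backpedal math works out like this, using only the figures quoted above:

```python
# The DP-claim backpedal, using the figures quoted above.
promised = 2.5                            # originally claimed speedup over Fermi
delivered_low, delivered_high = 1.5, 2.0  # 4Gamer's revised range

charitable = 1 - delivered_high / promised  # best case for Nvidia
harsh = 1 - delivered_low / promised        # worst case
print(f"backpedal: {charitable:.0%} to {harsh:.0%}")  # 20% to 40%
```

So "20%" really is the charitable end; the low end of the range is a 40% cut.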
That brings us back to Apple, and Nvidia supplying. Can they? That is looking really iffy. As we stated in the first Kepler piece, silicon for GK107 is due back from the fabs any second now. That puts a realistic launch three months out best case, so March before OEMs, in this case Apple, get supply. If there needs to be a single spin, add six weeks, meaning late April. The odds of Kepler, a new architecture, being right on first silicon are essentially zero; we feel two spins are likely, and would not be surprised if there were three.
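The schedule math here can be sketched out. The start date is our assumption based on "any second now" at time of writing; the three-month and six-week figures are the ones from this paragraph:

```python
from datetime import date, timedelta

# Assumed dates, for illustration only.
silicon_back = date(2011, 12, 2)                  # GK107 silicon due back "any second now"
base_launch = silicon_back + timedelta(weeks=13)  # ~3 months to a best-case launch
per_spin = timedelta(weeks=6)                     # each respin adds roughly six weeks

for spins in range(3):
    print(f"{spins} respin(s): launch no earlier than {base_launch + spins * per_spin}")
```

Under those assumptions the best case lands in early March, one respin pushes into April, and two respins into late May, with quantity silicon trailing the launch date by weeks more.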
With two spins, Nvidia would be hard pressed to have silicon in quantity before the waning days of June. Intel is launching Ivy Bridge in April, although late March is a possibility. If the CPUs are ready but the GPUs are not, Apple can't launch. Not launching makes the Cupertino-based fruit stand mighty peeved, and they don't take failure lightly. Can Nvidia supply? If the next MacBooks use Keplers, and they launch in April, it is going to be awfully tight. And then there is the question of what was promised vs what was delivered, always a problem with single supplier ecosystems. The spin from this potential debacle is going to be astounding; SemiAccurate recommends buying popcorn futures now before the prices rise. S|A
Updated 2 Dec 2011 10:20am. Typo: GK104 was listed as GF104.