MOST PEOPLE SIMPLY don’t understand what Intel is trying to do with their Larrabee chip, and why Intel simply has to keep going with it. It is not, as many speculate, dead or even wounded, but it has had a few setbacks.
As someone who has been following Intel’s GPU efforts since long before anyone believed they were real, I can state unequivocally that Intel is deadly serious about the GPU game, or more importantly the post-GPU game. We first heard the name Larrabee in a memo deciding the winners and losers of several different mini-core projects. Then we began digging in to its design and what it was supposed to be. When we first pointed out that it was an x86 based ‘post-GPU’ chip, howls of disbelieving laughter ensued.
After a few demonstrations, Intel pulled the plug on the first Larrabee chip, but did not kill the project, not by a long shot. Larrabee, the project, is far from dead; you can even get one if you convince Intel that you have a good use for it. With backers like Paul Otellini, you can be sure the project is ongoing. From what SemiAccurate’s sources have been saying, his support has been quite strong since long before the roadmap reshuffle.
That brings up the question of what we will get when Larrabee comes out, and more importantly, when? Short answer: converged pipelines and 2012. What that means takes a little bit of background.
When Larrabee was first greenlighted, our sources told us that it was a 16-core, 65nm chip that was meant for developers to get their feet wet, not for public use. It was a software development seed part, but if you wanted to hand Intel cash for one, they would take the money. From what we understand, this part never taped out, so we will call it Larrabee 0.
Larrabee 1, aka the part you have heard of, was beset by delays. By the time it was canceled, it was well over a year and a half late. The first official release date to come our way was mid to late 2008.
Problems arose. The Ax steppings were pretty rough, so rough that performance was less than 1/4 of its intended targets, and power was through the roof. These bugs hampered development by compounding problems, and caused serious development setbacks. The root problems may have been identified, but they were not fixed until it was far too late.
The B0 stepping did fix the vast majority of the problems, but it didn’t come back until late 2009. This pushed the release date out to Q2 2010, basically now. For a chip that was targeting the HD4870/GTX285 generation of GPUs, a year’s delay meant that it would be facing the HD5870/GTX285, one of those two moving the bar up substantially.
Given the performance deficit, Intel was faced with a tough choice: pull the plug on the consumer part and take a proverbial sh*tstorm of criticism from those not in the loop, or soldier on. Soldiering on meant Larrabee 2, 3 and 4 would probably be delayed as resources were diverted to get Larrabee The Elder out, and Intel would be chasing its tail for a long time to come. Wisely, Intel pulled the plug on Larrabee 1 and 2, and refocused on Larrabee 3.
While armchair quarterbacks are not shying away from criticism, this call shows Intel management is thinking long term, not appeasing the people looking for shiny things right now. It took quite a bit of guts to do, and announcing it publicly took even more. Kudos to Intel there; they could have just let it slip and hoped no one noticed.
That brings us to Larrabee 2, basically a refresh and cleanup of Larrabee 1. Tie up loose ends, optimize what they could, and pick whatever low-hanging fruit remained. It was a tock in the tick-tock model. Or maybe it is a tick, like their recent naming schemes, I can never remember which is which. In any case, this derivative wasn’t hugely memorable.
Larrabee 3 was going to be a big bang. Intel learned a lot with Larrabee 1, but too late to incorporate any of it in Larrabee 2. The third iteration was a different story: it was going to be a ‘converged pipeline’ model with a massively updated ISA.
If you read this as ‘incompatible’, well, yes, it was going to be. Luckily, since Intel controls the compilers and dev environment for the chip, a lot of avenues were available to ease that transition. This is being talked about in the past tense because the first chip isn’t a public release, so any software that comes out publicly will do so with Larrabee 3, and it will be compiled for that. For now, it is a minor headache for devs at worst.
Since we mentioned Larrabee 3, we should point out that it is now Larrabee 2, currently due in early or late 2012 depending on which sources you put more faith in. Let’s split the difference and call it mid-2012, or about two years from now.
The new Larrabee 2 is what the old Larrabee 3 was meant to be, but retargeted to what a high end GPU will be in 2012. Instead of playing catch-up, hoping to get the next three generations out less late than the last, Intel just scrapped the timeline and went back to the drawing board. Specs changed, timelines changed, and all sorts of details were rethought.
Larrabee 3’s converged pipeline ISA is now going to be the only ISA the public ever sees. Compiler tricks and dual code path binaries are no longer necessary. When people get their hands on a Larrabee, be it the lucky few who can talk their way into a robin’s egg blue Larrabee 1, or the new one, the coding path will be set.
In the end, Larrabee was effectively delayed for 3 years, but a lot was learned on both the software and hardware sides. Those who declare the project dead, or say Intel ran away with its tail between its legs, either have an axe to grind or simply don’t understand what happened. Intel is creating a new paradigm, changing the graphics game from the ground up, not just making a GPU. It is a slow and long term process.
The whole converged pipeline CPU/GPU is where both AMD and Intel will eventually end up, but their paths are going to diverge near term. Any company that is not aiming for that goal is not going to be in the CPU or GPU game in 3 years, because neither will exist as such. Then again, they will have a lot of company in the graveyard of those who fought Moore’s law. Two years. S|A