WORD HAS REACHED our tender ears that Larrabee, the upcoming Intel GPU, will not be quite as generalist as they claim. The fixed function parts will be two HD decoders.
This addition seems like a last-minute thing, so it was probably pulled from Intel’s existing IP catalog. The most likely candidates are the G45 decoders: they are small, low power, and have (finally) decent drivers. Then again, does area really matter on a 700+mm^2 chip?
This is NOT Larrabee, but it is big
Don’t take this to mean Larrabee is underpowered; it most definitely is not. The GPU can decode HD streams until you get bored watching them. The problem is that it can’t necessarily do so within an acceptable power budget. Having all the cores pump out HD would likely produce enough heat to keep the fans spun way up, and that is death in the HTPC market.
Why anyone would want a first-generation Larrabee for an HTPC rig is an open question, one that brings us to how Larrabee will evolve. The first one can be considered a hybrid between a software development platform and a science experiment. Intel aims to do nothing less than change how graphics are done, a laudable goal that will take years. During that time, avenues will be explored, some concepts implemented, and others shelved.
The end goal is a CPU that can do everything a CPU does now in addition to all the GPU tasks. Both AMD and Intel are driving toward that destination along very different highways, but they will almost assuredly end up at the same point.
Getting back to power, Larrabee 1 was the first of its kind, so once silicon came back, there were obvious things that needed doing on the next version, and the one after that. Intel is going for a fast cadence on new Larrabee chips, with Larrabee 2 coming out about nine months after 1, and number 3 about nine months after that. There is an obvious comparison here that we won’t stoop to.
Intel has a 12-month CPU cadence that goes new core -> shrink/optimize -> new core -> shrink/optimize, mainly dictated by a cross between the 24-month silicon shrink cycle and common sense. With Larrabee on a nine-month cadence, things can get a little odd.
Luckily, a lot of Larrabee is not all that complex. Divide 32 cores into 7xx mm^2, subtract a bit for cache, bus and memory controller, and you are left with cores well under 20mm^2 each. Those cores can be revved with a lot less effort than a Nehalem core, then cut and pasted. The uncore can simply be left alone for the most part, or more likely revved separately. This is the same thought process that AMD pioneered with the K8; variants were relatively easy to pull off.
So the plan is to get Larrabee 1 out there initially, then optimize for Larrabee 2. A 32nm version will lose half the area right off the bat from the shrink alone. Then Intel will unleash the optimization elves from the caves under Hillsboro (after dark, of course), and you could be looking at a second generation that is closer to 250mm^2 than 500.
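The die-area arithmetic above can be sketched as a back-of-the-envelope calculation. The die size, core count, and shrink factor are the article’s own estimates; the uncore share is our assumption, not a published figure:

```python
# Rough die-area math using the article's estimates, not Intel specs.
DIE_AREA_MM2 = 700.0      # "7xx mm^2" first-generation Larrabee die
CORES = 32
UNCORE_FRACTION = 0.20    # assumed share for cache, bus, memory controller

# Per-core area after setting aside the uncore: lands well under 20 mm^2.
core_area = DIE_AREA_MM2 * (1 - UNCORE_FRACTION) / CORES
print(f"per-core area: ~{core_area:.1f} mm^2")   # ~17.5 mm^2

# A full-node shrink (45nm -> 32nm) roughly halves the area; optimization
# passes would then trim the result further toward the 250 mm^2 ballpark.
shrunk_die = DIE_AREA_MM2 / 2
print(f"32nm die estimate before optimization: ~{shrunk_die:.0f} mm^2")
```

With any plausible uncore fraction the per-core figure stays small, which is the point: small cores are cheap to rev and cut-and-paste.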
Power is a little more problematic. Purpose-specific logic will always be more efficient than generalist units, and that is where Larrabee gets caught out. You have a big, hot chip that isn’t optimized for much out of the gate. Running video on it takes wattage, and wattage is the enemy of thin and quiet PCs.
We have heard that the initial Larrabees were so power hungry that the clocks needed to be dialed back a bit, GT200 style, to fit into a rational PC power envelope. At those power levels, even using a fraction of the GPU to decode movies breaks the ‘quiet’ budget, so two HD decoders were added.
Given the rapid pace of optimizations already coming for the Larrabee line, we don’t expect the decoders to remain in silicon for long. If they are still there in the second revision, that will be somewhat surprising, but they are almost guaranteed to be gone by the third. S|A
Charlie Demerjian