Intel cuts out GPUs: technical tidbits

Part 2.1: This just makes too much sense

Editor’s note: This article is an update to the “Intel Slams the Door on GPUs” series, expanding on one technical aspect. It goes over one possibility not mentioned in Part 1 or Part 2, and talks a bit about the implications of it. -Ed.

One thought that occurred to me after the previous articles were done is how Intel could step back to PCIe2 on Broadwell without the extra work of ripping PCIe3 out of that chip and replacing it with the older spec. Fusing the PCIe3 blocks down to the PCIe2 base specification would not hold up in court either, if things got that far, and they probably will. Worse yet, putting PCIe2 on the die would mean not just making a PCIe2 implementation on 14nm that will never be used anywhere else, but also integrating it with the ring bus and the rest of the chip. That is a lot of work, too much work in fact to leave any plausible deniability in front of an angry FTC lawyer.
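For context, the gap between the two specs is easy to quantify from their published signaling rates: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, while PCIe 3.0 runs at 8 GT/s with the more efficient 128b/130b encoding. A quick sketch of the arithmetic:

```python
# Back-of-envelope per-lane bandwidth comparison for PCIe 2.0 vs PCIe 3.0,
# using the signaling rates and line encodings from the PCI-SIG specs.

def lane_bandwidth_bytes(gt_per_s: float, payload_bits: int, coded_bits: int) -> float:
    """Usable bytes/s on one lane: raw transfer rate times encoding efficiency."""
    return gt_per_s * 1e9 * (payload_bits / coded_bits) / 8

pcie2 = lane_bandwidth_bytes(5.0, 8, 10)     # 5 GT/s, 8b/10b -> 500 MB/s per lane
pcie3 = lane_bandwidth_bytes(8.0, 128, 130)  # 8 GT/s, 128b/130b -> ~985 MB/s per lane

for lanes in (1, 16):
    print(f"x{lanes}: PCIe2 {pcie2 * lanes / 1e9:.2f} GB/s vs "
          f"PCIe3 {pcie3 * lanes / 1e9:.2f} GB/s")
```

In short, PCIe3 roughly doubles per-lane throughput, which is why which spec ends up wired to the GPU slot matters so much to the parties involved.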

So how will Intel do it? Easy, through the chipset. Currently, the DFNAASB (Device Formerly Known As A South Bridge) has just that capability, what a coincidence. It is not made on the same 14nm process as Broadwell, and likely not even on 22nm; by the time Intel moves to 14nm it will be on a process two or three generations behind. Why? Because they can, and there is no need to make it on a bleeding edge process; doing so buys them nothing but cost.

So that is probably how Intel will put PCIe2 on Broadwell: on the chipset. This CPU will use the already existing, debugged, and ready Haswell chipsets, so that part is done. Better yet, Intel can forgo porting the PCIe3 blocks to 14nm entirely, and doesn’t have to integrate, validate, or prestidigitate about them on the die. No transistors used makes for low leakage and a smaller die too; the cost and power savings are hard to argue with. As a side effect, any PCIe2 traffic has to make an extra hop from the SB (south bridge) to the CPU, adding latency and lowering performance, albeit fractionally.

The one downside for Intel is going to be arguing power savings. Putting PCIe2 on the SB unquestionably lowers the power used by the CPU itself, but pushing high speed signals off die, across the board, and on to the SB costs quite a bit of energy. If the SB is on an interposer, most of these concerns go away, and that is pretty likely by the time 14nm rolls out. Even without an interposer, you could probably make a decent case for the move being a net power saver in all but a few corner cases. In any event, power won’t be a slam dunk for anyone screaming about what Intel is doing to exclude them.

If Intel adopts this tactic, I think they will have not only a solid cost and CPU power argument, but possibly a better overall power use story as well. It will certainly take the PCIe2 vs PCIe3 quibbling off the table, and give them a strong cost case for not adding PCIe3 later if they are asked to. All in all, putting Broadwell’s PCIe capabilities on the chipset gives Intel the best of both worlds, and gives their opponents the worst possible connectivity plus an uphill battle to argue against what they are getting. Strategically brilliant, ethically a tad less than honorable.S|A


Charlie Demerjian

Roving engine of chaos and snide remarks at SemiAccurate
Charlie Demerjian is the founder of Stone Arch Networking Services and SemiAccurate.com. SemiAccurate.com is a technology news site; addressing hardware design, software selection, customization, securing and maintenance, with over one million views per month. He is a technologist and analyst specializing in semiconductors, system and network architecture. As head writer of SemiAccurate.com, he regularly advises writers, analysts, and industry executives on technical matters and long lead industry trends. Charlie is also a council member with Gerson Lehman Group.