How Intel can slam the door on GPUs

Part 2: The games we play, and the words we choose

Editor’s note: This is the second part of the story, “Intel slams the door on discrete GPUs”. This second half examines how Intel will navigate the maze of technical and real-world issues, restrictions, and options for what Intel is doing. It also goes into how those who will presumably object to the changes are hamstrung by their own actions. -Ed.

Enter lawyers by the dozen

So how can Intel do this? Doesn’t their FTC consent decree ban this? Well, if you look at the latest version, the one with the Oak Trail updates, the relevant portions begin on page 6. Part II A. says that Intel must include a PCIe bus on all of its mainstream processors for a period of six years after the agreement, basically until the end of 2016. II B. says Intel is free to pick whatever version of the spec it deems fit, be it PCIe, PCIe2, PCIe3 or whatever, as long as it is a standard. They can also pick the width that they deem appropriate. D. says that bugs will not be allowed if they keep the device from meeting the designated PCIe spec. E. is a carve out for Oak Trail and irrelevant to this story.

Notice that we skipped C.? Why? It says, “Respondent shall not design any Required Interface to intentionally limit the performance or operation of any Relevant GPU in a manner that would render the Required Interface non-compliant with the applicable PCIe Base Specification.” Most people read that as something akin to Intel not being able to limit the performance of the competition through their choices, but it doesn’t say that. It just says that Intel can’t do anything that makes a GPU on their PCIe bus perform worse than it would on a non-limited standard PCIe link of the same width and base specification.

If Intel puts a PCIe2 4x link on Broadwell, then as long as it performs like a PCIe2 4x link in any other system, and it meets all the requirements of the PCI-SIG for that specification, all is good. Limiting the performance of something like a GPU is, however, covered in Section V Part A. on page 13, and it is worth a long look.

IT IS FURTHER ORDERED that Respondent shall not make any engineering or design change to a Relevant Product if that change (1) degrades the performance of a Relevant Product sold by a competitor of Respondent and (2) does not provide an actual benefit to the Relevant Product sold by Respondent, including without limitation any improvement in performance, operation, cost, manufacturability, reliability, compatibility, or ability to operate or enhance the operation of another product; provided, however, that any degradation of the performance of a competing product shall not itself be deemed to be a benefit to the Relevant Product sold by Respondent. Respondent shall have the burden of demonstrating that any engineering or design change at issue complies with Section V. of this Order.

This does say what you think it does: per part (1), Intel can’t cripple performance by picking the PCIe base spec or width to intentionally limit the competition’s products. Part (2), however, throws a bit of a curve ball at anyone protesting by saying “and (2) does not provide an actual benefit to the Relevant Product sold by Respondent“. The word “and” is key; if Intel can show that they gain something relevant to their product, they can to a large degree pick whatever they want. I guarantee that Intel can show that a 4x link saves lots of power compared to a 16x link, and that Intel knows every benchmark that is not bandwidth constrained very well.
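For a sense of scale, the raw link numbers are easy to check. The short Python sketch below computes peak one-way bandwidth from each generation’s per-lane signaling rate and encoding overhead; the PCIe2 4x case is this article’s hypothetical Broadwell link, not anything Intel has published.

```python
# Peak one-way PCIe bandwidth from signaling rate and encoding overhead.
# The PCIe2 x4 entry is this article's hypothetical Broadwell link, not an Intel spec.

GENS = {
    # generation: (GT/s per lane, encoding efficiency)
    "PCIe2": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe3": (8.0, 128 / 130),  # 128b/130b encoding
}

def peak_gb_per_s(gen: str, lanes: int) -> float:
    """Peak one-way bandwidth in GB/s for a link of the given generation and width."""
    gt_per_s, efficiency = GENS[gen]
    return gt_per_s * efficiency * lanes / 8  # GT/s ~ Gbit/s per lane, /8 for bytes

for gen, lanes in [("PCIe2", 4), ("PCIe2", 16), ("PCIe3", 16)]:
    print(f"{gen} x{lanes}: {peak_gb_per_s(gen, lanes):.1f} GB/s")

# PCIe2 x4:   2.0 GB/s  <- the hypothetical Broadwell link
# PCIe2 x16:  8.0 GB/s
# PCIe3 x16: 15.8 GB/s  <- what a discrete card gets in a normal slot
```

That is roughly an eight-fold cut versus the PCIe3 16x slot a discrete card would otherwise sit in, which is exactly why Intel’s case will lean on benchmarks that never saturate the link.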

What about PCIe2 vs PCIe3 and all the power savings tech that the latter specification has? This gets a little trickier, but Intel has a good argument to use there too. If they argue that PCIe3 at full utilization takes more power than PCIe2 at full utilization, they can easily show measurable power savings on the same process. If a GPU is a heavy bandwidth user, as the GPU makers will likely argue, then PCIe2 4x saves a lot of power in the very scenario that they suggest. From there it becomes an argument over technical nuance: peak, average, leakage, burst rates, and power used in each cycle on each process. Want to bet an FTC arbitrator is not good at deciphering all the volumes of technical minutiae that Intel will have ready? You can also find benchmarks that are only affected in very minor ways by the bandwidth loss; the TechPowerUp article is full of them. These will be presented too, and are not nearly as tough for the layman to comprehend.
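Just to illustrate how both sides can be right at once, here is a minimal sketch of that framing fight. The per-lane power figures are made-up placeholders, not measurements; only the bandwidth figures follow from the specs.

```python
# A toy model of the framing fight: peak link power vs energy per byte moved.
# WATTS_PER_ACTIVE_LANE values are illustrative placeholders, NOT measurements.

WATTS_PER_ACTIVE_LANE = {"PCIe2": 0.20, "PCIe3": 0.30}    # hypothetical
GB_PER_S_PER_LANE     = {"PCIe2": 0.50, "PCIe3": 0.985}   # from the specs above

def link_stats(gen: str, lanes: int) -> tuple:
    """Return (peak power in W, joules per GB) at full utilization."""
    power = WATTS_PER_ACTIVE_LANE[gen] * lanes
    bandwidth = GB_PER_S_PER_LANE[gen] * lanes
    return power, power / bandwidth

for gen, lanes in [("PCIe2", 4), ("PCIe3", 16)]:
    watts, joules_per_gb = link_stats(gen, lanes)
    print(f"{gen} x{lanes}: {watts:.1f} W peak, {joules_per_gb:.2f} J per GB")

# With these made-up numbers PCIe2 x4 draws far less peak power (Intel's framing),
# while PCIe3 x16 moves each byte more cheaply (the GPU makers' framing).
```

Which metric “counts” is precisely the sort of argument that gets settled by whoever shows up with the bigger appendix.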

Run a dozen well-chosen but common programs at high resolutions and it will be pretty easy for Intel to make a power savings case. They won’t even have to fudge the numbers, and both sides will have to argue over what terms like typical, average, and enthusiast mean. At that point it becomes more of a he said, she said argument, and good luck proving anything there. History has shown that non-technical government regulators are not the most adept at putting technical minutiae like fractions of a Watt per data transfer into context, but they do understand basic logic. Intel should handily win this one or at least, in SemiAccurate’s opinion, be able to meet the burden of proof in the last sentence of Part V A.

To make matters worse, AMD and Nvidia don’t know this yet. Intel has to disclose their roadmaps to both sides, as required by the consent decree, but depending on how you read it, they only need to do so a year out. If Intel plans things right, they effectively don’t have to tell any GPU maker about Broadwell until a year and a day before launch. It can be taped out, done and ready by then, making any changes hard if not impossible.

At that point, AMD and Nvidia will have to go to the FTC, complain, be heard, and make a persuasive argument in a very tight time frame. Intel can not only make the fairly persuasive arguments outlined above, but they can also point out that it is too late to change things anyway; the chip is done. Before it was finished, they honestly, really really, thought that 4x was more than plenty, see all these benchmarks. Even then, it saves reams of power over a 16x link, see the phone book of microsecond power measurement dumps attached as appendices R, V, and AA.

Good luck to AMD and Nvidia in proving that PCIe2 4x was put in place to hamstring them and has no technical merit, doing it in a short time frame, and persuading the FTC that Intel did all of the above only to hurt them. That last one is borderline impossible without smoking gun evidence, and I doubt there is any this time around. Then there are the inevitable appeals.

Does it matter in the end?

If AMD and Nvidia do somehow manage to convince the FTC to delay or modify Broadwell, that has one minor problem attached to it. By the time anything is done, assuming the GPU slingers can make their case, the OEMs will have their laptop designs for Broadwell completed and mostly tested. Even the most minute change means the OEMs have to start over from close to zero.

That is not only expensive, it is time consuming too. OEMs will not be too happy to hear that their entire laptop R&D expenditure for a year was just flushed, and they will have no product refresh for that same time period either. Which side do you think they will take? Slap Intel on the wrist, lose a year’s worth of products, and make AMD and Nvidia smile, or side with Intel and not redo all that engineering work? What if Intel suddenly sweetens the deal they get in the background? That is of course forbidden, but a game of golf between execs isn’t, and who records talks that happen between holes 7 and 8?

One last complicating factor is the FTC settlement itself. If you recall, it was an agreement between all the aggrieved parties, Intel, AMD, Nvidia, and Via, but that last company doesn’t really play much of a part in this new tale. AMD and Nvidia do, and they had input into the settlement, could have objected to any part of it, and agreed to it just over two years ago. They didn’t object; they agreed to it as a whole, agreed to the language it contains, and didn’t change it when they had the chance. Both competitors will have a fairly hard time backpedalling now. Intel is going to hang both companies with their own words.

In the end, Intel is simply shutting the door on discrete GPUs by moving the low bar higher with their integrated performance, the high bar lower with PCIe limits, and offering their own solution in the middle. Broadwell will start the squeeze on mobile CPUs, and you can be sure that mainstream desktops won’t be far behind. Essentially the window on what a discrete GPU can do when connected to an Intel CPU will narrow with each successive generation.

One last thing to throw into the mix: that PCIe 4x link is all you get. If you want to add anything else PCIe connected, like a USB3 chip, an extra NIC, a 3G modem, or even a Wi-Fi card that isn’t on Intel’s chipset, i.e. their product, you need to use one of those lanes. If Intel allows the 4x link to be split up into four 1x links, that means a GPU will only get 1x PCIe2 if you put another PCIe device in. Radio makers are going to love this choice, but not as much as the OEMs that have to make it. Or they can stick with Intel graphics, an Intel Wi-Fi NIC, and, coming soon, an Intel LTE radio, and have it all.
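To put rough numbers on that squeeze, here is a minimal sketch assuming this article’s reading: the x4 either goes whole to a discrete GPU or gets bifurcated into four 1x links, one per device. The device names are illustrative.

```python
# Hypothetical lane budget for a single PCIe2 x4: the GPU gets the whole link,
# or, once anything else needs PCIe, a single x1 after the four-way split.
# Device names are illustrative.

GB_PER_S_PER_PCIE2_LANE = 0.5   # 5 GT/s with 8b/10b encoding

def gpu_lanes(other_pcie_devices: list) -> int:
    """Lanes left for the GPU under an x4-or-four-x1 bifurcation choice."""
    return 4 if not other_pcie_devices else 1

for devices in ([], ["USB3 controller"], ["USB3 controller", "non-Intel Wi-Fi"]):
    lanes = gpu_lanes(devices)
    label = ", ".join(devices) if devices else "GPU only"
    print(f"{label}: GPU gets x{lanes}, {lanes * GB_PER_S_PER_PCIE2_LANE:.1f} GB/s")

# GPU only:               x4, 2.0 GB/s
# any other PCIe device:  x1, 0.5 GB/s, over 30x less than a PCIe3 x16 slot
```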

There will be arguments, FTC complaints, and likely lawsuits over this, but those will take years. If it is anything like the Nvidia chipset lawsuit, Intel will drag things out for a long time, write a check to make problems go away, pretend that they are sorry, and move on. GPUs will however be dead, and so will the companies that make them. No matter how this ends up, that is the case, and nothing will change it. Game over, welcome to the world of Intel graphics, and the still unusable drivers that come with them. S|A


Charlie Demerjian

Roving engine of chaos and snide remarks at SemiAccurate
Charlie Demerjian is the founder of Stone Arch Networking Services and SemiAccurate.com. SemiAccurate.com is a technology news site; addressing hardware design, software selection, customization, securing and maintenance, with over one million views per month. He is a technologist and analyst specializing in semiconductors, system and network architecture. As head writer of SemiAccurate.com, he regularly advises writers, analysts, and industry executives on technical matters and long lead industry trends. Charlie is also available through Guidepoint and Mosaic. FullyAccurate