When Intel has nothing to offer in a hot market like IoT, they pull a smoke and mirrors act like the new Atom A3900 and E3900. No, SemiAccurate isn’t saying Intel is pulling an Nvidia, just that a chip with nothing in common with a market is now billed as a perfect fit for this newly defined space.
What is IoT?
Let’s start with IoT, a market of connected things and sensors that live for a decade off a single coin cell battery. It is a hot market, possibly the only real growth area in computing, which garners a lot of investment, sales, and Wall Street attention. By their own accounting, Intel has two lines in this space, Quark and Atom. When we directly asked Intel about their definition of the IoT space, they replied that it was everything from the gateway to the server, something that lines up nicely with their offerings.
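To make that coin-cell constraint concrete, here is a back-of-envelope sketch (our illustration, not a figure from Intel or any vendor) of the average current budget a decade on a single CR2032 implies, assuming a nominal 225 mAh capacity:

```python
# Rough current budget for a device that must last ten years on one
# CR2032 coin cell. 225 mAh is a typical nominal rating (assumed figure).
CELL_CAPACITY_MAH = 225.0           # nominal CR2032 capacity
HOURS = 10 * 365.25 * 24            # ten years expressed in hours

# Average draw the device can sustain over its whole life, in microamps
avg_current_ua = CELL_CAPACITY_MAH / HOURS * 1000

print(f"Average current budget: {avg_current_ua:.2f} uA")  # ~2.57 uA
```

A few microamps of average draw is deep microcontroller-plus-radio territory, orders of magnitude below the idle floor of any Atom-class SoC, which is why the battery spec alone rules these chips out of the industry’s definition of the space.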
If you ask every other company in the industry you get a very different answer: an IoT device is a connected device, more microcontroller than multi-core CPU, and it has a radio. The most pointed answer we have heard recently was from ARM’s CEO Simon Segars at Techcon 2016. He described the space as anything you can glean insight from, effectively anything from a sensor with a radio to a camera, but all connected to the net. This pretty much universal definition has one thing in common beyond the industry agreeing on it: Intel has no offerings that fit the space.
In short you have two diametrically opposed viewpoints, one the industry consensus, the other from a company whose management is under immense pressure to be in the space. Like cloud, which is defined by SemiAccurate as, “what we have to sell you” with “we” being anyone in the industry, IoT somehow fits the products that Intel has to sell.
Time to dance
Why are they doing this transparent and rather desperate hot shoe dance of redefinition? That one is easy, they are trying to make the investment community, aka Wall Street, think they have a chance in this arena. Why? Growth and sales in this market and a lack thereof in the traditional Intel markets. If the investment community thinks Intel has no chance in the only growing hardware market, they tend to ask questions that make Intel spokespeople twitchy. That leads to more probing questions and eventually stock price declines.
The traditional way of combating this unwelcome trend is to, well, have products that actually compete in this market. That leads to actual sales, income, and a virtuous cycle in the opposite direction of the current one. Wall Street tends to like this cycle for some reason and it leads to softball questions for the spokespeople and increasing stock prices.
Fight fire with smoke
Unfortunately Intel has no such products and can’t make them; their business is not structured to compete in that market. Please do note this isn’t a hollow swipe on our part, it is a technical reality. If you look at current Intel cores in the space, either Atom or Quark, on a similar process the competing ARM cores are about half the die area or less, usually with better performance. For the cost-focused IoT world this leaves Intel as a non-starter; they are out of the running on price alone.
Intel counters with claims that their process cost is lower than anyone else’s out there, but won’t back it up with anything more specific. This argument fails for two reasons. The first is that the price Intel charges for foundry services is so far out of line with the competition that no one can produce mid-range parts on their process. Altera, when it was independent, was a marquee Intel foundry customer with their Stratix 10 line.
This high-end, essentially cost-no-object part was on Intel’s 14nm process; the other two lower-end, cost-sensitive lines were built at TSMC. Given the cost of splitting a line across two foundries this wasn’t a choice made lightly, especially since the Intel process was far better than TSMC’s 20nm one. What Intel was charging for wafers put them out of the running, something that implies a very high internal cost per wafer. Other data SemiAccurate has seen backs up this supposition too.
The process of misdirection
Then there is the little issue that IoT devices tend to be on very old process nodes. Bleeding edge in this space is 40nm, but 55, 65, and even the ancient 130nm processes are routinely used for this market. Why? They are good enough and don’t leak, key for long life in the power-constrained IoT world. Intel does not have the ability to offer such processes so they have to play the “our latest process is so much better than the competition” card. In that they are right; it is better for some things, specifically large, high-clocked devices.
What they won’t say is that their available processes are unsuitable for the IoT space due to massive leakage, and the cost is more than an order of magnitude too high for anything made on them to compete. Don’t take our word for it, go price a 90nm wafer at TSMC or UMC, compare it to a modern 14/16nm wafer, then add a multiple for the Intel price. It is really hard to make cost-effective sub-$1 chips on modern nodes. Until you get well into the two-digit-dollar space of high-margin devices, Intel’s process cost overwhelms any potential benefit for IoT devices. Almost no non-Intel definition of an IoT device costs that much, not even close.
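A toy cost-per-die calculation shows how die area and wafer price multiply against each other. Every number below is hypothetical, chosen only to illustrate the mechanism, not actual TSMC, UMC, or Intel pricing:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Crude gross-die estimate: wafer area over die area, with the
    standard simple correction for partial dies lost at the edge."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical inputs purely for illustration:
legacy_wafer_cost = 1500     # mature 90nm-class wafer price, assumed
leading_wafer_cost = 9000    # leading-edge wafer at a hefty premium, assumed
arm_die_mm2 = 2.0            # small MCU-class die, assumed
x86_die_mm2 = 4.0            # roughly double the area, per the text

arm_cost = legacy_wafer_cost / dies_per_wafer(arm_die_mm2)
x86_cost = leading_wafer_cost / dies_per_wafer(x86_die_mm2)
print(f"MCU-class die: ${arm_cost:.3f}  x86-class die: ${x86_cost:.3f}")
print(f"Cost multiple: {x86_cost / arm_cost:.1f}x")
```

Even before packaging and test, doubling the die area on a wafer that costs several times more compounds into an order-of-magnitude cost gap per chip, which is fatal in a market built around sub-$1 silicon.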
Bigger is not better
Worse yet, that core area multiple comes back to bite Intel too, and then there is the x86 tax. Most of the IoT code base is built for ARM M-class microcontrollers or other architectures common in the space. These all have instruction sets tuned for the task at hand and do the job well, that is to say energy-efficiently. Intel has x86, which they claim is an advantage because of a large knowledge pool.
In the embedded and IoT space the opposite is true; Intel is the incompatible architecture. Worse yet, the x86 decode overhead bites hard on the energy side of things, it takes a lot to decode and process an x86 instruction and that overhead is painful. The Quark and Atom cores are not only bigger, they are vastly thirstier and built on a far less efficient process for low power. If you recall, Intel made the same claims about an inherent x86 advantage in phones and tablets. Contra-revenue and painful retreats were the result there, and x86 is an even worse fit for IoT.
New offering in the space
So that brings us back to last week’s nearly fact-free announcement. Notice that Intel did not disclose die sizes for the Atom A3900 and E3900 lines. Both are 14nm chips, and had the die sizes been disclosed, a comparison against competitive ARM-cored SoCs would have demolished Intel’s competitiveness claims. This technical reticence is not by chance.
When Intel has an actual technical advantage, they are the first to shout it from the hills. You might have noticed that the sound echoing from the mountains of late is more crickets and birds than sub-micron details. These two new Atom parts are simply what Intel had on hand to introduce the day before ARM’s IoT-focused Techcon 2016. They fit Intel’s IoT definition perfectly and line up with exactly no other company’s version. We can’t explain this discontinuity: they are significantly faster than ARM microcontroller-based products, burn orders of magnitude more power, and are priced at Intel’s normal tiers.
Software is hard
Then there is the software problem: all the spaces Intel claims are good fits for these IoT Atoms are strongholds for Linux. The Intel spec sheet for the new Atoms does claim Windows support, but no IoT device maker would pay tens of dollars for a Windows 10 license against far sub-$5 Linux devices, hardware and software included. Why is Linux problematic for Intel? They are using their own Gen9 GPU, something that simply does not function correctly under Linux. Try it for yourself, run any trivial 3D game and you will instantly see how abjectly broken Intel graphics are.
We can’t state this plainly enough, their graphics just don’t work. We have brought this up to Intel representatives numerous times, and off the record they have stated directly that they know about the problem but can’t be bothered to fix it. This is both a hardware and a software problem and it has been ongoing for years. Until now Linux really didn’t matter to Intel, but the IoT boom is forcing their hand. Unfortunately the hardware they are using to fill this perception gap is non-functional. Don’t take our word for it though, try it yourself, or write the author for a list of common A-list game candidates that will clearly show off this officially non-problematic feature set.
Perfect fit for an imperfect definition
So with the new Atom E3900 and A3900, Intel claims to be bolstering a strong IoT product line, a perfect fit for their unique definition of IoT, a definition none of the several hundred other companies in the space seem to share. If you look at these offerings technically though, they are too large to sell, too power hungry for the space, on a process that leaks too much and costs too much, non-functional on the prevailing OS in the space, and ill-suited to anything other than their core use case. That use case is making sure Wall Street doesn’t ask why Intel isn’t in the IoT space, not anything related to IoT work itself. Calling these Atoms IoT is nothing more than an intentional diversion rooted in a lack of competitive products.S|A