We previously talked about the physical chip and core, then the uncore, and finally took a look at the reference design. They all look pretty good, and have all the right check boxes ticked off. So far so good, but will it sell? Will consumers care what is inside? In short, does Intel stand a chance in the market? The answer to that question is an unequivocal maybe, so let's look at what we know and speculate from there.
Some of these questions are based on specific features and results while others are a lot more subjective. As with most market predictions, they depend more on subjective consumer whims and corporate money hoses than quality and functionality. If you don't think so, ask yourself why HTC rose to phone prominence so quickly, why Microsoft has a sub-2% and dropping mobile market share, and why the Nokia/Intel phone running Meego wasn't announced at CES. There is no hard formula for success or failure here.
Intel has shown off a reference design running Android 4.0, and has Motorola, Lenovo, and a few unannounced others making phones. To top it off, the silicon is performance competitive with anything on the market. What could go wrong? What can Intel do to prevent problems?
Intel has traditionally made really good hardware, and is far ahead of where people think they are in advanced packaging technology. Nothing new on this front. On the software front, low level tools aside, Intel has traditionally been a running joke. They simply can not make graphics drivers, and even as it nears retirement, Sandy Bridge still does not have easily accessible Linux drivers; pulling from Git is not an easy go for most users, and the release distros won't have the code until after Ivy is launched. Desktop Atoms don't have very functional drivers, and the state of open and/or accessible Medfield drivers is, well, they don't work at all. In a ray of sunshine, there may be hope here though, even if it is only limited to phones. Things look to be changing for this round.
Why is this so important? The easy answer is because consumers really don’t care what runs on their phone. We mean other than Angry Birds, that is. Intel will tell you that CPU power matters in this market, and to an extent it does. The problem is that if your phone runs fast enough to do what you expect it to, then offering more is a very tenuous upsell. Quick, what core does your phone have in it? How fast? What is the average utilization? If your CPU power magically doubled right now with no more battery draw, would you notice? How?
There are scenarios where CPU power matters, but the majority of those are future proofing, and speculative. For a device that has a one or two year replacement cycle, buying expensive hardware for software that may come out someday is not the best investment, but isn’t completely a waste either. Other than money and potential battery drain, there isn’t really any down side to overkill on the CPU side. Intel has demonstrated pretty convincingly that their CPU performance is more than competitive, but you might want to hold off on the large RFQs until third party tests on released devices surface.
One of the biggest problems is software, but Intel appears to be getting a clue about how broken their software and drivers are. Given the gravity of phones and mobile to the company’s financial future, Intel is throwing developers at the problem. They claim to have hundreds working on Android and related phone software, and the results are starting to be publicly seen. The devices SemiAccurate saw worked well, ran Android 2.3.x and early builds of 4.x, and burned through the demos as expected. So far, so good.
Android is a double edged sword because it has two distinct types of code to run. While commonly thought of as Java based, Android doesn't actually run Java byte code at all. The platform runs a completely different VM called Dalvik. You can compile Java code into Dalvik byte code, and you can compile other languages to Dalvik too. You can not, however, run Java byte code in a Dalvik VM, or the other way around; they are distinct and incompatible.
Like Java, Dalvik is a virtual machine: the byte code it accepts is interpreted by the VM rather than executed directly by the underlying hardware. This abstraction means that the underlying CPU architecture doesn't matter, but everyone takes a performance hit for the code interpretation. ARM, MIPS, x86, or anything else will run a Dalvik app if the VM is there and correct. Performance may differ wildly on different microarchitectures, but the app should run correctly. Write once, run anywhere. Intel obviously has a correct Dalvik interpreter, and it looks to work well.
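To make the abstraction concrete, here is a minimal sketch of the VM idea in Python. This is not real Dalvik byte code (the opcodes and register model are simplified stand-ins), but it shows the key property: the host CPU never executes the app's instructions directly, only the interpreter loop does, so the same 'byte code' runs on any architecture the interpreter has been ported to.

```python
# Toy register-based interpreter illustrating the idea behind a VM like
# Dalvik. The instruction set here is invented for illustration; the point
# is that ARM, x86, or MIPS all run this program identically, because the
# host CPU only ever executes the interpreter, never the 'byte code'.

def interpret(program):
    regs = [0] * 8  # small register file, like Dalvik's virtual registers
    for op, *args in program:
        if op == "const":        # const vA, #literal
            regs[args[0]] = args[1]
        elif op == "add":        # add vA, vB, vC
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "mul":        # mul vA, vB, vC
            regs[args[0]] = regs[args[1]] * regs[args[2]]
        elif op == "ret":        # return vA
            return regs[args[0]]
        else:
            raise ValueError(f"unknown opcode: {op}")

# (3 + 4) * 2 expressed as portable pseudo byte code
prog = [
    ("const", 0, 3),
    ("const", 1, 4),
    ("add", 2, 0, 1),
    ("const", 3, 2),
    ("mul", 4, 2, 3),
    ("ret", 4),
]
print(interpret(prog))  # 14
```

The dispatch loop is also where the performance hit mentioned above comes from: every app instruction costs several host instructions of decode-and-branch overhead.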
The other type of code is called NDK or Native Development Kit. This is more of a traditional programming environment where you write for the target hardware instead of an abstraction like Dalvik. NDK is meant for performance critical portions of programs, and since it is not interpreted, NDK code runs much faster than Dalvik code. The downside is a near complete loss of portability.
For things written with NDK, underlying CPU architecture matters. One extraneous NDK instruction targeted at an ARM chip will break a program on an x86 or MIPS phone. To make matters worse, it isn’t just the CPU, the GPU and every other bit of hardware may matter just as much. There are a lot of GPUs out there for the mobile space, not to mention DSPs, video encoders, decoders, and radios. You get speed, but it comes with explosive handcuffs, so use NDK wisely.
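A sketch of why this breaks portability: an Android package ships its compiled NDK libraries in per-architecture directories (lib/&lt;abi&gt;/libfoo.so), and a device can only load libraries built for an ABI it supports. The directory names below are real Android ABI identifiers, but the apps and the helper function are hypothetical, for illustration only.

```python
# Why NDK code ties an app to a CPU architecture: native libraries are
# packaged per-ABI, and a device can only load a matching build. An
# ARM-only app simply has no code an x86 phone can execute.

def runnable_abi(device_abis, packaged_abis):
    """Return the first packaged ABI the device can execute, or None."""
    for abi in device_abis:      # device lists its ABIs in preference order
        if abi in packaged_abis:
            return abi
    return None                  # no match: the native code can't load

arm_only_app = {"armeabi-v7a"}            # typical NDK app circa 2012
fat_app = {"armeabi-v7a", "x86"}          # app that also ships an x86 build

print(runnable_abi(["armeabi-v7a", "armeabi"], arm_only_app))  # armeabi-v7a
print(runnable_abi(["x86"], arm_only_app))                     # None
print(runnable_abi(["x86"], fat_app))                          # x86
```

The 'fat' package is the clean fix, but it only happens if developers bother to build and test an x86 version, which is exactly the chicken-and-egg problem Intel faces.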
Luckily, the Android ecosystem is overwhelmingly ARM CPU based. There are no numbers floating around publicly about Android CPU market share, but other than a few MIPS based low end tablets, the market is 100% ARM based. Those ARM CPUs vary a lot in capability and features, but all have a core set of instructions that they execute. Compatibility is very good between them, more out of programmer necessity than anything else. If you stray outside of that compatibility box, you are in the proverbial "world of hurt", and it is an expensive world. Intel has hundreds of developers on Android for this reason.
The best example of what happens if you stray outside of this compatibility box is Nvidia and their Tegra 2 chip. Instead of implementing an A9 core with ARM’s NEON SIMD instructions like the majority of the world, Nvidia decided to save the substantial die area and skip those instructions. They are optional, so this is not a problem in and of itself. It could be a big win in some circumstances, especially if code is natively written for Tegra 2’s strengths.
Instead of making compatible hardware, Nvidia tried to force developers into writing that same code for their GPUs, using CUDA to do it. Both are proprietary and not compatible with NEON and the rest of the ARM ecosystem. In a shock to no one, the concept failed utterly; no one wrote code that would be specific to Nvidia chips. The company was left with a CPU that could not meet the lofty multimedia performance claims in the real world, and design win after design win was lost. Tegra 3 has NEON, and I have not heard CUDA mentioned in the same sentence as that chip.
Getting back to Intel, they saw this coming and prepared in a very smart way. Intel estimates 70-75% of Android apps are Dalvik based and 25-30% are NDK, but that doesn't mean the Dalvik apps are pure Dalvik. Some may have a few performance critical loops written in NDK for speed, or make a call to an NDK library. The number of these ‘tainted’ apps is not clear, but either way, a person buying a phone that runs 3/4 of their apps as a best case will not be a happy customer.
To solve that fundamental problem, Intel is using binary translation, basically intercepting any NDK code and just in time (JIT) recompiling it to native x86 instructions. If you are envisioning it as a kind of Dalvik VM for ARM code, that is the basic idea. This works quite well in theory; any emulator out there does this, but it comes at a performance and possibly battery life cost. Intel is claiming greater than 90% app compatibility out of the box, and we have no doubt this will be the case. It isn't rocket science at this point, it is a known quantity. Initial compatibility should be more than good enough.
The performance penalty is a different story, and that part is not known for real world use. Intel is claiming good performance, but until the pundits get phones to test, Medfield will have a cloud hanging over it. If Intel did a good job, the performance and battery life will not take that big a hit. The CPU performance for smooth running is there, as is the potential power headroom, so let's just say we are cautiously optimistic.
In the end, the software compatibility side is more than good enough for a new phone to get to market. If the binary translation compatibility improves with time, it may very well be a complete non-issue by the end of the year. Then again, it only takes one broken app to justify a return, so ongoing maintenance is going to be critical for Intel. Until they get well into the double digits for market share, they will always be playing catch up. Once they get critical market share mass, developers will code for x86 directly, and the aforementioned expensive maintenance will largely go away for good.
As a humorous aside, go dig up any Intel speech about why ARM will fail on the desktop. Swap all instances of ARM and Intel, then do a find and replace for desktop/laptop and phone. Same damning and insurmountable problem, but this time it is Intel as the underdog. If it is solvable for one side, one would think it is solvable for the other.
Getting back to drivers, this is the one area we are far from convinced Intel will get right. The code for current units is there, and Intel appears to not have screwed this one up, a minor miracle if there ever was one. There seem to be two paths for Medfield drivers, one for phones, one for everything else. The everything else one is, well, the same broken mess that Intel always produces in the space. The phone version is in much better shape though, and unrelated to the mainstream driver.
For the short term, this is a good thing: the drivers for an Atom may actually work! The first problem is we understand that Intel has no intention of releasing the ‘working’ driver, other than for meeting specific GPL requirements. The drivers are fully available if you are big enough to warrant them, sign an NDA, and likely jump through a lot of hoops, should you even know to ask for them. This is somewhat understandable for competitive reasons, but ends up snatching a victory from Intel's grasp.
For the average software developer, all they get is the open variant that doesn't seem to have basic functionality. This is a two class system that says to interested parties, "We have burnt you time and time again on the last few generations, but this time we got it right. Unfortunately, unless you are big enough, we don't care, and won't give you access to what you need to do your job." The percentage of phones sold by non-big name vendors is pretty tiny, so from a sales perspective, it is a tolerable snub.
Unfortunately, the developers that Intel is trying to pull in to avoid the very expensive ongoing maintenance and compatibility problems mentioned earlier are the ones they are snubbing. No, they are not just snubbing the developers, they are publicly taunting them while lowering the bar on the code they can get. To say this is monumentally stupid is understating the issue. Intel doesn't understand the market they are desperately trying to break into, and still are more interested in meeting checkbox goals than fixing dire problems.
The end result is that Intel starts out like any underdog with a new platform, by incentivising developers with cash until they can reach critical mass. This is a long and expensive process that normally ends when a critical mass is reached and developers write for the platform because it is good for them as well as Intel. Instead of courting developers to voluntarily make cool programs for their chips, they deny the numerical majority of coders the tools to do their job.
On the driver front, Intel has the ability to do the right thing, write good code, and yet their lack of common sense might sink it all. They are buying themselves an expensive problem and destroying their chances of making Atom based phones successful in the long term. Rather than starting the process of climbing out of their self-dug hole, Intel seems to be intent on pausing to refocus while heavier digging equipment is shipped in. It is painful to watch. Again.
Getting back to the good news, Intel has a good product. It is competitive with anything on the market now. It has the software support at launch to do what is necessary, and as long as Intel keeps shovelling money at software, things will stay that way. They have the vendors to design, market, and pry open carrier doors in the right way. They even appear to have a number of carriers on board. Everything looks good for the next few months.
The competition however is not standing still, and that is the next big open question. Intel hardware currently leads the ARM based competition in CPU performance. Intel hardware is currently power use competitive with current ARM offerings in many ways, and GPU performance is more than adequate too. Intel hardware is not on the market yet, ARM is. In a few months, the next big performance step on the ARM side, Qualcomm’s Krait/Snapdragon S4 is coming out. Ironically, the first public showing of both Medfield and S4 devices was the first day of CES.
How will Intel's CPU performance advantage hold up against the S4? How will it hold up against the flood of other ARM A15 based SoCs due out later this year from TI, Samsung, and others? It is unlikely that the new fire breathing A15s will be notably lighter on batteries, so any power lead Intel has should be in good shape. Since hard numbers on S4 and OMAP5 performance are not publicly available, this is an open question. If you have such numbers, please email Charlie at Semiaccurate.com.
In any case, we expect ARM to take the lead on CPU performance with their new crop of A15, or A15ish in the case of the S4, chips, but not fundamentally change the power use balance. On the GPU side where Intel currently runs mid-pack, the bar will be raised a lot by all the new SoCs. Intel is about to get blown into the weeds here on raw performance, even excluding possible driver issues.
Luckily for Intel, if you take all those earlier questions about CPU specs, performance, and relevance to the user experience, and substitute a ‘G’ for the ‘C’, you have the same result. Like CPUs, GPU performance matters only if it is not enough. Enough is a minefield of a metric, and you can always pull out an isolated scenario that proves any point you want, but for the overwhelming majority of users, current chips have more than enough GPU power.
Until phone or tablet resolution takes a large step forward, more GPU is largely irrelevant. Ask yourself this: if your phone GPU power magically doubled without any added battery draw, would you notice? Is there a point to decoding a 1080p movie at 120Hz if you have a 960 * 600 screen that can only display 30FPS? You can plug your phone into your TV to watch a HD movie in stereo, but will your phone support the required HDMI 1.4a connection? Does your phone have the storage space to hold that much data? Is there anyone that will actually do this?
The net result is that the only things the new crop of ARM chips do for the user are close the CPU power gap, nudge battery life a bit, and blow the graphics performance gap wide open. For the most part, these things are irrelevant to a potential end user. As usual, Intel has another round of Atoms in the pipeline that will again rebalance things. Intel has committed to dual core/dual graphics and quad core Medfields, both still code named ‘Future’, so they are not standing still.
On top of that, there is a 22nm Atom with a new core set for a debut after the ‘Future’ duo, but that won’t raise the performance bar dramatically. SemiAccurate expects this core to debut in about a year, likely at CES 2013, with devices based on it to come out a few months later. Intel’s large and slow steps will be surrounded by ARM’s smaller but much more frequent advances. It is truly a fierce fight, and all the players are not known at this point, much less their relative performance advantages. For hardware at least, Intel is in the right ballpark, and has a competitive roadmap.
So will the Intel Atom based phones succeed in the market? The first two critical bars, working software and having a product people can buy, have been cleared. The OEM/carrier hurdles are being crossed as we speak too, so this much is all positive. The next few stumbling blocks, future competitiveness, ongoing software support, and consumer whims, are a mixed bag. Future competitiveness is a little more cut and dried, as outlined above, and software support is there but tenuous. We are positive on the hardware side, but Intel's track record on software maintenance is spotty at best.
That leaves one big open question, consumer whims. Will people be tempted away from their iToy by a slick Intel Inside logo with an Atom hologram on it? Hell no. Will consumers be tempted to the dark side, err, microarchitecture by visions of 3x CPU power over an ARM A15, or 17x the GPU power of a Tegra? Hell no. Will consumers care what CPU, GPU, or anything else is in their phone? The overwhelming majority won’t have a clue what you are asking about if you survey them, but “Oooh, that one is shiny and the blue LED blinks!” So what is to be done to court consumer desire?
There are several things here: make a chip that does what it says, and support it well in software so phone companies can make nifty products with it. Intel is there on both counts for the short term, but could easily botch the mid- and long term side. Then the price needs to be right so Intel Inside doesn't equate to a $100 premium for less compatibility. We hear that Intel has been buying Atom design wins in Taiwan for over a year now, and there is no reason to suspect these first few phone wins aren't heavily subsidized too. The price for now is very likely right where it needs to be, basically very attractive. Advertising support is likely there too, but we will see how well this plays when the devices launch.
Intel has everything lined up for a successful launch, OEMs have all they need to be enabled. If Samsung decides to make the next flagship widget with an Atom, there isn’t anything on the technical front to stand in their way, but it will almost assuredly fly high or fall to earth based upon design and AICLF (Artificially Instilled Consumer Lust Factor). If Verizon decides to push the hypothetical Intel based X73/a4-991 as the Droid Superior XXL and hype it all over prime time TV, it will sell, and Intel will be writing the checks for the ad spree.
If Intel keeps coming out with competitive silicon, keeps the price down, keeps the software updated, and throws vast amounts of cash at every step of the chain, they will keep getting design wins and sales. From there, if x86 eventually gets a respectable double digit market share, and developers stop being actively driven away, the software compatibility problem might solve itself and go away. Then, and only then, will Atom be a success in phones. If everything goes perfectly, this is a multi-year process, and there are no shortcuts.
In the end, the crystal ball says that Intel still has a chance, but it is a steep uphill climb. The check boxes are all ticked, and things look positive for launch. The last three attempts to get Atom in to phones were an unmitigated disaster, but things really are different this time. Intel can do well with Medfield and will have some solid products on the market shortly.
The continued upwards climb, spreading of the wings, and eventual self-sufficiency of Atom in phones now rests on the continued software support and subsidy flows. Sadly, these later decisions tend to be made by the same people that made a mess of the last few attempts, and have a track record of expensive failures. It is a cultural problem at Intel, and we see no signs of that improving. Once again, software may sink good hardware. S|A
Updated: 25 January 2012 at 9:55am. Spelling correction of Davlik to the proper Dalvik.