At AFDS/Fusion 11, Jem ‘No cute nickname yet’ Davies gave a talk about ARM’s (NASDAQ:ARMH) view on power and heterogeneous computing. Some of the talk was old, some new, but it all fit in with the conference theme.
The first item Jem talked about was current trends in power and CPU scaling, and how things are not getting better. The basic idea is that transistor sizes are shrinking, but power is no longer scaling down at the same rate. You can put 2x the transistors on a chip, but if that takes 3x the power, you have a problem. Anyone who works on chips can tell you we have a problem, a big problem.
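To see why this matters, here is a back-of-envelope sketch of the math. The 2x/3x figures come from the hypothetical above; the `perf_per_watt` function and the framing in performance-per-watt terms are illustrative, not from the talk.

```python
# Rough illustration of the scaling problem described above: the
# transistor budget doubles per node, but power no longer drops to match.

def perf_per_watt(relative_perf, relative_power):
    """Relative performance-per-watt versus the previous process node."""
    return relative_perf / relative_power

# Ideal (Dennard-style) scaling: 2x the transistors at the same power.
ideal = perf_per_watt(2.0, 1.0)   # 2.0 -- twice the work per joule

# The broken case from the text: 2x transistors, but 3x the power.
broken = perf_per_watt(2.0, 3.0)  # ~0.67 -- each joule now buys LESS work

print(ideal, broken)
```

When that ratio drops below 1.0, throwing a bigger chip at the problem makes efficiency worse, which is exactly the wall the talk described.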
If you are thinking that this sounds like a rehash of Mike Muller’s talk at Common Platform, you would be right, only this time there was far less hardcore tech, and no eye surgery. Basically, the talk was pitched right for the audience; this is something the software guys really need to wrap their heads around. Hardware won’t scale like it did, and you can’t assume that solving bad code through brute-force CPU power will be viable forever.
Once the problem was laid out, the talk turned to standards, aka how to make the silicon useful in the face of the new reality. For the most part, CPUs are fast enough for almost all non-server tasks; the bottlenecks are in other places now. Two more CPU cores won’t help any more: if you have six and use two, eight buys you nothing. The most problematic bottleneck today is graphics. ARM has Mali, AMD has ATI, and Intel still doesn’t have basic driver functionality.
GPUs, what are they good for?
With the need for more GPU power at the top of most silicon providers’ agendas, the need for more usable GPUs comes to the forefront of most software providers’ agendas. Funny how that works. That said, if left alone, the market would try to provide 73 different solutions for 31 different problems, the majority of which are nothing more than shallow attempts to lock users in. We call this the “Woulda, shoulda, CUDA” problem.
ARM’s view of GPU compute
ARM sees this in an up-close and personal way; you might recall that they don’t actually sell chips, just designs and IP. ARM has its cores, the Cortex-A8/A9/A15 being the most common, and its Mali GPU line, but that is only the start. There are hundreds of vendors selling ARM products and related IP, and you can get multiple different GPUs tailored to multiple purposes. This mix-and-match situation has the distinct possibility of sliding into a real mess very quickly.
Standards make an impact
How do you solve this problem and un-mess the situation? Better yet, how do you avoid the problem in the first place? After all, it isn’t easy to fix broken silicon in the field. The solution: abstract it away, have an API that just works, a driver on steroids, or… FSAIL. Jem did not mention FSAIL by name, or anything related, but the ideas put forward by ARM are exactly the same as the ones put forward by AMD. Once again, what a coincidence that both companies were talking at Fusion 11!
Standards are well and good, and a thin software layer to make things just work is the way forward, no question there. Unfortunately, there is a bigger problem that may come out of the process of making the “One true standard, praise its APIs”: making it wrong. That takes things from “real mess” to “intractably painful”, leading to forks and worse. The one thing worse than “One true standard” is “Many one true standards”. Why? Because then you have many ways to do things wrong, not just one.
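For the software folks, the shape of that “thin layer” idea can be sketched in a few lines. This is a toy, not FSAIL or OpenCL; the backend classes, their names, and the `get_backend` API are entirely hypothetical, purely to show why application code written against one portable interface survives a change of silicon.

```python
# Toy sketch of a portable front-end over vendor-specific back-ends.
# In the real standards, a back-end would lower a portable intermediate
# representation to its own GPU ISA; here both just run on the host.

class CPUBackend:
    name = "cpu"
    def vector_add(self, a, b):
        return [x + y for x, y in zip(a, b)]

class FakeGPUBackend:
    name = "gpu"
    def vector_add(self, a, b):
        # A real GPU back-end would JIT-compile the portable kernel
        # down to its own hardware here.
        return [x + y for x, y in zip(a, b)]

def get_backend(preferred="gpu"):
    """Application code asks for a capability, not for a vendor."""
    backends = {"cpu": CPUBackend(), "gpu": FakeGPUBackend()}
    return backends.get(preferred, backends["cpu"])

# The calling code is identical regardless of which silicon runs it.
dev = get_backend("gpu")
print(dev.vector_add([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```

The whole lock-in fight is over who defines that middle layer, which is why getting it wrong once, in one standard, is so much cheaper than getting it wrong separately in a dozen of them.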
Lots of depth here, wise words
The fact that ARM, AMD, Microsoft, Corel and others are all standing up to bang the OpenCL drum, and implicitly supporting the next steps too, is encouraging. There have been some grumblings about OpenCL 1.1, but 1.2 is around the corner, and things appear to be progressing rapidly. In a sure sign of impending problems, all really appears to be going quite well. Head to the shelters, you still have time.
So, where does that leave us? ARM and AMD together ship the overwhelming majority of CPUs sold, and Intel is on board too, if only on the CPU side of things. That is progress, but there is a lot of work left to do. Nvidia supports OpenCL on paper, but if you look at what is available, their true colors show, especially if you talk to their sales team. The writing is on the wall, and AFDS/Fusion 11 laid out the roadmap for heterogeneous compute quite clearly. From here on out, it is a question of when, not how, vendors will support the “One true standard”.S|A
Author’s note/plea to those involved: Please keep the naming of the resultant standard(s) sane. FSAIL works decently; Fusion Architecture Intermediate Language does not. ECMA 19745.a.43-2013/V7 does not either. Keep that in mind.
Charlie Demerjian