Intel held their AI day today and SemiAccurate has a few details about their presentation. Since we weren’t at the show, we asked about the tech side of the presentations and got a few key bits.
The big news is of course that Intel now sees themselves as an AI focused company, something we covered in detail a few days ago. The short version is that Intel has been in AI for a long time, the upcoming products are quite real, and they have true features useful for AI. The only difference is that Intel now talks about their prowess in the field; it isn’t a shallow PR ploy like at some companies.
But what are they doing? One of the new announcements from Supercomputing is that the CPU after Knights Landing is called Knights Hill, and it is AI focused. While no one at Intel would say exactly what that means from an architecture and ISA standpoint, we did manage to glean a few bits about the effort. They all point to native support for 16-bit instructions and data sizes, with a strong possibility of direct 8-bit support too. AMD has 16-bit support in its new GPUs as well, so AI looks to be an industry-wide trend.
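As a rough illustration of why native 16-bit data sizes matter for AI workloads, here is a minimal NumPy sketch (generic code, not anything Intel has shown): storing the same weights in FP16 instead of FP32 halves the memory footprint, and therefore the bandwidth needed to move them, before any instruction-level speedup is counted.

```python
import numpy as np

# A hypothetical neural-network layer's weight matrix: 4096 x 4096 parameters.
w32 = np.zeros((4096, 4096), dtype=np.float32)
w16 = w32.astype(np.float16)  # same element count, half the bytes per element

print(w32.nbytes // (1024 * 1024))  # 64 (MiB in FP32)
print(w16.nbytes // (1024 * 1024))  # 32 (MiB in FP16)
```

The same logic is why direct 8-bit support is attractive: for inference workloads that tolerate the reduced precision, INT8 would cut footprint and bandwidth in half again.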
More importantly, Intel is finally starting to talk about their Nervana Systems acquisition and its hardware. Actually they are not talking about the hardware itself beyond the code name Lake Crest, just that it is shipping in the first half of 2017. The real question for Intel is not the hardware or its immediate predecessors; those are just about fully baked now. What they need to know is what to do with it and how to go about getting there.
This isn’t an open-ended question, nor is it rhetorical; Intel is about selling product, and the Nervana chips are soon going to be an Intel product. Will they be a socketed chip on their own? A Xeon co-processor like Phi? An MCM like the Omnipath adapters or Altera FPGAs? Do bear in mind that these are not just packaging decisions, they are key to, and based on, how the product is used.
Does the chip need high bandwidth, low latency, both, or neither? That may be known at the moment, but there is a second order problem here: AI software and algorithms. The most used software packages in the space are brand new, bleeding edge technology and likely to be outdated in a few months. Things are moving so fast that what is hot now is old hat after the next industry conference. Worse yet, tomorrow’s algorithms could have completely different hardware needs from today’s. Today’s theoretical high bandwidth, latency insensitive software package could be replaced by a low bandwidth, highly latency sensitive one next week; the space is moving that fast.
Going back to Intel, they likely have an idea of how the Nervana hardware will initially be productized. How customers use that hardware is an ever-changing world of cat herding, and there may not be a single answer, or even a small number of answers, to what these people want. This is where Intel is doing the smart thing: the Nervana hardware will initially be sent to those who need it, the big AI players and researchers really, to get feedback.
This feedback will be rolled into the product decisions of the near future to determine how the parts should be tied together for maximum usefulness. The iterations will continue until the field of AI settles down, if it ever does. Initially Lake Crest will be similar to the Skylake + Altera SKUs, IE tightly coupled on the same package with a Xeon. This combo will be called Knights Crest, an unfortunate and confusing name that ties it to the Phi/Larrabee family with which it shares nothing.
An Intel hardware or market focused talk would not be the same without claims of performance increases, and this time is no different. Knights Mill performance will be going up by a multiple, 4x over Knights Landing to be precise. More impressive is Intel’s claim that they will improve AI performance on their products by 100x over the next several years. This is obviously from a combination of hardware and software, but looking at the Knights * line, Nervana, and Altera products tightly integrated with Xeons, it is a very plausible goal.
Speaking of software, Intel looks to be following AMD’s lead in opening up the stack. Intel has always been the leader in open hardware specs; their documentation is the gold standard. This has directly led to innovation on the software side on top of their hardware stack. Today Intel is said to be opening up more of the tools and software stack to achieve the same goals. If you look at the state of AI research, it is almost entirely Linux based for the basic reason of openness; you can’t do the low level development needed on closed hardware and software. Intel opening up more of the software stack is the right thing to do for the right reasons, and it will boost AI development significantly. Since Intel is an AI focused company now, the impression I get is that they too get it.S|A