IDF didn’t neglect the bigger CPUs for the Atoms; there were no fewer than three families of big Xeons Intel teased too. Like their Atom brethren, nothing worth a damn was disclosed because it was IDF, and real information is forbidden.
The first one out of the gate is something that was widely expected, the single-socket Haswell-based Xeon. If you take a consumer Haswell and blow a few fewer fuses in an attempt to arbitrarily create new markets, you can call it a Xeon. Jack up the price 5-10x and you have the new Haswell E3s. If you know anything about Haswell, you know it should be a fairly decent part, mandatory marketing stupidity aside.
One unexpected bit on the slide says, “Media software developer kits: Available for Linux and Windows in 2013”. This tells you that the GPUs on these E3s will not be disabled, and that Intel is trying to push GPU compute in the server space. The concept is a great idea, AMD has been doing it for a while now, and Intel likely feels compelled to at least shout about it so as not to appear as far behind the curve as they really are.
There are two problems with this approach, the GPU itself and the OSes they are aiming at. Intel has a decade-plus history of botching everything that even potentially touches a GPU, and nothing has changed this time around. It is a management problem, not a technical one, however, and there are no signs of anything like a fix being imminent; just the opposite, in fact. If Intel is pushing server-side GPU compute, you know the concept was ready for prime time a few years ago, but only now do they have a part to address the problem.
This is how information is (not) disseminated now
Only one problem: the GPU in Haswell sucks. True, it sucks less than its predecessors, but it is woefully inadequate compared to an AMD APU like Trinity that costs 1/10th the price. The best Intel can do here is drive corporate acceptance for AMD products in the space. The main reason for this is not the anemic GPU performance itself, but the drivers they use.
To call Intel drivers barely functional is high praise indeed, but they haven’t reached that bar in SemiAccurate’s eyes yet. This woeful state of affairs is nothing new, Intel management practices still preclude fixing the drivers, and there is absolutely no impetus to change anything. The big difference with today’s announcement is that this state of affairs now has a Xeon tag.
Most of this problem centers on the joke that is how Intel manages the Linux driver program, not the result of the engineers’ work. A good example of this is the Media SDK tagged on the slide. Two weeks ago the SDK was announced at GDC, but no Linux version was mentioned publicly; that was NDA only. The space that Intel is targeting is 70%+ Linux, so they had to mention it with forced smiles.
Any prospective clients and devs will realize the thankless task of developing for a platform that the vendor is only reluctantly supporting in a cursory manner. About the only good thing to come out of this program will be the occasional bout of pained laughter when the full extent of the waffling is revealed. This is just sad, guys, at least try.
As you move up the product stack, things get progressively better. This of course brings us to the delayed Ivy Bridge-E/EP/E5, basically the socket 2011 chip that uses the Ivy core but loses the GPU. SemiAccurate has been hearing that stepping after stepping has come out of the fabs for this part, and each one fixes some problems while revealing a few more. Sandy-E/EP has essentially no competition to speak of in the market, so the pressure to get Ivy-E/EP out the door is roughly zero. It will come, just a bit later than intended, but that is old news. The entire list of details given out other than “Q3 release” is zero points long.
The last one up is really going to be a killer part, the Ivy Bridge-EX. This 4/8/more socket beast is a replacement for the rather crusty Westmere-EX 10-core Xeon released near the end of the Mesozoic era. Once again there is no real competition to speak of in the 4+ socket x86 space unless you count IBM’s Power and Intel’s Itanium, but the latter has a knife between the shoulder blades and is gurgling worryingly.
In a refreshing novelty for an IDF presentation, Intel gave out not one but two factlets about Ivy-EX. The first is that in an 8S configuration, the system will support up to 12TB of memory directly; that means 1.5TB per socket. If this system were a Westmere-EX it would have 4-channel memory with eight DIMMs supported per channel, or 32 in total. 1536/32=48, not a nice number for DIMM sizes. That means the Ivy-EX line will either have six memory channels per socket or support six DIMMs per channel. Of the two the latter is more likely; DRAM speeds have gone up since Westmere-EX’s launch, but physics has changed a whole lot less.
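The DIMM arithmetic above can be sanity-checked with a quick back-of-the-envelope calculation. This is a sketch of the article’s own reasoning, not anything Intel disclosed: it assumes 8 sockets sharing 12TB (1536GB per socket) and tries a few hypothetical channel/DIMM layouts to see which yield power-of-two DIMM capacities.

```python
# Back-of-the-envelope check of the Ivy-EX memory math.
# Assumption (from the article): 8S system, 12TB total -> 1536GB per socket.
GB_PER_SOCKET = 12 * 1024 // 8  # 1536GB

def dimm_size_gb(channels, dimms_per_channel):
    """Per-DIMM capacity needed to reach 1536GB per socket."""
    return GB_PER_SOCKET / (channels * dimms_per_channel)

# Westmere-EX-style layout: 4 channels x 8 DIMMs = 32 DIMMs
print(dimm_size_gb(4, 8))  # 48.0 GB -- not a power of two, not a real DIMM size

# The two layouts the article proposes:
print(dimm_size_gb(6, 8))  # 32.0 GB -- six channels per socket, plausible
print(dimm_size_gb(4, 6))  # 64.0 GB -- six DIMMs per channel, also plausible
```

Both proposed layouts land on power-of-two DIMM sizes, which is why the capacity figure alone hints at a topology change from Westmere-EX.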
The other is a brand new technology that will change the world. Intel calls this “Run Sure Technology”. It is another in a long line of BS marketing terms that try to hide things detrimental to the user in flowery language. Remember the Small Business Advantage package that actively precludes a secure system? How about the other buzzwords that bring a valued customer un-closable hardware-based remote exploit holes and DRM that allows a 3rd party to shut them down remotely on a whim? Remember kiddies, you can’t turn this off or even close the doors, nor can you get the keys either. Sound like fun? If so, Intel bringing something similar to a high-end server near you is a wonderful selling point. If not, well, enjoy the “Advantage”, for “Sure”.
In the end, Intel put out a bunch of good stuff, and then sank their own ships not once but twice. First they did a big presentation and said nothing but hollow fluff, and that is the best of it. Then the little information that did leak out despite their wishes shows that the company still doesn’t give a damn about security, graphics, or what is going on around them in the market. The more things change…S|A