There are a lot of people who don’t understand what the AMD/SeaMicro deal is about, and the rampant handwaving is starting to get annoying. The short story is that it doesn’t signal anything more than where AMD thinks the server market is going.
AMD bought SeaMicro for two reasons: the technology and the company’s market inroads. Before we look at those, a few other items surrounding the purchase need clearing up. First, it does not signal that AMD is going to buy anything else, especially Calxeda. They tried that, and the deal didn’t work out, but it was close. It does not mean anything for an on-chip or intra-chip interconnect either; SeaMicro’s fabric is a system architecture, not silicon resident. Anyone saying otherwise is signaling nothing deeper than their own lack of technical understanding.
The purchase of SeaMicro also does not signal anything about an ARM license; anyone suggesting that is flat out ignorant. We broke the news of the ARM/AMD interconnect last summer, and people have been struggling to understand it since. It is easy: FSA and an ARM license are not tied to each other at all, period. We would be happy to explain it in much more depth if you want us to consult for your company, but it is not relevant to the SeaMicro news. The really short version is that FSA is on-chip, and SeaMicro’s interconnect is not.
The acquired technology is not the uber-interconnect people are breathlessly dreaming of; it is just PCIe, implemented in a very clever way. No, it isn’t just clever, it is flat out brilliant, but it does not give any particular chip massive bandwidth. It just saves power, and lots of it, although not directly.
SeaMicro’s interconnect is just PCIe, and the older Atom based cards supported six Atoms from four ASICs. That means 2x PCIe is the maximum each ASIC can see. The ‘big chip’ version of the ASIC, called ‘Freedom’, supports two PCIe2 lanes, or 10Gbps. That is about the maximum you will get out of this interconnect, a number dwarfed by any modern chip-to-chip interconnect by an order of magnitude.
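To put that 10Gbps in perspective, here is a quick back-of-the-envelope check. The PCIe 2.0 rates are from the spec; the HyperTransport 3.1 figure used as a comparison point is just one illustrative example of a contemporary socket-level link.

```python
# Sanity check on the bandwidth numbers above.
GT_PER_LANE_PCIE2 = 5.0       # GT/s raw signalling rate per PCIe 2.0 lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line coding: 8 payload bits per 10 on the wire
lanes = 2                     # the 2x link each Freedom ASIC exposes

raw_gbps = lanes * GT_PER_LANE_PCIE2          # the quoted "10Gbps" is the raw rate
usable_gbps = raw_gbps * ENCODING_EFFICIENCY  # ~8 Gbps of actual payload

# Illustrative comparison: HyperTransport 3.1 peaks around 25.6 GB/s per link.
ht31_gbps = 25.6 * 8
print(raw_gbps, usable_gbps, round(ht31_gbps / raw_gbps, 1))
```

Roughly a 20x gap, which is the "order of magnitude" the article is talking about.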
From there, the breathless pundits don’t understand what a SeaMicro server is: a BBOSNS (Big Box Of Shared Nothing Servers). A SeaMicro server isn’t really a server, it is hundreds of servers in a box, and they don’t talk to each other any more than two laptops sitting next to each other do. They can talk over a network, but there are no direct interconnects between the sockets like there are in a two or four socket server.
All of this is the long way of saying that between sockets there is no coherency and no shared memory. What one does is completely opaque to the others; this is not a massive Cray with thousands of nodes sharing a memory space. SeaMicro sockets see their own DIMMs and their own ASIC, that’s it. Everything else is virtual, basically faked.
If anyone tells you that this technology is going to be used for core to core interconnects, laugh at them. If anyone tells you that the technology is going to be used for socket to socket interconnects, laugh at them. If anyone tells you this means AMD and ARM are going to hook up, laugh at them. Point out that the physical layer is nothing more than PCIe, and that PCIe is not coherent. Ask them why PCIe would make a good on-die interconnect too, and watch the blank stares. Then walk away.
Recall however that we said the interconnect was brilliant, something that does not square with the concept of vanilla PCIe. The brilliance is in the subtle bits, and those are where the power is saved. Modern chips and PCs are incredibly good at saving power; they have to be. The low hanging fruit has already been picked, and any advances tend to come in very small chunks. A percent here and a percent there can add up to a large amount if you do it often enough.
To do what SeaMicro did, you simply cannot think in the usual ways. Instead, they looked at what a server needs and picked out exactly what wasn’t needed for their uses. Anything that was not mandatory they removed, turned off, or didn’t implement, with fanatical devotion. Even the features that were needed, things like SATA, keyboards, and network ports, were deemed too power hungry to implement physically, so they weren’t; those were stripped out too.
Instead of lots and lots of controllers, NICs, drives, and cables, SeaMicro put in one thing: their interconnect fabric. With some clever coding, the rest of these bits, where absolutely necessary, are virtualized across the interconnect. The 768 individual servers in a SeaMicro box don’t have 768 drives; they share one SAN. There aren’t 768 NICs either, there are a handful of 10GigE ports. All of the remaining components are picked to save as much power as possible too. This consolidation pays dividends, on top of the power saved by removing the physical parts, and the cabling and transfer power saved is immense as well. It is 1/10th of a watt here, 1/2 watt there, but in the end it adds up to a big number, a big number times 768.
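The arithmetic behind that "big number times 768" can be sketched in a few lines. To be clear, the per-component wattages below are made-up illustrative assumptions in the spirit of the article, not SeaMicro’s actual figures; the point is how fractions of a watt multiply across a whole box.

```python
# Hypothetical per-node savings in watts from components that were
# removed or virtualized. These values are assumptions for illustration,
# not measured SeaMicro data.
per_node_savings_w = {
    "dropped NIC": 0.5,
    "dropped SATA controller": 0.5,
    "dropped misc I/O (USB, video, keyboard)": 0.3,
    "removed cabling and line drivers": 0.1,
}
nodes = 768  # servers in one SeaMicro box

per_node = sum(per_node_savings_w.values())
total_w = per_node * nodes
print(f"{per_node:.1f} W per node -> {total_w:.1f} W per box")
```

Even at these modest numbers, the box saves on the order of a kilowatt, before you count the savings from sharing drives and NICs in the first place.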
So to say SeaMicro’s interconnect is nothing more than PCIe is true, but that is really selling it short. What they did is take existing, known, and common technologies, then implement them in an outside-the-box manner. The end result is huge savings and unmatched densities. The company sweated all of the details that no one else bothered to, and the end result looks somewhat like magic, even if it is just a series of non-magical steps. It will not, however, transfer to the things most of the pundits are shouting about, zero chance.
The other half of the purchase is that SeaMicro is perhaps the only company that understands the ultra-dense server market. This arena is not related to anything you would recognize as a server, blade, or anything else purchasable off the shelf. Sweating the details has resulted in something unique in the industry, and it possibly does the intended job better than Google, Facebook, Amazon, and all the rest have done internally.
Those companies buy customized servers by the tens or hundreds of thousands. They are averse to paying for and powering up anything that they do not absolutely need. You may notice that this is where SeaMicro shines. The dense server clients also don’t like paying for floor space that they do not use, and SeaMicro is the best there is in this regard as well. If you think about it, it is almost as if SeaMicro servers were designed to fit the exact specs of the largest and fastest growing server market.
That is because they were; designing for that market is exactly what SeaMicro did, and they are the only ones currently doing it. The biggest and most lucrative server customers are the ones SeaMicro caters to, and here the company is head and shoulders above its competition. While SeaMicro does not talk much about its customer base, it has the ear of every large buyer that matters in the server world. This is the other half of what AMD bought.
As a tangential bonus, SeaMicro was Intel’s baby. They were pushing the boundaries of server design, and pushing Intel engineers hard to do things better. This was a win/win for Intel, and the attention SeaMicro got at Intel shows and calls demonstrates it wasn’t just fluffy PR.
Those engineering ties have now moved one exit up the 101 from Santa Clara to Sunnyvale, taking their unique expertise with them. They of course rode a wave of dollars to get there, but it is hard to say the company didn’t earn it.
The loss of SeaMicro is bad for Intel, but SeaMicro’s gain by AMD is worse for them. It instantly moves AMD from a fading presence in the area where it was traditionally strongest to a potential powerhouse. To make matters more complicated, it also removes the best tool Intel had to attack AMD with. Intel may minimize this loss in official statements, but don’t be fooled, this really hurts the company. It may be a while before AMD has products out that are truly optimized for SeaMicro, but they will come.
In the end, AMD bought a very, very good thing in SeaMicro. It got the best technology, and more importantly, the expertise to implement it. On top of that, the purchase opened doors to just about every Intel server stronghold there is, something AMD would have had a hard time opening otherwise. It was a brilliant move on AMD’s part, and a painful loss for Intel. Just about the only other thing you can say is that the purported technical press, loud though they may shout, doesn’t have a clue about servers.S|A
by Charlie Demerjian