At a Computex filled with not much new, Supermicro had three categories of new stuff, plus some other bits. Topping the list are high temp servers, GPU compute nodes, and storage boxes.
GPU blades – two in 7U
The first bit we saw at the booth was an update to their X9 blades that takes not one but two GPUs, along with two Sandy-EPs. This theoretically allows for 20 GPUs/7U, or 100 GPUs in a cabinet with 7U left over for a large block of dry ice to keep the metal shrouds from boiling off. If you are interested in actual computing rather than exercises in explaining to your boss why achievable FLOPS are so low, Supermicro makes a 4S Sandy-EP blade too.
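If you would rather check our math than trust it, here is a quick Python sketch of the density numbers. The 42U cabinet is our assumption, not Supermicro's, and the 10-blades-per-7U-enclosure figure is implied by the 20 GPUs/7U claim rather than stated outright.

```python
# Back-of-envelope GPU density for the dual-GPU X9 blades.
# Assumptions (ours): a standard 42U cabinet; 10 blades per 7U
# enclosure is implied by the article's 20 GPUs/7U figure.
GPUS_PER_BLADE = 2
BLADES_PER_ENCLOSURE = 10
ENCLOSURE_U = 7
CABINET_U = 42

gpus_per_enclosure = GPUS_PER_BLADE * BLADES_PER_ENCLOSURE  # 20 GPUs per 7U
enclosures = CABINET_U // ENCLOSURE_U - 1                   # keep one bay free for the dry ice
total_gpus = enclosures * gpus_per_enclosure                # 100 GPUs
spare_u = CABINET_U - enclosures * ENCLOSURE_U              # 7U left over

print(f"{total_gpus} GPUs per cabinet, {spare_u}U to spare")
```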
If you are looking for a machine that has actual storage capacity, memory slots to match compute capacity, and other such frivolities, look to the SuperServer 8017R-TF+. As you can immediately tell from the name, it is a… umm… err… SuperServer line product. It is also a 1U 4S Sandy-EP box with 8 DIMM slots per socket, 32 in total. On top of that, there are four PCIe3 8x slots, but in 1U you have to question why; half-half-half-half-height cards are about as useful as they are common.
Three GPUs in 1U
Should you want disappointing throughput in the same form factor, no worries, the 1027GR-TRF is there for you. This is a 1U 2S 8 DIMM Sandy-EP box with six 2.5″ drive bays. The disappointing throughput comes in with the three dual-slot GPU bays, each fed by a full 16x PCIe3 slot. The packaging is pretty tight; as you can see, there isn’t enough room left over for a bottom panel.
Four GPUs in 2U
If that percentage of achievable FLOPS isn’t low enough for your uses, you can always step up to the 2U 2S 2027GR-TRF. This one not only has two Sandy-EPs, it also has space for four 2-slot GPUs, 8 DIMM slots, redundant PSUs, and, well, that’s about it. The most interesting part is a bit hard to see, but it is to the left of the S|A watermark under the plastic fan shroud. The heatsinks are asymmetric; the rearmost one is about twice the height of the one on the forward CPU. This doesn’t really matter as long as both do the necessary cooling job with the available airflow, but it does show that Supermicro is really sweating the details.
From there we move on to the 7047R-TXRF workstation, perfect for massive storage bandwidth. It is a 4U 2S tower workstation that sports 16 DIMM slots and eight 8x PCIe3 slots, spaced with no separation between them. This is obviously not good for GPUs, so why do you need that many slots? PCIe SSDs. Eight 8x slots provide 512Gbps of bandwidth, or to use a more familiar number for most of you, 16 Gdwords per second. To put things in perspective, a fairly high bitrate HD movie is about 25Mbps. This workstation could theoretically support 20,000 of those streams while still leaving enough bandwidth left over to match two RAIDed SATA6 SSDs.
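For those who want to see the arithmetic rather than take it on faith, a minimal sketch follows. We use the raw PCIe3 signaling rate of 8Gbps per lane, as the numbers above do, so 128b/130b encoding and protocol overhead are cheerfully ignored.

```python
# The storage bandwidth arithmetic above, spelled out.
# Raw PCIe3 signaling rate used throughout; encoding and
# protocol overhead are ignored, as in the text.
SLOTS = 8
LANES_PER_SLOT = 8
GBPS_PER_LANE = 8                 # PCIe3, raw

total_gbps = SLOTS * LANES_PER_SLOT * GBPS_PER_LANE   # 512 Gbps
gdwords_per_s = total_gbps / 32                       # 16 Gdwords/s (32-bit dwords)

HD_STREAM_MBPS = 25               # a fairly high bitrate HD movie
SATA6_GBPS = 6
spare_for_ssds = 2 * SATA6_GBPS                       # two RAIDed SATA6 SSDs
streams = (total_gbps - spare_for_ssds) * 1000 // HD_STREAM_MBPS  # 20,000 streams

print(f"{total_gbps} Gbps total, {gdwords_per_s:.0f} Gdwords/s, {streams} HD streams")
```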
Basically, this box should be more than enough to keep up with the low latency storage needs of your local large metropolitan area; your I/O will be a bottleneck long before storage is. Luckily, the 8x slots provide a solution there too: Supermicro now does their I/O cards in normal PCIe layouts, not the backwards/PCI-no-e layout. I/O may be the bottleneck, but at least it is an addressable bottleneck now. I guess that makes storage bandwidth a solved problem; what a scary thought.
The last of the three major categories is high temperature boards for the upcoming breed of high temperature data centers. If you have even the barest grasp of thermodynamics, you quickly realize that running a data center above ambient temperature can save massive amounts of energy. No chillers and less airflow means less power spent cooling and moving air. Supermicro is making off the shelf boards that will run at 47C all day long, so your 40C data center can take off the shelf customer boxes as well as bespoke units.
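To put some rough numbers on the savings claim, here is an illustrative sketch using PUE (power usage effectiveness). The IT load and PUE figures below are made up for illustration; they are plausible for a chilled facility versus a free-air one, but they come from us, not from Supermicro or anyone else.

```python
# Illustrative cooling savings via PUE; every number below is
# hypothetical, chosen only to show the shape of the argument.
IT_LOAD_KW = 500          # hypothetical IT load

PUE_CHILLED = 1.6         # traditional chilled data center
PUE_FREE_AIR = 1.1        # 40C free-air facility, minimal cooling

overhead_chilled = IT_LOAD_KW * (PUE_CHILLED - 1)     # 300 kW of cooling and overhead
overhead_free_air = IT_LOAD_KW * (PUE_FREE_AIR - 1)   # 50 kW

saved_kw = overhead_chilled - overhead_free_air
print(f"~{saved_kw:.0f} kW saved, about {saved_kw / overhead_chilled:.0%} less overhead")
```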
Follow instructions carefully
The end result is cheaper operation of a data center, so theoretically you will pay less for rack space. There is one big problem with the whole concept: Intel has a curious set of instructions to follow to ensure proper operation of their CPUs at these elevated temperatures. If you are confused by the picture above, you just haven’t read the Intel High Temperature Data Center Server Assembly Instruction Manual carefully enough; it does make sense if you are detail-oriented and diligent enough to use a screwdriver.
Just kidding, we are making all this up because writing about high temperature boards that are otherwise non-remarkable is far from exciting work. It matters a lot, but it is hardly the stuff of riveting barroom conversations. Go local team or something. That said, Supermicro has a full line of high temp boards, with many more sure to follow.
I can taste the colors!
Last up we have a minor but useful update to the Supermicro rack line. The older models were pretty slick, with modular construction, configuration options until you get bored, slick cable routing, and a lot to recommend them over a generic unit. The one problem is that they color co-ordinated with nothing; dull greys and washed out pastels are so last century data center! Supermicro stepped up to the plate and not only updated the cable routing and control, but marked everything in bright, obvious, and non-pastel colors. They still color co-ordinate with nothing, but at least they are brighter. S|A
Updated 7/17/12: Changed blades from 9U to 7U, and adjusted related numbers accordingly. Also changed the 2027GR-TRF from 6 GPUs to 4.
Charlie Demerjian