In part two of this series, we looked at the core of Project Denver, aka T50. In part three, it is time to take a look at the bits surrounding that core, basically what differentiates Denver from a T50 CPU.
To switch gears from T50 to Denver, we scale out from the core to the SoC. Last January at CES, Nvidia was talking about Project Denver to anyone who would listen. Superlatives were tossed about with gay abandon, but details were non-existent. Press that should have known better touted it as the best thing since the last best thing Nvidia announced, and they were all quite sure that it would take over the world, even if they couldn’t tell you what it was.
Officially, Denver is a CPU + GPU SoC, or whatever buzzword you prefer, aimed at the HPC space. Take a Tesla-type HPC-oriented GPU, slap a few CPUs on it, and you have a surefire winner in the HPC world. Who needs an x86 CPU when you have an ARM core and lots of GPUs, right? Take that, Intel! Putting aside questions about the sustainability of the GPU-based HPC market, it does sound like a cunning plan.
People who talked to us about Denver at CES may have noticed that we giggled every time it was mentioned. The reason for this is that we were well aware of the current state of the project. SemiAccurate moles were telling us that at the time, Project Denver basically didn’t work. Luckily, that didn’t stop the pomp and circumstance.
As late as the last Analyst Day, Nvidia wasn’t shy about telling any analyst who couldn’t run away fast enough that Denver was set to come out in late 2012, maybe 2013 if things didn’t work out well. The problem was that there was absolutely zero chance of that happening, and Nvidia knew it.
Denver is a SoC based on T50 cores and Maxwell GPU cores. When the project started out in 2006, it was an x86 core with Fermi GPU shaders, and that slipped to x86 plus Kepler shaders later on. We told you about the PoR change from hell, aka the great x86 to ARM-64 migration, in part 2 of this story, and since then, there has been another PoR change to twist the knife. The GPU side of the house was moved from Kepler to Maxwell, a part not due until 2013. If Nvidia’s Analyst Day spiel is to be believed, Denver, at the moment, will come out a year before its GPU cores are done. The skeptic in us tells us that this may be a bit of a stretch.
Late 2012 is not a possibility, nor is Q1/2013, but since no official specs for the code name Denver have been released externally, there is the possibility that the name may be re-purposed to suit paper schedules.
Another area of concern for Denver is the interconnect. Imagine a SoC with many CPU cores, many GPU core clusters, and many memory controllers. Now slap a crossbar in the middle, ala Fermi, and you have what people working on the project call ‘a mess’. A mess that doesn’t work. Really.
One of the major problems with Fermi was the interconnect, and it was really never fixed. The large chips, GF100/110, had 8 clusters of shaders. The interconnect failed. The ‘fixed’ parts, GF104/114 and smaller, had two clusters or fewer, meaning the crossbar was so simple that it was almost non-existent. Luckily, it wasn’t Nvidia’s fault, it was TSMC that caused all the problems.
Runaway power use was the result of the interconnect scheme, and was never fixed, only lessened a bit via external circuitry. Fingers firmly pointed externally, there is no management problem in Santa Clara, just ask anyone in said management. Meanwhile, companies that let their engineers do their job have moved on to saner interconnect schemes. Those seem to work. That said, Nvidia’s whole philosophy doesn’t bode well for a low power SoC.
There are many Denver variants planned, from phones to supercomputers. What falls under the Denver moniker is more a PR problem than an engineering one; see the mercurial ‘Ion’ for a good idea of how things will play out here. What Denver was in January is not in question: at the time, engineers were making a SoC with T50 cores attached to Maxwell shaders via a massive crossbar. What Denver will be after the next PoR change is anyone’s guess. What Nvidia calls Denver when something is finally released may be totally decoupled from any engineering project that previously had the code name, or it may not be.
In the end, Project Denver is a really cool core. There hasn’t been anything that far out in left field since Transmeta; think of it as a moonshot, or maybe Moon Unit v.5. Nvidia picked up the right team to do the job, and do it well, so there is no question about their technical capabilities. Unfortunately, it is very easy to put smart people under poor management, and that is why we fear that Project Denver, as it stands now, may never see the light of day. Good luck guys, I admire your perseverance.S|A