There are a lot of misconceptions floating around Nvidia’s (NASDAQ:NVDA) ‘Project Denver’ aka Tegra 5. Publicly, it is going to do everything for everyone, but privately, the situation is a little more complex, and a lot more interesting.
When Nvidia announced Project Denver at the last CES, they made it sound like an all-new initiative that was going to take over the world. Denver is not new, but it is very innovative and interesting, at least its CPU cores are. Those cores will debut in Tegra 5, aka T50 and (likely) Logan, and will be in a number of products. The rest of the SoC is ambitious, but technically not all that different from the other SoCs out there.
The project itself has been going on for several years; I first wrote up the beginnings of the chip in late 2006, when Nvidia bought the remains of Stexar. (Sorry, no links due to this and this.) That was the birth of the Nvidia x86 program, something that has gone through more changes than a David Bowie retrospective, mainly due to management. Denver has been going through what seems like a PoR (Plan of Record) change every six months; pity the people who are working on it, they must have the patience of a saint, or a whole boatload of saints.
We first wrote up that the chip was slated to be Tegra 5 last August, and Denver is just one of the variants in that line. T50 was going to be a full 64-bit x86 CPU, not an ARM-cored chip, but Nvidia lacked the patent licenses to make x86-compatible hardware. Nvidia was trying to ‘negotiate’ a license in the background. Sources close to the negotiating table indicate that Jen-Hsun’s mouth weighed heavily against that happening.
Publicly, Nvidia’s stance was that there was no need for any license because the company was not making x86 hardware. Technically, this is true: T50 is a software/firmware based ‘code morphing’ CPU like Transmeta’s. The ISA that users see is a software layer, not hardware; the underlying ISA can be just about anything that Nvidia’s engineers feel works out best. T50 is not x86 under all the covers, nor is it ARM; it is something else entirely that users will never be privy to.
The idea was that this emulation of x86 in software would be more than enough to dodge any x86 patents that would stop the chip from coming to market. SemiAccurate has it on very good authority that this cunning plan would not have succeeded, and based on what the sources showed us, the chip never would have gotten to market. Since Nvidia’s public bluster was matched by equally fervent negotiations for a license in the background, we would have to conclude that they were aware of what their chances were in court.
On the day that Nvidia settled with Intel over the chipset/patent agreement, all hope of T50 being an x86 part died. If you read the settlement, section 1.8 specifically states: “‘Intel Architecture Emulator’ shall mean software, firmware, or hardware that, through emulation, simulation or any other process, allows a computer or other device that does not contain an Intel Compatible Processor, or a processor that is not an Intel Compatible Processor, to execute binary code that is capable of being executed on an Intel Compatible Processor.” Section 1.12 has some similarly curious language, as do other places sprinkled around the document; that is not by chance.
So, where does a core go from here? That one is easy: it becomes an ARM core, or, if you believe Nvidia PR, it was ARM all along. T50 was never ARM hardware based; we had originally heard it was an A15, or the follow-on part, that emulated x86, but that information turned out to be wrong. T50 is its own unique ISA, and it emulates the exposed ISA in embedded software. Think of it as an on-chip compiler from x86 or ARM down to the low-level internal instructions.
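The code-morphing idea is easier to see in a few lines of code. This is purely an illustrative toy, not Nvidia’s or Transmeta’s actual design: the guest ISA, micro-op names, and translation rules below are all invented. The key mechanics are real enough, though: each exposed (guest) instruction is translated once into internal micro-ops, the translation is cached, and later executions run the cached native sequence.

```python
# Toy sketch of a code-morphing execution layer. Purely illustrative:
# the guest ISA, micro-op names, and translation rules are invented.

def translate(guest_inst):
    """Break one guest instruction into internal micro-ops."""
    op, *args = guest_inst.split()
    if op == "ADD":            # simple guest op -> one internal op
        return [("add", args[0], args[1])]
    if op == "LOADADD":        # a CISC-style guest op cracks into two
        return [("load", "tmp", args[1]), ("add", args[0], "tmp")]
    raise ValueError(f"unknown guest op: {op}")

translation_cache = {}         # guest instruction -> cached micro-ops

def execute(guest_inst, regs, mem):
    """Translate on first sight, then run from the cache."""
    if guest_inst not in translation_cache:
        translation_cache[guest_inst] = translate(guest_inst)
    for uop in translation_cache[guest_inst]:
        if uop[0] == "add":
            regs[uop[1]] += regs[uop[2]]
        elif uop[0] == "load":
            regs[uop[1]] = mem[int(uop[2])]
    return regs
```

The point of the cache is that hot code pays the translation cost once; after that, the exposed ISA is just a lookup key, which is why the underlying hardware ISA can be anything the designers like.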
So, between last fall and CES, out went x86 and in came ARM, specifically the ARM-64 core that is the follow-up to the A15. This caused a number of headaches for the already beleaguered engineers; as far as PoR changes go, this one was a whopper. Luckily, with one major exception, the hardware changes needed to carry this out were minimal. The same can’t be said for the software layer; going from x86 to ARM is not trivial.
Those familiar with the ARM-64 ISA will realize that it cleans up a lot of the cruft that is the ARM instruction set, but not all of it. Things like predication still remain, but are thankfully minimized, and other rough spots are cleaned up a bit more. Legacy ARM warts can be minimized in the T50 software layer, likely better than in a pure hardware implementation, but it isn’t a trivial job. Remember when we said to pity the poor engineers working on this?
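To see what handling legacy predication in software might look like, consider how a translation layer could lower a classic ARM predicated instruction, say ADDEQ (add only if the Z flag is set), into branch-free internal ops. The ARMv7 semantics of ADDEQ are real; the internal “csel” micro-op and everything else in this sketch are invented for illustration:

```python
# Hedged sketch: lowering a legacy ARM predicated instruction into
# branch-free internal ops. ADDEQ's semantics (execute only when the
# Z flag is set) are real ARMv7; the internal ops here are invented.

def lower_addeq(dst, a, b):
    """Lower 'ADDEQ dst, a, b' into internal micro-ops."""
    return [
        ("add", "tmp", a, b),          # compute unconditionally
        ("csel_z", dst, "tmp", dst),   # keep result only if Z is set
    ]

def run(uops, regs, z_flag):
    """Execute internal micro-ops against a register dict."""
    for op, d, s1, s2 in uops:
        if op == "add":
            regs[d] = regs[s1] + regs[s2]
        elif op == "csel_z":           # conditional select on Z flag
            regs[d] = regs[s1] if z_flag else regs[s2]
    return regs
```

The conditional select keeps the pipeline free of a branch, which is the same trick ARM-64 itself uses (CSEL) in place of general predication; a software layer just gets to apply it wholesale to the legacy cases.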
The T50 core is wide, very wide, 8 pipes wide in fact. Once you have picked your jaw up off the floor, let me just say that this width is not equivalent to an 8-wide ARM core; these are 8 ‘Transmeta’-style internal software instructions, not ARM instructions. In the end, T50 should be roughly the performance equivalent of a 4-wide ARM core, a sensible target, with much lower power use.
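The 8-wide-internal versus roughly 4-wide-ARM arithmetic follows if each exposed ARM instruction cracks into about two internal micro-ops on average. A toy accounting of that, with invented crack counts chosen only to make the ratio concrete:

```python
# Toy accounting for why 8 internal issue slots per cycle is roughly
# a 4-wide ARM core: each exposed instruction cracks into one or more
# internal ops. The crack counts below are invented for illustration.

CRACK_COUNT = {
    "add": 1,   # simple ALU op: one internal op
    "ldr": 2,   # load: address generation + memory access
    "ldm": 4,   # load-multiple: cracks into several ops
}

def arm_per_cycle(instruction_mix, slots=8):
    """Average exposed ARM instructions retired per cycle, given a mix
    of (mnemonic, frequency) pairs and a fixed internal issue width."""
    avg_cracks = sum(CRACK_COUNT[m] * f for m, f in instruction_mix)
    return slots / avg_cracks

# A mix averaging two internal ops per ARM instruction means the
# 8 internal slots sustain about 4 ARM instructions per cycle.
mix = [("add", 0.4), ("ldr", 0.4), ("ldm", 0.2)]
```

With this hypothetical mix, `arm_per_cycle(mix)` comes out to 4.0, which is the sense in which an 8-wide internal machine lands at 4-wide ARM equivalence.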
T50 looks to be a remarkable piece of work; it could end up as one of the most ambitious and innovative CPU cores currently in the pipeline. If it comes out. Our skepticism is not based on the technical side of the core or project; technical problems can be solved by diligent engineering work, and we have no doubt that T50/Denver’s problems are solvable. Our concern is that the engineers may not be allowed to fix them.
Looking at Nvidia’s track record over the past few years, it is a casebook of failure after failure. When questioned, Nvidia management points the finger everywhere but where it should be pointed: at themselves. The company has released almost no chips in the past two or three years that were on time, on spec, or both, once again testing the sainthood of its engineers. Ironically, this is not an engineering problem; knowing many Nvidia engineers personally, it is pretty obvious that they are dedicated, hardworking, and quite competent. Nvidia’s problem is what the engineers are told to do.
Engineers are a very logical bunch, quite good at doing what they are told in innovative ways, and at solving problems logically. If those engineers are given impossible tasks, or their PoR changes every few months, it becomes very hard to meet unwavering deadlines. If the deadlines change, that reflects badly on management; if the engineers don’t meet deadlines, it isn’t management’s fault, right? Nvidia has a management problem, a big one, and until that changes, we hold little hope for T50 being successful in spite of the engineers’ best efforts.S|A
Charlie Demerjian