Why is Nvidia screwing its partners by cutting them out and competing directly against them? That one is easy: they are circling the wagons and grabbing a bigger share of a shrinking pie because plans A and B didn't work out.
If you haven’t been paying attention lately, HardOCP noticed, then confirmed, that Nvidia is putting their own branded cards out. If you are an Nvidia partner, this is basically your death knell, but according to Kyle, Nvidia claims it is only a test. With Newegg and Best Buy on board, the largest etail and retail channels in the US respectively, that is one heck of a ‘test’.
Why would they do such a thing? Because their income is dropping like a rock, and will crater completely in a year or two. The chipset business that made up between 1/4 and 1/3 of their income is in the process of going away, and that takes them from a $1 billion/quarter company to a sub-$1B/Q company. That revenue isn't coming back, and something needs to replace it.
Normally they would make this up in other areas; that is what companies tend to do. The other options are GPUs, GPU compute/GPGPU, and widgets. None of the three is panning out for the company, and this latest power grab will only hasten the death of their mainstream GPU business, just as it did for 3dfx.
GPUs are dying as a market, or more precisely, all three of its segments, high, mid, and low, are dying. On top you have the 'high end' GPUs, generally the $200+ category. These are large-die chips that sell at very high margins but only make up 2-5% of unit volume. If a GPU maker can break even on development costs here, they are usually pretty ecstatic. This generation, Nvidia is not going to break even, but that is somewhat tangential to the point.
On the other end of the spectrum, the 'low end' GPUs are the <$75 boards, generally made by taking 1/4 of the high end chip, sometimes less, and calling it adequate. They rarely are, but Moore's Law has made them 'suck less' with each passing year. In any case, they are very cheap to develop.
Low end GPUs have razor-thin margins but huge volumes, 60-75% of unit volume, sometimes more. Even with that, the margins mean a GPU maker makes little if anything in this segment. A $29 card doesn't leave much room for silicon profits, much less anything else, but this is what HP and Dell sell by the millions.
It does drive volume, and that can drive down costs, amortize a lot of fixed costs, and generally spread a lot of the things that accountants don’t like to talk about. They are a necessity even if the bottom line isn’t directly helped much by their existence.
The last segment is the ‘mid-range’ cards, basically the $75-200 category. These are generally made by cutting the high end parts in half, and they are the meat of the market. Mid-range GPUs perform adequately and consumers like them, so this segment sells fairly well, and makes decent margins. GPU makers need this segment if they are going to make a profit, the other two are not going to do it.
The low end GPU is going to evaporate in a few months. Later in Q4, AMD's Ontario/Zacate parts arrive with 80 shaders on the die, and in Q1 Intel's Sandy Bridge follows with about the same power, possibly a bit more. Q2 sees AMD's Llano hit the streets with 400 shaders for the kill. From there, things only get faster and more powerful. Remember, this functionality is included on the CPU at zero additional cost; you won't be able to buy a consumer CPU without it.
That means the low end GPU market goes *poof* because even if you could make a faster GPU, it wouldn’t be economically sane, much less sound, to sell it at the ‘low end’ price point. Llano will not only eat that market but threaten the low end of the mid-range GPU segment. Game over, and it won’t come back. That is the majority of Nvidia’s unit sales in the GPU market, gone.
On the high end, Nvidia got a bit of a reprieve when Intel shelved its Larrabee GPU, but those in the know will tell you it isn't permanently dead. In any case, the high end was spared immediate death, but it is running into another wall: 'good enough'. A single high end GPU will power almost any game in existence on a 30″ 2560×1600 monitor, and a mid-range one will do the same for a 1920×1200/1080p screen.
The case for needing a faster card on the high end grows progressively more tenuous. Multiple screens, 3D, and all the other things touted as 'killer apps' for this category are simply not turning into an economically viable customer base. Banging the drum may make some analysts change their ratings, but it still doesn't sell cards.
With the high end dying out, the mid-range will have to shoulder more and more of the development costs, meaning those profits get whittled away. With the low end gone, the fixed costs are amortized over a smaller base too. This death spiral quickly leads to a mid-range that is unprofitable as well. That time is coming quicker than Nvidia wants the financial community to believe. 60% of Nvidia’s business is hanging by a thread right now.
The net result is that GPUs are not worth doing for anything but the 'professional'/ultra-high-end/compute markets. Those markets are insanely profitable; selling a chip that normally goes into a $500 card for $4,000 is not a bad trick, but it depends on two things: GPUs and heavy investment.
Nvidia’s professional line can only exist because they subsidize it with the mainstream GPU market. In essence, they have the chips given to them for free, and all the division has to do is write software and drivers. That is a big expense, but an order of magnitude less than the cost of developing the underlying chip.
With the low end gone, the professional card line will have to shoulder more of the burden too. When the high end goes, that line will have to stand on its own while also supporting the mid-range GPUs. History shows literally dozens of companies making GPGPU-like chips for compute acceleration. Every single one failed because the economics did not work out. Let's repeat that very important point. Every single one failed because the economics did not work out.
Nvidia can only make money at this because they don't have to develop chips on the same books as the GPGPU division. When that 'manna from gamers' goes away, so will all the profitability of the GPGPU segment. That wipes out one of the two remaining 'exit strategies' for Nvidia.
The other exit strategy is 'widgets', aka the Tegra line. All we really need to say here is that the second-generation Tegra 2 is 20+% over its promised power budget, and is so buggy that its adoption is basically zero. As they said last generation, next generation will be better. Or they will say it again.
In the meantime, the Tegra line does not appear to be self-sustaining. That is the second door slammed shut, and unless it is flung wide open very soon, it won't ever make money. Considering that projections for Tegra 2 sales were cut in half earlier this year, then vastly reduced again a few months later, it doesn't look good.
The end result is that things look bleak for the boys in green, but this article is about the short term, and circling the wagons. If you recall, the chipset business is gone, and that is a huge chunk of Nvidia's income wiped out. The GPU business is about to take a sharp drop, and head down with a rapidity that few understand.
Nvidia does see the upcoming drop, and to brace for it, they are grabbing what revenue they can. That means people upstream of them, namely their AIB 'partners', are expendable. Nvidia has been cutting back partners; they just shut down BFG by not supplying it with 400 series parts, and are trying to do the same to XFX now. Ironically, BFG's big strength was at Best Buy. I wonder why it had to die?
By cutting out the AIBs, Nvidia can sell $200 boards instead of $50 chips. That is good, right? Well, not as good as it seems: margins on GPUs are in the 50% range, while boards are lucky to get 1/3 of that. The first thing selling boards will do is crater their margins, but that is expected.
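A quick back-of-envelope calculation, using only the figures above (a $50 chip at a ~50% margin versus a $200 board at roughly a third of that margin; these are the article's illustrative numbers, not Nvidia's actual books), shows why the move boosts revenue while gutting the margin percentage:

```python
# Illustrative per-unit economics, assuming the figures cited above.
chip_price, chip_margin = 50.0, 0.50        # $50 chip at ~50% gross margin
board_price, board_margin = 200.0, 0.50 / 3  # $200 board, "lucky to get 1/3 of that"

chip_profit = chip_price * chip_margin       # gross profit per chip sold
board_profit = board_price * board_margin    # gross profit per board sold

print(f"Chip:  ${chip_profit:.2f} gross profit on ${chip_price:.0f} revenue ({chip_margin:.0%} margin)")
print(f"Board: ${board_profit:.2f} gross profit on ${board_price:.0f} revenue ({board_margin:.0%} margin)")
```

Per-unit revenue quadruples and per-unit gross profit rises somewhat, which is exactly what makes the top line look healthy, but the reported margin percentage collapses from ~50% to under 20%, and that is before retail returns and support costs take their cut.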
Once the retail and etail returns start pouring in, those margins will decrease even more, and support costs have a long tail. Retail is horribly tough, and few can master it. Those that can don’t tend to make huge profits, and step over a lot of burnt out husks when they do. Nvidia is jumping into the deep end more out of fear than planning.
What the cards will do is boost the revenue coming in the door by a large margin. That will give the appearance of ‘things going well’ so insiders can sell more stock. Nvidia has one thing mastered, snowing gullible financial analysts. The card sales are another attempt at doing that, and it will likely succeed.
Success on this scale is measured in months though. In the best case it will only cost Nvidia a few 'partners'. Worst case, they will all die or leave. If Nvidia succeeds, their partners are dead; it is just a matter of 'when', not 'if', they go. When the AIBs go, Nvidia loses a lot of marketing presence, channel expertise, and sales outlets.
There is no way that the branded Nvidia cards can make up the revenue lost from the chipset and low end of the GPU business. There is no way that the branded Nvidia cards can make up for the lost marketing from their ex-‘partners’. There is no way that GPUs will be sustainable as a stand alone business in a few years.
All the fuss about making their own cards comes down to one thing: propping up a failing business. The core is rotting out, the exit strategies aren't panning out, so Nvidia is eating its progeny with a 'better them than us' smile on its face. The problem is that this is a shortsighted strategy, and it will only hasten the end. By then, the stock will have been sold, and those in the know will have moved on.S|A
Charlie Demerjian