Qualcomm shows off a little at CES

CES 2025: Two chips lovingly surrounded in fluff

Qualcomm underwhelmed at CES but had a few disclosures worth talking about. SemiAccurate would love to go into detail about the releases but the source material was lacking.

There were five major groups of ‘news’ the company released at the show, two of which had actual data in them. The other three, automotive, IoT cloud services, and smart home, were devoid of anything technical. The automotive section was a release about several partnerships with no mention of actual products that we could find. After several hours of briefings, SemiAccurate is still at a loss about the actual automotive offerings, things like the names of the chips, much less their specs.

The IoT cloud services offering is, well, we aren’t sure what it is. There was a deck with pretty diagrams but nothing that said what is actually being used, just the idea pitched as a smart business choice that makes smiling stock photo business drones seem happy. The smart home bit was just a copy of their blog, something we contend is silly to put out because it is essentially competition that we can’t add anything useful to.

Luckily from here things got a little better. The first bit is the release of Purwa SE, although Qualcomm never called it that. Purwa SE is the little brother to the Qualcomm Hamoa/X Elite SoCs.

We like the silicon but the end user product is so stunningly bad that we can’t get it to do more than boot loop after several months of trying. Software enablement matters, people. That said, Purwa is a cut down Hamoa with essentially half the GPU. The GPU was the painfully weak part of Hamoa and, to save money, Qualcomm halved it. They didn’t touch the utterly useless ‘AI’ unit because the marketing is based on it, same for the CPU. GPUs didn’t feature in the marketing materials so they had to go.

Qualcomm Snapdragon X Specs

Completely inadequate for purpose but not for marketing

Purwa SE/Snapdragon X has 8 cores in this tier, tops out at 3GHz, and the NPU is still at 45 TOPS. That Microsoft Copilot sticker has lots of MDF behind it, but the silicon is unchanged from the IFA debut. In short it is slower at everything, so why do it? Officially it is to enable a lower priced PC coming in at $600, but Qualcomm curiously never mentions yield salvage anywhere in the releases. The only Purwa SE device we saw at CES had a price tag of $899 so… err… stick with Intel devices, they actually work and aren’t hardware backdoored by a third party. There is no reason for anyone to buy a ‘PC’ based on Purwa SE/Snapdragon X.

The last chip is kind of interesting in a historical sense, remember the Qualcomm Cloud AI 100 chip from the ‘before time’? In the distant days of 2019, Qualcomm teased an AI ASIC called Cloud AI 100 and SemiAccurate thought it was really interesting. A 7nm inference chip backed by the software stack from Qualcomm’s phones actually sounds interesting, doesn’t it? The bit about no fans was also a positive sign that no one else seems to have caught at the introduction. Then it vanished, poof, radio silence.

Now step forward five years and it is finally out! Yay? A 7nm inference chip that doesn’t need a fan, on the card it appears, and supports the best AI formats of the last decade. Things have advanced in the interim. 7nm, current at the time of the 2019 introduction, gave way to 5nm, then 4nm and 3nm took over, with 2nm on the near horizon, plus many half-nodes in between. 7nm seems a tad… err… dated. But it is out and it looks like this.

Qualcomm Cloud AI 100 card lineup

Why does anyone care in 2025?

The Cloud AI 100 name is never mentioned in the slides but the renderings have it on the cards. There are three variants: Standard, Pro, and Ultra. From what we gather, Pro is the full chip, Standard is a cut down version, and Ultra is two of the Pro chips on a card with significantly more memory. If you tease apart the specs, though, interesting bits pop out. DRAM is 16/32/128GB of (we assume they meant) LPDDR4x and bandwidth scales with size from the Pro to the Ultra. This means the Standard and Pro have the same number of channels but different DRAM densities. So far so blindingly obvious.
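
To make that deduction concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 16/32/128GB capacities come from the deck; the bandwidth figures and the per-channel LPDDR4x rate are placeholder assumptions of ours, picked to line up with the ratios on Qualcomm’s chart.

```python
# Back-of-the-envelope sketch of the channel deduction above. Only the
# 16/32/128GB capacities are from Qualcomm's deck; the GB/s figures and the
# per-channel rate are placeholder assumptions matching the chart's ratios.
GBPS_PER_CHANNEL = 8.5  # ~LPDDR4x-4266 on a 16-bit channel, an assumption

cards = {
    # name: (DRAM capacity in GB, DRAM bandwidth in GB/s, hypothetical)
    "Standard": (16, 136),
    "Pro":      (32, 136),
    "Ultra":    (128, 544),
}

for name, (capacity_gb, bandwidth_gbps) in cards.items():
    channels = bandwidth_gbps / GBPS_PER_CHANNEL
    density = capacity_gb / channels  # GB hanging off each channel
    print(f"{name:8s} ~{channels:.0f} channels at ~{density:.0f}GB per channel")
```

If the chart behaves like this, Standard and Pro share a memory controller configuration and only the DRAM density differs, exactly the pattern you would expect from a salvage part.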

When you get to SRAM, Standard and Pro are close, so the SRAM is likely tied to the compute units, a few of which are fused off in Standard. The SRAM has the same ratio as the FP16 numbers, so there you have it. INT8 scales the same as the FP16 values, in case you needed more data. When you move to the Ultra card though, the DRAM bandwidth goes up 4x and the SRAM capacity goes up by the same multiple, so the performance goes up by a similar amount, right? Not so fast there, cowpoke.

On the above chart, INT8 performance scales superlinearly from Pro to Ultra, something you would expect from 2x the compute chips and 4x the memory size and bandwidth. PCIe bandwidth also doubles, in case you were wondering, but FP16 performance is up a mere 44%. No clue why on this one, but if something screams ‘serious architectural bottleneck’, this is it. Since there is absolutely no technical info on how the card is laid out or how the chip works, we can’t say for sure, but we have a glaring red light here, folks.
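
Here is that division as a quick Python sanity check. The Pro and Ultra figures are hypothetical stand-ins normalized so the ratios match what the chart shows, not confirmed specs.

```python
# Hypothetical Pro/Ultra figures, stand-ins normalized so the ratios match
# the chart: 4x DRAM bandwidth, 2x PCIe, INT8 up over 2x, FP16 up only 1.44x.
pro   = {"INT8 TOPS": 400, "FP16 TFLOPS": 200, "DRAM GB/s": 136, "PCIe GB/s": 32}
ultra = {"INT8 TOPS": 870, "FP16 TFLOPS": 288, "DRAM GB/s": 544, "PCIe GB/s": 64}

for metric in pro:
    ratio = ultra[metric] / pro[metric]
    print(f"{metric:12s} Ultra/Pro = {ratio:.2f}x")
# INT8 lands above the 2x you would expect from two chips, FP16 well below.
```

Whatever is choking FP16 shows up the moment you divide one column by the other.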

So who would want the best inference silicon of 2019? Good question. Qualcomm is showing off chassis from Aetina and Lenovo, likely for edge inference, but, well, aren’t there better solutions than a 75-300W chassis for this now? This isn’t to say that these things won’t do the job, just that there is better out there now. Process tech has moved on and on and on and on, plus models now use formats simply not supported by the Cloud AI 100. Sure you can refactor your model, but why bother?

The reason we are down on this device is simple: an Intel Lunar Lake SoC has a CPU, GPU, and NPU with a combined 120 TOPS for the platform. It takes between 17 and 37W, including memory, and supports all the latest models and data formats. Two of these would fit in the same TDP as a single AI 100 Pro card and deliver 60% of the performance. Without a GPU, and with CPU resources trimmed to match the AI needs, Lunar could easily beat the AI 100 in PPW. True dedicated inference hardware, even Qualcomm’s own, would annihilate either platform.
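
The arithmetic behind that comparison, sketched in Python. The 75W card TDP and 400 INT8 TOPS for the AI 100 Pro are our assumptions for the sake of the math; the 120 platform TOPS and 17-37W envelope are Intel’s claims for Lunar Lake.

```python
# Sanity check of the Lunar Lake comparison. The AI 100 Pro card figures
# (75W, 400 TOPS) are our assumptions; 120 TOPS in 17-37W is Intel's claim.
lunar_tops, lunar_w_min, lunar_w_max = 120, 17, 37
ai100_pro_tops, ai100_pro_w = 400, 75  # assumed card-level figures

pair_tops = 2 * lunar_tops  # two Lunar Lakes fit in one card's ~75W budget
print(f"Two Lunar Lakes: {pair_tops} TOPS in ~{2 * lunar_w_max}W, "
      f"{pair_tops / ai100_pro_tops:.0%} of an AI 100 Pro")
print(f"TOPS/W: Lunar {lunar_tops / lunar_w_max:.1f}-{lunar_tops / lunar_w_min:.1f} "
      f"vs AI 100 Pro {ai100_pro_tops / ai100_pro_w:.1f}")
```

At the bottom of its power envelope Lunar already wins on TOPS per watt under these assumptions; strip out the GPU’s share of the budget and the gap only widens, which is the point above.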

In short there seems to be little reason for the AI 100 to tempt buyers in the year 2025. Worse yet, if you deploy AI hardware, you probably don’t want to code and tune whatever it is you are doing for one generation of hardware, then have to redo it for another architecture. Intel, I am looking at you too. In the 5+ years since introduction, Qualcomm hasn’t uttered a peep about a roadmap, successor parts, or anything else. Would you invest in that?

One bright note is the OS support for the AI 100: generic Linux, Red Hat, Ubuntu, and CentOS. Note what is missing. Without knowing what is on the chip and card, it is hard to say if Cloud AI 100 runs an OS itself or is just an add-in device for a real CPU. That is just one of a long line of questions that make us a tad gunshy about this offering, and it matters for edge use cases.

So all in all, Qualcomm talked up two devices at CES, Purwa SE and Cloud AI 100, plus three other fluffy things with nothing behind them. Purwa SE is flat out inadequate on the graphics and compatibility front, Cloud AI 100 is too little too late, and the rest is pure fluff. Nothing bad but nothing good either. S|A
