Qualcomm had a few automotive goodies to talk about recently, a platform, a technology, and partnerships. Of the bunch the technology, VIO, is probably the most interesting to SemiAccurate readers.
On the platform side we have Qualcomm’s Drive Data set of offerings, which is both more and less than similar offerings from other companies. Most of the automotive telematics offerings you hear about are a complete hardware and software system to do X, where X is usually some overarching far-future feature like autonomous driving. Many companies promise borderline moonshot capabilities at flashy keynotes even though it is obvious the hardware can’t deliver them for a few generations and the software behind them doesn’t exist yet.
Qualcomm does far less than this with its Drive Data platform; they don’t promise the moon. They don’t even promise that it will do very much for the end-user, a refreshing and realistic change from some other companies. In that respect they do a lot more than the rest, and do things that are actually useful for developers now. More and less ends up being really good for everyone because instead of chasing an impossible dream on inadequate silicon and sensors, they start building things immediately that move the industry toward the ultimate goal.
So what is Drive Data? It is a set of tools and APIs that run on Qualcomm silicon, in this case the Snapdragon 820Am automotive SoC and associated boards. Those tools are made for data acquisition, machine learning, image processing, and data transfer. The idea is to enable realtime data acquisition, number crunching, machine learning, and cloud uploads with the lowest barriers to entry. For the engineers out there, don’t think about acquisition, that is the easy part, think about finding the balance between data and compute locality, bandwidth and compute locality, and similar tradeoffs while flying down the highway at exactly the speed limit, really. :)
In short Drive Data is Qualcomm doing the plumbing for OEMs and carmakers. At MWC 2015 Qualcomm showed off their Zeroth platform for machine learning. That is now baked into every Qualcomm high-end SoC, as is their quite powerful DSP. Between these two features the Snapdragon 820 can do a lot of things on the machine learning and image recognition front that SoCs from other companies cannot do without external hardware and silicon. Even then there are some integration headaches that most would rather avoid.
Like most other APIs offered on modern silicon, from Intel down to the smallest tier 3 and lower chipmakers, the idea is to lower the cost of entry to a market for a developer. Drive Data does just that; it comes with image recognition, machine learning, cloud uploads, and related tasks. TomTom is using it for crowdsourced HD map data acquisition for future autonomous driving uses. This is the long way of saying they are building value on top of the Drive Data platform rather than redoing the plumbing for the 73rd time.
That brings us to the technology that most will find interesting, which Qualcomm calls VIO. The term was introduced to us during the Snapdragon 835 briefing a few months ago, possibly before, but to be honest we didn’t get it. VIO stands for Visual Inertial Odometry and it does exactly what the name suggests. No this isn’t our usual joke of “Company X’s new RL8536/Z44 is exactly what you would expect and does what its name suggests”, VIO really does. Take a look at the picture below.
Follow the colored traces
Note the three traces, all GPS related, and all different. They are taken from the same test vehicle equipped with a ground truth GPS and a normal GPS unit. A ground truth unit is a highly accurate GPS rig, costing $100K or so, which typically samples at a very high rate, has lots of antennas, and can utilize enhancements to GPS like ground stations at airports. The end result is that they are really accurate, as you can see from the green trace: it is locked onto the lane the car is (presumably) traveling down and doesn’t waver much, as you would expect from a car traveling down a straight highway.
The next trace to look at is the red one, basically standard GPS. For the last few meters of the trace it is pretty close to the ground truth in readings but wavers from edge to edge of the lane at times. If you look back to the right-hand side of the image, about halfway across, the trace takes a wild swing into the guardrail, and even further back it crosses three lanes and wobbles faster than physics says a car on the highway can handle. This is standard GPS behavior and was probably caused by reflections in the vicinity; consider this behavior to be ‘normal’.
Normal isn’t good enough for autonomous driving though, that three lane wobble would cause some undefined behaviors if a car were to follow it in real life. Such undefined behaviors usually end up in hospitals and courtrooms and as such are considered sub-optimal by most engineers working on autonomous driving technologies. Luckily for humanity, doggedly following GPS traces like the red one is a long-solved problem in the sector; Teslas don’t tend to go bonkers when they drive by a tall building at speed.
That brings us to the last trace, the blue one called GPS/VIO fusion. Note that it tracks the ground truth trace very, very precisely; the deltas are in the top right. Qualcomm is saying that a cheap consumer GPS augmented by VIO can match a $100K calibrated GPS unit, and has the data to back it up. It isn’t hard to see how this could be useful to autonomous driving developers.
So what is VIO? It is a way to augment positioning data via machine intelligence and image recognition. If you assume you start out at a known location, i.e. where your car is parked, and that normal GPS will give you an accurate reading over time and/or the vehicle knows where it was when it last stopped, you have your initial conditions. From there, if you just use normal GPS you get the red line above and things aren’t all rosy.
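The general shape of this kind of fusion can be sketched with a toy 1-D complementary filter. To be clear, this is not Qualcomm’s actual algorithm, and the weight and noise values below are made up; it only illustrates the principle of trusting the smooth frame-to-frame VIO deltas while letting absolute, but noisy, GPS fixes slowly correct accumulated drift:

```python
# Toy 1-D complementary filter -- NOT Qualcomm's actual fusion
# algorithm, just the principle: dead-reckon with VIO deltas and
# nudge the estimate toward the absolute (but noisy) GPS fix.
ALPHA = 0.98  # weight on the dead-reckoned estimate (assumed value)

def fuse(position: float, vio_delta: float, gps_fix: float) -> float:
    predicted = position + vio_delta                  # dead reckoning from VIO
    return ALPHA * predicted + (1 - ALPHA) * gps_fix  # slow GPS correction

# Car moving 1 m per step along a line; GPS wanders by meters.
pos = 0.0
truth = 0.0
gps_noise = [3.0, -2.5, 4.0, -3.5, 2.0]  # invented noise samples
for step, noise in enumerate(gps_noise, start=1):
    truth += 1.0
    pos = fuse(pos, vio_delta=1.0, gps_fix=truth + noise)
    print(f"step {step}: truth={truth:.1f} fused={pos:.2f}")
```

The fused estimate stays within a fraction of a meter of the true track even while individual GPS fixes are off by several meters, which is the red-line-versus-blue-line difference in miniature.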
VIO augments this by taking a picture of a scene and recognizing items in it. For cars this is things like street signs, traffic lights, lane markers, and the usual things every street has. Doing this in real time is not a particularly hard problem anymore, AI has been trained for all of this and the dataset fits comfortably in a modern SoC’s storage and doesn’t tax it much for realtime recognition.
What VIO does is use known objects in an initial frame to infer motion between frames. If a street sign is 23 pixels wide in frame 1 and 31 pixels wide in frame 2, and you know the frames are 1/120th of a second apart, the math to work out the distance traveled is pretty simple. Better yet, it works in three dimensions, so you can adjust for left/right motion as well as forward travel. Between this inferential delta and the constant stream of consumer-grade GPS info, the resultant location is pretty accurate.
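The forward-distance part of that math is just the pinhole-camera model: apparent size is inversely proportional to distance. A minimal sketch, using the 23-pixel and 31-pixel widths from the text and assumed values for the camera focal length and the real sign width (neither comes from Qualcomm):

```python
# Pinhole-camera odometry sketch. The focal length and sign width
# are assumptions for illustration, not Qualcomm figures.
FOCAL_LENGTH_PX = 1400.0  # camera focal length in pixels (assumed)
SIGN_WIDTH_M = 0.75       # real-world width of the street sign (assumed)

def distance_to_sign(pixel_width: float) -> float:
    """Apparent width in pixels -> distance in meters,
    via pixel_width = focal_length * real_width / distance."""
    return FOCAL_LENGTH_PX * SIGN_WIDTH_M / pixel_width

# Sign is 23 px wide in frame 1 and 31 px wide in frame 2.
d1 = distance_to_sign(23.0)  # ~45.7 m away
d2 = distance_to_sign(31.0)  # ~33.9 m away
print(f"moved {d1 - d2:.1f} m between the two frames")
```

Repeat the same trick on the sign’s horizontal position in the frame and you get the lateral component as well, which is the three-dimensional part mentioned above.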
A side effect of VIO is that you use the cameras in a vehicle to spot all relevant objects at a given point be it a lonely highway or a busy metropolitan intersection. Since the Snapdragon 820Am conveniently comes with a Cat12 LTE modem, that data can be uploaded to a cloud service for whatever hopefully benign reasons an OEM uses it for. This is what TomTom is doing and it will build up quite a precise database in short order.
This cloud service can then send pre-computed location and object data back to the Drive Data/VIO/GPS platform to, in theory, save energy, free up compute cycles for other uses, and give better hints as to what is happening in the world around you. The more back and forth to the cloud there is, the better the system as a whole gets. The ‘simple’ addition of VIO to GPS makes it scary accurate, the addition of years of accumulated image recognition results should be pretty impressive when it is brought to bear at the consumer level.
And that is why we started out by saying Qualcomm is offering more and less with the Drive Data platform. TomTom is simply using it to gather data in a crowdsourced manner at the moment, and for a good reason. How this tsunami of data is gathered, parsed, fed back, updated, and used is really an open question; how it is most efficiently computed and slung around is a much harder one. If every car out there streams 120FPS data from 10 4K cameras, well, the LTE networks are going to melt.
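The melting is easy to quantify with back-of-envelope arithmetic on the worst case named above, 10 cameras streaming uncompressed 4K at 120FPS (the compression ratio at the end is our assumption, not a figure from the article):

```python
# Raw bandwidth for the worst case in the text:
# 10 cameras of uncompressed 4K UHD video at 120 FPS.
WIDTH, HEIGHT = 3840, 2160  # 4K UHD resolution
BYTES_PER_PIXEL = 3         # 8-bit RGB, no compression
FPS = 120
CAMERAS = 10

bytes_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * CAMERAS
gbit_per_sec = bytes_per_sec * 8 / 1e9
print(f"~{gbit_per_sec:.0f} Gbps per car, uncompressed")  # ~239 Gbps
# Even with aggressive video compression (say 500:1, an assumption)
# that is still roughly half a gigabit per second, per car.
```

Multiply that by every car on a freeway in one cell and the melting-networks line stops looking like hyperbole.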
More and less means Qualcomm is offering less in the sense that they don’t draw conclusions, they just offer tools and APIs. More because it works now and does what it says, gathers data and allows developers to figure out how and where to use the results. They aren’t trying to offer a pre-rolled autonomous driving solution like many others and it shows, but they are offering a path to get there, someday, when things are ready. SemiAccurate thinks this is the right way to do things.
In addition to TomTom there were a slew of other Qualcomm automotive news releases recently. First up is that PSA has followed Audi/VAG in moving to Qualcomm SoCs for their infotainment systems. If you were a cynic you would say that Qualcomm is buying wins; if you looked into the details you might think that buggy, late, and out-of-spec silicon coupled with a lack of LTE modems are enough to exclude some players. In the automotive sector, deliverables tend to matter more than CEOs shouting and quasi-correct public statements.
After this two other mobile V2x announcements were made. The first was a partnership with PSA, Ericsson, and Orange to define what V2x should look like in 5G. It is an attempt to define what portion of the LTE spectrum should be allocated to V2x and what priority it should be given. It is building on the Release 14 V2V, V2I, and V2P proposals to pave the way for automotive use cases to be a first class citizen in 5G.
There was a similar trial announced with LG using a more specific set of technologies. The basis for this trial is a Cat16/GbLTE modem augmented by 802.11ac communications. This telematics system supports 802.11p/DSRC with an eye on 5G once again. In short don’t expect much to come from this in the form of end-user devices or technologies until 5G arrives.
Last up we have a real current use case for automotive V2x technology, albeit in a much more restricted application, F1 cars. Two F1 cars really, the Mercedes-AMG Petronas F1 team to be exact. The idea is to transmit the Gigabytes of data collected by a modern F1 car back to the pits as quickly as possible. Since the series’ rules preclude some forms of data collection and car-to-pit-wall transmission, and restrict others to while the car is in the pits, reliable but fast transmission is a must.
If you consider that in a modern F1 pitstop the car is stationary for 2-3 seconds, getting a few GB off of it is quite a challenge. Throw in that energy use is critical, modern F1 rules are effectively an efficiency-based challenge, so pulling wattage off the drivetrain is a distinct no-no as well. Then there is weight, there are aerodynamic concerns about form factor, and all the rest. Grams matter, as do mm, in this arena. Then there is the security of this critical data…
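The numbers make the problem concrete. Taking round figures of our own choosing, since the article doesn’t give exact ones:

```python
# What "a few GB in a 2-3 second pit stop" demands in throughput.
# Both numbers below are assumptions for illustration.
DATA_GB = 3.0       # assumed payload gathered per stop
STOP_SECONDS = 2.5  # assumed stationary time in the box

required_gbps = DATA_GB * 8 / STOP_SECONDS
print(f"need ~{required_gbps:.1f} Gbps sustained")  # ~9.6 Gbps
# For context: a single 802.11ad link tops out around 6.8 Gbps peak,
# and real-world 802.11ac is well under that, which is why the
# transfer has to begin over 802.11ac before the car even stops.
```

In other words, no single link at its theoretical peak covers the budget, hence the long-range/short-range handoff scheme described next.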
To that end, Qualcomm is using 802.11ac to transmit data long-range from the car to the pit wall, and once things get a bit closer, 60GHz 802.11ad takes over. The challenge is to do the handoff as early as possible to maximize bandwidth, and to do it as reliably as possible. In an electronically noisy arena like an F1 pit lane this isn’t easy. Last summer Qualcomm and Mercedes did a trial in the garage during the USGP in Austin, and the two promise to do a few more during the British GP this summer.
So that is about it for Qualcomm’s automotive offerings in the past two weeks. A platform, APIs, a GPS augmentation technology, two 5G V2x partnerships, an infotainment alliance, and an F1 team’s data transmissions. Very little of it is useful to the consumer at the moment but in the near future it should all make things work better. The only problem is by then it will be hard to tie it back to the work being done now. S|A