Autonomous driving is seen as the future of mobility, thanks to companies like Tesla that have developed AI-driven advanced driver-assistance systems (ADAS) to help users navigate from one point to another under certain conditions.
The progress has been astonishing to many, but the fact remains: We are still nowhere near truly autonomous vehicles. To achieve true autonomy, self-driving vehicles must be able to perform better than human drivers in all conditions, whether in a densely populated urban area, a village, or an unexpected scenario along the way.
“Much of the time, autonomous driving is actually kind of easy. It’s sometimes as simple as driving on an empty road or following a lead vehicle. However, since we’re dealing with the real world, there’s a wide variety of ‘edge cases’ that can occur,” Kai Wang, the director of prediction at Amazon-owned mobility company Zoox, said at VentureBeat’s Transform 2022 conference.
These edge cases create trouble for algorithms. Imagine a group of people stepping onto the street from a blind corner or a pile of rubble lying in the way.
Training effort from Zoox
Humans are pretty good at recognizing and responding to almost all kinds of edge cases, but machines find the task difficult as there are so many possibilities of what can happen on the road. To solve this, Zoox, which is building fully autonomous driving software and a purpose-built autonomous robotaxi, has taken a multi-layered approach.
“There’s not really a single solution that will solve all these cases. So, we try to build in different types of mitigations at our whole system level, at each layer to give us the best chance at handling these things,” Wang said.
First, as the executive explained, Zoox enables the perception of different conditions/objects by bringing in data from the sensor pods located on all four corners of its vehicle.
Each pod features multiple sensor modalities — RGB cameras, Lidar sensors, radars and thermal sensors — that complement each other. For instance, RGB cameras can sense detail in imagery but fail to measure depth, which is handled by Lidar.
“The job of our perception system is to use all these sensors together, and fuse them to produce just a single representation for all the objects around us. This gives the best chance at recognizing all the things in the world around us,” Wang said.
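To make the idea of fusing per-sensor detections into "a single representation for all the objects around us" concrete, here is a minimal sketch. The class names, the greedy clustering by distance, and the confidence-weighted averaging are all illustrative assumptions, not Zoox's actual perception stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single-sensor detection (illustrative fields, not Zoox's API)."""
    modality: str      # "camera", "lidar", "radar", or "thermal"
    x: float           # position estimate, metres
    y: float
    confidence: float

def fuse(detections, radius=1.0):
    """Greedily cluster per-sensor detections into one object list.

    Detections from different modalities within `radius` metres of each
    other are treated as the same physical object; the fused position is
    a confidence-weighted average of the individual estimates.
    """
    objects = []
    for det in sorted(detections, key=lambda d: -d.confidence):
        for obj in objects:
            if (obj["x"] - det.x) ** 2 + (obj["y"] - det.y) ** 2 <= radius ** 2:
                w = obj["confidence"] + det.confidence
                obj["x"] = (obj["x"] * obj["confidence"] + det.x * det.confidence) / w
                obj["y"] = (obj["y"] * obj["confidence"] + det.y * det.confidence) / w
                obj["confidence"] = w
                obj["modalities"].add(det.modality)
                break
        else:
            objects.append({"x": det.x, "y": det.y,
                            "confidence": det.confidence,
                            "modalities": {det.modality}})
    return objects
```

The point of the sketch is the complementarity Wang describes: a camera detection carrying rich appearance confidence and a lidar detection carrying accurate depth land in the same fused object, each covering the other's weakness.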
Once the surrounding agents are recognized, the system models where each of them will end up in the next few seconds. This is done with data-driven deep learning algorithms that produce a distribution of potential future trajectories. The planner then considers all the dynamic entities and their predicted trajectories and decides how to safely navigate the current scenario on the way to the target destination.
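The predict-then-plan loop above can be sketched in a few lines. This toy version stands in for the learned predictor with a constant-velocity model fanned out over a few heading hypotheses, and the safety check simply caps the probability mass of predicted trajectories that pass near the ego path; every name and threshold here is a hypothetical illustration.

```python
import math

def predict_trajectories(x, y, vx, vy, horizon=3, dt=1.0):
    """Toy stand-in for a learned predictor: returns a small weighted
    distribution of future trajectories for one agent."""
    speed = math.hypot(vx, vy)
    heading = math.atan2(vy, vx)
    hypotheses = [(heading, 0.6), (heading + 0.3, 0.2), (heading - 0.3, 0.2)]
    distribution = []
    for h, prob in hypotheses:
        traj = [(x + speed * math.cos(h) * dt * k,
                 y + speed * math.sin(h) * dt * k)
                for k in range(1, int(horizon / dt) + 1)]
        distribution.append((prob, traj))
    return distribution

def plan_is_safe(ego_path, predicted, clearance=2.0, risk_budget=0.1):
    """Reject an ego path if the probability mass of predicted
    trajectories passing within `clearance` metres exceeds the budget."""
    risk = 0.0
    for prob, traj in predicted:
        if any(math.hypot(ex - px, ey - py) < clearance
               for (ex, ey), (px, py) in zip(ego_path, traj)):
            risk += prob
    return risk <= risk_budget
```

Keeping the prediction as a distribution rather than a single guess is what lets the planner reason about unlikely-but-dangerous outcomes, which is exactly where edge cases live.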
While the system effectively models and handles most edge cases, it can still run into novel situations on the road. In those cases, the vehicle stops and uses teleguidance capabilities to bring in a human expert for help, while continuing to check for collisions and obstacles involving other agents.
“We have a human operator dialed into the situation to suggest a route to get through the blockage. So far, we have received teleguidance for less than 1% of our total mission time in complex environments. And as our system gets more mature, this percentage should go down further,” Wang said.
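The fallback logic and the "less than 1% of total mission time" metric can be captured in a short sketch. The interface is purely hypothetical: `planner` returns `None` when a scenario is too novel to handle on board, and `teleguidance` stands in for the remote operator suggesting a route.

```python
def drive_mission(scenarios, planner, teleguidance):
    """Run through mission scenarios and report the fraction that
    needed remote help (illustrative interface, not Zoox's API).

    `planner(scenario)` returns a route, or None when on-board planning
    cannot proceed; `teleguidance(scenario)` asks a remote human
    operator for a route while on-board collision checking stays active.
    """
    guided_steps = 0
    for scenario in scenarios:
        route = planner(scenario)
        if route is None:
            route = teleguidance(scenario)
            guided_steps += 1
    return guided_steps / len(scenarios)
```

A maturing system shows up in this sketch as a planner that returns `None` less and less often, which is exactly the trend Wang describes.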
Once the vehicle moves on, the data associated with the edge case flows back to the company through a feedback loop, allowing Zoox to replay the scenario and its variants in simulation and make the software system more robust.
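One simple way to turn a single logged edge case into "the scenario and its variants" is to perturb it. The sketch below jitters the recorded agent positions to generate nearby simulation scenarios; the data format and jitter scheme are assumptions for illustration only.

```python
import random

def make_variants(edge_case, n=5, jitter=0.5, seed=0):
    """Generate perturbed copies of a logged edge case so simulation
    covers not just the exact recording but nearby variants of it.

    `edge_case` is assumed to hold a list of (x, y) agent positions;
    each variant shifts every agent by up to `jitter` metres per axis.
    """
    rng = random.Random(seed)  # seeded so runs are reproducible
    variants = []
    for _ in range(n):
        variants.append({
            "agents": [(x + rng.uniform(-jitter, jitter),
                        y + rng.uniform(-jitter, jitter))
                       for x, y in edge_case["agents"]],
        })
    return variants
```

Regression-testing against the variants, not just the original recording, is what keeps a fix from being overfit to one exact replay of the edge case.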