Pittsburgh-based startup Aurora, which develops self-driving technology for cars, has published a blog post offering an interesting insight into the challenges of machine learning and the capabilities of sensors.
Aurora is working on Level 4 autonomy, leapfrogging Levels 2 and 3. Co-founder Chris Urmson, who worked for years as a technology lead at Waymo, is a strong proponent of that approach. He regards Level 3 as the metaphorical candle that we hope, through incremental improvements, will somehow turn into a light bulb.
Autonomous cars today use a suite of sensors, including cameras, radar, and LiDAR. Such a suite bridges the gaps between the strengths and weaknesses that every sensor type has. Some companies focus on cameras (AutoX, Tesla), but most others think there is currently no way around LiDAR. For now, the state of the technology seems to require more than cameras alone, but given how quickly the field is progressing, this may change within a few years.
In its blog post, the Aurora team describes the weaknesses of the individual sensors and demonstrates them with examples. Cameras, for instance, struggle with dynamic range, which is limited in darkness. Cameras also react with a delay when moving from dark to bright areas, and vice versa, much like we feel blinded when stepping from a dark spot into bright light. Here is a demo video:

Radar does not have that problem, but it has others. One example where radar struggles is driving in tunnels: the walls and moving objects can reflect the signal and bounce it around multiple times, so that it is interpreted incorrectly. The Doppler effect, which indicates how fast another object is moving, and radar's low resolution are further challenges.
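To make the Doppler point concrete, here is a minimal sketch of the standard two-way radar Doppler relation (generic physics, not Aurora's code); the 77 GHz carrier and the example shift are assumptions for illustration:

```python
# Illustrative only: the classic radar Doppler relation, not Aurora's code.
# A radar measures a frequency shift f_d; for a two-way (transmit/receive)
# radar, the radial velocity of the target is v_r = f_d * c / (2 * f_0).

C = 299_792_458.0  # speed of light in m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial velocity (m/s) of a target from the measured Doppler shift.

    carrier_hz defaults to 77 GHz, a common automotive radar band.
    A positive shift means the target is approaching.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# Example: a 5 kHz shift at 77 GHz corresponds to roughly 9.7 m/s (~35 km/h).
print(f"{radial_velocity(5_000):.1f} m/s")
```

When multipath reflections in a tunnel distort that measured shift, the velocity estimate derived from it is distorted as well, which is exactly the failure mode described above.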

LiDAR – laser system – can create a three dimensional image of the environment, but small objects like rain drops, snowflakes or dust can lead to increased signal noise.
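One common way to deal with such noise is to filter isolated returns out of the point cloud. The post does not say how Aurora does it; the sketch below is a generic statistical outlier filter, and the point cloud and thresholds are made up for illustration:

```python
# A minimal sketch of statistical outlier removal on a LiDAR point cloud,
# a generic technique for thinning out sparse returns from rain, snow or dust.
# The point cloud and thresholds below are assumptions for illustration.
import numpy as np
from scipy.spatial import cKDTree

def remove_sparse_returns(points: np.ndarray, k: int = 8, std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose neighbourhood is unusually sparse.

    points: (N, 3) array of x, y, z coordinates in metres.
    A raindrop or snowflake typically produces an isolated return, so its
    mean distance to the k nearest neighbours is much larger than average.
    """
    tree = cKDTree(points)
    # distances to the k nearest neighbours (column 0 is the point itself)
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    return points[mean_dist < threshold]

# Example: 1,000 points on a dense surface plus 50 scattered "droplet" returns.
rng = np.random.default_rng(0)
surface = rng.normal(0.0, 0.05, size=(1000, 3)) + np.array([10.0, 0.0, 0.0])
droplets = rng.uniform(-5.0, 5.0, size=(50, 3))
cloud = np.vstack([surface, droplets])
print(remove_sparse_returns(cloud).shape)  # most droplet returns are removed
```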

The developers are working on algorithms and technology to reduce the effects of those challenges, and by combining the sensors they can compensate for each other's weaknesses. Researchers are also working to handle the case where one sensor type fails and the others have to step in, as the sketch below illustrates.
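Here is a toy illustration of that idea (a generic inverse-variance fusion, not Aurora's system): independent distance estimates are weighted by how noisy each sensor is, and if one sensor drops out, the rest still produce an answer. All sensor names, values and noise levels are assumptions for illustration.

```python
# Toy sensor fusion: combine per-sensor estimates with inverse-variance weights.
# If one sensor fails (None), the remaining sensors still yield an estimate.
from typing import Optional

def fuse(estimates: dict[str, Optional[tuple[float, float]]]) -> float:
    """Fuse per-sensor (value, variance) pairs; None means the sensor failed."""
    weighted_value = 0.0
    weights_sum = 0.0
    for name, reading in estimates.items():
        if reading is None:
            continue  # e.g. a camera blinded at a tunnel exit
        value, variance = reading
        weight = 1.0 / variance
        weighted_value += weight * value
        weights_sum += weight
    return weighted_value / weights_sum

# Distance to the car ahead, in metres, with made-up noise levels per sensor.
readings = {"lidar": (42.1, 0.05), "radar": (41.8, 0.5), "camera": None}
print(f"{fuse(readings):.1f} m")  # still usable without the camera
```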
While data and test drives in the real world are absolutely useful, Aurora is first aiming to build the basics. Before a machine can learn, the pipelines and infrastructure have to be in place. Scenarios are simulated and algorithms tested, because anything that doesn't even work in simulation won't work in real life. This approach is strongly reminiscent of Silicon Valley methods, where quick prototypes are built, tested and iterated in short cycles, which quickly leads to a reasonably good result and a better understanding of the real problems. Simulation is therefore a key component the Aurora team is focusing on at the moment.
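The "test it in simulation first" idea can be as simple as running driving scenarios as automated tests. The sketch below shows the flavour of such a check; the scenario, the constant-deceleration model and the braking parameters are all assumptions for illustration, not Aurora's simulator.

```python
# A minimal scenario test: does the vehicle stop in time in simulation?
# Scenario and vehicle parameters are made up for illustration.

def stopping_distance(speed_mps: float, reaction_s: float = 0.5, decel_mps2: float = 6.0) -> float:
    """Distance travelled before stopping: reaction distance plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def test_stops_before_pedestrian():
    """Scenario: a pedestrian appears 40 m ahead while driving at 50 km/h."""
    speed_mps = 50.0 / 3.6
    assert stopping_distance(speed_mps) < 40.0

if __name__ == "__main__":
    test_stops_before_pedestrian()
    print("scenario passed")
```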
The blog post goes into more approaches and examples and is a recommended read.
This article has also been published in German.