Can Self-Driving Vehicles See Better?

What does the state of autonomous vehicles look like right now? To find out, the Autotech Council—a Silicon Valley-based ecosystem of automobile industry players—held an industry gathering. The half-day event, hosted by Western Digital, brought together 300 leaders in the autonomous vehicle industry. We partnered with SiliconANGLE, a leading digital media platform, to spend a few minutes talking with a select group of these leaders. Watch the latest expert interview here.


There’s something missing from autonomous vehicles as they drive themselves down the road: a clear line of sight.

Being the only car on a desert highway in broad daylight is one scenario. A self-driving car operating on busy city streets is quite another. Just think about your own driving experiences downtown in a big city: poorly maintained roads, traffic redirected around construction, and the ever-present risk of pedestrians crossing the street.

Dealing with Messy Roads in Real Life

* Video clip from the full interview.

In the real world, road conditions are rarely ideal and often messy. For drivers, quick decisions are just a way of life on the road. If the goal is to have autonomous vehicles think like human drivers, then the vehicles need to see and react to everything in their path. As Dave Tokic, VP of Marketing & Strategic Partnerships at Algolux, puts it, we need autonomous cars to have autonomous vision.

“It’s really about perceiving much more effectively and robustly the surrounding environment and the objects, as well as enabling cameras to see more clearly.” – Dave Tokic

Autonomous vision is based on the idea that self-driving cars can learn to perceive their surroundings better in all types of driving conditions, including bad weather, low light, and dirty camera lenses. Algolux, a startup headquartered in Canada that recently announced $10M USD in Series A funding, is developing a software platform that uses machine learning to help the cameras on autonomous vehicles see and perceive better.

Using input from an autonomous vehicle’s imaging system, whether cameras, LIDAR, or another sensor, the company’s software processes those signals to understand the scene in difficult, real-world scenarios. It’s a novel approach that could help save human lives and bring fully autonomous vehicles one step closer to reality.
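To make the idea concrete, here is a minimal, hypothetical sketch of task-driven image processing, where a small enhancement network is trained jointly with a downstream perception task rather than tuned to produce human-pleasing pictures. This is not Algolux’s actual architecture; the module names, shapes, and toy classifier are all illustrative assumptions, written in PyTorch.

```python
# Hypothetical sketch: jointly optimizing an image-enhancement stage for a
# downstream perception task. NOT Algolux's implementation; all names,
# shapes, and the toy classifier are illustrative assumptions.
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Small convolutional network that cleans up a raw, noisy frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Detector(nn.Module):
    """Toy per-image classifier standing in for a real object detector."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x):
        return self.backbone(x)

enhancer, detector = Enhancer(), Detector()
optimizer = torch.optim.Adam(
    list(enhancer.parameters()) + list(detector.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# One training step on synthetic data: dim, noisy frames plus labels
# (e.g., pedestrian present / absent).
frames = torch.rand(4, 3, 64, 64) * 0.1
labels = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = loss_fn(detector(enhancer(frames)), labels)
loss.backward()      # gradients flow through both stages
optimizer.step()
```

Because the training signal comes from the perception task itself, the enhancement stage learns whatever processing most helps the detector, which is the intuition behind learning to process for a particular task.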

Making Sense of Data from Self-Driving Vehicles

* Video clip from full interview.

Autonomous vehicle developers are taking different approaches to their imaging systems. Some are betting on LIDAR, a system that uses laser light to detect objects. Others are using off-the-shelf cameras specially tuned for driving. Each technology has its tradeoffs: LIDAR can detect objects farther away than cameras can, but it is much more expensive. Whichever technology is chosen, the images still have to be processed, integrated, and fused to inform driving decisions, as the sketch below illustrates.
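As one concrete illustration of the “fuse” step, the hypothetical numpy sketch below projects LIDAR points into a camera image with a pinhole model so that a 2-D camera detection can be tagged with a distance. The intrinsics, extrinsics, points, and bounding box are all made-up values for illustration.

```python
# Hypothetical sketch of one fusion step: projecting LIDAR points into a
# camera image so 2-D detections can be tagged with distance. The camera
# intrinsics/extrinsics below are made-up values for illustration.
import numpy as np

# Pinhole camera intrinsics (focal lengths fx, fy; principal point cx, cy).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# LIDAR-to-camera extrinsics: assumed already aligned (identity R, zero t).
R = np.eye(3)
t = np.zeros(3)

def project_lidar_to_image(points_lidar):
    """Map Nx3 LIDAR points (meters) to pixel coordinates plus depth."""
    cam = points_lidar @ R.T + t          # into the camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uv = cam @ K.T                        # homogeneous image coordinates
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide -> pixels
    return uv, cam[:, 2]                  # pixel positions and depths

points = np.array([[ 1.0, 0.0, 10.0],    # 10 m ahead, 1 m right
                   [-2.0, 0.5, 25.0]])   # 25 m ahead, 2 m left
pixels, depths = project_lidar_to_image(points)

# A camera detector's bounding box (made-up): tag it with the nearest
# projected LIDAR depth that falls inside the box.
box = (600, 250, 900, 450)               # (u_min, v_min, u_max, v_max)
inside = ((pixels[:, 0] >= box[0]) & (pixels[:, 0] <= box[2]) &
          (pixels[:, 1] >= box[1]) & (pixels[:, 1] <= box[3]))
if inside.any():
    print(f"Detected object at ~{depths[inside].min():.1f} m")
```

In a real vehicle, R and t would come from calibration between the LIDAR and camera mounts, and the bounding boxes from the camera-based perception stack.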

“We actually ‘learn’ how to process for the particular task, such as seeing a pedestrian or bicyclist… It gives us quite the advantage of being able to see more than could be perceived before.” – Dave Tokic

Autonomous vehicles are learning to see more than ever before. The challenge now is to make the whole ecosystem of self-driving cars safer and more reliable.

