Date: Thursday, September 24, 2020
Start Time: 12:30 pm
End Time: 1:00 pm
Vision systems play an essential role in safety-critical applications such as advanced driver assistance systems, autonomous vehicles, video security, and fleet management. However, today's imaging and vision stacks rely fundamentally on supervised training, which makes it difficult to handle "edge cases" that are underrepresented in naturally biased datasets: unusual scenes, object types, and environmental conditions such as rare dense fog and snow. In this talk, I will introduce the computational imaging and computer vision approaches Algolux uses to handle these edge cases. Instead of relying purely on supervised downstream networks becoming more robust by seeing more training data, we rethink the camera design itself and optimize new processing stacks, from photon to detection, that solve the problem jointly. I will show how such co-designed cameras, built on our Eos camera stack, outperform public and commercial vision systems, including Tesla's latest over-the-air Model S Autopilot and NVIDIA DriveWorks. I will also show how the same approach applies to depth imaging, allowing us to extract accurate, dense depth from low-cost CMOS gated imagers, outperforming scanning lidar such as Velodyne's HDL64.
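To make the "photon to detection" idea concrete: the sketch below is a minimal, hypothetical illustration of joint optimization in PyTorch, not Algolux's actual Eos implementation. A toy differentiable ISP with two learnable parameters is trained together with a stand-in downstream network, so gradients from the single task loss also tune the imaging pipeline. All module names, parameters, and data here are assumptions made for illustration.

```python
# Minimal sketch of joint photon-to-detection optimization (illustrative only,
# not the Eos stack): one task loss drives both the ISP and the downstream net.
import torch
import torch.nn as nn

class TinyISP(nn.Module):
    """Toy differentiable ISP: learnable gain and tone curve applied to raw frames."""
    def __init__(self):
        super().__init__()
        self.gain = nn.Parameter(torch.tensor(1.0))       # exposure-like scaling
        self.log_gamma = nn.Parameter(torch.tensor(0.0))  # tone-curve exponent (log-space)

    def forward(self, raw):
        x = torch.clamp(self.gain * raw, min=1e-6)
        return x ** torch.exp(self.log_gamma)

class TinyDetector(nn.Module):
    """Stand-in for a detection network; outputs per-image class logits."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, img):
        return self.net(img)

isp, det = TinyISP(), TinyDetector()
opt = torch.optim.Adam(list(isp.parameters()) + list(det.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

raw = torch.rand(4, 1, 32, 32)      # fake raw sensor frames
labels = torch.randint(0, 2, (4,))  # fake task labels

opt.zero_grad()
logits = det(isp(raw))              # photon-to-detection forward pass
loss = loss_fn(logits, labels)      # one task loss for BOTH stages
loss.backward()                     # gradients flow back through the ISP
opt.step()
```

In a real stack the ISP would contain many more stages (demosaicing, denoising, tone mapping) and the downstream network would be a full object detector; the point of the sketch is that the task loss, rather than an intermediate image-quality metric, drives the camera processing.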
There will be a live Q&A session with the presenter immediately following the presentation.