Date: Tuesday, May 17 (Main Conference Day 1)
Start Time: 2:05 pm
End Time: 3:10 pm
Highly autonomous machines require advanced perception capabilities. They are generally equipped with three main sensor types: cameras, lidar, and radar. Each sensor has intrinsic limitations that affect the performance of the perception task. Sensor fusion addresses this by combining information from different sensor types to improve the perceptual ability of the system: the system can better operate under challenging environmental conditions (e.g. poor lighting, adverse weather) by relying on the sensor data that is least impacted by the current situation. In this presentation, we will review the main sensor fusion strategies for combining heterogeneous sensor data. In particular, we will explore the three primary fusion methods that can be applied in a perception system: early fusion, late fusion, and mid-level fusion.
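To make the three strategies concrete, here is a minimal sketch of where each one fuses the data. All names, feature sizes, and the stand-in classifier are hypothetical; the point is only the placement of the fusion step: early fusion combines raw sensor data before any model, late fusion combines per-sensor model outputs, and mid-level fusion combines intermediate representations.

```python
import math
import random

random.seed(0)

# Toy per-sensor feature vectors (hypothetical values, for illustration only).
camera_feat = [random.random() for _ in range(8)]  # e.g. image features
lidar_feat = [random.random() for _ in range(8)]   # e.g. point-cloud features
radar_feat = [random.random() for _ in range(8)]   # e.g. radar-return features

def classify(features, n_classes=3):
    """Stand-in for a learned classifier: a fixed pseudo-random linear
    layer followed by a softmax over class scores."""
    rng = random.Random(42)
    weights = [[rng.random() for _ in range(n_classes)] for _ in features]
    logits = [sum(f * w[c] for f, w in zip(features, weights))
              for c in range(n_classes)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Early fusion: concatenate the low-level sensor data first,
# then run a single model on the combined input.
early = classify(camera_feat + lidar_feat + radar_feat)

# Late fusion: run an independent model per sensor, then combine the
# per-sensor decisions (here: averaging the class probabilities).
per_sensor = [classify(f) for f in (camera_feat, lidar_feat, radar_feat)]
late = [sum(p[c] for p in per_sensor) / len(per_sensor) for c in range(3)]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Mid-level fusion: each sensor is first processed into an intermediate
# representation (here: an L2-normalized feature vector), and those
# representations are fused before the final decision.
mid = classify(normalize(camera_feat)
               + normalize(lidar_feat)
               + normalize(radar_feat))

print(early, late, mid)
```

In practice the trade-off is that early fusion preserves the most information but requires tightly synchronized, aligned sensor data, while late fusion is more modular and robust to a failing sensor at the cost of discarding cross-sensor correlations; mid-level fusion sits between the two.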