Date: Wednesday, May 24
Start Time: 4:50 pm
End Time: 5:20 pm
Increasingly, perceptual AI is being used to enable devices and systems to obtain accurate estimates of object locations, speeds, and trajectories. In demanding applications, this is often best done with a heterogeneous combination of sensors (e.g., vision, radar, LiDAR). In this talk, we introduce techniques for combining data from multiple sensors to obtain accurate information about objects in the environment. We will briefly survey the roles played by Kalman filters, particle filters, Bayesian networks, and neural networks in this type of fusion, and we will examine alternative fusion architectures, such as centralized and decentralized approaches, to better understand the trade-offs involved in using sensor fusion to enhance a machine's ability to understand its environment.
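As context for the Kalman-filter role mentioned above, here is a minimal sketch (not material from the talk itself): a 1-D constant-velocity Kalman filter that sequentially fuses position measurements from two hypothetical sensors, a noisier "radar" and a more precise "lidar", in a single centralized filter. All sensor names, noise values, and parameters below are illustrative assumptions.

```python
# Minimal sketch: centralized Kalman-filter fusion of two position sensors.
# Sensor names and all noise/parameter values are illustrative assumptions.
import numpy as np

dt = 0.1                                   # time step in seconds (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
H = np.array([[1.0, 0.0]])                 # both sensors observe position only
Q = np.diag([1e-3, 1e-2])                  # process noise covariance (assumed)
R = {"radar": np.array([[0.5]]),           # per-sensor measurement noise:
     "lidar": np.array([[0.05]])}          # lidar assumed more precise

x = np.array([[0.0], [0.0]])               # state: [position, velocity]
P = np.eye(2)                              # state covariance

def predict():
    """Propagate state and covariance through the motion model."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, sensor):
    """Fold one sensor measurement z into the shared state estimate."""
    global x, P
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R[sensor]             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Simulate a target moving at 1 m/s and fuse noisy readings from both sensors.
rng = np.random.default_rng(0)
for k in range(50):
    true_pos = 1.0 * k * dt
    predict()
    update(np.array([[true_pos + rng.normal(0, 0.7)]]), "radar")
    update(np.array([[true_pos + rng.normal(0, 0.2)]]), "lidar")

print(f"fused position estimate: {x[0, 0]:.2f} m (true: {true_pos:.2f} m)")
```

Applying one update per sensor within a single shared filter, as above, illustrates the centralized architecture; a decentralized design would instead run a separate filter per sensor and combine the resulting track estimates afterward.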