Where: Mission City M1-M3
Date: Day 2
Start Time: 4:55 pm
End Time: 5:25 pm
Indoor autonomous navigation relies on sensors spanning multiple modalities. Achieving autonomous operation requires fusing RGB, depth, lidar, and odometry data streams into a single coherent picture of the environment. In this talk, we describe our sensor-pack-agnostic fusion approach, which lets us take advantage of the latest sensor technology to deliver robust, safe, and performant perception across a large fleet of industrial robots. We explain how we addressed a number of sensor fusion challenges, including robust and safe obstacle detection, fusing geometric and semantic information, and handling moving people and sensory blind spots.
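As a rough illustration of what "sensor-pack agnostic" can mean in practice (a sketch, not the speakers' actual design), the snippet below defines a common observation type that RGB-D, lidar, or any other source can emit, so the fusion and obstacle-detection layers never depend on a specific sensor pack. All names here (SensorSource, Observation, ObstacleGrid) are hypothetical.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Observation:
    """Sensor-agnostic observation: timestamped 3D points with per-point
    semantic labels, expressed in a common robot body frame."""
    timestamp: float
    points_body: np.ndarray   # (N, 3) points in the body frame
    labels: np.ndarray        # (N,) semantic class ids (e.g. person, shelf)


class SensorSource(ABC):
    """Anything that can produce Observations: an RGB-D camera, a lidar, ..."""
    @abstractmethod
    def read(self) -> Observation:
        ...


class ObstacleGrid:
    """Minimal fusion consumer: folds observations into a 2D occupancy grid."""
    def __init__(self, resolution_m: float = 0.05, size_cells: int = 400):
        self.resolution = resolution_m
        self.grid = np.zeros((size_cells, size_cells), dtype=np.float32)

    def integrate(self, obs: Observation) -> None:
        # Project points onto the ground plane and mark the covered cells.
        ij = (obs.points_body[:, :2] / self.resolution).astype(int) + self.grid.shape[0] // 2
        valid = np.all((ij >= 0) & (ij < self.grid.shape[0]), axis=1)
        self.grid[ij[valid, 0], ij[valid, 1]] = 1.0


def fuse(sources: List[SensorSource], grid: ObstacleGrid) -> None:
    """Poll every source once and integrate its observation into the shared grid.
    The loop never references a concrete sensor type, so swapping the sensor
    pack only means registering different SensorSource implementations."""
    for source in sources:
        grid.integrate(source.read())
```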