Date: Friday, May 28
Start Time: 1:00pm
End Time: 2:00pm
Understanding the 3D environment is a crucial computer vision capability required by a growing set of applications such as autonomous driving, AR/VR and AIoT. 3D visual information, captured by LiDAR and other sensors, is typically represented by a point cloud consisting of thousands of unstructured points.
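The unstructured point-cloud representation mentioned above can be sketched as a plain array of coordinates; the sizes here are hypothetical, and real LiDAR scans usually carry extra per-point channels such as intensity:

```python
import numpy as np

# A point cloud is an unordered set of 3D points; a common in-memory
# representation is an (N, 3) float array of x, y, z coordinates.
rng = np.random.default_rng(0)
num_points = 4096  # hypothetical scan size; real frames vary
cloud = rng.uniform(low=-50.0, high=50.0, size=(num_points, 3))

# Because the points are unstructured, any permutation of the rows
# describes the same scene, so models that consume point clouds
# must be invariant to point order.
permuted = cloud[rng.permutation(num_points)]
assert cloud.shape == permuted.shape == (num_points, 3)
```

The order-invariance shown in the last lines is one reason processing point clouds differs from processing images, where pixel positions are fixed on a grid.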
Developing computer vision solutions to understand 3D point clouds requires addressing several challenges: how to efficiently represent and process 3D point clouds, how to design efficient on-device neural networks for them, and how to easily obtain training data and improve data efficiency. In this talk, we show how we address these challenges as part of our “SqueezeSeg” research and present a highly efficient, accurate, and data-efficient solution for on-device 3D point-cloud understanding.
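One way to make an unstructured point cloud amenable to efficient on-device networks, used in the SqueezeSeg line of work, is to project the LiDAR scan onto a 2D spherical range image that standard convolutions can process. The sketch below assumes a hypothetical 64×512 grid and field-of-view values roughly matching a 64-beam LiDAR; exact parameters vary by sensor and paper:

```python
import numpy as np

def spherical_project(points, h=64, w=512, fov_up_deg=2.0, fov_down_deg=-24.9):
    """Project an (N, 3) LiDAR point cloud onto an h x w range image.

    The field-of-view bounds here are illustrative assumptions, not the
    exact values from any specific sensor or the SqueezeSeg papers.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8       # range of each point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))    # elevation angle

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    # Normalize both angles into [0, 1) image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi)                   # column fraction
    v = (fov_up - pitch) / (fov_up - fov_down)      # row fraction
    cols = np.clip((u * w).astype(int), 0, w - 1)
    rows = np.clip((v * h).astype(int), 0, h - 1)

    image = np.zeros((h, w), dtype=np.float32)      # dense range image
    image[rows, cols] = r                           # last point wins per pixel
    return image

# Usage: project a random cloud and feed the 2D image to a CNN.
rng = np.random.default_rng(1)
range_image = spherical_project(rng.uniform(-50.0, 50.0, size=(4096, 3)))
```

Once projected, the thousands of scattered points become a small dense image, which is what makes efficient on-device convolutional processing practical.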