Date: Tuesday, May 23
Start Time: 4:15 pm
End Time: 4:45 pm
ImmersiView is a deep learning–based augmented reality solution for automotive safety. It uses a head-up display to draw a driver’s attention to important objects. Developing such solutions often resembles research more than engineering: development cycles are long, and gains in accuracy and inference speed are hard to predict. In this talk, we present an efficient development process for projects involving multiple deep learning tasks. This process decouples task dependencies through teacher-student learning and improves accuracy and speed concurrently through sprints. In each sprint, we train a teacher network for each task, focusing solely on improving accuracy. In the same sprint, a unified student network learns all tasks from the most accurate teacher networks. To optimize accuracy and speed, we apply neural architecture search to the student network in the initial sprints and then fix the architecture. This development process enabled us to create the ImmersiView prototype in three months, followed by monthly releases.
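
The following is a minimal sketch of the multi-task teacher-student step described above, written in PyTorch. The task names, the toy teacher networks, and the shared-backbone student are illustrative assumptions, not the actual ImmersiView networks or training code.

```python
# Hedged sketch: a unified student with one shared backbone and one head per
# task regresses the outputs of frozen, per-task teacher networks.
# All module definitions and task names here are hypothetical placeholders.
import torch
import torch.nn as nn


class Student(nn.Module):
    """Unified student: one shared backbone, one lightweight head per task."""

    def __init__(self, tasks):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.heads = nn.ModuleDict({t: nn.Conv2d(16, 1, 1) for t in tasks})

    def forward(self, x):
        feat = self.backbone(x)
        return {t: head(feat) for t, head in self.heads.items()}


def distill_step(student, teachers, optimizer, images, loss_fn=nn.MSELoss()):
    """One training step: the student matches the frozen teachers' outputs."""
    optimizer.zero_grad()
    preds = student(images)
    with torch.no_grad():
        targets = {task: teacher(images) for task, teacher in teachers.items()}
    # Sum the per-task distillation losses so all tasks train the shared backbone.
    loss = sum(loss_fn(preds[task], targets[task]) for task in teachers)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage: two hypothetical tasks, each with a tiny stand-in teacher.
    tasks = ["saliency", "depth"]
    teachers = {t: nn.Sequential(nn.Conv2d(3, 1, 3, padding=1)).eval() for t in tasks}
    student = Student(tasks)
    opt = torch.optim.SGD(student.parameters(), lr=1e-3)
    print(distill_step(student, teachers, opt, torch.randn(2, 3, 64, 64)))
```

Because the student learns only from the teachers' outputs, each teacher can be retrained or replaced within a sprint without changing the student's training loop, which is one way the task dependencies described above can be decoupled.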