Ego vision technology is revolutionizing the capabilities of smart eyewear, enabling applications that understand user actions, estimate human pose and provide spatial awareness through simultaneous localization and mapping (SLAM). This presentation will dive into the latest advances in deploying these computer vision techniques on embedded systems. We’ll explain how we overcome the challenges of constrained processing power, memory and energy budgets while still achieving real-time, on-device performance for smart eyewear. In particular, we will share insights on optimizing neural networks for low-power environments, advancing pose estimation and integrating SLAM effectively in dynamic settings, all supported by real-world examples and demonstrations. We will also explore how these capabilities open new possibilities for augmented reality, assistive technologies and personal health monitoring.