Date: Monday, May 11
Start Time: 1:30 pm
End Time: 2:00 pm
Hearing aids are evolving from single-function amplification devices into multifunctional edge computing platforms powered by embedded deep neural networks. Advances in on-device AI, including efficient neural network architectures and hardware-software co-design, enable capabilities such as speech enhancement in complex acoustic scenes, automatic environment classification, health and wellness monitoring, fall detection and natural human-machine interaction. We’ll discuss key design challenges, including constraints on power consumption, latency, form factor and reliability, along with the need for robust performance across real-world conditions. We’ll also highlight system-level innovations in sensor fusion, adaptive signal processing and personalized AI that enable seamless, context-aware user experiences. Finally, we’ll consider the broader implications of ear-worn edge AI as a foundational platform for personal computing, extending beyond hearing augmentation to continuous health sensing, cognitive assistance and ambient intelligence. This emerging paradigm positions the ear as a central interface for next-generation human-centric AI systems.

