Date: Thursday, May 23
Start Time: 11:25 am
End Time: 11:55 am
Edge AI has been evolving rapidly over the last few years, with newer use cases running more efficiently and more securely. Most edge AI workloads initially ran on CPUs, but machine learning accelerators have gradually been integrated into SoCs, providing more efficient solutions. At the same time, ChatGPT has driven a sudden surge of interest in transformer-based models, which today are deployed primarily on cloud resources. Soon, many transformer-based models will be adapted to run effectively on edge devices. In this presentation, we explain the role of transformer-based models in vision applications and the challenges of implementing them at the edge. Next, we introduce the latest Arm machine learning solution and show how it enables the deployment of transformer-based vision networks at the edge. Finally, we share an example implementation of a transformer-based embedded vision use case and use it to contrast such solutions with those based on traditional CNNs.