Date: Wednesday, May 22
Start Time: 4:15 pm
End Time: 4:45 pm
In recent years there has been tremendous focus on designing next-generation AI chipsets to accelerate neural-network inference. As higher-performance processors are called on to execute ever-larger models, from vision transformers to large language models (LLMs), memory bandwidth is frequently the key performance bottleneck. Because the demands for memory bandwidth and storage capacity vary across applications, it is critical to identify the memory technologies that match the complexity and performance needs of your application. In this talk, we will explore how to choose the right memory to break the performance bottleneck in edge AI systems. We will also highlight recent memory technology developments that are enabling higher memory performance and capacity at the edge.