Date: Wednesday, May 22
Start Time: 2:40 pm
End Time: 3:10 pm
Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. Self-attention has enabled unprecedented advances in understanding sequential data, such as human language, while also dramatically improving accuracy on nonsequential tasks like object detection. In this talk, we will explain the technical underpinnings of transformer architectures, with particular focus on self-attention mechanisms. We'll also explore how transformers have influenced the direction of AI research and industry innovation. Finally, we'll touch on ethical considerations and discuss how transformers are likely to evolve in the near future.
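As a preview of the self-attention mechanism the talk centers on, here is a minimal NumPy sketch of scaled dot-product self-attention. The function name, the tiny dimensions, and the random projection matrices are illustrative choices, not part of any particular library:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x:             (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices
    """
    q = x @ w_q  # queries: what each position is looking for
    k = x @ w_k  # keys: what each position offers
    v = x @ w_v  # values: the content to be mixed
    # Pairwise similarity between positions, scaled by sqrt(d_k)
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over keys: each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is an attention-weighted mix of all values
    return weights @ v

# Illustrative sizes: 4 tokens, model width 8
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
w_q = rng.normal(size=(d_model, d_k))
w_k = rng.normal(size=(d_model, d_k))
w_v = rng.normal(size=(d_model, d_k))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

Because every position attends to every other position in one step, the sequence is processed in parallel rather than token by token, which is a key reason transformers scale so well.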