The increasing complexity of AI models has led to a growing need for efficient data storage and processing. One critical way to gain efficiency is to use smaller and simpler data types. In this presentation, we explore the trade-offs in data types for AI. We introduce the most commonly used data types, including compact integer and floating-point formats, and highlight their advantages and limitations, as well as their impact on model accuracy and on the complexity of the quantization process. We examine the main trends in this space, including emerging formats such as microscaling, and consider the benefits of more advanced compression schemes. We conclude by summarizing the key considerations for AI data type selection, including maximizing system throughput and balancing compute and memory bandwidth.
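
To make the accuracy-versus-storage trade-off concrete, the sketch below shows one simple scheme: symmetric per-tensor int8 quantization of a float32 tensor. The function names and the scale-selection rule are illustrative assumptions, not a method from the presentation; they only indicate the kind of rounding error that compact integer formats introduce.

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: map float32 values onto [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values; the residual is the quantization error."""
    return q.astype(np.float32) * scale

# Example: a float32 tensor shrinks 4x in storage at the cost of rounding error.
w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
err = float(np.max(np.abs(w - dequantize(q, s))))
print(f"max abs error: {err:.4f}, scale: {s:.4f}")
```

Even in this minimal form, the choice of a single per-tensor scale versus finer-grained (e.g., per-block) scales illustrates the trade-off between quantization complexity and accuracy that the presentation discusses.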