As large language models (LLMs) and vision-language models (VLMs) have quickly become important for edge applications ranging from smartphones to automobiles, chipmakers and IP providers have grappled with how to adapt their processor software stacks. In this talk, Expedera’s Ram Tadishetti examines how edge processor software stacks have evolved from an early focus on CNNs to today’s support for a rapidly expanding range of networks, including LLMs and VLMs. Ram will examine the difficulties that LLMs and VLMs present to a processor software stack, as well as the challenges posed by the rapid introduction of new models with novel features, and he’ll explain the methods Expedera has implemented to mitigate these challenges. He will also discuss potential future software evolutions that could further streamline the implementation of new models.