Date: Wednesday, May 22
Start Time: 4:15 pm
End Time: 5:20 pm
In this session, we’ll outline the key steps required to run a large language model (LLM) on a Raspberry Pi. We’ll begin with the motivations for running LLMs at the edge and some practical use cases. Next, we’ll offer rules of thumb for selecting hardware to run an LLM. Then we’ll walk through adapting an LLM to a specific application using prompt engineering and LoRA fine-tuning. We’ll then demonstrate how to build and run an LLM from scratch on a Raspberry Pi. Finally, we’ll show how to integrate an LLM with other edge system building blocks, such as a speech recognition engine to enable spoken input, and application logic to trigger actions.