In this presentation, we provide a practical, end-to-end overview of the AMD Vitis AI workflow for deploying deep learning models on AMD adaptive SoCs in embedded and edge systems. These platforms are well suited to tasks such as object detection and image classification, as well as system-level vision pipelines that require deterministic, real-time performance under tight power and latency constraints. Attendees will learn about model quantization and optimization, the Vitis AI compiler, and how to integrate and execute models using both ONNX Runtime and the Vitis AI Runtime. We’ll also provide guidance on building inference applications, deploying models on hardware, and using profiling tools such as the AI Analyzer to evaluate performance and pinpoint bottlenecks.

