The Edge AI Deep Dive Day includes in-person sessions where the industry’s leading companies delve into specific topics in visual and edge AI. Each Deep Dive session focuses on a single subject and is a fantastic opportunity to thoroughly explore vital aspects of AI.
We are excited to be offering in-depth sessions on Tuesday, May 20 at the 2025 Embedded Vision Summit. Deep Dive session tickets are $25 each and may be purchased with or without a Summit pass.
Join us for our second Qualcomm AI Hub workshop at the Embedded Vision Summit! Whether you’re getting started with AI Hub or have used it before and want to improve performance, expand to new use cases, and learn what’s new, this workshop is for you. If you enjoyed this workshop in 2024, we think you’ll want to attend again this year. We’ll address the common challenges faced by developers bringing AI to edge devices.
You already know why on‑device AI matters. But do you know how to achieve real-time latency targets on Qualcomm-powered devices? Learn how to compile, optimize, and test models for a specific device and runtime, while preserving accuracy and staying within memory constraints. To deploy AI on device at scale, you need a repeatable workflow that automates optimization and continually deploys the most efficient model across a diverse portfolio of Qualcomm SoCs. Those practices directly shape real‑world user experiences, from low‑power handheld devices to higher‑performance edge appliances.
This workshop focuses on on‑device AI as it exists in production today: power‑limited, latency‑sensitive, and deployed across millions of devices and chipsets, rather than a single reference platform.
This workshop is designed for teams deploying edge AI who want an end-to-end workflow for selecting a runtime, quantizing their model, identifying performance bottlenecks, and integrating a compiled model into their software application.
You will select a model (bring your own or start from our collection), then submit jobs through Python APIs, inspecting performance down to the operator and layer level to understand how compilation and quantization choices affect on-device behavior. Qualcomm AI Hub products provide a workflow for deploying performance-optimized models on device.
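As a rough illustration of that submit-and-inspect loop, the sketch below uses the `qai_hub` Python client. The device name, model choice, and input shape here are illustrative assumptions, not workshop material; check the Qualcomm AI Hub documentation for the APIs available in your client version, and note that running it requires an AI Hub account.

```python
# Sketch of a compile-then-profile loop with the Qualcomm AI Hub Python client.
# Assumptions: `pip install qai_hub` plus a configured API token; the device
# name and input shape below are placeholders chosen for illustration.
import torch
import torchvision
import qai_hub as hub

# 1. Pick a model: here, a stock torchvision MobileNet traced to TorchScript.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))

# 2. Submit a compile job targeting a specific device.
compile_job = hub.submit_compile_job(
    model=traced,
    device=hub.Device("Samsung Galaxy S24 (Family)"),  # assumed device name
    input_specs=dict(image=(1, 3, 224, 224)),
)

# 3. Profile the compiled asset on real hardware to get per-layer timings.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=hub.Device("Samsung Galaxy S24 (Family)"),
)

# The downloaded profile contains the operator/layer-level performance data
# you would inspect to understand compilation and quantization trade-offs.
profile = profile_job.download_profile()
```

In practice you would iterate on this loop, comparing profiles across runtimes, quantization settings, and target devices before integrating the final compiled asset into your application.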
Designed for ML or software engineers who want to ship AI models on Qualcomm-powered edge devices, this technical hands-on session strengthens how teams approach model selection, compilation, and optimization for Qualcomm NPUs. Participants will leave with a working application, device‑level performance data, and APIs to obtain optimized assets and integrate them directly into existing development workflows.
Hardware Requirements
Laptop: A laptop capable of running Python commands from a code editor is required (a phone or iPad will not suffice)
You will also be asked to create an account for Qualcomm AI Hub Workbench prior to the session (more info to follow)
Note: Separate registration and a $25 fee are required.