Description:
Deploying advanced vision AI models on embedded systems doesn’t need to be complex or time-consuming. Join us for this hands-on session, in which attendees will work through two workshop examples of deploying vision-based models, each taking under one hour, using the Edge Impulse machine learning development platform. In the first part of the session, attendees will learn how to collect a high-quality dataset, then train and deploy a real-time object detection model on low-power microcontrollers. In the second part of the session, attendees will construct a multi-stage machine learning pipeline for pose classification:
Workshop #1 – FOMO: Real-Time Object Detection on Low-Power Microcontrollers (~ 60 Minutes)
Edge Impulse FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that brings object detection to highly constrained devices. It lets you count objects, find the location of objects in an image, and track multiple objects in real time using up to 30x less processing power and memory than MobileNet SSD or YOLOv5. In this exercise, attendees will learn how to collect a high-quality object detection dataset, then train and deploy a FOMO model to a microcontroller such as the Arduino Portenta H7 + Vision Shield.
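To give a flavor of what the deployed model looks like on-device, here is a minimal Arduino-style sketch of running an exported FOMO model with the Edge Impulse C++ SDK. The header name, the capture_frame() helper, and the confidence threshold are assumptions for illustration; the real exported library is named after your Edge Impulse project, and frame capture depends on your camera.

```cpp
// Minimal sketch (not the workshop's exact code): running a FOMO model
// exported from Edge Impulse Studio as an Arduino library.
// `your_project_inferencing.h` and capture_frame() are placeholders.
#include <your_project_inferencing.h>

static uint8_t frame[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT];

// Hypothetical camera helper: fills `frame` with one grayscale image.
extern void capture_frame(uint8_t *buf);

// Callback the Edge Impulse SDK uses to pull pixel data into the classifier.
// Grayscale pixels are packed into the RGB888-in-a-float format the SDK expects.
static int get_frame_data(size_t offset, size_t length, float *out) {
    for (size_t i = 0; i < length; i++) {
        uint8_t px = frame[offset + i];
        out[i] = (px << 16) | (px << 8) | px;
    }
    return 0;
}

void setup() {
    Serial.begin(115200);
}

void loop() {
    capture_frame(frame);

    ei::signal_t signal;
    signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
    signal.get_data = &get_frame_data;

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;
    }

    // FOMO reports detections as a list of bounding boxes (object centroids).
    for (size_t i = 0; i < result.bounding_boxes_count; i++) {
        const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[i];
        if (bb.value < 0.5f) continue;  // confidence threshold (assumption; tune as needed)
        ei_printf("%s at (%u, %u), score %.2f\n", bb.label, bb.x, bb.y, bb.value);
    }
}
```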
Workshop #2 – Pose Classification: Multi-Stage Inference in an Embedded Device (~ 60 Minutes)
Construct a multi-stage machine learning pipeline that captures image data, uses the TensorFlow Pose Estimation model to identify joint locations on a human body, and classifies poses from those features. To accomplish this, we will wrap the pose estimation model in a custom Edge Impulse block so that both the pose estimation and classification models can be easily deployed to an embedded device after training.
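As an illustration of the two-stage idea, here is a conceptual C++ sketch. It assumes a hypothetical run_pose_estimation() wrapper for the pose stage, a placeholder exported-library header, and a 17-joint keypoint layout as used by common pose models; the workshop's custom Edge Impulse block handles this wiring for you.

```cpp
// Conceptual sketch of the two-stage pipeline (assumptions noted inline):
// stage 1 turns an image into joint-location features, stage 2 classifies
// the pose from those features with an exported Edge Impulse model.
#include <your_project_inferencing.h>      // placeholder: exported library header
#include "edge-impulse-sdk/dsp/numpy.hpp"

#define NUM_KEYPOINTS 17  // assumption: 17 joints, as in common pose models

// Hypothetical wrapper around the pose estimation stage: writes one (x, y)
// pair per joint into `features` (NUM_KEYPOINTS * 2 floats).
extern void run_pose_estimation(const uint8_t *frame, float *features);

void classify_pose(const uint8_t *frame) {
    // Stage 1: image -> joint locations.
    float features[NUM_KEYPOINTS * 2];
    run_pose_estimation(frame, features);

    // Stage 2: joint locations -> pose label.
    ei::signal_t signal;
    ei::numpy::signal_from_buffer(features, NUM_KEYPOINTS * 2, &signal);

    ei_impulse_result_t result;
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;
    }

    // Report the most likely pose.
    size_t best = 0;
    for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (result.classification[i].value > result.classification[best].value) {
            best = i;
        }
    }
    ei_printf("pose: %s (%.2f)\n",
              result.classification[best].label,
              result.classification[best].value);
}
```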
What Will You Gain By Attending?
Who Should Attend?
Hardware Requirements: Laptop and smartphone or tablet
Location: Santa Clara Convention Center – Room 209/210
Refreshments: Continental breakfast and lunch will be provided
About Edge Impulse: Edge Impulse provides a user-friendly, end-to-end platform for developing embedded machine learning applications. Features such as automated dataset labeling, pre-built DSP and ML blocks, live classification testing, and digital twin creation significantly reduce time and complexity without sacrificing developers’ ability to optimize and customize models.