Date: Tuesday, May 25
Start Time: 12:30 pm
End Time: 1:00 pm
When developing an edge AI solution, DNN inference performance is critical: if your network doesn't meet your throughput and latency requirements, you're in trouble. But accurately measuring inference performance on target hardware can be time-consuming; just getting your hands on the target hardware can take weeks or months. In this Over-the-Shoulder tutorial, we'll show step by step how you can quickly and easily benchmark inference performance on a variety of platforms without having to purchase hardware or install software tools. With Intel DevCloud for the Edge and the Deep Learning Workbench, you get instant access to a wide range of Intel hardware platforms and software tools. And beyond simply measuring performance, Intel DevCloud for the Edge lets you quickly identify bottlenecks and optimize your model, all in a cloud-based environment accessible from anywhere. Join us to learn how to quickly benchmark and optimize your trained DNN model.
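The throughput and latency metrics mentioned above can be illustrated with a minimal, generic timing harness; this is not the DevCloud or Deep Learning Workbench workflow itself, just a sketch of what a per-inference benchmark measures, with `infer_fn` standing in as an assumed placeholder for a single inference call on a compiled model.

```python
import time
import statistics

def benchmark(infer_fn, n_warmup=10, n_iters=100):
    """Measure mean/p90 latency and throughput for a single-inference callable.

    `infer_fn` is a hypothetical stand-in for one inference invocation
    (e.g. running a compiled DNN on one input batch).
    """
    # Warm-up iterations so caches and lazy initialization don't skew timing.
    for _ in range(n_warmup):
        infer_fn()

    latencies = []
    for _ in range(n_iters):
        start = time.perf_counter()
        infer_fn()
        latencies.append(time.perf_counter() - start)

    return {
        "mean_latency_ms": statistics.mean(latencies) * 1e3,
        # quantiles(n=10) yields 9 cut points; the last is the 90th percentile.
        "p90_latency_ms": statistics.quantiles(latencies, n=10)[-1] * 1e3,
        "throughput_fps": n_iters / sum(latencies),
    }

# Usage with a dummy CPU workload standing in for a real model:
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
```

In practice, tail latency (p90/p99) often matters as much as the mean for meeting real-time requirements, which is why benchmarking tools typically report both.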