Start Time: 1:30 pm
End Time: 2:00 pm
AI software developers need to deploy diverse classes of algorithms on embedded devices, including deep learning, machine vision, and sensor fusion. Adapting these algorithms to run efficiently on embedded hardware typically means targeting accelerator processors, such as GPUs or dedicated neural network accelerators, and deploying high-performance AI algorithms quickly across different processors requires open standards. Several are available to help: SYCL, OpenCL, SPIR-V, OpenMP, OpenVX, and ONNX. This talk presents proven workflows that combine these standards so that AI software developed on a PC can run efficiently on a variety of embedded devices using accelerated programming models.