Date: Friday, May 28
Start Time: 1:00 pm
End Time: 1:30 pm
Achieving high performance and power efficiency for machine learning inference at the edge requires maintaining high chip utilization, even at a batch size of one, while processing high-resolution image data. In this Over-the-Shoulder tutorial session, we will show how EdgeCortix's reconfigurable Dynamic Neural Accelerator (DNA) AI processor architecture, coupled with our MERA compiler and software stack, enables developers to seamlessly execute deep neural networks written in PyTorch and TensorFlow Lite while maintaining high chip utilization, power efficiency, and low latency regardless of the type of convolutional neural network. We will walk through examples of implementing deep neural networks for vision applications on DNA, starting from standard machine learning frameworks and then benchmarking performance using the built-in simulator as well as FPGA hardware.
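As a rough illustration of the starting point described above, the sketch below traces a standard torchvision model at batch size one with a high-resolution input and saves it as TorchScript, the kind of framework-level artifact that ahead-of-time compiler stacks commonly consume. The model choice, input resolution, and file name are illustrative assumptions only; the DNA-specific compilation and simulator/FPGA benchmarking steps are performed with EdgeCortix's MERA tooling as covered in the session and are not reproduced here.

# Minimal sketch (assumptions: torchvision ResNet-50, 1920x1080 input, output file name).
# Trace a stock PyTorch vision model at batch size one so it can be handed to a
# downstream compiler stack; DNA compilation and benchmarking are done with the
# vendor's MERA tools, not shown here.
import torch
import torchvision

# Load a standard classification backbone in inference mode
# (weights=None keeps the example offline; pretrained weights can be requested instead).
model = torchvision.models.resnet50(weights=None).eval()

# Batch size of one with a high-resolution RGB frame, matching the
# edge-inference scenario described in the abstract.
example_input = torch.randn(1, 3, 1080, 1920)

# Trace to TorchScript: a fixed-shape, framework-level graph representation.
with torch.no_grad():
    traced = torch.jit.trace(model, example_input)

traced.save("resnet50_bs1_1080p.pt")
print("Saved traced model for downstream compilation.")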