Date: Tuesday, May 25
Start Time: 10:30 am
End Time: 11:00 am
The ability to perform neural network inference on resource-constrained devices is fueling the growth of machine learning at the edge. But application solutions require more than just inference: they also incorporate aggregation and pre-processing of input data, and post-processing of inference results. In addition, new neural network topologies are emerging rapidly. This diversity of functionality and quick evolution of topologies mean that processing engines must be flexible enough to execute different types of workloads. I/O flexibility is also key, enabling system developers to choose the best sensor and connectivity options for their applications. In this talk, we explore how the configurable nature of Lattice FPGAs and the soft cores implemented on them allow for quick adoption of emerging neural network topologies, efficient execution of pre- and post-processing functions, and flexible I/O interfacing. We also show how we optimize network topologies and our compiler to get the best performance out of FPGAs.
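To make the pipeline the abstract describes more concrete, here is a minimal Python sketch of the pre-process, inference, post-process flow around an edge model. Every name in it is a hypothetical illustration: the random projection stands in for a compiled network running on an FPGA soft core, and none of this is the Lattice toolchain's actual API.

```python
# Minimal sketch of an edge ML pipeline: pre-process -> inference -> post-process.
# All names are hypothetical illustrations; the "model" is a random projection
# standing in for a compiled network on an FPGA soft core.
import numpy as np


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Scale 8-bit sensor pixels to [0, 1] and add a batch dimension."""
    return (frame.astype(np.float32) / 255.0)[np.newaxis, ...]


def run_inference(batch: np.ndarray) -> np.ndarray:
    """Stand-in for the accelerator call; a fixed random projection acts as the model."""
    flat = batch.reshape(batch.shape[0], -1)
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((flat.shape[1], 10))  # 10 hypothetical classes
    return flat @ weights


def postprocess(logits: np.ndarray) -> int:
    """Softmax plus argmax to turn raw scores into a class label."""
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    return int(probs.argmax(axis=1)[0])


if __name__ == "__main__":
    sensor_frame = np.zeros((32, 32, 3), dtype=np.uint8)  # dummy camera frame
    label = postprocess(run_inference(preprocess(sensor_frame)))
    print(f"predicted class: {label}")
```

In a real deployment, each of the three stages could be mapped to a different resource, e.g. pre- and post-processing on a soft CPU core and the inference step on dedicated FPGA fabric, which is the kind of partitioning flexibility the talk highlights.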