Date: Wednesday, May 22
Start Time: 12:00 pm
End Time: 12:30 pm
In this session we show how sequential optimization techniques can be applied to enable continual learning at run time, as new observations stream in. Because these techniques process only the new batches of observations, they are lightweight enough to run new training iterations on the edge without losing what was learned from the full pool of observations used for the initial training. Guy will present detailed examples of this approach, showing how it can be used to optimize a linear function, an image-warping algorithm, and an object-classification neural network.
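As a rough illustration of the general idea (the session's examples are Guy's own; the snippet below is an assumed sketch using recursive least squares, one classical sequential-optimization method), a linear model can be updated batch by batch, touching only the newest data while retaining everything the earlier observations taught it:

```python
import numpy as np

# Hypothetical sketch, not the presenter's code: recursive least squares.
# Each update consumes only the newest batch, yet the estimate matches a
# batch fit over all data seen so far -- no earlier observations are stored.

class RecursiveLeastSquares:
    def __init__(self, dim, reg=1e-3):
        self.P = np.eye(dim) / reg   # inverse information matrix
        self.w = np.zeros(dim)       # current parameter estimate

    def update(self, X, y):
        """Fold in a new batch X (n x dim), y (n,) without revisiting old data."""
        for x_i, y_i in zip(X, y):
            Px = self.P @ x_i
            k = Px / (1.0 + x_i @ Px)           # Kalman-style gain
            self.w += k * (y_i - x_i @ self.w)  # correct by prediction error
            self.P -= np.outer(k, Px)           # shrink uncertainty

# Toy usage: stream batches of a noisy linear function y = 2*x0 - 1*x1.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
model = RecursiveLeastSquares(dim=2)
for _ in range(20):                     # 20 "edge" batches arriving over time
    X = rng.normal(size=(8, 2))
    y = X @ true_w + 0.01 * rng.normal(size=8)
    model.update(X, y)                  # only the new batch is processed
print(model.w)                          # converges toward [2, -1]
```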