Date: Tuesday, May 17 (Main Conference Day 1)
Start Time: 9:30 am
End Time: 10:40 am
We say that today’s mainstream computer vision technologies enable machines to “see,” much as humans do. We refer to today’s image sensors as the “eyes” of these machines. And we call our most powerful algorithms deep “neural” networks.
In reality, the principles underlying current mainstream computer vision are completely different from those underlying biological vision. Conventional image sensors operate very differently from eyes found in nature, and there’s virtually nothing “neural” about deep neural networks.
Can we gain important advantages by implementing computer vision using principles of biological vision?
Professor Ryad Benosman thinks so.
Mainstream image sensors and processors acquire and process visual information as a series of snapshots recorded at a fixed frame rate, resulting in limited temporal resolution, low dynamic range and a high degree of redundancy in data and computation. Nature suggests a different approach: Biological vision systems are driven and controlled by events within the scene in view, and not – like conventional techniques – by artificially created timing and control signals that have no relation to the source of the visual information.
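The event-driven principle described above can be illustrated with a minimal sketch. The following model (an assumption for illustration, not taken from the talk) mimics a simplified dynamic vision sensor: each pixel emits an event only when its log-intensity changes beyond a threshold, so static regions of the scene generate no data at all.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Simulate event generation from a sequence of intensity frames.

    Simplified event-camera model (hypothetical, for illustration):
    each pixel emits an event whenever its log-intensity has changed
    by more than `threshold` since that pixel's last event.
    Returns a list of (frame_index, row, col, polarity) tuples.
    """
    eps = 1e-3  # avoid log(0)
    ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - ref
        rows, cols = np.nonzero(np.abs(diff) >= threshold)
        for r, c in zip(rows, cols):
            events.append((t, r, c, 1 if diff[r, c] > 0 else -1))
            ref[r, c] = log_i[r, c]  # reset reference only where an event fired
    return events
```

Feeding this model an unchanging scene yields an empty event stream, which is the data-compression point made above: unlike a frame-based sensor, nothing is recorded where nothing happens.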
The term “neuromorphic” refers to systems that mimic biological processes. In this talk, Professor Benosman — a pioneer of neuromorphic sensing and computing — will introduce the fundamentals of bio-inspired, event-based image sensing and processing approaches, and explore their strengths and weaknesses. He will show that bio-inspired vision systems have the potential to outperform conventional, frame-based systems and to enable new capabilities in terms of data compression, dynamic range, temporal resolution and power efficiency in applications such as 3D vision, object tracking, motor control and visual feedback loops.