How to See with an Event Camera

Date:

Venue: Australian National University, Canberra, ACT

Video | Slides

My final PhD seminar.


Abstract. Seeing enables us to recognise people and things, detect motion, perceive our 3D environment and more. Light stimulates our eyes, sending electrical impulses to the brain, where we form an image and extract useful information. While we do not fully understand how, we know that it happens on a tight energy budget with limited computational power, especially compared to the artificial analogues of our eyes and brain: cameras and computers. The field of neuromorphic engineering (neuro - brain, morphic - like) aims to understand the brain and build one on a chip. We still have a long way to go - though we’ve already built the eyes.

Event cameras are bio-inspired sensors that offer improvements over conventional cameras; however, extracting useful information from their raw output is challenging. Compared to conventional cameras, event cameras (i) are fast, (ii) can see dark and bright regions at the same time, (iii) suffer less motion blur, (iv) use less energy and (v) transmit data efficiently. However, the raw output of event cameras, called events, cannot be easily interpreted or processed like conventional images. Reconstructing images from events enables human-interpretable visualisation and the application of standard image processing algorithms. Machine learning can be used to extract information (e.g., classification, motion, 3D structure) from images, or even directly from events. I believe that reconstructing images and machine learning with events are two challenging yet promising directions for unlocking the full potential of event cameras. In this talk I will present (i) continuous-time complementary filtering for real-time image reconstruction with event cameras, (ii) a framework for asynchronous, per-event spatial image convolution and (iii) convolutional neural networks for image reconstruction and optic flow with event cameras.
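
To make the first contribution concrete, here is a minimal per-pixel sketch of the continuous-time complementary filter idea: frames supply the low-frequency reference while events supply high-frequency updates, so the image estimate decays toward the latest frame between events and takes a signed contrast step at each event. The event tuple layout, the contrast threshold `c` and the gain `alpha` below are illustrative assumptions rather than the exact formulation from the talk.

```python
import numpy as np

def complementary_filter(events, frames, frame_times, shape, c=0.1, alpha=2.0):
    """Sketch of per-event image reconstruction from events + frames.

    events      : iterable of (t, x, y, polarity), polarity in {-1, +1}
    frames      : list of log-intensity images, each with the given shape
    frame_times : timestamps of the frames, sorted ascending
    """
    log_img = frames[0].astype(float).copy()               # current estimate
    last_t = np.full(shape, frame_times[0], dtype=float)   # last update per pixel
    frame_idx = 0

    for t, x, y, pol in events:
        # Track the most recent frame as the low-frequency reference.
        while frame_idx + 1 < len(frames) and frame_times[frame_idx + 1] <= t:
            frame_idx += 1
        ref = frames[frame_idx][y, x]

        # Low-pass behaviour: between events the pixel decays toward the frame.
        dt = t - last_t[y, x]
        log_img[y, x] = ref + (log_img[y, x] - ref) * np.exp(-alpha * dt)

        # High-pass behaviour: each event adds a signed contrast step.
        log_img[y, x] += pol * c
        last_t[y, x] = t

    return log_img
```

Because each event touches only one pixel, the estimate can be updated asynchronously as events arrive rather than waiting for the next frame, which is what makes real-time reconstruction feasible.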