Time and Date: Monday, June 17, 2019. Full day workshop: 8:00 - 18:00 h.

Location: Room 101A - Long Beach Convention & Entertainment Center, Long Beach, California

Sponsor:

This workshop and the best paper award are sponsored by Huawei.


Schedule:
8:00 SESSION 1
8:00 Introduction to the workshop. Slides
8:05 Andrew Davison (Imperial College London and SLAMCore)
Novel Hardware for Spatial AI. Slides,  YouTube
8:30 Davide Scaramuzza and Guillermo Gallego (University of Zurich)
Event-based Cameras: Challenges and Opportunities. Slides,  YouTube
8:55 Hyunsurk Eric Ryu (Samsung Electronics)
Industrial DVS Design: Key Features and Applications. Slides,  YouTube
9:20 Contributed paper pitches
10:00 Posters and Live Demos during coffee break
10:30 SESSION 2
10:30 Kostas Daniilidis and Alex Zhu (University of Pennsylvania)
Unsupervised Learning of Optical Flow and Camera Motion from Event Data. Slides,  YouTube
10:55 Mike Davies (Intel Corp.)
Realizing the Promise of Spiking Neuromorphic Hardware. Slides,  YouTube
11:20 Garrick Orchard (Intel Corp.)
Spiking Neural Networks for Event-based Vision. Slides,  YouTube
11:45 Cornelia Fermuller (University of Maryland)
Object Motion Estimation and Grouping from Event Data. Slides,  YouTube
12:15 Lunch Break
13:15 SESSION 3
13:15 Piotr Dudek (University of Manchester)
SCAMP-5: Vision Sensor with Pixel Parallel SIMD Processor Array. Slides,  YouTube
13:40 Robert Mahony (Australian National University)
Asynchronous Convolutions and Image Reconstruction. Slides,  YouTube
14:05 Yuchao Dai (Australian National University)
Bringing a Blurry Frame Alive at High Frame-Rate with an Event Camera. Slides,  YouTube
14:15 Henri Rebecq (University of Zurich)
Events-to-Video: Bringing Modern Computer Vision to Event Cameras. Slides,  YouTube
14:25 Yusuke Sekikawa (Denso IT Laboratory)
EventNet: Asynchronous recursive event processing. Slides,  YouTube
14:35 Yulia Sandamirskaya (University of Zurich and ETH Zurich)
Neuromorphic Computing: towards event-based cognitive sensing and control. Slides,  YouTube
14:50 Julien N.P. Martel (Stanford University)
Bringing computation on the focal plane: algorithms and systems for sensors with in-pixel processing capabilities. Slides,  YouTube
15:15 Posters and Live Demos during coffee break
15:45 SESSION 4
15:45 Amos Sironi (Prophesee)
Learning from Events: on the Future of Machine Learning for Event-based Cameras. Slides,  YouTube
16:05 Kynan Eng (CEO, iniVation)
Applications, Software and Hardware for Event-Based Vision. Slides,  YouTube
16:25 Stefan Isler (Insightness)
Event-based Vision for Augmented Reality. Slides,  YouTube
16:45 Shoushun Chen (founder, CelePixel Technology)
Introduction of Celex Family Sensor and Event/Frame/Optical-flow Hybrid Processing. Slides,  YouTube
17:05 Award Ceremony. Slides
Best Paper Award: Asynchronous Convolutional Networks for Object Detection in Neuromorphic Cameras
Best Paper Award Finalist: Star Tracking using an Event Camera
17:15 Panel discussion. Slides

Live Demos (during breaks)



Objectives:
This workshop is dedicated to event-based cameras, smart cameras, and the algorithms that process their data. Event-based cameras are revolutionary vision sensors with three key advantages: a measurement rate that is almost 1 million times faster than that of standard cameras, a latency of microseconds, and a dynamic range that is eight orders of magnitude larger than that of standard cameras. Because of these advantages, event-based cameras open frontiers that are unthinkable with standard cameras, which have been the main sensing technology for the past 60 years. These revolutionary sensors enable the design of a new class of algorithms that can track a baseball in the moonlight, build a flying robot with the same agility as a fly, and perform structure from motion in challenging lighting conditions and at remarkable speeds. These sensors became commercially available in 2008 and are slowly being adopted in computer vision and robotics. They have featured prominently in the news in recent years, with event-camera company Prophesee receiving $40 million in investment from Intel and Bosch, and Samsung announcing mass production as well as the sensor's use in combination with the IBM TrueNorth processor to recognize human gestures.

Cellular processor arrays (CPAs), such as the SCAMP sensor, are novel sensors in which each pixel has a programmable processor; they therefore provide massively parallel processing near the image plane. Unlike a conventional image sensor, the SCAMP does not output raw images, but rather the results of on-sensor computations, for instance a feature map or an optic-flow map. Because early vision computations are carried out entirely on-sensor, the resulting system has high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance.

This workshop covers the sensing hardware, as well as the processing and learning methods needed to take advantage of these novel cameras.
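For readers unfamiliar with the event-camera data format, the short sketch below (an illustration only, not code from any of the talks) shows how a stream of per-pixel events, commonly represented as (x, y, timestamp, polarity) tuples, can be accumulated into a simple event-count image. The function name and tuple layout are assumptions made for this example.

```python
import numpy as np

def events_to_count_image(events, height, width):
    """Accumulate a stream of events into a signed per-pixel count image.

    `events` is assumed to be an iterable of (x, y, timestamp, polarity)
    tuples, a common representation of event-camera output.
    """
    img = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        # +1 for ON events (brightness increase), -1 for OFF events
        img[y, x] += 1 if p > 0 else -1
    return img

# Toy usage: three synthetic events on a 4x4 sensor
events = [(0, 0, 0.001, 1), (1, 2, 0.002, -1), (0, 0, 0.003, 1)]
print(events_to_count_image(events, height=4, width=4))
```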

FAQs:

Organizers:


Previous related workshops: