DSEC: A Stereo Event Camera Dataset for Driving Scenarios + CVPRW 2021 Competition
Description
Once an academic venture, autonomous driving has received unparalleled corporate funding in the last decade. Still, the operating conditions of current autonomous cars are mostly restricted to ideal scenarios. This means that driving in challenging illumination conditions, such as at night, sunrise, and sunset, remains an open problem. In these cases, standard cameras are pushed to their limits in terms of low-light and high-dynamic-range performance. To address these challenges, we propose DSEC, a new dataset that contains such demanding illumination conditions and provides a rich set of sensory data. DSEC offers data from a wide-baseline stereo setup of two color frame cameras and two high-resolution monochrome event cameras. In addition, we collect lidar data and RTK GPS measurements, both hardware-synchronized with all camera data. One of the distinctive features of this dataset is the inclusion of high-resolution event cameras. Event cameras have received increasing attention for their high temporal resolution and high dynamic range. However, due to their novelty, event camera datasets in driving scenarios are rare. This work presents the first high-resolution, large-scale stereo dataset with event cameras. The dataset contains 53 sequences collected by driving in a variety of illumination conditions and provides ground truth disparity for the development and evaluation of event-based stereo algorithms.
Citing
If you use this work in your research, please cite the following paper:

DSEC: A Stereo Event Camera Dataset for Driving Scenarios
IEEE Robotics and Automation Letters (RA-L), 2021.
BibTeX:
@Article{Gehrig21ral,
  author  = {Mathias Gehrig and Willem Aarents and Daniel Gehrig and Davide Scaramuzza},
  title   = {DSEC: A Stereo Event Camera Dataset for Driving Scenarios},
  journal = {IEEE Robotics and Automation Letters},
  year    = {2021},
  doi     = {10.1109/LRA.2021.3068942}
}
Competition
Task
The goal is to estimate dense disparity from two event cameras in a stereo setup.
Details:
- The predictions will be evaluated on the left event camera at specified timestamps on the test set.
- Usage of global shutter (RGB) cameras is not allowed for this competition.
- The relevant metric is the average absolute error of the disparity.
- The evaluation will be performed on all pixels for which ground truth disparity is available.
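The metric described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the official evaluation code; in particular, the convention that invalid ground truth pixels are encoded as zero is an assumption and may differ from the format used by the evaluation server.

```python
import numpy as np

def mean_absolute_disparity_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average absolute disparity error over pixels with valid ground truth.

    Assumes invalid ground-truth pixels are marked with 0 (hypothetical
    convention; the official benchmark may encode validity differently).
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    valid = gt > 0  # mask of pixels where ground truth disparity is available
    return float(np.abs(pred[valid] - gt[valid]).mean())
```

For example, a prediction that is off by 1 px on two of three valid pixels and exact on the third would score an error of 2/3 px.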
Deadline
The deadline for the submission is 6th of June, 2021 (11:59 PM Pacific Time).
Submission Format
Submissions will be evaluated by an evaluation server and must contain the disparity maps at the specified timestamps on the test set. The exact submission format will be announced as soon as the evaluation server is online.
Technical Report
Participants should prepare a short (1 or 2 pages) technical report that will be made available to the public upon the participants' consent.
Winner
The winner of the competition will be invited to present at the CVPR 2021 Workshop on Event-Based Vision.
Copyright
This dataset is provided to you under the Creative Commons Attribution-ShareAlike 4.0 International public license (CC BY-SA 4.0). This means that you must attribute the work in the manner specified by the authors and must distribute your contributions under the same license. Using the material for commercial purposes is allowed.
Download
We provide the option to download the dataset either as larger zip files or, more fine-grained, as separate files per sequence.
Training Data
train_events.zip (125 GB)
train_images.zip (216 GB)
train_disparity.zip (12 GB)
train_calibration.zip
Test Data
test_events.zip (27 GB)
test_images.zip (43 GB)
test_calibration.zip
The test set does not contain ground truth because it will be used for the upcoming benchmark.