Robotics and Perception Group

RPG members

Welcome to the website of the Robotics and Perception Group led by Prof. Davide Scaramuzza. Our lab was founded in February 2012 and is part of the Department of Informatics at the University of Zurich and of the Institute of Neuroinformatics, a joint institute of the University of Zurich and ETH Zurich.


Our mission is to research the fundamental challenges of robotics and computer vision that will benefit all of humanity. Our key interest is to develop autonomous machines that can navigate all by themselves using only onboard cameras and computation, without relying on external infrastructure such as GPS or position-tracking systems, or on off-board computing. Our interests encompass predominantly micro drones because they are more challenging and offer more research opportunities than ground robots.

News

September 28, 2023

Our work selected as an IROS paper award candidate


Congratulations to Jiaxu and Giovanni, whose IROS paper "Autonomous Power Line Inspection with Drones via Perception-Aware MPC" has been nominated for either the conference best paper or the best student paper award! Only 12 papers were nominated out of 1,096 accepted papers: a 1% nomination rate!

September 25, 2023

Our work won the Best Paper Award at IROS23 Workshop Robotic Perception and Mapping

We are happy to announce that our work "HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO" won the best paper award at the IROS23 Workshop Robotic Perception and Mapping: Frontier Vision and Learning Techniques. The paper will be presented in a spotlight talk on Thursday, October 5th, in Detroit. Congratulations to all collaborators! Check out the paper and video.

September 25, 2023

Our work in collaboration with ASL, ETH Zurich, won the Best Paper Award at IROS23 Workshop Robotic Perception and Mapping


We are happy to announce that our work in collaboration with ASL, ETH Zurich, "Attending Multiple Visual Tasks for Own Failure Detection" won the best paper award at the IROS23 Workshop Robotic Perception and Mapping: Frontier Vision and Learning Techniques. The paper will be presented in a spotlight talk on Thursday, October 5th, in Detroit. Congratulations to all collaborators! Check out the paper.

September 24, 2023

End-to-End Learned Event- and Image-based Visual Odometry


RAMP-VO is a novel end-to-end learnable visual odometry system tailored for challenging conditions. It seamlessly integrates event-based cameras with traditional frames, utilizing Recurrent, Asynchronous, and Massively Parallel (RAMP) encoders. Despite being trained only in simulations, it outperforms both learning-based and model-based methods, demonstrating its potential for robust space navigation. For more details, check out our paper.

September 22, 2023

Actor-Critic Model Predictive Control

How can we combine the task performance and reward flexibility of model-free RL with the robustness and online replanning capabilities of MPC? We provide an answer by introducing a new framework called Actor-Critic Model Predictive Control (ACMPC). The key idea is to embed a differentiable MPC within an actor-critic RL framework. For more details, check out our paper and our video.

September 19, 2023

Code Release: Active Camera Exposure Control

We release the code of our camera controller that adjusts the exposure time and gain of the camera automatically. We propose an active exposure control method to improve the robustness of visual odometry in HDR (high dynamic range) environments. Our method evaluates the proper exposure time by maximizing a robust gradient-based image quality metric. Check out our paper for more details.
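As a rough illustration of the idea (our paraphrase, not the released controller or the paper's actual metric; the function names below are hypothetical), one can score each candidate exposure by the gradient content of the resulting image and pick the maximizer:

```python
import numpy as np

def gradient_score(img):
    """Sum of horizontal and vertical gradient magnitudes: a simple
    proxy for a gradient-based image quality metric (illustrative only)."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return np.abs(gx).sum() + np.abs(gy).sum()

def pick_exposure(capture, candidates):
    """Capture an image at each candidate exposure time and return the
    exposure that maximizes the gradient score."""
    return max(candidates, key=lambda t: gradient_score(capture(t)))
```

In an HDR scene, an under- or over-exposed image loses gradients to darkness or saturation, so this score naturally favors exposures that preserve image structure.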

September 19, 2023

Contrastive Initial State Buffer for Reinforcement Learning

We introduce the concept of a Contrastive Initial State Buffer, which strategically selects states from past experiences and uses them to initialize the agent in the environment in order to guide it toward more informative states. The experiments on drone racing and legged locomotion show that our initial state buffer achieves higher task performance while also speeding up training convergence. Check out our paper.
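A toy version of the selection idea (a simplified sketch of the principle, not the paper's implementation): store visited states in a buffer and, on reset, prefer a stored state far from those the agent has recently seen.

```python
import numpy as np

class InitialStateBuffer:
    """Toy initial-state buffer: stores visited states and, on reset,
    returns the stored state most dissimilar (in Euclidean distance)
    to recently visited ones. Illustrative sketch only."""

    def __init__(self):
        self.states = []   # candidate reset states
        self.recent = []   # recently visited states

    def add(self, state):
        self.states.append(np.asarray(state, dtype=float))

    def observe(self, state):
        self.recent.append(np.asarray(state, dtype=float))

    def sample_reset(self, default):
        if not self.states or not self.recent:
            return np.asarray(default, dtype=float)
        recent = np.stack(self.recent)
        # pick the candidate whose nearest recent state is farthest away
        dists = [np.linalg.norm(recent - s, axis=1).min() for s in self.states]
        return self.states[int(np.argmax(dists))]
```

Resetting to such "unfamiliar" states steers the agent toward informative regions of the state space instead of restarting from the same default configuration every episode.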

September 13, 2023

Reinforcement Learning vs. Optimal Control for Drone Racing


Reinforcement Learning (RL) vs. Optimal Control (OC) - why can RL achieve impressive results beyond optimal control for many real-world robotic tasks? We investigate this question in our paper "Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning" published today in Science Robotics, available open access here!
Many works have focused on impressive results, but less attention has been paid to the systematic study of fundamental factors that have led to the success of reinforcement learning or have limited optimal control. Our results indicate that RL does not outperform OC because RL optimizes its objective better. Rather, RL outperforms OC because it optimizes a better objective: RL can directly optimize a task-level objective and can leverage domain randomization allowing the discovery of more robust control responses.
Check out our video to see our drone race autonomously with accelerations up to 12g!

September 1, 2023

AI Drone Beats Human World Champions in Head-to-Head Drone Races


We are thrilled to share our groundbreaking research paper published in Nature titled "Champion-Level Drone Racing using Deep Reinforcement Learning," available open access here!
We introduce "Swift," the first autonomous vision-based drone that won several fair head-to-head races against human world champions! The Swift AI drone combines deep reinforcement learning in simulation with data collected in the physical world. This marks the first time that an autonomous mobile robot has beaten human champions in a real physical sport designed for and by humans. As such it represents a milestone for mobile robotics, machine intelligence, and beyond, which may inspire the deployment of hybrid learning-based solutions in other physical systems, such as autonomous vehicles, aircraft, and personal robots, across a broad range of applications.
Curious to see "Swift" racing and know more? Check out these two videos from us and from Nature.

September 1, 2023

New PhD Student

We welcome Ismail Geles as a new PhD student in our lab!

August 30, 2023

From Chaos Comes Order: Ordering Event Representations for Object Recognition and Detection


State-of-the-art event-based deep learning methods typically convert raw events into dense input representations before they can be processed by standard networks. However, selecting this representation is very expensive, since it requires training a separate neural network for each representation and comparing the validation scores. In this work, we circumvent this bottleneck by measuring the quality of event representations with the Gromov-Wasserstein Discrepancy, which is 200 times faster to compute. This work opens a new unexplored field of explicit representation optimization. For more information, have a look at our paper. The code will be available on this link at the start of the ICCV 2023 conference.

August 25, 2023

IROS2023 Workshop: Learning Robot Super Autonomy


Do not miss our IROS2023 Workshop: Learning Robot Super Autonomy! The workshop features an incredible speaker lineup, and we will have a best paper award with prize money. Check out the agenda and join the presentations at our workshop website. Organized by Giuseppe Loianno and Davide Scaramuzza.

August 15, 2023

Scientifica - come and see our drones!

Our lab will open the doors of its large drone testing arena on August 30th, 14:00h. Bring your family and friends to learn more about drones and watch an autonomous drone race. If you are interested, please register here!

August 14, 2023

New Senior Scientist

We welcome Harmish Khambhaita as our new Senior Scientist. He obtained his Ph.D. in Toulouse and previously worked, among others, for Anybotics as the Autonomy and Perception Lead.

July 28, 2023

Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing

We tackle the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies. We use contrastive learning to extract robust feature representations from the input images and leverage a learning-by-cheating framework for training a neural network policy. For more information, check out our IROS23 paper and video.

July 28, 2023

Our Science Robotics 2021 paper wins prestigious Chinese award!


We are truly honored to receive the prestigious Frontiers of Science Award in the category Robotics Science and Systems, presented on July 16th, 2023, at the International Congress of Basic Science in Beijing's Great Hall of the People, for our Science Robotics 2021 paper "Learning High-Speed Flight in the Wild"! Congratulations to the entire team: Antonio Loquercio, Elia Kaufmann, René Ranftl, Matthias Mueller, Vladlen Koltun. Many thanks to the award committee, and congratulations to the other winners too! Paper, open-source code, and video.

July 04, 2023

Our paper on Authorship Attribution through Deep Learning accepted at PLOS ONE


We are excited to announce that our paper on authorship attribution for research papers has just been published in PLOS ONE. We developed a transformer-based AI that achieves over 70% accuracy on the newly created, largest-to-date, authorship-attribution dataset with over 2000 authors. For more information check out our PDF and open-source code.

July 03, 2023

Video Recordings of the 4th International Workshop on Event-Based Vision at CVPR 2023 available!


The recordings of the 4th international workshop on event-based vision at CVPR 2023 are available here. The event was co-organized by Guillermo Gallego, Davide Scaramuzza, Kostas Daniilidis, Cornelia Fermüller, and Davide Migliore.

June 21, 2023

Microgravity induces overconfidence in perceptual decision-making

We are excited to present our paper on the effects of microgravity on perceptual decision-making published in Nature Scientific Reports.

PDF YouTube Dataset

June 20, 2023

HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO

We are excited to present our new RSS paper on state and disturbance estimation for flying vehicles. We propose a hybrid dynamics model that combines a point-mass vehicle model with a learning-based component that captures complex aerodynamic effects. We include our hybrid dynamics model in an optimization-based VIO system that estimates external disturbance acting on the robot as well as the robot's state. HDVIO improves the motion and external force estimation compared to the state-of-the-art. For more information, check out our paper and video.

June 13, 2023

Our CVPR Paper is Featured in Computer Vision News


Our CVPR highlight and award-candidate work "Data-driven Feature Tracking for Event Cameras" is featured on Computer Vision News. Find out more and read the complete interview with the authors Nico Messikommer, Mathias Gehrig and Carter Fang here!

Jun 13, 2023

DSEC-Detection Dataset Release


We release a new dataset for event- and frame-based object detection, DSEC-Detection based on the DSEC dataset, with aligned frames, events and object tracks. For more details visit the dataset website.

PDF YouTube Dataset Code

June 08, 2023

Our PhD student Manasi Muglikar is awarded UZH Candoc Grant

Manasi, a PhD student in our lab, has been awarded the UZH Candoc Grant 2023 for her outstanding research! Congratulations! Check out her latest work on event-based vision here.

May 13, 2023

Training Efficient Controllers via Analytic Policy Gradient


In systems with limited compute, such as aerial vehicles, an accurate controller that is efficient at execution time is imperative. We propose an Analytic Policy Gradient (APG) method to tackle this problem. APG exploits the availability of differentiable simulators by training a controller offline with gradient descent on the tracking error. Our proposed method outperforms both model-based and model-free RL methods in terms of tracking error. Concurrently, it achieves similar performance to MPC while requiring more than an order of magnitude less computation time. Our work provides insights into the potential of APG as a promising control method for robotics.

PDF YouTube Code

May 10, 2023

We are hiring


We have multiple openings for a Scientific Research Manager, PhD students, and Postdocs in Reinforcement Learning for Agile Vision-based Navigation and Computer Vision with Standard Cameras and Event Cameras. Job descriptions and how to apply: https://rpg.ifi.uzh.ch/positions.html

May 09, 2023

NCCR Robotics Documentary

Check out this amazing 45-minute documentary on YouTube about the story of twelve years of groundbreaking robotics research by the Swiss National Competence Center of Research in Robotics (NCCR Robotics). The documentary summarizes all the key achievements, from assistive technologies that allowed patients with completely paralyzed legs to walk again to legged and flying robots with self-learning capabilities for disaster mitigation to educational robots used by thousands of children worldwide! Congrats to all NCCR Robotics members who have made this possible! And congratulations to the coordinator, Dario Floreano, and his management team! We are very proud to have been part of this! NCCR Robotics will continue to operate in four different projects. Check out this article to learn more.

May 04, 2023

Code Release: Tightly coupling global position measurements in VIO


We are excited to fully open-source our code to tightly fuse global position measurements in visual-inertial odometry (VIO)! Our code integrates global position measurements, for example from GPS, into SVO Pro, a sliding-window optimization-based VIO that uses the SVO frontend. We leverage IMU preintegration theory to efficiently include the global position measurements in the VIO problem formulation. Our system outperforms the loosely coupled approach in terms of absolute trajectory error by up to 50% with a negligible increase in computational cost. For more information, have a look at our paper and code.
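Conceptually, tight coupling means each GPS fix contributes its own residual to the sliding-window cost, alongside the visual and inertial terms. A heavily simplified schematic (hypothetical function names; the real system handles preintegration, time offsets, and robust weighting):

```python
import numpy as np

def gps_residual(R_wb, t_wb, p_ant_b, p_gps, sigma):
    """Whitened residual between a GPS fix (world frame) and the antenna
    position predicted from the estimated body pose (R_wb, t_wb).
    p_ant_b is the antenna lever arm expressed in the body frame."""
    predicted = R_wb @ p_ant_b + t_wb
    return (p_gps - predicted) / sigma

def window_cost(vio_residuals, gps_residuals):
    """Total sliding-window cost: visual/inertial terms plus the
    tightly-coupled global-position terms (schematic)."""
    return sum(float(r @ r) for r in vio_residuals + gps_residuals)
```

Because the position residuals enter the same nonlinear least-squares problem as the visual and inertial factors, the optimizer trades them off jointly rather than fusing a separate GPS solution after the fact.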

April 25, 2023

Our work was selected as a CVPR Award Candidate

We are honored that our 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) paper "Data-driven Feature Tracking for Event Cameras" was selected as an award candidate. Congratulations to all collaborators!

PDF YouTube Code

April 17, 2023

Neuromorphic Optical Flow and Real-time Implementation with Event Cameras (CVPRW 2023)


We present a new spiking neural network (SNN) architecture that significantly improves optical flow prediction accuracy while reducing complexity, making it ideal for real-time applications in edge devices and robots. By leveraging event-based vision and SNNs, our solution achieves high-speed optical flow prediction with nearly two orders of magnitude less complexity, without compromising accuracy. This breakthrough paves the way for efficient real-time deployments in various computer vision pipelines. For more information, have a look at our paper.

April 13, 2023

Our Master student Asude Aydin wins the UZH Award for her Master Thesis

Asude Aydin, who did her Master thesis "A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception" at RPG, has received the UZH Award 2023 for her outstanding work. Check out her paper here, which is based on her Master thesis.


April 11, 2023

Event-based Shape from Polarization


We introduce a novel shape-from-polarization technique using an event camera (accepted at CVPR 2023). Our setup consists of a linear polarizer rotating at high speed in front of an event camera. Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities at multiple polarizer angles. Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the MAE by 25% on synthetic and real-world datasets. For more information, have a look at our paper.
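By Malus's law, the intensity behind a rotating linear polarizer varies sinusoidally with the polarizer angle, I(θ) = I₀(1 + ρ cos(2θ − 2φ)). Given intensity samples at several angles, the degree ρ and angle φ of linear polarization follow from a linear least-squares fit. A hedged sketch of this standard fit (not our event-based pipeline, which operates on the event stream directly):

```python
import numpy as np

def fit_polarization(angles, intensities):
    """Fit I(theta) = a0 + a1*cos(2*theta) + a2*sin(2*theta) by least
    squares and return (unpolarized intensity, DoLP rho, AoLP phi)."""
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)
    a0, a1, a2 = np.linalg.lstsq(A, intensities, rcond=None)[0]
    rho = np.hypot(a1, a2) / a0          # degree of linear polarization
    phi = 0.5 * np.arctan2(a2, a1)       # angle of linear polarization
    return a0, rho, phi
```

The recovered per-pixel polarization angle constrains the surface normal, which is what makes shape reconstruction from these fits possible.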

April 07, 2023

Recurrent Vision Transformers for Object Detection with Event Cameras (CVPR 2023)


We introduce a novel efficient and highly-performant object detection backbone for event-based vision. Through extensive architecture study, we find that vision transformers can be combined with recurrent neural networks to effectively extract spatio-temporal features for object detection. Our proposed architecture can be trained from scratch on publicly available real-world data to reach state-of-the-art performance while lowering inference time compared to prior work by up to 6 times. For more information, have a look at our paper and code.

April 3, 2023

Data-driven Feature Tracking for Event Cameras

We are excited to announce that our paper on Data-driven Feature Tracking for Event Cameras was accepted at CVPR 2023. In this work, we introduce the first data-driven feature tracker for event cameras, which leverages low-latency events to track features detected in a grayscale frame. Our data-driven tracker outperforms existing approaches in relative feature age by up to 130% while also achieving the lowest latency.
For more information, check out our paper, video, and code.

April 3, 2023

Autonomous Power Line Inspection with Drones via Perception-Aware MPC

We are excited to present our new work on autonomous power line inspection with drones using perception-aware model predictive control (MPC). We propose an MPC that tightly couples perception and action. Our controller generates commands that maximize the visibility of the power lines while, at the same time, safely avoiding the power masts. For power line detection, we propose a lightweight learning-based detector that is trained only on synthetic data and is able to transfer zero-shot to real-world power line images. For more information, check out our paper and video.
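Schematically, a perception-aware objective augments the usual tracking cost with a term penalizing how far the point of interest projects from the image center; the function names, pinhole model, and weight below are illustrative placeholders, not our controller:

```python
import numpy as np

def perception_cost(p_cam, fx=300.0, fy=300.0):
    """Penalize the projection of a 3D point (camera frame, z forward)
    away from the principal point; the cost grows as the target drifts
    toward the edge of the field of view. Illustrative pinhole model."""
    u = fx * p_cam[0] / p_cam[2]
    v = fy * p_cam[1] / p_cam[2]
    return u * u + v * v

def total_cost(track_err, p_cam, w_perc=1e-4):
    """Schematic MPC stage cost: tracking error plus weighted perception term."""
    return float(track_err @ track_err) + w_perc * perception_cost(p_cam)
```

Minimizing this combined cost over the prediction horizon yields trajectories that trade off flight performance against keeping the power line centered in the camera image.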

April 3, 2023

RPG and LINA Project featured in RSI


In the recent news broadcast by RSI, our lab is featured for its efforts in developing and boosting research on civil applications for drones. The LINA project at the Dübendorf airport is making its infrastructure available to researchers and industry to facilitate the testing and development of hardware and software for autonomous flying systems. RSI [IT]

April 1, 2023

New PhD Student

We welcome Nikola Zubić as a new PhD student in our lab!

March 30, 2023

Event-based Agile Object Catching with a Quadrupedal Robot

This work exploits the low-latency advantages of event cameras for agile object catching with a quadrupedal robot. We use the event camera to estimate the trajectory of the object, which is then caught using an RL-trained policy. Our robot catches objects thrown at up to 15 m/s with an 83% success rate. For more information, have a look at our ICRA 2023 paper, video, and open-source code.

March 27, 2023

A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception


This work proposes a hybrid model combining Spiking Neural Networks (SNN) and classical Artificial Neural Networks (ANN) to optimize power efficiency and latency in edge devices. The hybrid ANN-SNN model overcomes state transients and state decay issues while maintaining high temporal resolution, low latency, and low power consumption. In the context of 2D and 3D human pose estimation, the method achieves an 88% reduction in power consumption with only a 4% decrease in performance compared to fully ANN counterparts, and a 74% lower error compared to SNNs. For more information, have a look at our paper.

March 10, 2023

HILTI-SLAM Challenge 2023


RPG and HILTI are organizing the ICRA2023 HILTI SLAM Challenge! Instructions here. The HILTI SLAM Challenge dataset is a real-life, multi-sensor dataset with accurate ground truth to advance the state of the art in highly accurate state estimation in challenging environments. Participants will be ranked by the completeness of their trajectories and by the achieved accuracy. HILTI is a multinational company that offers premium products and services for professionals on construction sites around the globe. Behind this vast catalog is a global team comprising 30,000 members of 133 different nationalities located in more than 120 countries.

March 09, 2023

LINA Testing Facility at Dübendorf Airport


UZH Magazin releases a news article about our research on autonomous drones and our new testing facility at Dübendorf Airport that enables researchers to develop autonomous systems such as drones and ground-based robots from idea to marketable product. Read the article in English or in German. More information about the LINA project can be found here.

March 7, 2023

Our Master student Fang Nan wins ETH Medal for Best Master Thesis


Fang Nan, who did his Master thesis "Nonlinear MPC for Quadrotor Fault-Tolerant Control" at RPG, has received the ETH Medal 2023 and the Willi Studer Prize for his outstanding work. Check out his RAL 2022 paper here, which is based on his Master thesis.


March 2, 2023

Learning Perception-Aware Agile Flight in Cluttered Environments

We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments. Our method combines imitation learning and reinforcement learning by leveraging a privileged learning-by-cheating framework. For more information, check out our ICRA23 paper or this video.

March 2, 2023

Weighted Maximum Likelihood for Controller Tuning

We present our new ICRA23 paper that leverages a probabilistic Policy Search method, Weighted Maximum Likelihood (WML), to automatically learn the optimal objective for MPCC. The data efficiency provided by the use of a model-based approach in the loop allows us to directly train in a high-fidelity simulator, which in turn makes our approach able to transfer zero-shot to the real world. For more information, check out our ICRA23 paper and video.

March 2, 2023

User-Conditioned Neural Control Policies for Mobile Robotics

We present our new paper that leverages a feature-wise linear modulation layer to condition neural control policies for mobile robotics. We demonstrate in simulation and in real-world experiments that a single control policy can achieve close to time-optimal flight performance across the entire performance envelope of the robot, reaching up to 60 km/h and 4.5 g in acceleration. The ability to guide a learned controller during task execution has implications beyond agile quadrotor flight, as conditioning the control policy on human intent helps safely bring learning-based systems out of the well-defined laboratory environment into the wild.
For more information, check out our ICRA23 paper and video.

February 28, 2023

Learned Inertial Odometry for Autonomous Drone Racing

We are excited to present our new RA-L paper on state estimation for autonomous drone racing. We propose a learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality for autonomous drone racing tasks. The core idea of our system is to couple a model-based filter, driven by the inertial measurements, with a learning-based module that has access to the control commands. For more information, check out our paper, video, and code.

February 15, 2023

Agilicious: Open-Source and Open-Hardware Agile Quadrotor for Vision-Based Flight

We are excited to present Agilicious, a co-designed hardware and software framework tailored to autonomous, agile quadrotor flight. It is completely open-source and open-hardware and supports both model-based and neural-network-based controllers. It also provides high thrust-to-weight and torque-to-inertia ratios for agility, onboard vision sensors, GPU-accelerated compute hardware for real-time perception and neural-network inference, a real-time flight controller, and a versatile software stack. In contrast to existing frameworks, Agilicious offers a unique combination of a flexible software stack and high-performance hardware. We compare Agilicious with prior works and demonstrate it on different agile tasks, using both model-based and neural-network-based controllers. Our demonstrators include trajectory tracking at up to 5 g and 70 km/h in a motion-capture system, and vision-based acrobatic flight and obstacle avoidance in both structured and unstructured environments using solely onboard perception. Finally, we demonstrate its use for hardware-in-the-loop simulation in virtual-reality environments. Thanks to its versatility, we believe that Agilicious supports the next generation of scientific and industrial quadrotor research. For more details, check out our paper, video, and webpage.

January 17, 2023

Event-based Shape from Polarization


We introduce a novel shape-from-polarization technique using an event camera. Our setup consists of a linear polarizer rotating at high speed in front of an event camera. Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities at multiple polarizer angles. Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the MAE by 25% on synthetic and real-world datasets. For more information, have a look at our paper.

January 11, 2023

Survey on Autonomous Drone Racing


We present our survey on Autonomous Drone Racing, which covers the latest developments in agile flight for both model-based and learning-based approaches. We include extensive coverage of drone racing competitions, simulators, open-source software, and the state-of-the-art approaches for flying autonomous drones at their limits! For more information, see our paper.

January 10, 2023

4th International Workshop on Event-Based Vision at CVPR 2023


The event will take place on June 19, 2023 in Vancouver, Canada. The deadline to submit a paper contribution is March 20 via CMT. More info on our website. The event is co-organized by Guillermo Gallego, Davide Scaramuzza, Kostas Daniilidis, Cornelia Fermüller, and Davide Migliore.

January 04, 2023

Davide Scaramuzza featured author of IEEE

We are honored that Davide Scaramuzza is a featured author on the IEEE website.

December 29, 2022

IEEE Top 10 Robotics Stories of 2022


It's an honor to be featured in the top 10 robotics stories of 2022 by IEEE Spectrum! Kudos and congratulations to our team that made this possible!

December 27, 2022

NCCR Robotics Most Impactful Paper Award


We won the NCCR Robotics Most Impactful Paper Award with the paper "A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots". Congrats to Alessandro Giusti and his co-authors!

December 24, 2022

12 Years of NCCR Robotics


After 12 amazing years, NCCR Robotics, the Swiss National Competence Center of Research in Robotics, has come to an end. We are very proud to have been part of this! This RoboHub article summarizes all the key achievements, from assistive technologies that allowed patients with completely paralyzed legs to walk again, to winning the DARPA SubT Challenge, to legged and flying robots with self-learning capabilities for use in disaster mitigation as well as in civil and industrial inspection, to robotic startups that have become world leaders, to creating Cybathlon, the world's first Olympic-style competition for athletes with disabilities supported by assistive devices, to educational robots, such as Thymio, that have been used by thousands of children around the world. Congrats to all NCCR Robotics members who have made this possible! NCCR Robotics will continue to operate in four different projects. Check out this article to learn more: link.

December 16, 2022

Survey on visual SLAM for visually impaired people


We present the first survey on visual SLAM for visually impaired people. This technology has tremendous potential to assist people and it will be used, for the first time, in the next Cybathlon competition where we participate. For more information, have a look at our paper and the Cybathlon website.

December 1, 2022

10-Year Lab Anniversary

This week, we celebrate the 10th anniversary of RPG! This video celebrates our anniversary, the over 300 people who worked in our lab as BSc/MSc/PhD students, postdocs, and visiting researchers, all our collaborators, our research sponsors, and the administrative staff at our university. We thank all of them for contributing to our research. And thank you as well for following our research. The lab made important contributions to autonomous, agile, vision-based navigation of micro aerial vehicles and to event cameras for mobile robotics and computer vision. Three startups and entrepreneurial projects came out of the lab: the first one, Zurich Eye, became Facebook-Meta Zurich, which contributed to the development of the VR headset Oculus Quest; the second one, Fotokite, makes tethered drones for first responders; the third one, SUIND, makes vision-based drones for precision agriculture. Our researchers have won over 50 awards, including many paper awards, have published more than 100 scientific articles, which have been cited more than 35 thousand times, and have been featured in many media, including The New York Times, Forbes, and The Economist (media page). We have also released more than 85 open-source software packages, datasets, and toolboxes to further accelerate science advancement and our research's reproducibility (software page). Our algorithms have inspired and have been transferred to many products and companies, including NASA, DJI, Bosch, Nikon, Magic Leap, Meta-Facebook, Huawei, Sony, and Hilti. Thank you for making all this possible! Video.

November 30, 2022

Authorship Attribution through Deep Learning


Can you guess who wrote a paper, just by reading it? We present a transformer-based AI that achieves over 70% accuracy on the newly created, largest-to-date, authorship-attribution dataset with over 2000 authors. For more information check out our paper and open-source code.

November 23, 2022

Pushing the Limits of Asynchronous Graph-based Object Detection with Event Cameras


We introduce various design principles that push the limits of asynchronous graph-based object detection from events by allowing us to design deeper, more powerful models without sacrificing efficiency. While our smallest such model outperforms the best asynchronous methods by 7.4 mAP with 3.7 times higher efficiency, our largest model even outperforms dense, feedforward methods, a feat previously unattained by asynchronous methods. For more information, check out our paper.

November 7, 2022

RPG featured in NZZ documentary on Military Drones


In the recent NZZ format documentary on military drones, our lab is featured in its role as a civil research institution working on possible dual-use technology. Our search-and-rescue technology is shown to underline the huge potential of drones to be used in critical missions, possibly saving many lives. Link

November 7, 2022

RPG Drones at the Swiss Robotics Day feature in SRF Tagesschau!

Our autonomous vision-based drones are featured in the SRF Tagesschau (05.11.2022) report on the NCCR Swiss Robotics Day in Lausanne. We demonstrate how the technology we develop can be used in GPS-denied environments that are commonly encountered in, for example, search-and-rescue scenarios. YouTube [DE], YouTube [IT], SRF [DE], RSI [IT]

October 28, 2022

The Robotics and Perception Group participated in the parabolic flight campaign of the UZH Space Hub to study how gravity affects the decision-making of human drone pilots.

October 27, 2022

Learned Inertial Odometry for Autonomous Drone Racing


We propose a learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality for autonomous drone racing tasks. The core idea of our system is to couple a model-based filter, driven by the inertial measurements, with a learning-based module that has access to the control commands. For more information, check out our paper and video.

October 14, 2022

Code release: Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry


We released the code and datasets for our work "Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry" with NASA JPL, extending the already-public JPL xVIO library. With this work, we unleash collaborative drone swarms in the dark, opening new challenging scenarios for the robotics community. For more details, visit the project page.

October 4, 2022

Zero Gravity - RPG participates in Parabolic Flight Campaign


Today, we performed our first experiment in reduced, hyper, and zero gravity! Our goal: to study how different g-levels affect self-motion estimation in drone pilots, in view of future human space missions. This unique opportunity was made possible by the UZH Space Hub and the Netherlands Aerospace Centre! With Christian Pfeiffer and Leyla Loued-Khenissi. For more information, check out our article or this video.

October 4, 2022

Code release: Event-based Vision meets Deep Learning on Steering Prediction for Self-driving Cars


We are releasing the code for our work which uses event-based vision and deep learning methods to predict the steering angle of self-driving cars. For more details, see our paper.

September 16, 2022

NCCR Robotics Master Thesis Award

Congratulations to our former Master student Michelle Ruegg for winning the NCCR Robotics Master Thesis Award for her thesis on combining frames and events for asynchronous multi-modal monocular depth prediction! The thesis was supervised by Daniel Gehrig and Mathias Gehrig.

September 6, 2022

We are hiring


We have multiple openings for PhD students and Postdocs in Reinforcement Learning for Agile Vision-based Navigation and Computer Vision with Standard Cameras and Event Cameras. Job descriptions and how to apply: https://rpg.ifi.uzh.ch/positions.html

September 1, 2022

New Research Assistant

We warmly welcome Nikola Zubić as a new research assistant in our lab!

August 26, 2022

The HILTI SLAM Challenge 2022 paper and dataset are out!


Check out the paper describing the HILTI SLAM Challenge 2022 and the new dataset collected in collaboration with Oxford University. For more details, see our paper and dataset.

August 26, 2022

E-NeRF: Neural Radiance Fields from a Moving Event Camera


Check out our joint paper with Simon Klenk and Daniel Cremers from TU Munich on how to estimate a neural radiance field (NeRF) from either a single moving event camera or from an event camera in combination with a standard camera. We show that we can estimate NeRFs with higher accuracy than with standard cameras alone in scenes affected by motion blur or when only a few sparse frames are available. For more details, see our paper.

August 2, 2022

New ECCV Paper: ESS: Learning Event-based Semantic Segmentation from Still Images


We are excited to announce our ECCV paper, which overcomes the lack of semantic segmentation datasets for event cameras by directly transferring the semantic segmentation task from existing labeled image datasets to unlabeled events. Our approach requires neither video data nor per-pixel alignment between images and events. For more details, check out the paper, video, code, and dataset.

August 1, 2022

New Research Assistant

We warmly welcome Vincenzo Polizzi as a new research assistant in our lab!

July 31, 2022

RPG on the main German TV Kids program "1, 2 oder 3" on ZDF!


Leonard Bauersfeld and Elia Kaufmann were invited to the famous German TV program "1, 2 oder 3" to talk about drones. Watch the full video in the ZDF Mediathek here (available until 28.08.2022). The part featuring RPG starts at 14:45.
Photo: ZDF/Ralf Wilschewski.

July 29, 2022

Dataset and Code release for EKLT-VIO


We are excited to announce that the code and datasets for our RA-L paper Exploring Event Camera-based Odometry for Planetary Robots have been released. Both code and datasets can be found here.

July 21, 2022

Time-optimal Online Replanning for Agile Quadrotor Flight

For the first time, a time-optimal trajectory can be generated and tracked in real-time, even with moving waypoints and strong unknown disturbances! Read our Time-optimal Online Replanning for Agile Quadrotor Flight paper and watch our IROS talk for further details.

July 13, 2022

RPG on the main Italian TV science program SuperQuark on RAI1!

Watch the full video report about our research on autonomous drones, from drone racing to search and rescue, from standard to event cameras. The video is in Italian with English subtitles.

July 7, 2022

First AI vs Human Drone Race!


On June 10-11, we organized the first race between an AI-powered, vision-based drone and human pilots. We invited two world champions and the Swiss champion. Read this report by Evan Ackerman from IEEE Spectrum, who witnessed the historic event in person.

July 6, 2022

Code Release: UltimateSLAM


We are releasing UltimateSLAM, which combines events, frames, and an IMU to achieve the ultimate SLAM performance in high-speed and high-dynamic-range scenarios. Paper Code Video Project Webpage

July 5, 2022

IROS2022 Workshop: Agile Robotics: Perception, Learning, Planning, and Control


Do not miss our IROS2022 Workshop: Agile Robotics: Perception, Learning, Planning, and Control! Check out the agenda and join the presentations at our workshop website. Organized by Giuseppe Loianno, Davide Scaramuzza, and Shaojie Shen.

July 4, 2022

Congratulations to our former PhD Antonio for winning the 2022 George Giralt Award!


Congratulations to our former PhD student Antonio Loquercio for winning the 2022 George Giralt PhD Award, the most prestigious award for PhD dissertations in robotics in Europe, for his work on learning vision-based high-speed drone flight! We are very proud of you!
PhD thesis PDF
Video of the PhD defense
Google Scholar profile
Personal page

July 1, 2022

New RA-L Paper: Learning Minimum-Time Flight in Cluttered Environments

We are excited to announce our RA-L paper, which tackles minimum-time flight in cluttered environments using a combination of deep reinforcement learning and classical topological path planning. We show that the approach outperforms the state of the art in both planning quality and the ability to fly without collisions at high speeds. For more details, check out the paper and the YouTube video.

June 17, 2022

New T-RO Paper: "A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight"

We are excited to announce that our paper "A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight" was accepted at T-RO 2022. Our work empirically compares two state-of-the-art control frameworks, the nonlinear model predictive controller (NMPC) and the differential-flatness-based controller (DFBC), by tracking a wide variety of agile trajectories at speeds of up to 72 km/h. Read our paper for further details.

June 16, 2022

New RA-L paper: The Hilti SLAM Challenge Dataset


We release the Hilti SLAM Challenge Dataset! The sensor platform used to collect this dataset contains a number of visual, lidar, and inertial sensors, all of which have been rigorously calibrated. All data are temporally aligned to support precise multi-sensor fusion. Each dataset includes accurate ground truth to allow direct testing of SLAM results. Raw data as well as intrinsic and extrinsic sensor calibration data from twelve datasets in various environments are provided. Each environment represents common scenarios found on building construction sites in various stages of completion. For more details, check out the paper, video and talk.

June 13, 2022

"Time Lens++: Event-based Frame Interpolation with Parametric Flow and Multi-scale Fusion" Dataset Release

We are excited to announce that our paper on Time Lens++ was accepted at CVPR 2022. To learn more about the next generation of event-based frame interpolation, visit our project page. There we release our new dataset, BS-ERGB, recorded with a beam splitter, which features aligned and synchronized events and frames.

June 3, 2022

Meet us at Swiss Drone Days 2022


We are excited to announce that the 2022 edition of the Swiss Drone Days will take place on 11-12 June in Dübendorf. The event will feature live demos, including autonomous drone racing, inspection, and delivery drones, in one of the largest drone-flying arenas in the world; spectacular drone races by the Swiss Drone League; presentations by distinguished speakers; and an exhibition and trade fair. For more information, please visit www.swissdronedays.com

June 1, 2022

Two New PhD Students

We welcome Drew Hanover and Chao Ni as new PhD students in our lab!

May 27, 2022

Our work won the IEEE RAL Best Paper Award



We are honored that our IEEE Robotics and Automation Letters paper "Autonomous Quadrotor Flight Despite Rotor Failure With Onboard Vision Sensors: Frames vs. Events" was selected for the Best Paper Award. Congratulations to all collaborators!

PDF YouTube Code

May 20, 2022

Meet us at ICRA 2022!



We are looking forward to presenting these 9 papers on perception, learning, planning, and control in person next week at IEEE RAS ICRA! Additionally, we will be presenting in many workshops. A full list with links, times, and rooms can be found here

May 5, 2022

UZH lists AI racing-drones as a key finding of 2021

The University of Zurich celebrated its 189th birthday. During the celebrations, rector Prof. Michael Schaepman named drones that fly faster than humans, serving as a testbed for AI research and search-and-rescue operations, as one of three key findings of UZH in 2021. A video of the speech can be found here (he starts talking about drones at 26:00).

May 4, 2022

New T-RO Paper: "Model Predictive Contouring Control for Time-Optimal Quadrotor Flight"

We are excited to announce that our paper "Model Predictive Contouring Control for Time-Optimal Quadrotor Flight" was accepted at T-RO 2022. Thanks to our Model Predictive Contouring Control, the problem of flying through multiple waypoints in minimum time can now be solved in real-time. Read our paper for further details.

May 2, 2022

New Postdoc

We welcome Dr. Marco Cannici as a new postdoc in our lab!

April 28, 2022

EDS: Event-aided Direct Sparse Odometry


We are excited to announce that our paper on Event-aided Direct Sparse Odometry was accepted at CVPR 2022 for an oral presentation. EDS is the first direct method combining events and frames. This work opens the door to low-power motion-tracking applications where frames are sparingly triggered "on demand" and our method tracks the motion in between. For code, video, and paper, visit our project page.

April 21, 2022

We are hiring


We have multiple openings for PhD students and Postdocs in machine learning for computer vision and vision-based robot navigation. Job descriptions and how to apply: https://rpg.ifi.uzh.ch/positions.html

April 21, 2022

New CVPRW Paper: Multi-Bracket High Dynamic Range Imaging with Event Cameras


We are excited to announce that our paper on combining events and frames for HDR imaging was accepted at the NTIRE22 workshop at CVPR 2022. In this paper, we propose the first multi-bracket HDR pipeline combining a standard camera with an event camera. For more details, check out the paper and video.

March 31, 2022

Meet us at Swiss Drone Days 2022


We are excited to announce that the 2022 edition of the Swiss Drone Days will take place on 11-12 June in Dübendorf. The event will feature live demos, including autonomous drone racing, inspection, and delivery drones, in one of the largest drone-flying arenas in the world; spectacular drone races by the Swiss Drone League; presentations by distinguished speakers; and an exhibition and trade fair. For more information, please visit www.swissdronedays.com

March 29, 2022

"AEGNN: Asynchronous Event-based Graph Neural Networks" Code Release

We are excited to announce that our paper on Asynchronous Event-based Graph Neural Networks was accepted at CVPR 2022. Bring back the sparsity in event-based deep learning by adopting AEGNNs which reduce the computational complexity by up to 200 times. For code, video and paper, visit our project page.

March 29, 2022

Are High-Resolution Cameras Really Needed?

In our newest paper, we shed light on this question and find that, across a wide range of tasks, the answer is non-trivial. For the video and paper, please visit our project page.

March 17, 2022

ICRA 2022 DodgeDrone Challenge

General-purpose autonomy requires robots to interact with a constantly changing and uncertain world. We are excited to announce the ICRA2022 DodgeDrone Challenge to push the limits of aerial navigation in dynamic environments. All we need is you! We provide an easy-to-use API and a Reinforcement Learning framework! Submit your work and take part in the challenge! The winner will receive a keynote invitation at the ICRA workshop on aerial robotics and a cash prize. Find out how to participate on our website. The code is on GitHub.

March 14, 2022

From our lab to Skydio


Today, Skydio announces that it will be hiring some of our former PhD students. RPG is very proud of them! Link

March 10, 2022

Davide Scaramuzza interviewed by Robohub

In this interview for Robohub, Davide Scaramuzza talks about event cameras and their application to robotics, automotive, defense, safety and security, computer vision, and videography: Video and Article

March 1, 2022

New PLOS ONE Paper: Visual Attention Prediction Improves Performance of Autonomous Drone Racing Agents


We propose a novel method to improve performance in vision-based autonomous drone racing. By combining human eye-gaze-based attention prediction with imitation learning, we enable a quadrotor to complete a challenging race track in a drone racing simulator. Our method outperforms state-of-the-art methods using raw images and image-based abstractions (i.e., feature tracks). For more details, check out the paper and dataset.

February 28, 2022

New RAL Paper: Minimum-Time Quadrotor Waypoint Flight in Cluttered Environments


Planning minimum-time trajectories for quadrotors in the presence of obstacles had so far been unaddressed by the robotics community. We propose a novel method to plan such trajectories in cluttered environments using a hierarchical, sampling-based approach with an incrementally more complex quadrotor model. The proposed method is shown to outperform all related baselines in cluttered environments and is further validated in real-world flights at over 60 km/h. Check out our paper, video and code.


February 17, 2022

New RAL Paper: Continuous-Time vs. Discrete-Time Vision-based SLAM: A Comparative Study


In this work, we systematically compare the advantages and limitations of the discrete- and continuous-time vision-based SLAM formulations. We perform an extensive experimental analysis, varying robot type, speed of motion, and sensor modalities. Our experimental analysis suggests that, independently of the trajectory type, continuous-time SLAM is superior to its discrete counterpart whenever the sensors are not time-synchronized. For more details, check out the paper and code.

February 15, 2022

Perception-Aware Perching on Powerlines with Multirotors


Multirotor aerial robots are becoming widely used for the inspection of powerlines. To enable continuous, robust inspection without human intervention, the robots must be able to perch on the powerlines to recharge their batteries. This paper presents a novel perching-trajectory generation framework that computes perception-aware, collision-free, and dynamically feasible maneuvers to guide the robot to the desired final state. For more details, check out the paper and video. The developed code is available online: code.

February 9, 2022

New RAL Paper: Nonlinear MPC for Quadrotor Fault-Tolerant Control


The mechanical simplicity, hover capabilities, and high agility of quadrotors lead to a fast adaption in the industry for inspection, exploration, and urban aerial mobility. On the other hand, the unstable and underactuated dynamics of quadrotors render them highly susceptible to system faults, especially rotor failures. In this work, we propose a fault-tolerant controller using nonlinear model predictive control (NMPC) to stabilize and control a quadrotor subjected to the complete failure of a single rotor. Check our paper and video.

February 4, 2022

UZH-FPV Drone Racing Dataset Standing Leader Board


We are delighted to announce the standing leaderboard of the UZH-FPV drone racing dataset. Participants submit the results of their VIO algorithms and receive the evaluation in a few minutes thanks to our automatic code evaluation. For more details, check out the website! We look forward to receiving your submissions to advance the state of the art in high-speed VIO state estimation.

February 2, 2022

New RAL Paper: Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation


To overcome the shortage of event-based datasets, we propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data. Our method transfers from single images to events and does not rely on paired sensor data. Thus, our approach unlocks the vast amount of image datasets for the training of event-based neural networks. For more details, check out the paper, video, and code.

January 31, 2022

New RAL Paper: AutoTune: Controller Tuning for High-speed Flight


Tired of tuning your controllers by hand? Check out our RAL22 paper "AutoTune: Controller Tuning for High Speed Flight". We propose a gradient-free method based on Metropolis-Hastings sampling to automatically find parameters that maximize the performance of a controller during high-speed flight. We outperform both existing methods and human experts! Check out the paper, video, and code.
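As a toy illustration of the sampling idea described above (this is our own minimal sketch, not the AutoTune code; the quadratic `score` surrogate and all parameter names are made up), a Metropolis-Hastings search over controller gains looks roughly like:

```python
import math
import random

def mh_tune(score, theta0, iters=300, step=0.5, temp=1.0, seed=0):
    """Gradient-free Metropolis-Hastings search over controller gains:
    propose Gaussian perturbations, accept improvements always and
    regressions with probability exp(delta / temp)."""
    rng = random.Random(seed)
    theta, s_theta = list(theta0), score(theta0)
    best, s_best = list(theta), s_theta
    for _ in range(iters):
        proposal = [t + rng.gauss(0.0, step) for t in theta]
        s_prop = score(proposal)
        if s_prop >= s_theta or rng.random() < math.exp((s_prop - s_theta) / temp):
            theta, s_theta = proposal, s_prop
            if s_theta > s_best:  # keep the best gains seen so far
                best, s_best = list(theta), s_theta
    return best

# Toy surrogate objective: tracking performance peaks at gains (2.0, -1.0).
score = lambda th: -((th[0] - 2.0) ** 2 + (th[1] + 1.0) ** 2)
best = mh_tune(score, [0.0, 0.0])
```

Because the method needs only scalar performance evaluations, each proposal can be scored by flying (or simulating) the controller, with no gradients required.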

January 28, 2022

RPG research on event cameras featured in The Economist!


Excited to see our research on event cameras featured in The Economist! Check it out!

January 10, 2022

RPG research makes it to the top 10 UZH news of 2021!


Our press release on time-optimal trajectory planning from July 2021 made it to the top 10 most successful media releases of UZH in 2021, just behind the media release on the FDA-approved Alzheimer's drug! Check it out!

January 10, 2022

3DV Oral Paper: Dense Optical Flow from Event Cameras


We propose E-RAFT, a novel method to estimate dense optical flow from events only, alongside DSEC-Flow, an extension of DSEC for optical flow estimation. Download the datasets and submit to the DSEC-Flow benchmark that automatically evaluates your submission. For more details, check out the paper, video, and project webpage. Our code is available on GitHub.

December 20, 2021

Philipp Foehn successfully passed his PhD defense


Congratulations to Philipp Foehn, who has successfully defended his PhD dissertation titled "Agile Aerial Autonomy: Planning and control", on December 14, 2021. We thank the reviewers: Prof. Moritz Diehl, Prof. Luca Carlone, and Prof. Roland Siegwart!

The full video of the PhD defense presentation is on YouTube.

December 15, 2021

Policy Search for Model Predictive Control

We propose a novel method to merge reinforcement learning and model predictive control. Our approach enables a quadrotor to fly through dynamic gates. The paper has been accepted for publication in the IEEE Transactions on Robotics (T-RO), 2022. Check out our paper and the code.

December 9, 2021

Code Release: Event-based, Direct Camera Tracking

We release the code of our ICRA 2019 paper Event-based, Direct Camera Tracking from a Photometric 3D Map using Nonlinear Optimization. The code is implemented in C++ and runs in real-time on a laptop. Try it out for yourself on GitHub!

December 8, 2021

3DV Paper: Event-based Structured Light

We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing. Our method is robust to event jitter and therefore performs better at higher scanning speeds. Experiments demonstrate that our method can deal with high-speed motion and outperform state-of-the-art 3D reconstruction methods based on event cameras, reducing the RMSE by 83% on average, for the same acquisition time. For more details, check out the project page, paper, code, and video.

November 1, 2021

Davide Scaramuzza invited speaker at Tartan SLAM Series

The goal of the Tartan SLAM Series is to expand the understanding of those both new to and experienced with SLAM. Sessions include research talks as well as introductions to various themes of SLAM and thought-provoking, open-ended discussions. The lineup of events aims to foster fun, provocative discussions on robotics. In his talk, Davide Scaramuzza speaks about the main progress of our lab in SLAM over the past years. He also introduces event cameras and speaks about their potential applications in visual SLAM. Check out the slides and the video on YouTube!

October 21, 2021

Code Release: SVO Pro


We are excited to release the fully open-source SVO Pro! SVO Pro is the latest version of SVO, developed over the past few years in our lab. SVO Pro features support for different camera models, active exposure control, a sliding-window-based backend, and global bundle adjustment with loop closure. Check out the project page and the code on GitHub!

October 20, 2021

New 3DV paper: Event Guided Depth Sensing


We present an efficient bio-inspired event-camera-driven depth sensing algorithm. Instead of uniformly sensing the depth of the scene, we dynamically illuminate areas of interest densely, depending on the scene activity detected by the event camera, and sparsely illuminate areas in the field of view with no motion. We show that, in natural scenes like autonomous driving and indoor environments, moving edges correspond to less than 10% of the scene on average. Thus our setup requires the sensor to scan only 10% of the scene, which could lead to almost 90% less power consumption by the illumination source. For more details, check out the paper and video.
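The scanning-budget arithmetic behind the power claim can be illustrated with a toy event mask (our own illustrative numbers and function names, not the paper's data or code):

```python
import numpy as np

def scan_fraction(event_mask, sparse_rate=0.0):
    """Fraction of the frame the illumination source must scan: dense
    scanning where the event camera reports motion, plus an optional
    sparse sampling rate over the static remainder."""
    active = float(np.mean(event_mask))
    return active + sparse_rate * (1.0 - active)

# Motion confined to 10% of a 240x320 frame, matching the average
# scene statistic quoted above.
mask = np.zeros((240, 320), dtype=bool)
mask[:24, :] = True
frac = scan_fraction(mask)   # 0.1
power_saving = 1.0 - frac    # 0.9, i.e. ~90% less illumination power
```

With a nonzero `sparse_rate`, the static background is still sampled occasionally, trading a little extra power for coverage of newly appearing objects.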

October 20, 2021

We are hiring!
Come build the future of robotics with us!


We have three fully-funded openings for PhD students and Postdocs in computer vision and machine learning to contribute to the areas of:

  • Vision-based agile flight,
  • Autonomous inspection of power lines,
  • SLAM, Scene Understanding, and Computational Photography with Event Cameras.
Job descriptions and how to apply.

October 10, 2021

Drone Documentary from the Swiss Italian TV (LA1)

Check out the interview from the Swiss Italian TV LA1 on our research on drone racing and high-speed navigation. We explain why high-speed drones could make a difference in the future of search and rescue operations. In Italian with English subtitles!

October 6, 2021

Article Published in Science Robotics!


We are excited to share our latest Science Robotics paper, done in collaboration with Intel! An end-to-end policy trained in simulation flies vision-based drones in the wild at up to 40 km/h! In contrast to classic methods, our approach uses a CNN to directly map images to collision-free trajectories. This approach radically reduces latency and sensitivity to sensor noise, enabling high-speed flight. The end-to-end policy has taken our drones on many adventures in Switzerland!
Check out the video on YouTube! We also release the code and datasets on GitHub!

October 1, 2021

Code Release: Time-Optimal Quadrotor Planning

We are excited to release the code accompanying our latest Science Robotics paper on time-optimal quadrotor trajectories! This provides an example implementation of our novel progress-based formulation to generate time-optimal trajectories through multiple waypoints while exploiting, but not violating, the quadrotor's actuation constraints.
Check out our real-world agile flight footage with explanations, find the details in the paper on Science Robotics, and find the code on GitHub.

October 1, 2021

IROS2021 Workshop: Integrated Perception, Learning, and Control for Agile Super Vehicles


Do not miss our IROS2021 Workshop: Integrated Perception, Learning, and Control for Agile Super Vehicles! Check out the agenda and join the presentations at our workshop website. Organized by Giuseppe Loianno, Davide Scaramuzza, and Sertac Karaman.

The workshop is today, October the 1st, and starts at 3pm Zurich time (GMT+2).

October 1, 2021

New Arxiv Preprint: The Hilti SLAM Challenge Dataset


We release the Hilti SLAM Challenge Dataset! The sensor platform used to collect this dataset contains a number of visual, lidar, and inertial sensors, all of which have been rigorously calibrated. All data are temporally aligned to support precise multi-sensor fusion. Each dataset includes accurate ground truth to allow direct testing of SLAM results. Raw data as well as intrinsic and extrinsic sensor calibration data from twelve datasets in various environments are provided. Each environment represents common scenarios found on building construction sites in various stages of completion. For more details, check out the paper and video.

September 26, 2021

RPG wins the Tech Briefs "Create the Future" contest for the category Aerospace and Defense


Our work on controlling a quadrotor after motor failure with only onboard vision sensors, paper, is the winner of the Aerospace and Defense category in the 2021 Tech Briefs "Create the Future" contest out of over 700 participants worldwide! Watch the announcement of all the winners and finalists here.

September 15, 2021

New Arxiv Preprint: Expertise Affects Drone Racing Performance


We present an analysis of the drone racing performance of professional and beginner pilots. Our results show that professional pilots consistently outperform beginner pilots and choose more optimal racing lines. Our results provide strong evidence for a contribution of expertise to performance in real-world human-piloted drone racing. We discuss the implications of these results for future work on autonomous fast and agile flight. For more details, check out the paper.

September 13, 2021

Our work was selected as IEEE Transactions on Robotics 2020 Best Paper Award finalist


Honored that our IEEE Transactions on Robotics 2020 paper "Deep Drone Racing: From Simulation to Reality with Domain Randomization" was selected as a Best Paper Award finalist! Congratulations to all collaborators for this great achievement! PDF YouTube 1 YouTube 2 Code

September 13, 2021

Range, Endurance, and Optimal Speed Estimates for Multicopters (Accepted at RAL)


We present an approach to accurately estimate the range, endurance, and optimal flight speed of general multicopters. This is made possible by combining a state-of-the-art first-principles aerodynamic multicopter model with an electric-motor model and a precise gray-box battery model. Additionally, we present an accurate pen-and-paper algorithm, derived from the complex model, to estimate the range, endurance, and optimal speed of multicopters. For more details, check out the paper.

September 10, 2021

New Arxiv Preprint: Performance, Precision, and Payloads: Adaptive Nonlinear MPC for Quadrotors

We propose L1-NMPC, a novel hybrid adaptive NMPC that learns model uncertainties online and immediately compensates for them, drastically improving performance over non-adaptive baselines with minimal computational overhead. Our proposed architecture generalizes to many different environments; we evaluate wind, unknown payloads, and highly agile flight conditions. For more details, check out the paper and video.

September 9, 2021

New Arxiv Preprint: A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight

We perform a comparative study of two state-of-the-art control methods for agile quadrotor flight in terms of trajectory-tracking accuracy, robustness, and computational efficiency. A wide variety of agile trajectories are tracked in this research at speeds of up to 72 km/h. We show the superiority of NMPC in tracking dynamically infeasible trajectories, at the cost of higher computation time and a risk of numerical convergence issues. An inner-loop controller using incremental nonlinear dynamic inversion (INDI) is proposed to hybridize with both methods, demonstrating more than 78% tracking-error reduction. Non-expert readers can regard this work as a tutorial on agile quadrotor flight. For more details, check out the paper and video.

September 8, 2021

New Arxiv Preprint: Model Predictive Contouring Control for Time-Optimal Quadrotor Flight

We propose a Model Predictive Contouring Control (MPCC) method to fly time-optimal trajectories through multiple waypoints with quadrotors. Our MPCC optimally selects the future states of the platform at runtime, maximizing the progress along the reference path while minimizing the distance to it. We show that, even when tracking simplified trajectories, the proposed MPCC results in a path that approaches the true time-optimal one and that can be generated in real-time. We validate our approach in the real world, where we show that our method outperforms both the current state of the art and a world-class human pilot in terms of lap time, achieving speeds of up to 60 km/h. For more details, check out the paper and video.

September 2, 2021

HILTI-SLAM Challenge: win up to $10,000 prize money and keynote invitation


RPG and HILTI are organizing the IROS2021 HILTI SLAM Challenge! Participants can win up to $10,000 in prize money and a keynote IROS workshop invitation! Instructions here. The HILTI SLAM Challenge dataset is a real-life, multi-sensor dataset with accurate ground truth to advance the state of the art in highly accurate state estimation in challenging environments. Participants will be ranked by the completeness of their trajectories and by the achieved accuracy. HILTI is a multinational company that offers premium products and services for professionals on construction sites around the globe. Behind this vast catalog is a global team comprising 30,000 team members of 133 different nationalities located in more than 120 countries.

August 29, 2021

New Arxiv Preprint: Dense Optical Flow from Event Cameras


We propose a novel method to estimate dense optical flow from events only, alongside an extension of DSEC for optical flow estimation. Our approach takes inspiration from frame-based methods and outperforms previous event-based approaches with up to 66% EPE reduction. For more details, check out the paper and video.

August 20, 2021

New IROS Paper & Code Release: Powerline Tracking with Event Cameras


We propose a method that uses event cameras to robustly track lines and show an application for powerline tracking. Our method identifies lines in the stream of events by detecting planes in the spatio-temporal signal, and tracks them through time. For more details, check out the paper and video. We release the code fully open source.

August 17, 2021

Davide Scaramuzza invited speaker at Real Roboticist


The series Real Roboticist, produced by the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), shows the people at the forefront of robotics research from a more personal perspective. In his talk, Davide Scaramuzza explains his journey from electronics engineering to leading a top robotics-vision research group developing a promising technology: event cameras. He also speaks about the challenges he faced along the way, and even how he combines robotics research with another of his passions: magic. Read the article and watch the talk. Enjoy!

August 6, 2021

RPG Contributes to CARLA Optical Flow Camera


CARLA is the world-leading simulator for autonomous driving, developed by Intel. Our lab contributed to the implementation of the optical flow camera, requested by the community since the inception of the simulator.
Check out the release video for a short teaser and the documentation for more information on how to use it.

July 21, 2021

Time-Optimal Quadrotor Planning faster than Humans

We are excited to announce our latest work on agile flight allowing us to generate "time-optimal quadrotor trajectories", which are faster than human drone racing pilots! Our novel algorithm published in Science Robotics uses a progress-based formulation to generate time-optimal trajectories through multiple waypoints while exploiting, but not violating, the quadrotor's actuator constraints.
Check out our real-world agile flight footage with explanations and find the details in the paper on Science Robotics.

June 30, 2021

The World's Largest Indoor Drone-Testing Arena

We are excited to announce our new, indoor, drone-testing arena! Equipped with a real-time motion-capture system consisting of 36 Vicon cameras, and with a flight space of over 30x30x8 meters (7,000 cubic meters), this large research infrastructure allows us to deploy our most advanced perception, learning, planning, and control algorithms to push vision-based agile drones to speeds over 60 km/h and accelerations over 5g. It also allows us to fly in an unlimited number of virtual environments using hardware-in-the-loop simulation. Among the many projects we are currently working on, we aim to beat the best professional human pilot in a drone race. Turn up the volume and enjoy the video! And stay tuned... the best is yet to come, very soon!

June 30, 2021

Code Release: EVO: Event-based, 6-DOF Parallel Tracking and Mapping in Real-Time

We release EVO, an Event-based Visual Odometry algorithm from our RA-L paper EVO: Event-based, 6-DOF Parallel Tracking and Mapping in Real-Time. The code is implemented in C++ and runs in real-time on a laptop. Try it out for yourself on GitHub!

June 25, 2021

New RSS Paper & Dataset Release: NeuroBEM


We are happy to announce the release of the full dataset associated with our upcoming RSS paper NeuroBEM: Hybrid Aerodynamic Quadrotor Model. The dataset features over 1h15min of highly aggressive maneuvers recorded at high accuracy in one of the world's largest optical tracking volumes. We provide time-aligned quadrotor state and motor commands recorded at 400Hz in a curated dataset. For more details, check out our paper, dataset and video.
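Time-aligning two sensor streams logged on different clocks, as in the 400 Hz state/command pairing above, typically comes down to interpolating one stream onto the other's timestamps. A minimal sketch (the field layout and sample values here are hypothetical, not the actual NeuroBEM file format):

```python
def interp_to_timestamps(src_t, src_v, query_t):
    """Linearly interpolate scalar samples (src_t, src_v) onto query timestamps.

    Sketch of how one might time-align motor commands with a quadrotor
    state stream logged at 400 Hz. Assumes src_t and query_t are sorted.
    """
    out = []
    j = 0
    for t in query_t:
        # Advance to the source interval [src_t[j], src_t[j+1]] containing t.
        while j + 1 < len(src_t) and src_t[j + 1] < t:
            j += 1
        t0 = src_t[j]
        t1 = src_t[min(j + 1, len(src_t) - 1)]
        v0 = src_v[j]
        v1 = src_v[min(j + 1, len(src_v) - 1)]
        w = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        out.append(v0 + w * (v1 - v0))
    return out

# 400 Hz state clock (2.5 ms period); commands logged on a slower 200 Hz clock.
state_t = [i * 0.0025 for i in range(5)]
cmd_t = [0.0, 0.005, 0.010]
cmd_v = [0.0, 1.0, 2.0]
aligned = interp_to_timestamps(cmd_t, cmd_v, state_t)
# aligned ≈ [0.0, 0.5, 1.0, 1.5, 2.0]
```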

June 25, 2021

Fast Feature Tracking with ROS


Our work on GPU-optimized feature detection and tracking is now available as a simple ROS node. It implements GPU-optimized Fast, Harris, and Shi-Tomasi detectors and KLT tracking, running at hundreds of FPS on a Jetson TX2. For more details, check out our paper Faster than FAST and code.
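For reference, the Harris response underlying this detector family fits in a few lines: image gradients feed a windowed structure tensor, whose determinant and trace score "cornerness". A plain-Python sketch on a toy image (real implementations, including the GPU node above, vectorize this and smooth the tensor):

```python
def harris_response(img, k=0.04):
    """Harris corner response for a grayscale image given as a list of rows."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    # Central-difference image gradients.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            # Structure tensor summed over a 3x3 window.
            a = b = c = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    a += gx * gx
                    b += gx * gy
                    c += gy * gy
            # Harris score: det(M) - k * trace(M)^2.
            R[y][x] = a * c - b * b - k * (a + c) ** 2
    return R

# A bright square on a dark background: the corner pixel at (4, 4) scores
# positive and higher than a pixel on the square's top edge at (4, 6).
img = [[1.0 if (y >= 4 and x >= 4) else 0.0 for x in range(9)] for y in range(9)]
R = harris_response(img)
```

Shi-Tomasi differs only in the score: it keeps the smaller eigenvalue of the same structure tensor instead of the det/trace combination.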

June 11, 2021

TimeLens: Event-based Video Frame Interpolation

TimeLens is a new event-based video frame interpolation method that generates high speed video from low framerate RGB frames and asynchronous events. Learn more about TimeLens over at our project page where you can find code, datasets and more! We also release a High-Speed Event and RGB dataset which features complex scenarios like bursting balloons and spinning objects!

June 10, 2021

Video recordings of the ICRA 2021 Workshop on Perception and Action in Dynamic Environments are now available!

On June 4, 2021, Antonio Loquercio (RPG), Davide Scaramuzza (RPG), Luca Carlone (MIT), and Markus Ryll (TUM) organized the 1st International Workshop on Perception and Action in Dynamic Environments at ICRA.


May 18, 2021

Workshop on Perception and Action in Dynamic Environments


Do not miss our #ICRA2021 workshop on Perception and Action in Dynamic Environments! Check out the agenda and join the presentations at our workshop website. Organized by Antonio Loquercio, Davide Scaramuzza, Markus Ryll, Luca Carlone.

The workshop is on June the 4th and starts at 4pm Zurich time (GMT+2).

May 18, 2021

CVPR competition on stereo matching


We are delighted to announce our CVPR event-based vision workshop competition on disparity/depth prediction on the new DSEC dataset. Visit our website for more details about the competition. Submission deadline is the 11th of June.

May 18, 2021

Davide Scaramuzza listed among the most influential scholars in robotics


Congratulations to our lab director, Davide Scaramuzza, for being listed among the 100 most influential robotics scholars by AMiner [ Link ].


May 11, 2021

Antonio Loquercio successfully passed his PhD defense


Congratulations to Antonio Loquercio, who has successfully defended his PhD dissertation titled "Agile Autonomy: Learning Tightly-Coupled Perception-Action for High-Speed Quadrotor Flight in the Wild", on May 10, 2021. We thank the reviewers: Prof. Pieter Abbeel, Prof. Angela Schoellig and Prof. Roland Siegwart!

The full video of the PhD defense presentation is on YouTube.

May 10, 2021

IEEE Transactions on Robotics Best Paper Award Honorable Mention


Our paper Deep Drone Racing: from Simulation to Reality with Domain Randomization wins the prestigious IEEE Transactions on Robotics Best Paper Award Honorable Mention: PDF YouTube 1 YouTube 2 Code

May 7, 2021

How to Calibrate Your Event Camera


We propose a generic event camera calibration framework using image reconstruction. Check out our Code and PDF

April 30, 2021

DodgeDrone Challenge


We have organized a challenge to push the current state of the art in agile navigation in dynamic environments. In this challenge, drones will have to avoid moving boulders while flying in a forest! The deadline for submission is June 1st! The winner will be awarded a Skydio 2! Participate now at https://uzh-rpg.github.io/PADE-ICRA2021/ddc/!

April 26, 2021

Read how our research inspired Ingenuity's flight on Mars


Our research inspired the design of the vision-based navigation technology behind the Ingenuity helicopter that flew on Mars. Read the full article on SwissInfo [ English ], [ Italian ].

April 23, 2021

NASA collaborates with RPG


Our lab is collaborating with NASA/JPL to investigate event cameras for the next Mars helicopter missions! Read full interview on SwissInfo with Davide Scaramuzza [ Link ].

April 23, 2021

Davide Scaramuzza invited speaker at GRASP on Robotics


Davide Scaramuzza talks about "Autonomous, Agile Micro Drones: Perception, Learning, and Control" at GRASP on Robotics seminar series organized by the GRASP laboratory at University of Pennsylvania. In this talk, he shows how the combination of both model-based and machine learning methods united with the power of new, low-latency sensors, such as event cameras, can allow drones to achieve unprecedented speed and robustness by relying solely on onboard computing. Watch the presentation! Enjoy!

April 19, 2021

Autonomous Racing and Overtaking in GTS using Reinforcement Learning


We present Super-Human Performance in GTS Using Deep RL and Autonomous Overtaking in GTS Using Curriculum RL. Check out the Website.

April 14, 2021

DSEC: Event Camera Dataset is Out!


DSEC is a new driving dataset with stereo VGA event cameras, RGB global shutter cameras and disparity ground truth from Lidar. Download DSEC now to reap the benefits of this multi-modal dataset with high-quality calibration.
We also accompany the dataset with code and documentation. Check out our video, and paper too! Stay tuned for more!
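Disparity ground truth like DSEC's converts to metric depth through the standard rectified-stereo relation depth = f·B/d. A sketch with illustrative numbers (the focal length and baseline below are made up, not the DSEC calibration):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert stereo disparity (pixels) to metric depth (meters).

    Assumes a rectified stereo pair: depth = focal * baseline / disparity.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> point at infinity
    return focal_px * baseline_m / disparity_px

# Example: 500 px focal length, 0.6 m baseline, 20 px disparity -> 15 m away.
print(disparity_to_depth(20.0, 500.0, 0.6))  # 15.0
```

Note how depth resolution degrades quadratically with distance: a one-pixel disparity error matters far more at 15 m than at 1.5 m.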

March 18, 2021

Autonomous Drone Racing with Deep Reinforcement Learning


We present Autonomous Drone Racing with Deep RL, the first learning-based method that can achieve near-time-optimal performance in drone racing. Check out the Preprint and the Video.

March 15, 2021

1st Workshop on Perception and Action in Dynamic Environments at ICRA 2021


We organized an #ICRA2021 workshop on perception and action in dynamic environments! We brought together amazing keynote speakers and also organized a competition on drone navigation in a forest (the prize is a Skydio 2)! All we need is you! Check out our website here for more info and the current list of invited speakers.

March 8, 2021

Check out our work on Visual Processing and Control in Human Drone Pilots!


Our work on Visual Processing and Control in Human Drone Pilots has been accepted in the IEEE Robotics and Automation Letters. Check out our Video, the Paper, and Open-Source Dataset too!

February 19, 2021

Check out our Event Camera Simulator, ESIM, now with python bindings and GPU support!


Our event camera simulator ESIM now features python bindings and GPU support for fully parallel event generation! Check out our project page, code and paper.

February 12, 2021

Check out our work on Combining Events and Frames using Recurrent Asynchronous Multimodal Networks!


Our work on combining events and frames using recurrent asynchronous multimodal networks has been accepted in the IEEE Robotics and Automation Letters. Check out the paper, the project page, and the source code.

February 12, 2021

Check out our work on data-driven MPC for quadrotors!


Our work on data-driven MPC for quadrotors has been accepted in the IEEE Robotics and Automation Letters. Check out the paper, the video, and the source code.

February 09, 2021

Our work on autonomous flight despite motor failure is featured on IEEE Spectrum


Our latest work on autonomous quadrotor flight despite rotor failure with onboard vision sensors (frames or event cameras) was featured on IEEE Spectrum. For more details, read the paper here and watch the video here. Source code here.

January 25, 2021

3rd Workshop on Event-based Vision at CVPR 2021


We are organizing the "3rd Workshop on Event-based Vision", which will take place in June at CVPR2021. The paper submission deadline is March 27. Check out our website here for more info and the current list of invited speakers.

January 14, 2021

Check out our work in the new Flying Arena!


Davide Scaramuzza and some of the lab's members talk about our work on drone racing in the new Flying Arena. Watch Davide Scaramuzza interview here. Watch Elia Kaufmann interview here. Watch Christian Pfeiffer interview here.

January 13, 2021

Check out our work on how to keep drones flying when a motor fails!


Our work on controlling a quadrotor after motor failure with only onboard vision sensors has been accepted in the IEEE Robotics and Automation Letters. Check out the paper, the video, and the source code.

January 12, 2021

Paper accepted in IJCV!


Our work on generating accurate reference poses for visual localization datasets has been accepted in the International Journal of Computer Vision. Check out the paper here; the Aachen Day-Night v1.1 dataset introduced in the paper can be accessed via the online visual localization benchmark service.

January 11, 2021

Check our new startup SUIND!


We are super excited to announce SUIND, our latest spin-off! Leveraging years of research in our lab, SUIND is building a groundbreaking safety suite for drones. Proud to see our former members Kunal Shrivastava and Kevin Kleber making a true impact in the industry! Read more here.

Video Highlights

September 13, 2023


The fundamental advantage of reinforcement learning over optimal control lies in its optimization objective.

September 1, 2023



Our AI Drone beats human world champion pilots in drone racing, while only relying on onboard sensing and compute!

December 1, 2022

RPG celebrates its 10th anniversary!

October 28, 2022

The Robotics and Perception Group participated in the parabolic flight campaign of UZH Space Hub to study how gravity affects the decision-making of human drone pilots.

October 14, 2022

The first Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry system has been released as open source, extending the already-public JPL xVIO library. Check out the code and datasets to discover how a drone swarm can collaborate in all types of light conditions.

July 13, 2022

Our lab is featured on the Italian RAI1 TV program SuperQuark. Watch the full video report about our research on autonomous drones, from drone racing to search and rescue, from standard to event cameras. The video is in Italian with English subtitles.

July 1, 2022

We are excited to announce our RA-L paper which tackles minimum-time flight in cluttered environments using a combination of deep reinforcement learning and classical topological path planning. We show that the approach outperforms the state-of-the-art in both planning quality and the ability to fly without collisions at high speeds. For more details, check out the paper and the YouTube.

June 26, 2022

For the first time, a time-optimal trajectory can be generated and tracked in real-time, even with moving waypoints and strong unknown disturbances! Read our Time-optimal Online Replanning for Agile Quadrotor Flight paper and watch our IROS talk for further details.

June 13, 2022

We are excited to announce that our paper on Time Lens++ was accepted at CVPR 2022. To learn more about the next generation of event-based frame interpolation, visit our project page. There we release our new dataset, BS-ERGB, recorded with a beam splitter, which features aligned and synchronized events and frames.

October 6, 2021

We train a high-speed navigation policy in simulation and deploy it on real drones in previously unknown, extremely challenging environments at speeds up to 40 km/h (Switzerland is a great location for this!). The approach relies only on onboard vision and computation. Check out our Science Robotics paper Learning High-Speed Flight in the Wild for further details.

September 10, 2021

We propose L1-NMPC, a novel hybrid adaptive NMPC that learns model uncertainties online and immediately compensates for them, drastically improving performance over non-adaptive baselines with minimal computational overhead. Our proposed architecture generalizes to many different environments; we evaluate it under wind, unknown payloads, and highly agile flight conditions. Read our Performance, Precision, and Payloads: Adaptive Nonlinear MPC for Quadrotors paper for further details.

September 9, 2021

In this work, we perform extensive experimental studies to quantitatively compare two state-of-the-art control methods for quadrotor agile flight in terms of trajectory tracking accuracy, robustness, and computational efficiency. Read our A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight paper for further details.

September 8, 2021

Thanks to our Model Predictive Contouring Control, the problem of flying through multiple waypoints in minimum time can now be solved in real-time. Read our Model Predictive Contouring Control for Time-Optimal Quadrotor Flight paper for further details.

June 28, 2021

AI Drone faster than Humans? Time-Optimal Planning for Quadrotor Waypoint Flight. Read our Time-optimal planning for quadrotor waypoint flight paper for further details.

June 28, 2021

The Robotics and Perception Group and the University of Zurich present one of the world's largest indoor drone-testing arenas. - Equipped with a real-time motion-capture system consisting of 36 Vicon cameras, and with a flight space of over 30x30x8 meters (7,000 cubic meters), this large research infrastructure allows us to deploy our most advanced perception, learning, planning, and control algorithms to push vision-based agile drones to speeds over 60 km/h and accelerations over 5g.

June 28, 2021

NeuroBEM is a framework that allows simulation of very aggressive quadrotor flights with unprecedented precision. Learn more about our machine-learning augmented first-principles method at our project page. We also release a dataset that contains high-speed quadrotor flight data.

June 11, 2021

TimeLens is a new event-based video frame interpolation method that generates high speed video from low framerate RGB frames and asynchronous events. Learn more about TimeLens over at our project page where you can find code, datasets and more! We also release a High-Speed Event and RGB dataset which features complex scenarios like bursting balloons and spinning objects!

April 14, 2021

DSEC is a new stereo event camera dataset: over 400 GB of data, 53 sequences, 2 VGA event cameras, 2 RGB global shutter cameras, day and night, urban and mountain driving, accurate calibration, disparity ground truth from Lidar.

March 18, 2021

Watch our quadrotor fly near-time-optimal trajectories in Flightmare and the real world using Reinforcement Learning! Read our Preprint for further details.

Jan 13, 2021

Watch our quadrotor fly after motor failure with only onboard vision sensors! Read our RA-L paper for further details.