Robotics and Perception Group


Welcome to the website of the Robotics and Perception Group led by Prof. Davide Scaramuzza. Our lab was founded in February 2012 and is part of the Department of Informatics, at the University of Zurich, and the Department of Neuroinformatics, which is a joint institute of both the University of Zurich and ETH Zurich.


Our mission is to research the fundamental challenges of robotics and computer vision that will benefit all of humanity. Our key interest is to develop autonomous machines that can navigate entirely by themselves using only onboard cameras and computation, without relying on external infrastructure such as GPS, position-tracking systems, or off-board computing. Our interests predominantly encompass micro drones because they are more challenging and offer more research opportunities than ground robots.

News

March 12, 2024

Davide Scaramuzza plenary speaker at ERF 2024


At the European Robotics Forum 2024, Davide Scaramuzza will share how model-based and machine-learning methods, united with new, low-latency sensors, are laying the foundation for better productivity and safety of future autonomous aircraft. Click for more information: https://erf2024.eu/.

March 11, 2024

Davide Scaramuzza keynote speaker at NVIDIA GTC 2024


What will it take to fly autonomous drones as agile as human pilots? At GTC 2024, Davide Scaramuzza will share how model-based and machine-learning methods, united with new, low-latency sensors, are laying the foundation for better productivity and safety of future autonomous aircraft. Click here for more information.

February 29, 2024

Contrastive Learning for Enhancing Robot Scene Transfer in Vision-based Agile Flight

We introduce an approach to learning an environment-agnostic representation to enhance the robustness of scene transfer for end-to-end vision-based agile flight. We propose an adaptive contrastive loss that dynamically adjusts the contrastive loss weight during training. The learned task-related embedding is similar across different environments and can be used to transfer the policy to unseen environments. Subsequently, we use the learned embedding to train a sensorimotor policy that takes only images and IMU measurements as inputs and directly outputs the control commands. Our vision encoder training strategy outperforms several state-of-the-art methods in terms of task performance. Check out our ICRA 2024 paper.
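A minimal PyTorch sketch of the idea, assuming an InfoNCE-style contrastive term and a hypothetical warm-up schedule for the adaptive weight (the paper's exact loss and schedule may differ):

```python
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.1):
    # z_a[i] and z_b[i] embed the same scene rendered in two different
    # environments (positive pair); other rows in the batch are negatives.
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / temperature
    targets = torch.arange(z_a.shape[0], device=z_a.device)
    return F.cross_entropy(logits, targets)

def total_loss(task_loss, z_a, z_b, step, warmup=1000):
    # Hypothetical adaptive weight: ramp the contrastive term up during
    # training so it does not dominate the task loss early on.
    w = min(1.0, step / warmup)
    return task_loss + w * info_nce(z_a, z_b)
```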

February 29, 2024

Actor-Critic Model Predictive Control

How can we combine the task performance and reward flexibility of model-free RL with the robustness and online replanning capabilities of MPC? We provide an answer by introducing a new framework called Actor-Critic Model Predictive Control (ACMPC). The key idea is to embed a differentiable MPC within an actor-critic RL framework. For more details, check out our latest ICRA 2024 paper and our video.
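As a rough illustration of the embedding idea (not the paper's implementation), the sketch below uses a hypothetical one-step MPC with known linear dynamics, solved in closed form so that gradients flow from the action back into the actor:

```python
import torch
import torch.nn as nn

class ActorCriticMPC(nn.Module):
    def __init__(self, state_dim, act_dim, lam=1e-2):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                   nn.Linear(64, state_dim))
        self.critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                    nn.Linear(64, 1))
        # Placeholder linear dynamics x' = A x + B u; the paper embeds a
        # full differentiable MPC over a longer horizon instead.
        self.A = torch.eye(state_dim)
        self.B = 0.1 * torch.randn(state_dim, act_dim)
        self.lam = lam

    def forward(self, x):
        goal = self.actor(x)  # actor proposes a desired next state
        # One-step MPC: u* = argmin ||A x + B u - goal||^2 + lam ||u||^2,
        # a ridge problem with a closed-form, differentiable solution.
        G = self.B.T @ self.B + self.lam * torch.eye(self.B.shape[1])
        u = torch.linalg.solve(G, self.B.T @ (goal - x @ self.A.T).T).T
        return u, self.critic(x)
```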

February 26, 2024

State Space Models for Event Cameras (arXiv 2024)


Today, state-of-the-art deep neural networks that process event-camera data first convert a temporal window of events into dense, grid-like input representations. As such, they exhibit poor generalizability when deployed at higher inference frequencies (i.e., smaller temporal windows) than the ones they were trained on. We address this challenge by introducing state-space models (SSMs) to event-based vision. This design adapts to varying frequencies without the need to retrain the network at different frequencies. We comprehensively evaluate our approach against existing methods based on RNN and Transformer architectures across various benchmarks, including Gen1 and 1 Mpx event camera datasets. Our results demonstrate that SSM-based models train 33% faster and also exhibit minimal performance degradation when tested at higher frequencies than the training input. Have a look at our paper.
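The frequency adaptation can be seen in a toy diagonal state-space layer: the learned continuous-time parameters stay fixed, while the discretization is recomputed from the event-window length at inference time (a sketch assuming zero-order-hold discretization, not the paper's full architecture):

```python
import torch

def discretize(A, B, dt):
    # Zero-order-hold discretization of a diagonal continuous-time SSM
    # x'(t) = A x(t) + B u(t). Recomputing (Ad, Bd) from dt is what lets
    # the same learned (A, B) run at a different inference frequency.
    Ad = torch.exp(A * dt)
    Bd = (Ad - 1.0) / A * B
    return Ad, Bd

def ssm_scan(u, A, B, dt):
    # u: (T, d) sequence of input features; A, B: (d,) diagonal params.
    Ad, Bd = discretize(A, B, dt)
    x, ys = torch.zeros_like(B), []
    for u_k in u:
        x = Ad * x + Bd * u_k
        ys.append(x)
    return torch.stack(ys)

# Example: a model trained with 50 ms windows evaluated at 10 ms windows.
A = -torch.rand(16) - 0.1   # stable (negative) diagonal dynamics
B = torch.randn(16)
out = ssm_scan(torch.randn(100, 16), A, B, dt=0.010)
```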

February 26, 2024

Contrastive Initial State Buffer for Reinforcement Learning (ICRA 2024)

Our ICRA24 paper introduces a Contrastive Initial State Buffer, which strategically selects states from past experiences and uses them to initialize the agent in the environment in order to guide it toward more informative states. The experiments on drone racing and legged locomotion show that our method achieves higher task performance while also speeding up training convergence. Check out our paper and code.

February 12, 2024

The UZH-FPV Dataset: New standing leaderboard


We are excited to announce a new standing leaderboard for our UZH-FPV Drone Racing Dataset. To enter the new leaderboard, participants need to submit the results of their state estimators on the new test dataset. The new test dataset includes sequences containing visual and inertial data recorded onboard a quadrotor flying at speeds of up to 100 km/h. This data was used to train our autonomous system that defeated the world champion drone racing pilots in several head-to-head races (Paper). The data is available for download here. The new standing leaderboard is here. We look forward to your participation in the UZH-FPV standing leaderboard!

February 11, 2024

Dense Continuous-Time Flow from Events and Frames (TPAMI 2024)


Our new TPAMI paper introduces a data-driven method that can estimate per-pixel continuous-time trajectories from event data and frames. If you have wondered how these two modalities can be effectively combined for motion estimation, have a look at our paper and code.

January 15, 2024

Our AI drone SWIFT in the Top 10 UZH News 2023

Our research on the first AI-powered drone to defeat a human pilot made it into the Top 10 UZH News 2023! Congratulations to the entire team!

January 12, 2024

AERIAL-CORE: AI-Powered Aerial Robots for Inspection and Maintenance of Electrical Power Infrastructures

We are excited to share our new paper that summarizes the results achieved in the Aerialcore project. In Aerialcore, we have collaborated with top European academic and industrial partners to automate the inspection of large, critical infrastructures, such as power lines. Large-scale infrastructures are prone to deterioration due to age, environmental influences, and heavy usage. Ensuring their safety through regular inspections and maintenance is crucial to prevent incidents that can significantly affect public safety and the environment. This paper introduces the first autonomous system that combines various innovative aerial robots. This system is designed for extended-range inspections beyond the visual line of sight, features aerial manipulators for maintenance tasks, and includes support mechanisms for human operators working at elevated heights. For more details, check out our paper and video.

January 03, 2024

Our Nature Paper in the Top Robotics Stories 2023


Our Nature paper is featured in the Top Robotics Stories of 2023 by IEEE Spectrum.

January 08, 2024

Seeing behind dynamic occlusion with event cameras


Unwanted camera occlusions, such as debris, dust, raindrops, and snow, can severely degrade the performance of computer-vision systems. Dynamic occlusions are particularly challenging because of their continuously changing pattern. Our solution relies, for the first time, on the combination of a traditional camera with an event camera. When an occlusion moves across a background image, it causes intensity changes that trigger events. These events provide additional information on the relative intensity changes between foreground and background at high temporal resolution, enabling a more faithful reconstruction of the background content. For more details, check out our paper.

December 13, 2023

Our startup SUIND raises 600k USD in seed round

Our startup SUIND raises 600k USD in a seed round led by Sunicon and others: Link

December 11, 2023

Revisiting Token Pruning for Object Detection and Instance Segmentation (WACV 2024)


Our latest paper on efficient Vision Transformers, accepted to WACV 2024, presents a novel token pruning method for object detection and instance segmentation, significantly accelerating inference while minimizing performance loss. Utilizing a lightweight MLP for dynamic token pruning, the method achieves up to 34% faster inference and reduces the performance drop to approximately 0.3 mAP, outperforming previous approaches on the COCO dataset. For more information, have a look at our paper, code and video.
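A sketch of the mechanism under assumed details (the scoring head, threshold, and placement in the network are illustrative): a tiny MLP rates each token, and low-scoring tokens are dropped before the next transformer block, so later blocks attend over fewer tokens.

```python
import torch
import torch.nn as nn

class TokenPruner(nn.Module):
    def __init__(self, dim, keep_threshold=0.5):
        super().__init__()
        # Lightweight MLP that predicts a keep-probability per token.
        self.score = nn.Sequential(nn.Linear(dim, dim // 4), nn.GELU(),
                                   nn.Linear(dim // 4, 1))
        self.keep_threshold = keep_threshold

    def forward(self, tokens):
        # tokens: (N, dim). Drop tokens the MLP rates as uninformative;
        # subsequent attention layers then process a shorter sequence.
        keep = torch.sigmoid(self.score(tokens)).squeeze(-1)
        return tokens[keep > self.keep_threshold]
```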

December 5, 2023

Our lab is featured on Italian TV La7


Davide Scaramuzza talks about the research of our lab on Italian TV. First, our autonomous drone racing is explained; subsequently, the application of our research to search-and-rescue missions is presented. Watch the full video here (we are featured from 03:20 to 06:35).

November 14, 2023

Watch Davide Scaramuzza's talk at ETH Zurich

In this talk, hosted on November 2 by the Swiss Association of Aeronautical Sciences at ETH Zurich, Davide Scaramuzza presents an overview of our latest research aimed at achieving human-level performance with autonomous vision-based drones. He shows how integrating deep learning techniques with fast-response sensors, such as event cameras, enables drones to attain remarkable levels of speed and resilience, relying exclusively on onboard computation. Finally, he talks about the evolution of event cameras in academia and industry and their potential for robotics to enable low-latency, high-bandwidth control.

November 12, 2023

Former Postdoc Reza Sabzevari appointed Professor at TU Delft


Our former Postdoc Reza Sabzevari has been appointed professor of Perception and Robotics at the TU Delft Aerospace Engineering Faculty.

November 1, 2023

New Research Assistant

We welcome Christian Sprecher as a new Research Assistant in our lab!

October 23, 2023

We are hiring!


We have multiple openings for PhD students and Postdocs. Job descriptions and how to apply: https://rpg.ifi.uzh.ch/positions.html

October 18, 2023

inControl Podcast features Davide Scaramuzza



Davide Scaramuzza is featured in the inControl podcast. He talks about magic, autonomous vision-based navigation, agile drone racing, and event-based cameras. The podcast is available on all common platforms, including the inControl website, Spotify, Apple Podcasts, Google Podcasts, and YouTube.

October 16, 2023

IEEE Spectrum interviews Davide Scaramuzza and Adam Bry

In the recent IEEE Robotics Podcast, Evan Ackerman hosts Davide Scaramuzza and Adam Bry (CEO of Skydio) to discuss autonomous drones. They delve into how autonomy and computer vision enable superhuman skills and how future drones could evolve.

October 3, 2023

Our work won the IEEE/RSJ IROS Best Paper Award



We are honored that our IEEE/RSJ IROS paper "Autonomous Power Line Inspection with Drones via Perception-Aware MPC" was selected for the Best Paper Award. Congratulations to all collaborators!

PDF Video Code

October 3, 2023

ICCV23 Oral Paper: A 5-Point Minimal Solver for Event Camera Relative Motion Estimation

We propose a novel, space-time manifold parametrization to constrain events generated by a line observed under locally constant speed. This first-of-its-kind minimal solver decodes the motion-geometry unknowns and yields a remarkable 100% success rate in linear velocity estimation on open-source datasets, surpassing existing methods. This is a joint collaboration between RPG and the Mobile Perception Lab at ShanghaiTech, led by Prof. Laurent Kneip. For more details and materials, check out the video, poster, and project page.

September 28, 2023

Our work selected as an IROS paper award candidate


Congratulations to Jiaxu and Giovanni, whose IROS paper "Autonomous Power Line Inspection with Drones via Perception-Aware MPC" is nominated for either the conference best paper or best student paper award! Only 12 papers out of 1,096 accepted papers were nominated: a 1% nomination rate!

September 25, 2023

Our work won the Best Paper Award at IROS23 Workshop Robotic Perception and Mapping

We are happy to announce that our work "HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO" won the best paper award at IROS23 Workshop Robotic Perception and Mapping: Frontier Vision and Learning Techniques. The paper will be presented in a spotlight talk on Thursday October 5th in Detroit. Congratulations to all collaborators! Check it out paper and video.

September 25, 2023

Our work in collaboration with ASL, ETH Zurich, won the Best Paper Award at IROS23 Workshop Robotic Perception and Mapping


We are happy to announce that our work in collaboration with ASL, ETH Zurich, "Attending Multiple Visual Tasks for Own Failure Detection" won the best paper award at the IROS23 Workshop Robotic Perception and Mapping: Frontier Vision and Learning Techniques. The paper will be presented in a spotlight talk on Thursday, October 5th, in Detroit. Congratulations to all collaborators! Check out the paper.

September 24, 2023

End-to-End Learned Event- and Image-based Visual Odometry


RAMP-VO is a novel end-to-end learnable visual odometry system tailored for challenging conditions. It seamlessly integrates event-based cameras with traditional frames, utilizing Recurrent, Asynchronous, and Massively Parallel (RAMP) encoders. Despite being trained only in simulations, it outperforms both learning-based and model-based methods, demonstrating its potential for robust space navigation. For more details, check out our paper.

September 19, 2023

Code Release: Active Camera Exposure Control

We release the code of our camera controller that adjusts the exposure time and gain of the camera automatically. We propose an active exposure control method to improve the robustness of visual odometry in HDR (high dynamic range) environments. Our method evaluates the proper exposure time by maximizing a robust gradient-based image quality metric. Check out our paper for more details.
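The core idea fits in a few lines; below is an illustrative sketch (the released controller uses a derivative-based update, and `capture` is a hypothetical user-supplied camera function, not part of our code):

```python
import numpy as np

def gradient_metric(img):
    # Gradient-magnitude image quality score: it drops when the image is
    # over- or under-exposed and texture gradients wash out.
    gx, gy = np.gradient(img.astype(np.float32))
    return np.sqrt(gx**2 + gy**2).sum()

def update_exposure(capture, exposure, step=0.1):
    # Hypothetical hill-climbing loop: probe shorter and longer exposure
    # times and move toward whichever maximizes the metric.
    # capture(e) returns an image taken with exposure time e.
    candidates = (exposure * (1 - step), exposure, exposure * (1 + step))
    scores = {e: gradient_metric(capture(e)) for e in candidates}
    return max(scores, key=scores.get)
```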

September 13, 2023

Reinforcement Learning vs. Optimal Control for Drone Racing


Reinforcement Learning (RL) vs. Optimal Control (OC): why can RL achieve impressive results beyond optimal control for many real-world robotic tasks? We investigate this question in our paper "Reaching the Limit in Autonomous Racing: Optimal Control versus Reinforcement Learning," published today in Science Robotics, available open access here!
Many works have focused on impressive results, but less attention has been paid to the systematic study of fundamental factors that have led to the success of reinforcement learning or have limited optimal control. Our results indicate that RL does not outperform OC because RL optimizes its objective better. Rather, RL outperforms OC because it optimizes a better objective: RL can directly optimize a task-level objective and can leverage domain randomization allowing the discovery of more robust control responses.
Check out our video to see our drone race autonomously with accelerations up to 12g!

September 1, 2023

AI Drone beats Human World Champions in Head-to-Head Drone Races


We are thrilled to share our groundbreaking research paper published in Nature titled "Champion-Level Drone Racing using Deep Reinforcement Learning," available open access here!
We introduce "Swift," the first autonomous vision-based drone that won several fair head-to-head races against human world champions! The Swift AI drone combines deep reinforcement learning in simulation with data collected in the physical world. This marks the first time that an autonomous mobile robot has beaten human champions in a real physical sport designed for and by humans. As such it represents a milestone for mobile robotics, machine intelligence, and beyond, which may inspire the deployment of hybrid learning-based solutions in other physical systems, such as autonomous vehicles, aircraft, and personal robots, across a broad range of applications.
Curious to see "Swift" racing and know more? Check out these two videos from us and from Nature.

September 1, 2023

New PhD Student

We welcome Ismail Geles as a new PhD student in our lab!

August 30, 2023

From Chaos Comes Order: Ordering Event Representations for Object Recognition and Detection


State-of-the-art event-based deep learning methods typically convert raw events into dense input representations before they can be processed by standard networks. However, selecting this representation is very expensive, since it requires training a separate neural network for each representation and comparing the validation scores. In this work, we circumvent this bottleneck by measuring the quality of event representations with the Gromov-Wasserstein Discrepancy, which is 200 times faster to compute. This work opens a new unexplored field of explicit representation optimization. For more information, have a look at our paper. The code will be available on this link at the start of the ICCV 2023 conference.
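A sketch of how such a score might be computed with the POT library (illustrative only; the paper's exact distance computation and preprocessing may differ):

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def representation_score(raw_events, dense_repr):
    # Compare the pairwise-distance structure of a raw event set (e.g.
    # rows of x, y, t, polarity) with that of its dense representation
    # (e.g. flattened voxel-grid features). A low Gromov-Wasserstein
    # discrepancy means the representation preserves the events' geometry.
    C1 = ot.dist(raw_events, raw_events)
    C2 = ot.dist(dense_repr, dense_repr)
    p = np.full(len(raw_events), 1.0 / len(raw_events))
    q = np.full(len(dense_repr), 1.0 / len(dense_repr))
    return ot.gromov.gromov_wasserstein2(C1, C2, p, q, 'square_loss')
```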

August 25, 2023

IROS2023 Workshop: Learning Robot Super Autonomy


Do not miss our IROS2023 Workshop: Learning Robot Super Autonomy! The workshop features an incredible speaker lineup, and we will have a best paper award with prize money. Check out the agenda and join the presentations at our workshop website. Organized by Giuseppe Loianno and Davide Scaramuzza.

August 15, 2023

Scientifica - come and see our drones!

Our lab will open the doors of its large drone testing arena on August 30th at 14:00. Bring your family and friends to learn more about drones and watch an autonomous drone race. If you are interested, please register here!

August 14, 2023

New Senior Scientist

We welcome Harmish Khambhaita as our new Senior Scientist. He obtained his Ph.D. in Toulouse and previously worked, among others, for Anybotics as the Autonomy and Perception Lead.

July 28, 2023

Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing

We tackle the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies. We use contrastive learning to extract robust feature representations from the input images and leverage a learning-by-cheating framework for training a neural network policy. For more information, check out our IROS23 paper and video.

July 28, 2023

Our Science Robotics 2021 paper wins prestigious Chinese award!


We are truly honored to receive the prestigious Frontiers of Science Award in the category Robotics Science and Systems, which was presented on July 16th, 2023, at the International Congress of Basic Science in the People's Hall in Beijing, China, for our Science Robotics 2021 paper "Learning High-Speed Flight in the Wild"! Congratulations to the entire team: Antonio Loquercio, Elia Kaufmann, René Ranftl, Matthias Mueller, Vladlen Koltun. Many thanks to the award committee! Congratulations to the other winners too. Paper, open-source code, and video.

July 04, 2023

Our paper on Authorship Attribution through Deep Learning accepted at PLOS ONE


We are excited to announce that our paper on authorship attribution for research papers has just been published in PLOS ONE. We developed a transformer-based AI that achieves over 70% accuracy on the newly created, largest-to-date authorship-attribution dataset with over 2,000 authors. For more information, check out our PDF and open-source code.

July 03, 2023

Video Recordings of the 4th International Workshop on Event-Based Vision at CVPR 2023 available!


The recordings of the 4th International Workshop on Event-Based Vision at CVPR 2023 are available here. The event was co-organized by Guillermo Gallego, Davide Scaramuzza, Kostas Daniilidis, Cornelia Fermüller, and Davide Migliore.

June 21, 2023

Microgravity induces overconfidence in perceptual decision-making

We are excited to present our paper on the effects of microgravity on perceptual decision-making published in Nature Scientific Reports.

PDF YouTube Dataset

June 20, 2023

HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO

We are excited to present our new RSS paper on state and disturbance estimation for flying vehicles. We propose a hybrid dynamics model that combines a point-mass vehicle model with a learning-based component that captures complex aerodynamic effects. We include our hybrid dynamics model in an optimization-based VIO system that estimates the external disturbances acting on the robot as well as the robot's state. HDVIO improves motion and external-force estimation compared to the state of the art. For more information, check out our paper and video.
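A minimal sketch of a hybrid dynamics model in this spirit (the architecture, inputs, and layer sizes are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class HybridDynamics(nn.Module):
    def __init__(self):
        super().__init__()
        # Small network that learns the residual acceleration (e.g.
        # aerodynamic drag) on top of the point-mass model.
        self.residual = nn.Sequential(nn.Linear(6, 32), nn.Tanh(),
                                      nn.Linear(32, 3))
        self.g = torch.tensor([0.0, 0.0, -9.81])

    def forward(self, v_world, thrust_world, mass):
        # Nominal point-mass acceleration plus learned aerodynamic term.
        a_nominal = thrust_world / mass + self.g
        a_learned = self.residual(torch.cat([v_world, thrust_world]))
        return a_nominal + a_learned
```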

June 13, 2023

Our CVPR Paper is Featured in Computer Vision News


Our CVPR highlight and award-candidate work "Data-driven Feature Tracking for Event Cameras" is featured on Computer Vision News. Find out more and read the complete interview with the authors Nico Messikommer, Mathias Gehrig and Carter Fang here!

June 13, 2023

DSEC-Detection Dataset Release


We release a new dataset for event- and frame-based object detection, DSEC-Detection based on the DSEC dataset, with aligned frames, events and object tracks. For more details visit the dataset website.

PDF YouTube Dataset Code

June 08, 2023

Our PhD student Manasi Muglikar is awarded UZH Candoc Grant

Manasi, a PhD student in our lab, is awarded the UZH Candoc Grant 2023 for her outstanding research! Congratulations! Check out her latest work on event-based vision here.

May 13, 2023

Training Efficient Controllers via Analytic Policy Gradient


In systems with limited compute, such as aerial vehicles, an accurate controller that is efficient at execution time is imperative. We propose an Analytic Policy Gradient (APG) method to tackle this problem. APG exploits the availability of differentiable simulators by training a controller offline with gradient descent on the tracking error. Our proposed method outperforms both model-based and model-free RL methods in terms of tracking error. Concurrently, it achieves similar performance to MPC while requiring more than an order of magnitude less computation time. Our work provides insights into the potential of APG as a promising control method for robotics.

PDF YouTube Code
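To make the idea concrete, here is a toy APG loop under assumed 1-D point-mass dynamics (the paper trains on a full quadrotor simulator): the policy is unrolled through the differentiable dynamics, and the tracking error is minimized by plain gradient descent.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
dt = 0.05

def rollout_loss(ref):
    # Differentiable simulation: gradients flow through every step of
    # the dynamics back into the policy weights.
    x, v, loss = torch.zeros(1), torch.zeros(1), 0.0
    for t in range(len(ref)):
        a = policy(torch.cat([x, v, ref[t]]))  # commanded acceleration
        v = v + a * dt
        x = x + v * dt
        loss = loss + (x - ref[t]).pow(2).sum()
    return loss / len(ref)

ref = torch.sin(torch.linspace(0, 3.14, 50)).unsqueeze(1)  # target path
for epoch in range(200):
    opt.zero_grad()
    rollout_loss(ref).backward()
    opt.step()
```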

May 10, 2023

We are hiring


We have multiple openings for a Scientific Research Manager, PhD students, and Postdocs in Reinforcement Learning for Agile Vision-based Navigation and Computer Vision with Standard Cameras and Event Cameras. Job descriptions and how to apply: https://rpg.ifi.uzh.ch/positions.html

May 09, 2023

NCCR Robotics Documentary

Check out this amazing 45-minute documentary on YouTube about the story of twelve years of groundbreaking robotics research by the Swiss National Competence Center of Research in Robotics (NCCR Robotics). The documentary summarizes all the key achievements, from assistive technologies that allowed patients with completely paralyzed legs to walk again, to legged and flying robots with self-learning capabilities for disaster mitigation, to educational robots used by thousands of children worldwide! Congrats to all NCCR Robotics members who have made this possible! And congratulations to the coordinator, Dario Floreano, and his management team! We are very proud to have been part of this! NCCR Robotics will continue to operate in four different projects. Check out this article to learn more.

May 04, 2023

Code Release: Tightly coupling global position measurements in VIO


We are excited to release fully open-source our code to tightly fuse global position measurements in visual-inertial odometry (VIO)! Our code integrates global position measurements, for example from GPS, in SVO Pro, a sliding-window optimization-based VIO that uses the SVO frontend. We leverage the IMU preintegration theory to efficiently include the global position measurements in the VIO problem formulation. Our system outperforms the loosely-coupled approach in terms of absolute trajectory error by up to 50%, with a negligible increase in computational cost. For more information, have a look at our paper and code.
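Tight coupling boils down to adding one global-position residual per measurement to the sliding-window optimization, alongside the visual and preintegrated inertial terms. A sketch of one such factor (notation assumed, not our code's exact API):

```python
import numpy as np

def global_position_residual(p_wb, R_wb, p_b_ant, z_gps, sqrt_info):
    # One tightly-coupled factor: the antenna, mounted at p_b_ant in the
    # body frame, should land on the measured world position z_gps when
    # transformed with the current state estimate (p_wb, R_wb). The
    # information-weighted residual enters the VIO cost function.
    predicted = p_wb + R_wb @ p_b_ant
    return sqrt_info @ (predicted - z_gps)
```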

May 03, 2023

We win the ICRA Agile Movements Workshop Poster Award


Congratulations to Yunlong Song for winning the ICRA "Agile Movements: Animal Behaviour, Biomechanics, and Robot Devices" workshop poster award with his work "Fly fast with Reinforcement Learning".

April 25, 2023

Our work was selected as a CVPR Award Candidate

We are honored that our 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) paper "Data-driven Feature Tracking for Event Cameras" was selected as an award candidate. Congratulations to all collaborators!

PDF YouTube Code

April 17, 2023

Neuromorphic Optical Flow and Real-time Implementation with Event Cameras (CVPRW 2023)


We present a new spiking neural network (SNN) architecture that significantly improves optical flow prediction accuracy while reducing complexity, making it ideal for real-time applications in edge devices and robots. By leveraging event-based vision and SNNs, our solution achieves high-speed optical flow prediction with nearly two orders of magnitude less complexity, without compromising accuracy. This breakthrough paves the way for efficient real-time deployments in various computer vision pipelines. For more information, have a look at our paper.

April 13, 2023

Our Master student Asude Aydin wins the UZH Award for her Master Thesis

Asude Aydin, who did her Master's thesis A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception at RPG, has received the UZH Award 2023 for her outstanding work. Check out her paper here, which is based on her Master's thesis.


April 11, 2023

Event-based Shape from Polarization


We introduce a novel shape-from-polarization technique using an event camera (accepted at CVPR 2023). Our setup consists of a linear polarizer rotating at high speed in front of an event camera. Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities at multiple polarizer angles. Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the MAE by 25% on synthetic and real-world datasets. For more information, have a look at our paper.
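The underlying relation is Malus-law-like: with a rotating polarizer, the intensity at each pixel is a sinusoid in the polarizer angle, I(θ) = (I_un/2)(1 + ρ cos(2θ − 2φ)). A sketch (assumed notation) of recovering the polarization angle φ and degree of polarization ρ by linear least squares:

```python
import numpy as np

def fit_polarization(thetas, intensities):
    # Fit I(theta) = c + a*cos(2 theta) + b*sin(2 theta), where
    # c = I_un / 2, a = c * rho * cos(2 phi), b = c * rho * sin(2 phi).
    M = np.stack([np.ones_like(thetas),
                  np.cos(2 * thetas),
                  np.sin(2 * thetas)], axis=1)
    c, a, b = np.linalg.lstsq(M, intensities, rcond=None)[0]
    phi = 0.5 * np.arctan2(b, a)   # polarization angle
    rho = np.hypot(a, b) / c       # degree of linear polarization
    return phi, rho
```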

April 07, 2023

Recurrent Vision Transformers for Object Detection with Event Cameras (CVPR 2023)


We introduce a novel, efficient, and highly performant object detection backbone for event-based vision. Through an extensive architecture study, we find that vision transformers can be combined with recurrent neural networks to effectively extract spatio-temporal features for object detection. Our proposed architecture can be trained from scratch on publicly available real-world data to reach state-of-the-art performance while lowering inference time by up to 6 times compared to prior work. For more information, have a look at our paper and code.

April 3, 2023

Data-driven Feature Tracking for Event Cameras

We are excited to announce that our paper on Data-driven Feature Tracking for Event Cameras was accepted at CVPR 2023. In this work, we introduce the first data-driven feature tracker for event cameras, which leverages low-latency events to track features detected in a grayscale frame. Our data-driven tracker outperforms existing approaches in relative feature age by up to 130% while also achieving the lowest latency.
For more information, check out our paper, video and code.

April 3, 2023

Autonomous Power Line Inspection with Drones via Perception-Aware MPC

We are excited to present our new work on autonomous power line inspection with drones using perception-aware model predictive control (MPC). We propose an MPC that tightly couples perception and action. Our controller generates commands that maximize the visibility of the power lines while, at the same time, safely avoiding the power masts. For power line detection, we propose a lightweight learning-based detector that is trained only on synthetic data and is able to transfer zero-shot to real-world power line images. For more information, check out our paper and video.

April 3, 2023

RPG and LINA Project featured in RSI


In the recent news broadcast by RSI, our lab is featured for its efforts in developing and boosting research on civil applications for drones. The LINA project at the Dübendorf airport is making its infrastructure available to researchers and industries to facilitate the testing and development of hardware and software for autonomous flying systems. RSI [IT]

April 1, 2023

New PhD Student

We welcome Nikola Zubić as a new PhD student in our lab!

March 30, 2023

Event-based Agile Object Catching with a Quadrupedal Robot

This work leverages the low-latency advantages of event cameras for agile object catching with a quadrupedal robot. We use the event camera to estimate the trajectory of the object, which is then caught using an RL-trained policy. Our robot catches objects flying at up to 15 m/s with an 83% success rate. For more information, have a look at our ICRA 2023 paper, video, and open-source code.

March 27, 2023

A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception


This work proposes a hybrid model combining Spiking Neural Networks (SNN) and classical Artificial Neural Networks (ANN) to optimize power efficiency and latency in edge devices. The hybrid ANN-SNN model overcomes state transients and state decay issues while maintaining high temporal resolution, low latency, and low power consumption. In the context of 2D and 3D human pose estimation, the method achieves an 88% reduction in power consumption with only a 4% decrease in performance compared to fully ANN counterparts, and a 74% lower error compared to SNNs. For more information, have a look at our paper.

March 10, 2023

HILTI-SLAM Challenge 2023


RPG and HILTI are organizing the ICRA2023 HILTI SLAM Challenge! Instructions here. The HILTI SLAM Challenge dataset is a real-life, multi-sensor dataset with accurate ground truth to advance the state of the art in highly accurate state estimation in challenging environments. Participants will be ranked by the completeness of their trajectories and by the achieved accuracy. HILTI is a multinational company that offers premium products and services for professionals on construction sites around the globe. Behind this vast catalog is a global team comprising 30,000 team members from 133 different nationalities, located in more than 120 countries.

March 09, 2023

LINA Testing Facility at Dübendorf Airport


UZH Magazin releases a news article about our research on autonomous drones and our new testing facility at Dübendorf Airport that enables researchers to develop autonomous systems such as drones and ground-based robots from idea to marketable product. Read the article in English or in German. More information about the LINA project can be found here.

March 7, 2023

Our Master student Fang Nan wins ETH Medal for Best Master Thesis


Fang Nan, who did his Master's thesis Nonlinear MPC for Quadrotor Fault-Tolerant Control at RPG, has received the ETH Medal 2023 and the Willi Studer Prize for his outstanding work. Check out his RAL 2022 paper here, which is based on his Master's thesis.


March 2, 2023

Learning Perception-Aware Agile Flight in Cluttered Environments

We propose a method to learn neural network policies that achieve perception-aware, minimum-time flight in cluttered environments. Our method combines imitation learning and reinforcement learning by leveraging a privileged learning-by-cheating framework. For more information, check out our ICRA23 paper or this video.

March 2, 2023

Weighted Maximum Likelihood for Controller Tuning

We present our new ICRA23 paper that leverages a probabilistic Policy Search method, Weighted Maximum Likelihood (WML), to automatically learn the optimal objective for MPCC. The data efficiency provided by the use of a model-based approach in the loop allows us to directly train in a high-fidelity simulator, which in turn makes our approach able to transfer zero-shot to the real world. For more information, check out our ICRA23 paper and video.
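The core WML step refits a Gaussian search distribution over controller parameters with reward-based weights; a sketch assuming exponential reward weighting (the paper's exact weighting may differ):

```python
import numpy as np

def wml_update(params, rewards, beta=5.0):
    # params: (N, d) sampled MPCC-objective parameters; rewards: (N,).
    # Exponentiated, max-normalized rewards weight each rollout.
    w = np.exp(beta * (rewards - rewards.max()))
    w /= w.sum()
    mean = w @ params                       # new distribution mean
    diff = params - mean
    cov = (w[:, None] * diff).T @ diff      # new (weighted) covariance
    return mean, cov
```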

March 2, 2023

User-Conditioned Neural Control Policies for Mobile Robotics

We present our new paper that leverages a feature-wise linear modulation layer to condition neural control policies for mobile robotics. We demonstrate in simulation and in real-world experiments that a single control policy can achieve close to time-optimal flight performance across the entire performance envelope of the robot, reaching up to 60 km/h and 4.5 g in acceleration. The ability to guide a learned controller during task execution has implications beyond agile quadrotor flight: conditioning the control policy on human intent helps safely bring learning-based systems out of the well-defined laboratory environment and into the wild.
For more information, check out our ICRA23 paper and video.
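The modulation layer itself is compact; below is a sketch of such a FiLM conditioning layer (the layer sizes and conditioning signal are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    # Maps a user command (e.g. desired speed or aggressiveness) to a
    # per-feature scale and shift applied inside the control policy.
    def __init__(self, cond_dim, feat_dim):
        super().__init__()
        self.gamma = nn.Linear(cond_dim, feat_dim)
        self.beta = nn.Linear(cond_dim, feat_dim)

    def forward(self, features, condition):
        return self.gamma(condition) * features + self.beta(condition)
```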

February 28, 2023

Learned Inertial Odometry for Autonomous Drone Racing

We are excited to present our new RA-L paper on state estimation for autonomous drone racing. We propose a learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality for autonomous drone racing tasks. The core idea of our system is to couple a model-based filter, driven by the inertial measurements, with a learning-based module that has access to the control commands. For more information, check out our paper, video, and code.
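Schematically, and as a simplified stand-in for the paper's filter: the model-based part propagates the state with inertial measurements, while the learned module, fed with control commands, supplies a drift-correcting position update. A hypothetical sketch:

```python
import numpy as np

def imu_propagate(p, v, a_world, dt):
    # Model-based prediction step driven by the (bias-corrected,
    # gravity-compensated) inertial measurements.
    return p + v * dt + 0.5 * a_world * dt**2, v + a_world * dt

def fuse_network_position(p_filter, p_net, gain=0.2):
    # Hypothetical complementary update: blend in the position implied
    # by the learned module's displacement prediction to bound drift.
    return (1.0 - gain) * p_filter + gain * p_net
```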

February 15, 2023

Agilicious: Open-Source and Open-Hardware Agile Quadrotor for Vision-Based Flight

We are excited to present Agilicious, a co-designed hardware and software framework tailored to autonomous, agile quadrotor flight. It is completely open-source and open-hardware and supports both model-based and neural-network-based controllers. It also provides high thrust-to-weight and torque-to-inertia ratios for agility, onboard vision sensors, GPU-accelerated compute hardware for real-time perception and neural-network inference, a real-time flight controller, and a versatile software stack. In contrast to existing frameworks, Agilicious offers a unique combination of a flexible software stack and high-performance hardware. We compare Agilicious with prior works and demonstrate it on different agile tasks, using both model-based and neural-network-based controllers. Our demonstrators include trajectory tracking at up to 5 g and 70 km/h in a motion-capture system, and vision-based acrobatic flight and obstacle avoidance in both structured and unstructured environments using solely onboard perception. Finally, we demonstrate its use for hardware-in-the-loop simulation in virtual-reality environments. Thanks to its versatility, we believe that Agilicious supports the next generation of scientific and industrial quadrotor research. For more details, check out our paper, video, and webpage.

January 17, 2023

Event-based Shape from Polarization


We introduce a novel shape-from-polarization technique using an event camera. Our setup consists of a linear polarizer rotating at high speed in front of an event camera. Our method uses the continuous event stream caused by the rotation to reconstruct relative intensities at multiple polarizer angles. Experiments demonstrate that our method outperforms physics-based baselines using frames, reducing the MAE by 25% on synthetic and real-world datasets. For more information, have a look at our paper.

January 11, 2023

Survey on Autonomous Drone Racing


We present our survey on Autonomous Drone Racing, which covers the latest developments in agile flight for both model-based and learning-based approaches. We include extensive coverage of drone racing competitions, simulators, open-source software, and the state-of-the-art approaches for flying autonomous drones at their limits! For more information, see our paper.

January 10, 2023

4th International Workshop on Event-Based Vision at CVPR 2023


The event will take place on June 19, 2023, in Vancouver, Canada. The deadline to submit a paper contribution is March 20 via CMT. More info on our website. The event is co-organized by Guillermo Gallego, Davide Scaramuzza, Kostas Daniilidis, Cornelia Fermüller, and Davide Migliore.

January 04, 2023

Davide Scaramuzza featured author of IEEE

We are honored that Davide Scaramuzza is a featured author on the IEEE website.

Video Highlights

September 13, 2023


The fundamental advantage of reinforcement learning over optimal control lies in its optimization objective.

September 1, 2023



Our AI Drone beats human world champion pilots in drone racing, while only relying on onboard sensing and compute!

December 1, 2022

RPG celebrates its 10th anniversary!

October 28, 2022

The Robotics and Perception Group participated in the parabolic flight campaign of the UZH Space Hub to study how gravity affects the decision-making of human drone pilots.

October 14, 2022

The first Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry system has been released as open source, extending the already-public JPL xVIO library. Check out the code and datasets to discover how a drone swarm can collaborate in all types of light conditions.

July 13, 2022

Our lab is featured on the Italian RAI1 TV program SuperQuark. Watch the full video report about our research on autonomous drones, from drone racing to search and rescue, from standard to event cameras. The video is in Italian with English subtitles.

July 1, 2022

We are excited to announce our RA-L paper, which tackles minimum-time flight in cluttered environments using a combination of deep reinforcement learning and classical topological path planning. We show that the approach outperforms the state of the art in both planning quality and the ability to fly without collisions at high speeds. For more details, check out the paper and the YouTube video.

June 26, 2022

For the first time, a time-optimal trajectory can be generated and tracked in real-time, even with moving waypoints and strong unknown disturbances! Read our Time-optimal Online Replanning for Agile Quadrotor Flight paper and watch our IROS talk for further details.

June 13, 2022

We are excited to announce that our paper on Time Lens++ was accepted at CVPR 2022. To learn more about the next generation of event-based frame interpolation, visit our project page. There we release our new dataset BS-ERGB, recorded with a beam splitter, which features aligned and synchronized events and frames.

October 6, 2021

We train a high-speed navigation policy in simulation and deploy it on real drones in previously unknown, extremely challenging environments at up to 40 km/h (Switzerland is a great location for this!). The approach relies only on onboard vision and computation. Check out our Science Robotics paper Learning High-Speed Flight in the Wild for further details.

September 10, 2021

We propose L1-NMPC, a novel hybrid adaptive NMPC that learns model uncertainties online and immediately compensates for them, drastically improving performance over non-adaptive baselines with minimal computational overhead. Our proposed architecture generalizes to many different environments; we evaluate it under wind, unknown payloads, and highly agile flight conditions. Read our Performance, Precision, and Payloads: Adaptive Nonlinear MPC for Quadrotors paper for further details.

September 9, 2021

In this work, we perform extensive experimental studies to quantitatively compare two state-of-the-art control methods for quadrotor agile flight, in terms of trajectory tracking accuracy, robustness, and computational efficiency. Read our A Comparative Study of Nonlinear MPC and Differential-Flatness-Based Control for Quadrotor Agile Flight paper for further details.

September 8, 2021

Thanks to our Model Predictive Contouring Control, the problem of flying through multiple waypoints in minimum time can now be solved in real-time. Read our Model Predictive Contouring Control for Time-Optimal Quadrotor Flight paper for further details.

June 28, 2021

AI Drone faster than Humans? Time-Optimal Planning for Quadrotor Waypoint Flight. Read our Time-optimal planning for quadrotor waypoint flight paper for further details.

June 28, 2021

The Robotics and Perception Group and the University of Zurich present one of the world's largest indoor drone-testing arenas. Equipped with a real-time motion-capture system consisting of 36 Vicon cameras and a flight space of over 30x30x8 meters (over 7,000 cubic meters), this large research infrastructure allows us to deploy our most advanced perception, learning, planning, and control algorithms to push vision-based agile drones to speeds over 60 km/h and accelerations over 5 g.

June 28, 2021

NeuroBEM is a framework that allows the simulation of very aggressive quadrotor flights with unprecedented precision. Learn more about our machine-learning-augmented first-principles method at our project page. We also release a dataset that contains high-speed quadrotor flight data.

June 11, 2021

TimeLens is a new event-based video frame interpolation method that generates high-speed video from low-framerate RGB frames and asynchronous events. Learn more about TimeLens at our project page, where you can find code, datasets, and more! We also release a High-Speed Event and RGB dataset, which features complex scenarios like bursting balloons and spinning objects!

April 14, 2021

DSEC is a new stereo event camera dataset: over 400 GB of data, 53 sequences, 2 VGA event cameras, 2 RGB global-shutter cameras, day and night, urban and mountain driving, accurate calibration, and disparity ground truth from Lidar.

March 18, 2021

Watch our quadrotor fly near-time-optimal trajectories in Flightmare and the real world using Reinforcement Learning! Read our preprint for further details.

January 13, 2021

Watch our quadrotor fly after motor failure with only onboard vision sensors! Read our RA-L paper for further details.