How Fast is Too Fast? The Role of Perception Latency in High-Speed Sense and Avoid

In this work, we study the effect of perception latency on the maximum speed at which a robot can safely navigate through an unknown, cluttered environment. We provide a general analysis that can serve as a baseline for future quantitative reasoning about design trade-offs in autonomous robot navigation. We consider the case where the robot is modeled as a linear second-order system with bounded input and navigates through static obstacles. We focus on a scenario where the robot wants to reach a target destination in as little time as possible, and therefore cannot change its longitudinal velocity to avoid obstacles. We show how the maximum latency that the robot can tolerate while guaranteeing safety is related to its desired speed, the range of its sensing pipeline, and the actuation limitations of the platform (i.e., the maximum acceleration it can produce). As a case study, we compare monocular and stereo frame-based cameras against novel, low-latency sensors, such as event cameras, for quadrotor flight. To validate our analysis, we conduct experiments on a quadrotor platform equipped with an event camera that detects and avoids obstacles thrown towards the robot. To the best of our knowledge, this is the first theoretical work in which perception and actuation limitations are jointly considered to study the performance of a robotic platform in high-speed navigation.
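The trade-off between latency, speed, sensing range, and actuation can be illustrated with a back-of-the-envelope stop-before-collision bound. This is a simplified sketch, not the paper's derivation (which analyzes avoidance maneuvers at constant longitudinal velocity rather than straight-line braking): during the perception latency tau the robot travels v*tau blind, then needs v^2/(2*a_max) to brake, and both must fit within the sensing range s.

```python
def max_tolerable_latency(sensing_range_m, speed_mps, a_max_mps2):
    """Largest perception latency (seconds) that still lets the robot stop
    within its sensing range, under the simplified braking model:
        v * tau + v**2 / (2 * a_max) <= s
    which rearranges to tau_max = s / v - v / (2 * a_max)."""
    return sensing_range_m / speed_mps - speed_mps / (2.0 * a_max_mps2)
```

For example, with a 10 m sensing range, a speed of 5 m/s, and 10 m/s^2 of maximum deceleration, the bound evaluates to 1.75 s of tolerable latency; increasing the speed shrinks this budget on both terms at once.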



D. Falanga, S. Kim, D. Scaramuzza

How Fast is Too Fast? The Role of Perception Latency in High-Speed Sense and Avoid

IEEE Robotics and Automation Letters (RA-L), 2019.

PDF YouTube

The UZH-FPV Drone Racing Dataset

Despite impressive results in visual-inertial state estimation in recent years, high-speed trajectories with six-degree-of-freedom motion remain challenging for existing estimation algorithms. Aggressive trajectories feature large accelerations and rapid rotational motions, and when they pass close to objects in the environment, they induce large apparent motion in the vision sensors, all of which increases the difficulty of estimation. Existing benchmark datasets do not address these types of trajectories, focusing instead on slow-speed or constrained trajectories that target other tasks, such as inspection or driving.

We introduce the UZH-FPV Drone Racing dataset, consisting of over 27 sequences, with more than 10 km of flight distance, captured on a first-person-view (FPV) racing quadrotor flown by an expert pilot. The dataset features camera images, inertial measurements, event-camera data, and precise ground truth poses. These sequences are faster and more challenging, in terms of apparent scene motion, than any existing dataset. Our goal is to enable advancement of the state of the art in aggressive motion estimation by providing a dataset that is beyond the capabilities of existing state estimation algorithms.


J. Delmerico, T. Cieslewski, H. Rebecq, M. Faessler, D. Scaramuzza

Are We Ready for Autonomous Drone Racing? The UZH-FPV Drone Racing Dataset

IEEE International Conference on Robotics and Automation (ICRA), 2019.

PDF YouTube Project Webpage and Datasets

Accurate Tracking of High-Speed Trajectories

In this work, we prove that the dynamical model of a quadrotor subject to linear rotor drag effects is differentially flat in its position and heading. We use this property to compute feed-forward control terms directly from a reference trajectory to be tracked. The obtained feed-forward terms are then used in a cascaded, nonlinear feedback control law that enables accurate, agile flight with quadrotors. Compared to state-of-the-art control methods, which treat the rotor drag as an unknown disturbance, our method significantly reduces the trajectory-tracking error. Finally, we present a method based on gradient-free optimization to identify the rotor-drag coefficients, which are required to compute the feed-forward control terms. The new theoretical results are thoroughly validated through extensive comparative experiments.

Quadrotors are well suited for executing fast maneuvers with high accelerations, but they are still unable to follow a fast trajectory with centimeter accuracy without iteratively learning it beforehand. In this work, we present a novel body-rate controller, which improves trajectory-tracking performance without requiring learning, and an iterative thrust-mixing scheme, which reduces the yaw-control error of a quadrotor. Furthermore, to the best of our knowledge, we present the first algorithm to cope with motor saturations by prioritizing the control inputs that are most relevant for stabilization and trajectory tracking. The presented body-rate controller uses LQR methods to account for both the body-rate and the single-motor dynamics, which reduces the overall trajectory-tracking error while still rejecting external disturbances well. Our iterative thrust-mixing scheme computes the four rotor thrusts given the inputs from a position-control pipeline. Through the iterative computation, we can account for the varying ratio of thrust and drag torque of a single propeller over its input range, which allows applying the desired yaw torque more precisely and hence reduces the yaw-control error. Our prioritizing motor-saturation scheme improves the stability and robustness of a quadrotor's flight and may prevent unstable behavior when motors saturate. We demonstrate the improved trajectory tracking, yaw control, and robustness under motor saturations in real-world experiments with a quadrotor.
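As a point of reference for the thrust-mixing step, a conventional, non-iterative mixer simply inverts a fixed allocation matrix. The sketch below illustrates this baseline for an idealized plus-configuration quadrotor; the arm length and drag-to-thrust ratio are made-up example values, and the paper's iterative scheme additionally updates the drag-to-thrust ratio per rotor as a function of its commanded thrust, rather than treating it as a constant.

```python
import numpy as np

def mix(collective_thrust, torque_xyz, arm=0.17, kappa=0.016):
    """Map a desired collective thrust (N) and body torques (Nm) to four
    rotor thrusts for an idealized plus-configuration quadrotor.
    Rotors: 1 front (+x), 2 left (+y), 3 back (-x), 4 right (-y),
    with alternating spin directions."""
    tx, ty, tz = torque_xyz
    # Rows of the allocation matrix: total thrust, roll, pitch, yaw.
    A = np.array([
        [1.0,    1.0,    1.0,    1.0],     # thrusts sum to collective
        [0.0,    arm,    0.0,   -arm],     # roll from left/right rotors
        [-arm,   0.0,    arm,    0.0],     # pitch from front/back rotors
        [kappa, -kappa,  kappa, -kappa],   # yaw from rotor drag torques
    ])
    return np.linalg.solve(A, np.array([collective_thrust, tx, ty, tz]))
```

Calling `mix(20.0, (0.0, 0.0, 0.0))` distributes the thrust evenly, 5 N per rotor; an iterative variant would recompute each rotor's drag-to-thrust ratio from the resulting thrusts and solve again until convergence.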


M. Faessler, A. Franchi, and D. Scaramuzza

Differential Flatness of Quadrotor Dynamics Subject to Rotor Drag for Accurate Tracking of High-Speed Trajectories

IEEE Robotics and Automation Letters (RA-L), 2018.

PDF YouTube


M. Faessler, D. Falanga, and D. Scaramuzza

Thrust Mixing, Saturation, and Body-Rate Control for Accurate Aggressive Quadrotor Flight

IEEE Robotics and Automation Letters (RA-L), Vol. 2, Issue 2, pp. 476-482, Apr. 2017.

PDF YouTube

Optimal and Perception Aware Control

Optimal control and model predictive control are extremely powerful methods to control systems or vehicles. However, most control approaches for MAVs use simple PID control in a cascaded structure and strictly split estimation, planning, and control into separate problems. With optimal control methods, one can not only simplify the control architecture, task description, interfacing, and usability, but also take into account dynamic perception objectives and solve planning and control in a single step. Our control architectures abstract the underlying model and provide optimal control and receding-horizon predictions in real time, with computation on low-power ARM processors. We also investigate the advantages of perception-aware control, where the robot's perception restrictions are taken into account in the control and planning stages and are used to improve perception performance. While such control pipelines are great for systems with known and simple dynamics, recent advances in machine learning (especially deep neural networks) have shown superior performance in very difficult high-level tasks and high-dimensional data processing. We strongly believe that model-based and learning-based approaches should work in union to fully exploit the advantages of both worlds. We intend to provide strong controllers as the foundation for neural-network-based high-level control and sensor-data abstraction.
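To make the perception-aware idea concrete, the sketch below shows one illustrative way to score visibility: a hypothetical cost term in the spirit of PAMPC (not the paper's exact formulation) that penalizes how far a landmark's projection falls from the image center, so an optimizer can trade trajectory-tracking error against keeping the feature in view.

```python
def perception_cost(p_landmark_cam, weight=1.0):
    """Penalize off-center projection of a 3D landmark expressed in the
    camera frame (z pointing forward along the optical axis)."""
    x, y, z = p_landmark_cam
    if z <= 0.0:
        # Landmark behind the image plane: visibility is lost entirely.
        return float("inf")
    u, v = x / z, y / z  # normalized pinhole image coordinates
    return weight * (u * u + v * v)
```

A landmark on the optical axis, e.g. at (0, 0, 2), incurs zero cost, while one at (1, 0, 1) projects 45 degrees off-axis and is penalized. Inside an MPC, such a term would be summed over the prediction horizon alongside the usual state and input costs.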


D. Falanga, P. Foehn, P. Lu, D. Scaramuzza

PAMPC: Perception-Aware Model Predictive Control for Quadrotors

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, 2018.

PDF YouTube Code

Onboard State Dependent LQR for Agile Quadrotors

P. Foehn, D. Scaramuzza

Onboard State Dependent LQR for Agile Quadrotors

IEEE International Conference on Robotics and Automation (ICRA), 2018.

PDF Video ICRA18 Video Pitch PPT


P. Foehn, D. Falanga, N. Kuppuswamy, R. Tedrake, D. Scaramuzza

Fast Trajectory Optimization for Agile Quadrotor Maneuvers with a Cable-Suspended Payload

Robotics: Science and Systems (RSS), Boston, 2017.


Agile Drone Flight through Narrow Gaps

with Onboard Sensing and Computing



D. Falanga, E. Mueggler, M. Faessler, D. Scaramuzza

Aggressive Quadrotor Flight through Narrow Gaps with Onboard Sensing and Computing using Active Vision

IEEE International Conference on Robotics and Automation (ICRA), 2017.

PDF YouTube

In this work, we address one of the main challenges towards autonomous drone flight in complex environments: flight through narrow gaps. One day, micro drones will be used to search for and rescue people in the aftermath of an earthquake. In these situations, collapsed buildings cannot be accessed through conventional windows, so small gaps may be the only way to get inside. What makes this problem challenging is that a gap can be very small, such that precise trajectory-following is required, and can have an arbitrary orientation, such that the quadrotor cannot fly through it in near-hover conditions. This makes it necessary to execute an agile trajectory (i.e., with high velocities and angular accelerations) in order to align the vehicle with the gap's orientation.

Previous works on aggressive flight through narrow gaps have focused solely on the control and planning problem and therefore used motion-capture systems for state estimation and external computing. Conversely, we focus on using only onboard sensors and computing. More specifically, we address the case where state estimation is done via gap detection through a single, forward-facing camera and show that this raises an interesting problem of coupled perception and planning: for the robot to localize with respect to the gap, the selected trajectory must guarantee that the quadrotor always faces the gap (perception constraint), and it must be replanned multiple times during execution to cope with the varying uncertainty of the state estimate. Furthermore, during the traverse, the quadrotor should maximize its distance from the edges of the gap (geometric constraint) to avoid collisions and, at the same time, it should be able to do so without relying on any visual feedback (when the robot is very close to the gap, the gap exits the camera's field of view). Finally, the trajectory should be feasible with respect to the dynamic constraints of the vehicle. Our proposed trajectory-generation approach is independent of the gap-detection algorithm being used; thus, to simplify the perception task, we used a gap with a simple black-and-white rectangular pattern.

We successfully evaluated our approach with gap orientations of up to 45 degrees vertically and up to 30 degrees horizontally. Our vehicle weighs 830 grams and has a thrust-to-weight ratio of 2.5. Our trajectory-generation formulation handles gap orientations of up to 90 degrees, although the quadrotor used in these experiments is too heavy and its motors saturate for orientations beyond 45 degrees. The vehicle reaches speeds of up to 3 meters per second and angular velocities of up to 400 degrees per second, with accelerations of up to 1.5 g. We can pass through gaps 1.5 times the size of the quadrotor, with only 10 centimeters of tolerance. Our method does not require any prior knowledge of the position and orientation of the gap, and no external infrastructure, such as a motion-capture system, is needed. This is the first time that such an aggressive maneuver through narrow gaps has been performed by fusing gap detection from a single onboard camera with an IMU.

We challenged two Swiss drone-racing pilots to demonstrate FPV flight through narrow gaps. It turned out not to be that easy... but after a few attempts, they managed quite well!

Automatic Re-Initialization and Failure Recovery


With drones becoming more and more popular, safety is a big concern. A critical situation occurs when a drone temporarily loses its GPS position information, which might lead it to crash. This can happen, for instance, when flying close to buildings where GPS signal is lost. In such situations, it is desirable that the drone can rely on fall-back systems and regain stable flight as soon as possible.

We developed a new technology to automatically recover and stabilize a quadrotor from any initial condition. On the one hand, this technology allows a quadrotor to be launched by simply tossing it in the air, like a baseball. On the other hand, it allows a quadrotor to recover back into stable flight after a system failure. Since this technology does not rely on any external infrastructure, such as GPS, it enables the safe use of drones in both indoor and outdoor environments. Thus, our new technology can become relevant for commercial uses of drones, such as parcel delivery.

Our quadrotor is equipped with a single camera, an inertial measurement unit, and a distance sensor (TeraRanger One). The stabilization system of the quadrotor emulates the human visual system and sense of balance. As soon as a toss or a failure is detected, our computer-vision software analyses the images, looking for distinctive landmarks in the environment, which it uses to restore balance.

All the image processing and control runs on a smartphone processor onboard the drone. The onboard sensing and computation renders the drone safe and able to fly unaided. This allows the drone to fulfil its mission without any communication or interaction with the operator.

The recovery procedure consists of multiple stages: the quadrotor first stabilizes its attitude and altitude, then re-initializes its visual state-estimation pipeline, and finally stabilizes its position fully autonomously. To experimentally demonstrate the performance of our system, in the video we aggressively throw the quadrotor in the air by hand and have it recover and stabilize all by itself. We chose this example because it simulates conditions similar to failure recovery during aggressive flight. Our system recovered successfully in several hundred throws in both indoor and outdoor environments.
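The staged logic above can be sketched as a small state machine. This is an illustrative outline only: the stage names and transition conditions are paraphrased from the description, not taken from the actual implementation.

```python
from enum import Enum, auto

class RecoveryStage(Enum):
    ATTITUDE_ALTITUDE = auto()  # level the vehicle and arrest vertical motion
    VIO_INIT = auto()           # re-initialize visual state estimation
    POSITION_HOLD = auto()      # fully autonomous position stabilization

def next_stage(stage, attitude_altitude_ok, vio_ok):
    """Advance through the recovery stages in order; otherwise hold."""
    if stage is RecoveryStage.ATTITUDE_ALTITUDE and attitude_altitude_ok:
        return RecoveryStage.VIO_INIT
    if stage is RecoveryStage.VIO_INIT and vio_ok:
        return RecoveryStage.POSITION_HOLD
    return stage
```

Each control cycle would call `next_stage` with the current health checks; the ordering matters because the visual pipeline cannot be re-initialized until the attitude is level and the camera sees a stable scene.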


M. Faessler, F. Fontana, C. Forster, D. Scaramuzza

Automatic Re-Initialization and Failure Recovery for Aggressive Flight with a Monocular Vision-Based Quadrotor

IEEE International Conference on Robotics and Automation (ICRA), Seattle, 2015.

PDF YouTube

M. Faessler, F. Fontana, C. Forster, E. Mueggler, M. Pizzoli, D. Scaramuzza

Autonomous, Vision-based Flight and Live Dense 3D Mapping with a Quadrotor Micro Aerial Vehicle

Journal of Field Robotics, 2016.

PDF YouTube1 YouTube2 YouTube3 YouTube4 Software