Vision-based Navigation of Micro Aerial Vehicles (MAVs)

Our research goal is to develop teams of MAVs that can fly autonomously in city-like environments and assist humans in tasks such as rescue and monitoring. We focus on enabling autonomous navigation using vision and an IMU as the sole sensor modalities (i.e., no GPS, no laser).


Aggressive Vision-Based Flight with Quadrotors

We work on control strategies and trajectory-generation algorithms that enable aggressive maneuvers with vision-based quadrotors. While pushing the agility limits of our quadrotors, we also enable them to recover from difficult conditions, such as after a failure.
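
As a concrete illustration of one common trajectory-generation building block (a generic sketch, not necessarily the method used in our planners), the snippet below fits a quintic polynomial to a rest-to-rest maneuver along a single axis; the 5 m displacement and 2 s duration are made-up values.

```python
# Minimal sketch: a rest-to-rest quintic polynomial for a single axis,
# a common building block in quadrotor trajectory generation.
import numpy as np

def quintic_rest_to_rest(p0, pT, T):
    """Coefficients of p(t) = sum_i c_i t^i with zero start/end velocity and acceleration."""
    A = np.array([
        [1, 0,    0,      0,       0,        0],        # p(0) = p0
        [0, 1,    0,      0,       0,        0],        # v(0) = 0
        [0, 0,    2,      0,       0,        0],        # a(0) = 0
        [1, T,    T**2,   T**3,    T**4,     T**5],     # p(T) = pT
        [0, 1,    2*T,    3*T**2,  4*T**3,   5*T**4],   # v(T) = 0
        [0, 0,    2,      6*T,     12*T**2,  20*T**3],  # a(T) = 0
    ])
    b = np.array([p0, 0.0, 0.0, pT, 0.0, 0.0])
    return np.linalg.solve(A, b)

coeffs = quintic_rest_to_rest(0.0, 5.0, 2.0)   # fly 5 m along one axis in 2 s
print(np.polyval(coeffs[::-1], 1.0))           # position at t = 1 s (midpoint)
```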


Event-based Vision

The Dynamic Vision Sensor (DVS) is a bio-inspired vision sensor that works much like the human retina. Instead of wastefully sending entire images at a fixed frame rate, it transmits only the local, pixel-level brightness changes caused by movement in the scene, at the time they occur. The result is a stream of "address-events" with microsecond time resolution, equivalent to or better than conventional high-speed vision sensors running at thousands of frames per second.
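
The address-event representation can be made concrete with a minimal sketch, assuming a simple event record with pixel address, microsecond timestamp, and polarity; the field names, sensor resolution, and accumulation window below are illustrative and do not reflect any particular DVS driver API.

```python
# Hedged sketch of the address-event representation: each event carries a pixel
# address (x, y), a microsecond timestamp, and a polarity (+1 brightness
# increase, -1 decrease). Events are assumed ordered by timestamp.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 or -1

def accumulate(events, width=240, height=180, window_us=10_000):
    """Sum event polarities per pixel over the most recent time window."""
    frame = np.zeros((height, width), dtype=np.int32)
    if not events:
        return frame
    t_end = events[-1].t_us
    for e in events:
        if t_end - e.t_us <= window_us:
            frame[e.y, e.x] += e.polarity
    return frame

events = [Event(120, 90, 1_000, +1), Event(121, 90, 1_250, -1)]
print(accumulate(events).sum())
```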


Deep Learning

Deep learning is a branch of machine learning that uses multi-layer neural networks to model high-level abstractions in data. In our research, we apply deep learning to mobile-robot navigation problems such as depth estimation, end-to-end navigation, and classification.
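
As a hedged illustration of the classification task mentioned above, here is a minimal convolutional network in PyTorch; the layer sizes, class count, and choice of framework are assumptions for the example, not the architectures used in our work.

```python
# Minimal sketch of a small convolutional network for image classification.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)   # (N, 32) pooled feature vector
        return self.head(z)               # class logits

logits = TinyClassifier()(torch.randn(1, 3, 64, 64))
print(logits.shape)   # torch.Size([1, 3])
```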


Monocular Dense Reconstruction

We work on estimating dense and accurate 3D maps from a single moving camera. The monocular setting is an appealing sensing modality for Micro Aerial Vehicles (MAVs), where strict limitations apply to payload and power consumption. The platform's high agility then turns it into a formidable depth sensor, able to cover a wide depth range and to refine the confidence of its depth estimates as more views of the scene are integrated.
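
The claim that more views yield higher confidence can be illustrated with a minimal per-pixel depth filter: a generic recursive Gaussian fusion, assuming independent measurements with known noise, rather than the exact filter used in our dense-mapping pipeline.

```python
# Hedged sketch of why more views help: fusing successive noisy depth
# measurements for one pixel shrinks the variance of the estimate.
import numpy as np

def fuse(mu, var, z, var_z):
    """Fuse a new measurement z (variance var_z) into the estimate (mu, var)."""
    new_var = 1.0 / (1.0 / var + 1.0 / var_z)
    new_mu = new_var * (mu / var + z / var_z)
    return new_mu, new_var

rng = np.random.default_rng(0)
true_depth, meas_std = 4.0, 0.5
mu, var = 3.0, 1.0                      # rough prior for one pixel
for _ in range(20):                     # 20 views of the same scene point
    z = true_depth + rng.normal(0.0, meas_std)
    mu, var = fuse(mu, var, z, meas_std**2)
print(f"depth ~ {mu:.2f} m, std ~ {np.sqrt(var):.3f} m")
```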


Active Vision

One goal of our research is to enable our robots to perceive their environment actively, changing the viewpoint of their cameras so that they obtain more or better information about it. We consider this problem of Active Vision to be of great importance for building systems that operate robustly in the real world. Within this research area, we work on perception-aware path planning, which helps robots navigate by choosing where to look, as well as on efficient reconstruction of objects and scenes by selecting an optimal next-best-view for the camera.
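
A minimal sketch of the next-best-view idea, under strong simplifying assumptions (a known voxel grid, a range-only visibility model with no occlusions, and hand-picked candidate views):

```python
# Hedged sketch of a next-best-view criterion: score each candidate viewpoint by
# how many still-unknown voxels it would observe and pick the best one.
import numpy as np

UNKNOWN = -1  # voxel not yet observed

def score_view(grid, centers, view_pos, max_range=2.0):
    """Number of unknown voxels whose centers lie within sensing range of the view."""
    dists = np.linalg.norm(centers - view_pos, axis=1)
    visible = dists <= max_range
    return int(np.sum((grid.ravel() == UNKNOWN) & visible))

def next_best_view(grid, centers, candidates):
    scores = [score_view(grid, centers, c) for c in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Toy 10x10x10 map: half unknown, half free (0).
grid = np.zeros((10, 10, 10), dtype=int)
grid[:, :, 5:] = UNKNOWN
xs = np.arange(10) + 0.5
centers = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)

candidates = [np.array([5.0, 5.0, 2.0]), np.array([5.0, 5.0, 8.0])]
best, gain = next_best_view(grid, centers, candidates)
print(best, gain)   # the view near the unknown half scores higher
```

In practice, the visibility model accounts for occlusions and the score is typically an expected information gain rather than a simple voxel count.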


Visual and Inertial Odometry

Given a car equipped with an omnidirectional camera, the motion of the vehicle can be recovered purely from salient image features tracked over time. We propose the 1-Point RANSAC algorithm, which exploits the car's nonholonomic motion constraints so that a single feature correspondence suffices to generate a motion hypothesis, and which runs at 800 Hz on a normal laptop. To our knowledge, this is the most efficient visual odometry algorithm.
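
The key to this efficiency is that the motion model has a single degree of freedom, so every feature correspondence yields one hypothesis and hypothesis generation becomes trivially cheap. The sketch below shows that structure only; theta_from_match is a hypothetical placeholder (here the bearing change of a feature), not the formula from the paper.

```python
# Hedged sketch of the 1-point RANSAC structure: one hypothesis per match for a
# one-parameter motion model, then consensus counting to reject outliers.
import numpy as np

def theta_from_match(p_prev, p_curr):
    """Placeholder 1-DoF motion hypothesis from a single correspondence.
    Here: bearing change of the feature, used only to make the sketch runnable."""
    return np.arctan2(p_curr[1], p_curr[0]) - np.arctan2(p_prev[1], p_prev[0])

def one_point_ransac(matches, inlier_tol=0.01):
    """Pick the hypothesis (one per match) with the largest consensus set."""
    thetas = np.array([theta_from_match(p, q) for p, q in matches])
    best_theta, best_inliers = None, np.array([], dtype=bool)
    for theta in thetas:                      # one hypothesis per correspondence
        inliers = np.abs(thetas - theta) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_theta, best_inliers = theta, inliers
    return best_theta, best_inliers

rng = np.random.default_rng(1)
pts = rng.uniform(0.5, 2.0, size=(50, 2))
rot = 0.05                                     # true rotation between frames
c, s = np.cos(rot), np.sin(rot)
moved = pts @ np.array([[c, -s], [s, c]]).T
moved[:5] += rng.normal(0, 0.5, size=(5, 2))   # a few gross outliers
theta, inliers = one_point_ransac(list(zip(pts, moved)))
print(f"theta ~ {theta:.3f} rad, inliers: {inliers.sum()}/50")
```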


Multi-Robot Systems

Teams of robots can succeed in situations where a single robot may fail. We investigate multi-robot systems composed of homogeneous or heterogeneous robots, using both centralized and distributed communication. These teams are applied to problems in search and rescue robotics, as well as mapping and navigation.


Semantic Navigation for Robot-Human Teams

Most work on localization, mapping, and navigation for ground and aerial vehicles relies on point landmarks or occupancy grids built from vision or laser range finders. However, for these robots to one day cooperate with humans in complex scenarios, we need to build semantic maps of the environment.


Sensor Calibration

We work on intrinsic calibration of different sensors, such as omnidirectional and event-based cameras. We also work on extrinsic calibration between an omnidirectional camera and a 3D laser range finder.
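
For the standard pinhole case, intrinsic calibration can be sketched with OpenCV's checkerboard pipeline; omnidirectional and event cameras require dedicated models and toolboxes, so treat this only as the generic starting point. The board size, square size, and image folder below are illustrative assumptions.

```python
# Hedged sketch of pinhole intrinsic calibration with a checkerboard and OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)            # inner corners of the checkerboard (cols, rows)
square = 0.025              # square size in meters

# 3D corner coordinates on the board plane (z = 0), reused for every image.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):        # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]                      # (width, height)

if img_points:
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    print("reprojection RMS:", rms)
    print("camera matrix K:\n", K)
```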
