Active Vision

Active vision is concerned with obtaining more information from the environment by actively choosing where and how to observe it with a camera: instead of passively processing whatever images it receives, the robot controls its viewpoint to improve perception.

Aggressive Quadrotor Flight through Narrow Gaps with Onboard Sensing and Computing using Active Vision

In this paper, we address one of the main challenges of autonomous quadrotor flight in complex environments: flight through narrow gaps. We present a method that allows a quadrotor to autonomously and safely pass through a narrow, inclined gap using only its onboard visual-inertial sensors and computer. Previous works have addressed quadrotor flight through gaps using external motion-capture systems for state estimation. Instead, we estimate the state by fusing gap detection from a single onboard camera with an IMU. Our method generates a trajectory that considers geometric, dynamic, and perception constraints: during the approach maneuver, the quadrotor always faces the gap to allow state estimation, while respecting the vehicle dynamics; during the traverse through the gap, the distance of the quadrotor to the edges of the gap is maximized. Furthermore, we replan the trajectory during its execution to cope with the varying uncertainty of the state estimate. We successfully evaluate and demonstrate the proposed approach in many real experiments, achieving a success rate of 80% and gap orientations of up to 45 degrees. To the best of our knowledge, this is the first work that addresses and successfully reports aggressive flight through narrow gaps using only onboard sensing and computing.
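
To make the perception constraint concrete, the sketch below (a simplification, not the authors' planner) shows how a gap-facing yaw reference could be computed along a candidate approach trajectory; the names waypoints and gap_center are illustrative assumptions.

import numpy as np

def perception_aware_yaw(waypoints, gap_center):
    # Heading at each waypoint that points a forward-looking camera at the
    # gap center, unwrapped so consecutive references do not jump by 2*pi.
    yaws = []
    for p in waypoints:
        d = gap_center - p  # vector from vehicle position to gap center
        yaws.append(np.arctan2(d[1], d[0]))
    return np.unwrap(yaws)

# Illustrative use: a straight-line approach toward an elevated gap.
waypoints = np.linspace([0.0, -3.0, 1.0], [0.0, -0.5, 1.2], num=6)
gap_center = np.array([0.0, 0.0, 1.5])
print(perception_aware_yaw(waypoints, gap_center))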

References

Arxiv16_Falanga

D. Falanga, E. Mueggler, M. Faessler, D. Scaramuzza

Aggressive Quadrotor Flight through Narrow Gaps with Onboard Sensing and Computing

IEEE International Conference on Robotics and Automation (ICRA), accepted, 2017.

PDF arXiv YouTube


Active Autonomous Aerial Exploration for Ground Robot Path Planning

We address the problem of planning a path for a ground robot through unknown terrain, using observations from a flying robot. In search and rescue missions, which are our target scenarios, the time from arrival at the disaster site to the delivery of aid is critically important. Previous works required exhaustive exploration before path planning, which is time-consuming but eventually leads to an optimal path for the ground robot. Instead, we propose active exploration of the environment, where the flying robot chooses regions to map in a way that optimizes the overall response time of the system, which is the combined time for the air and ground robots to execute their missions. In our approach, we estimate terrain classes throughout our terrain map, and we also add elevation information in areas where the active exploration algorithm has chosen to perform 3D reconstruction. This terrain information is used to estimate feasible and efficient paths for the ground robot. By exploring the environment actively, we achieve superior response times compared to both exhaustive and greedy exploration strategies. We demonstrate the performance and capabilities of the proposed system in simulated and real-world outdoor experiments. To the best of our knowledge, this is the first work to address ground robot path planning using active aerial exploration.
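
As a toy illustration of the exploration criterion (not the paper's algorithm), the sketch below greedily picks the next region for the aerial robot to reconstruct so that the combined response time, flight time plus resulting expected ground-robot travel time, is minimized; flight_time and expected_ground_time are assumed stand-ins for the system's cost models.

def choose_next_region(candidate_regions, flight_time, expected_ground_time):
    # Greedy choice: the region whose reconstruction yields the lowest
    # combined air + ground mission time.
    best_region, best_total = None, float("inf")
    for region in candidate_regions:
        total = flight_time(region) + expected_ground_time(region)
        if total < best_total:
            best_region, best_total = region, total
    return best_region

# Toy cost models: (flight time, expected ground travel time if mapped).
costs = {"A": (30.0, 120.0), "B": (45.0, 80.0), "C": (60.0, 95.0)}
best = choose_next_region(costs, lambda r: costs[r][0], lambda r: costs[r][1])
print(best)  # -> "B": the lowest combined response time (125 s)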

References

RAL16_Delmerico

J. Delmerico, E. Mueggler, J. Nitsch, D. Scaramuzza

Active Autonomous Aerial Exploration for Ground Robot Path Planning

IEEE Robotics and Automation Letters (RA-L), 2017.

PDF YouTube


Perception-aware Path Planning

Most existing work on path planning focuses on reaching a goal as fast as possible, or with minimal effort, disregarding the appearance of the environment and considering only its geometric structure. Vision-controlled robots, however, need to leverage the photometric information in the scene to localize themselves and estimate their egomotion. In this work, we argue that motion planning for vision-controlled robots should be perception-aware: the robot should also favor texture-rich areas to minimize its localization uncertainty during a goal-reaching task. Thus, we describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric information, to compute the uncertainty of vision-based localization during path planning.
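
A minimal sketch of this trade-off, under assumed inputs rather than the paper's formulation: a candidate path is scored by its geometric length plus a texture-dependent proxy for the accumulated localization uncertainty, so a longer but texture-rich path can beat the shortest one.

import numpy as np

def path_cost(path, texture_map, weight=1.0):
    # path: list of (row, col) grid cells; texture_map: 2D array in [0, 1],
    # where 1 means richly textured. Lower cost is better.
    length = sum(np.linalg.norm(np.subtract(b, a)) for a, b in zip(path, path[1:]))
    # Uncertainty proxy: low texture -> poor localization -> high penalty.
    uncertainty = sum(1.0 - texture_map[r, c] for r, c in path)
    return length + weight * uncertainty

texture_map = np.array([[0.9, 0.0, 0.9],
                        [0.9, 0.0, 0.9],
                        [0.9, 0.9, 0.9]])
direct = [(0, 0), (0, 1), (0, 2)]                                  # short, texture-poor
detour = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]  # long, texture-rich
print(path_cost(direct, texture_map, weight=10.0))  # 14.0
print(path_cost(detour, texture_map, weight=10.0))  # 13.0 -> detour preferred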

References

TRO16_Costante

G. Costante, C. Forster, J. Delmerico, P. Valigi, D. Scaramuzza

Perception-aware Path Planning

IEEE Transactions on Robotics (T-RO), conditionally accepted, 2016.

PDF YouTube


Information Gain Based Active Reconstruction

The Information Gain Based Active Reconstruction Framework is a modular, robot-agnostic software package for performing next-best-view planning for volumetric object reconstruction using a range sensor. Our implementation can be easily adapted to any mobile robot equipped with any camera-based range sensor (e.g., a stereo camera or a structured-light sensor) to iteratively observe an object and generate a volumetric map and a point cloud model. The algorithm allows the user to define the information gain metric for choosing the next best view, and many formulations of these metrics are evaluated and compared in our ICRA paper. The framework is released open source as a ROS-compatible package for autonomous 3D reconstruction tasks.
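
The core loop can be sketched as follows; this is a simplified stand-in for the released framework, using one of the simpler gain formulations (summed occupancy entropy of the voxels a view would observe), with visible_voxels an assumed placeholder for ray casting into the volumetric map.

import math

def voxel_entropy(p):
    # Shannon entropy of a voxel's occupancy probability; 0 for decided voxels.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def next_best_view(candidate_views, occupancy, visible_voxels):
    # Pick the view whose visible voxels carry the most remaining uncertainty.
    gains = {v: sum(voxel_entropy(occupancy[i]) for i in visible_voxels(v))
             for v in candidate_views}
    return max(gains, key=gains.get)

# Toy map: voxel 2 is still unknown (p = 0.5), the rest are nearly decided.
occupancy = {0: 0.95, 1: 0.05, 2: 0.5, 3: 0.9}
views = {"front": [0, 1], "side": [2, 3]}
print(next_best_view(views, occupancy, lambda v: views[v]))  # -> "side"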

Download the code from GitHub.

References

ICRA2016_Isler

S. Isler, R. Sabzevari, J. Delmerico, D. Scaramuzza

An Information Gain Formulation for Active Volumetric 3D Reconstruction

IEEE International Conference on Robotics and Automation (ICRA), Stockholm, 2016.

PDF YouTube Software

Active, Dense Reconstruction

REMODE, our probabilistic monocular dense-reconstruction pipeline, estimates the uncertainty of its depth estimates, which makes it extremely attractive for motion planning and active-vision problems. In this work, we investigate the following problem: given the image of a scene, what is the trajectory that a robot-mounted camera should follow to allow optimal dense 3D reconstruction? The solution we propose is based on maximizing the information gain over a set of candidate trajectories. In order to estimate the information that we expect from a camera pose, we introduce a novel formulation of the measurement uncertainty that accounts for the scene appearance (i.e., the texture in the reference view), the scene depth, and the vehicle pose. We successfully demonstrate our approach for real-time, monocular reconstruction from a small quadrotor and validate the effectiveness of our solution in both synthetic and real experiments. This is the first work on active, monocular, dense reconstruction; it chooses motion trajectories that minimize the perceptual ambiguities arising from the texture in the scene.
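
A heavily simplified sketch of the trajectory-selection step (an assumed toy model, not the paper's measurement-uncertainty formulation): each candidate trajectory is scored by the depth uncertainty it is expected to remove, with well-textured observations contributing more information than textureless ones.

def expected_information_gain(trajectory, depth_variance, texture, views_of):
    # Assumed model: an observation removes a fraction of a pixel's current
    # depth variance proportional to the local texture score in [0, 1].
    return sum(depth_variance[px] * texture[px] for px in views_of(trajectory))

# Toy scene: the poster is uncertain and textured; the wall is textureless.
depth_variance = {"wall": 0.5, "poster": 0.5, "floor": 0.1}
texture        = {"wall": 0.05, "poster": 0.9, "floor": 0.6}
views          = {"orbit_left": ["wall", "floor"], "orbit_right": ["poster", "floor"]}
best = max(views, key=lambda t: expected_information_gain(
    t, depth_variance, texture, lambda tr: views[tr]))
print(best)  # -> "orbit_right": it observes the textured, uncertain poster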

References

RSS14_Forster

C. Forster, M. Pizzoli, D. Scaramuzza

Appearance-based Active, Monocular, Dense Reconstruction for Micro Aerial Vehicles

Robotics: Science and Systems (RSS), Berkeley, 2014.

PDF