Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight

Conference on Robot Learning (CoRL) 2024

Abstract

Learning visuomotor policies for agile quadrotor flight presents significant difficulties, primarily from inefficient policy exploration caused by high-dimensional visual inputs and the need for precise, low-latency control. To address these challenges, we propose a novel approach that combines the performance of Reinforcement Learning (RL) with the sample efficiency of Imitation Learning (IL) for the task of vision-based autonomous drone racing.

While RL provides a framework for learning high-performance controllers through trial and error, it faces challenges with sample efficiency and computational demands due to the high dimensionality of visual inputs. Conversely, IL efficiently learns from visual expert demonstrations, but it remains limited by the expert’s performance and state distribution.

To overcome these limitations, our policy learning framework integrates the strengths of both approaches. It consists of three phases: training a teacher policy using RL with privileged state information, distilling it into a student policy via IL, and adaptive fine-tuning via RL. Tests in both simulated and real-world scenarios show that our approach not only learns in scenarios where RL from scratch fails, but also outperforms existing IL methods in both robustness and performance, successfully navigating a quadrotor through a race course using only visual information.

Method


We perform visuomotor policy learning in three stages. In Stage I, we train a state-based teacher policy using RL. In Stage II, we use IL to distill it into a student policy that operates on visual inputs. In Stage III, we bootstrap the actor with the student policy and fine-tune it through vision-based RL.
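The three-stage pipeline can be sketched as follows. This is an illustrative skeleton, not the authors' implementation: all function and field names (`train_teacher_rl`, `distill_student_il`, `finetune_student_rl`) are hypothetical placeholders that only mirror the structure of the stages described above.

```python
# Hypothetical sketch of the three-stage training pipeline.
# Function names and the dict-based "policy" representation are
# illustrative assumptions, not the paper's actual API.

def train_teacher_rl(privileged_env):
    """Stage I: train a teacher with RL on privileged state
    (e.g. exact drone state and gate poses, unavailable at deployment)."""
    return {"obs": "privileged_state", "trained_by": "RL"}

def distill_student_il(teacher, visual_env):
    """Stage II: imitation learning -- the student observes only
    visual inputs and regresses onto the teacher's actions."""
    assert teacher["trained_by"] == "RL"
    return {"obs": "vision", "trained_by": "IL"}

def finetune_student_rl(student, visual_env):
    """Stage III: initialize the RL actor from the student's weights
    and continue training with vision-based RL."""
    student["trained_by"] = "IL+RL"
    return student

teacher = train_teacher_rl("sim_privileged")
student = distill_student_il(teacher, "sim_vision")
policy = finetune_student_rl(student, "sim_vision")
print(policy)  # prints {'obs': 'vision', 'trained_by': 'IL+RL'}
```

The key design choice this mirrors is that exploration happens where it is cheap (privileged-state RL), while the expensive visual policy is learned mostly by supervised distillation and only refined by RL at the end.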

How should we combine IL and RL?


In our experiments, we demonstrate that distributing the task between imitation learning and reinforcement learning achieves the best policy performance after only approximately 60% of the pretraining budget.
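A minimal sketch of the budget split this implies, assuming a fixed total step budget. The helper name and the exact step counts are illustrative; the 60% figure paraphrases the finding above.

```python
# Illustrative split of a fixed training budget between IL pretraining
# (distillation from the teacher) and vision-based RL fine-tuning.
# split_budget is a hypothetical helper, not the authors' code.

def split_budget(total_steps, pretrain_frac=0.6):
    """Return (pretraining steps, fine-tuning steps) for a given budget."""
    pretrain = int(total_steps * pretrain_frac)
    return pretrain, total_steps - pretrain

pre, fine = split_budget(1_000_000)
print(pre, fine)  # prints 600000 400000
```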

Real World Results

By leveraging the complementary advantages of Imitation Learning and Reinforcement Learning, we propose a framework that trains a policy capable of navigating through a sequence of gates using only gate-corner observations or RGB images.

Corner-Based Split-S Track

Image-Based Split-S Track

Race Tracks Visualization


Visualization of the drone racing tracks used in the experiments, each with a different level of complexity. All tracks are shown at a consistent scale, with widths ranging from 8 to 16 meters.

BibTeX

@inproceedings{xing2024bootstrapping,
  title = {Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight},
  author = {Xing, Jiaxu and Romero, Angel and Bauersfeld, Leonard and Scaramuzza, Davide},
  booktitle = {8th Conference on Robot Learning (CoRL)},
  year = {2024},
}