Student Projects


How to apply

To apply, please send your CV and your BSc and MSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Multi-agent Drone Racing via Self-play and Reinforcement Learning - Available

Description: Drone racing requires human pilots not only to complete a given race track in minimum time, but also to compete with other pilots through strategic blocking and to overtake opponents during extreme maneuvers. Single-player RL allows autonomous agents to achieve near-time-optimal performance in time-trial racing. While highly competitive in that setting, such a training strategy cannot generalize to the multi-agent scenario. An important step towards artificial general intelligence (AGI) is versatility -- the capability of discovering novel skills via self-play and self-supervised autocurricula. In this project, we tackle multi-agent drone racing via self-play and reinforcement learning.

Goal: Create a multi-agent drone racing system in which agents discover novel racing skills and compete against each other. Applicants should have strong experience in C++ and Python programming. A background in reinforcement learning and robotics is required.
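As a rough illustration of the self-play idea (all class and function names here are hypothetical, not part of our codebase), a learner can be trained against snapshots of its past selves, which form an automatically growing opponent pool -- the autocurriculum mentioned above:

```python
import random


class Policy:
    """Placeholder racing policy; a real agent would be a neural network."""

    def __init__(self, skill=0.0):
        self.skill = skill

    def improve(self):
        # Stand-in for a gradient update against the current opponent.
        self.skill += random.uniform(0.0, 1.0)


def race(a, b):
    """Stand-in match: higher skill (plus noise) wins."""
    return a.skill + random.gauss(0, 0.1) > b.skill + random.gauss(0, 0.1)


def self_play(n_generations=10):
    learner, opponent_pool = Policy(), [Policy()]
    for _ in range(n_generations):
        opponent = random.choice(opponent_pool)  # sample a past self
        learner.improve()                        # train vs. that opponent
        if race(learner, opponent):
            # Snapshot the improved learner into the pool; beating
            # progressively stronger past selves forms the curriculum.
            opponent_pool.append(Policy(learner.skill))
    return learner, opponent_pool
```

In the actual project, the placeholder `improve` and `race` steps would be replaced by RL updates and simulated multi-drone races.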

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch)

Thesis Type: Master Thesis

See project on SiROP

Deep reinforcement learning for collaborative aerial transportation - Available

Description: Collaborative object transportation using micro aerial vehicles (MAVs) is a promising drone application. It is challenging from a control perspective, since multiple MAVs are mechanically coupled, imposing hard kinematic constraints. Traditional model-based methods often require linearization of the nonlinear problem, which limits performance in terms of transport speed and payload. This project explores the possibility of using deep reinforcement learning to obtain a centralized control policy for collaborative aerial transportation that is more efficient than state-of-the-art methods. The policy will be trained in a simulation environment and then transferred to real-life experiments. Applicants should have strong experience in C++ and Python. Applicants with a background in reinforcement learning and flight control are favored.

Goal: The goal of this project is to use deep reinforcement learning on collaborative aerial transportation. The method needs to be validated in real flight tests.

Contact Details: Sihao Sun (sun at ifi.uzh.ch), Yunlong Song (song at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Reinforcement Learning for Drone Racing - Available

Description: In drone racing, human pilots navigate quadrotor drones as quickly as possible through a sequence of gates arranged in a 3D track. Inspired by the impressive flight performance of human pilots, the goal of this project is to train a deep sensorimotor policy that can complete a given track as fast as possible. To this end, the policy directly predicts low-level control commands from noisy odometry data. Provided with an in-house drone simulator, the student investigates state-of-the-art reinforcement learning algorithms and reward designs for the task of drone racing. The ultimate goal is to outperform human pilots on a simulated track. Applicants should have strong experience in C++ and Python programming. A background in reinforcement learning and robotics is required.

Goal: Find the fastest possible trajectory through a drone racing track using reinforcement learning. Investigate different reward formulations for the task of drone racing. Compare the resulting trajectory with other trajectory planning methods, e.g., model-based path planning algorithms or optimization-based algorithms.
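One reward formulation that is common in the racing literature is progress-based shaping: reward the reduction in distance to the next gate and penalize aggressive body rates. A minimal sketch (the gate radius, weights, and function names are illustrative assumptions, not the project's prescribed design):

```python
import numpy as np


def progress_reward(pos, prev_pos, gates, gate_idx, omega,
                    k_progress=1.0, k_rate=0.01, gate_bonus=5.0):
    """Hypothetical shaped reward for racing: pay for progress toward
    the next gate, penalize large body rates, bonus for gate passage."""
    target = gates[gate_idx]
    # Progress = how much closer we got to the next gate this step.
    progress = np.linalg.norm(prev_pos - target) - np.linalg.norm(pos - target)
    r = k_progress * progress - k_rate * np.linalg.norm(omega)
    if np.linalg.norm(pos - target) < 0.5:  # assumed gate radius [m]
        r += gate_bonus
    return r
```

Comparing such shaped rewards against sparse gate-passage rewards is exactly the kind of study this project entails.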

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

Brain-Body-Drone Interface - Available

Description: Brain- and body-computer interfaces not only allow motion-impaired individuals to use machines - such as drones - but can also increase the bandwidth of information transmission between able-bodied users and machines, thereby improving performance in challenging tasks such as drone racing. This project aims at collecting a multimodal dataset (i.e., eye movements, electrical brain signals, manual control inputs, drone state) from drone racing pilots. These data will be used to train machine learning classifiers to predict future drone states, trajectories, and control commands. Successful classifiers will be evaluated in real time using a high-quality drone racing simulator and in real-world drone racing. The student will learn about experiment design and multimodal data collection (including eye-tracking and electroencephalography) in human subjects, perform classifier selection, and run real-time evaluations. Requirements: Excellent knowledge of Python, PyTorch, Matlab; interest in human subjects research; experience with C++, brain-computer interfaces, or eye-tracking is a plus but not strictly necessary.

Goal: The goal of this project is to collect a multimodal dataset to develop brain- and body-machine interfaces for drone racing.

Contact Details: Christian Pfeiffer (cpfeiffe@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Rotor-Failure Recovery MPC - Available

Description: A multitude of advanced control techniques for quadrotors have matured over recent years, with the most promising one being Model Predictive Control (MPC). While drones nowadays display high robustness and impressive flight capabilities, they are not resilient to all possible failures. In particular, when a motor failure occurs, most controllers struggle to keep the drone stable. Since the quadrotor is already an underactuated vehicle, the loss of one rotor also implies the loss of control over one degree of freedom, significantly changing the system dynamics. However, MPC provides some elegant methods to catch such failures. This thesis will investigate approaches to controlling a vehicle under rotor failure, design a control system within an existing flight software stack, and use our latest drone hardware. The requirements are basic knowledge of control systems, prior experience in optimal control, such as MPC, and familiarity with C++ programming.

Goal: The goal is to develop a solution using Model Predictive Control to catch a rotor failure, demonstrate it on an existing real quadrotor platform, and compare it to other existing approaches.
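As a minimal illustration of the finite-horizon optimal control machinery behind MPC (the actual project would use a nonlinear solver and the true reduced three-rotor dynamics), the unconstrained linear-quadratic case can be solved with a backward Riccati recursion; the double-integrator model below is only a stand-in:

```python
import numpy as np


def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion: returns feedback gains K_0..K_{N-1}
    for min sum x'Qx + u'Ru subject to x_{k+1} = A x_k + B u_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[0] is the gain for the first time step


# Toy double integrator as a stand-in for the (linearized) reduced
# dynamics after losing one rotor.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])   # penalize position error more than velocity
R = np.array([[0.1]])
K = finite_horizon_lqr(A, B, Q, R, N=50)

x = np.array([1.0, 0.0])   # initial position offset
for k in range(50):
    u = -K[k] @ x          # a receding-horizon MPC would re-solve each step
    x = A @ x + B @ u
```

A real rotor-failure MPC adds input constraints (rotor thrust limits) and the nonlinear attitude dynamics, which is why dedicated solvers are used instead of this closed-form recursion.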

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch), Sihao Sun (sun at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Events and Lidar For Autonomous Driving - Available

Description: Billions of dollars are spent each year to bring autonomous vehicles closer to reality. One of the remaining challenges is the design of reliable algorithms that work in a diverse set of environments and scenarios. At the core of this problem is the choice of sensor setup. Ideally, there is a certain redundancy in the setup while each sensor should also excel at a certain task. Sampling-based sensors (e.g. LIDAR, standard cameras, etc.) are today's essential building blocks of autonomous vehicles. However, they typically oversample far-away structure (e.g. a building 200 meters away) and undersample close structure (e.g. a fast bike crossing in front of the car). Thus, they enforce a trade-off between sampling frequency and computational budget. Unlike sampling-based sensors, event cameras capture changes in their field of view with precise timing and do not record redundant information. As a result, they are well suited for highly dynamic scenarios such as driving on roads. We are currently building a large-scale dataset with high-resolution event cameras and a 128-beam Lidar that targets object detection and tracking. This project builds on existing hardware that we built and tested over the last year and will extend it with state-of-the-art sensors.

Goal: In this project, we explore the utility of event cameras in an autonomous car scenario. To achieve this, a high-quality driving dataset with state-of-the-art Lidar and event cameras will be created. Depending on progress, the student will work on building novel 3D object detection pipelines on the dataset. We seek a highly motivated student with the following minimum qualifications:
- Experience with programming microcontrollers, or motivation to acquire it quickly
- Excellent coding skills in Python and C++
- At least one course in computer vision (multiple-view geometry)
- Strong work ethic
- Excellent communication and teamwork skills
Preferred qualifications:
- Background in robotics and experience with ROS
- Experience with machine learning
- Experience with event-based vision
Contact us for more details.

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch). Please attach your CV and transcripts (Bachelor and Master).

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Deep learning based motion estimation from events - Available

Description: Optical flow estimation is a mainstay of dynamic scene understanding in robotics and computer vision. It finds application in SLAM, dynamic obstacle detection, computational photography, and beyond. However, extracting optical flow from frames is hard due to the discrete nature of frame-based acquisition. Instead, events from an event camera indirectly provide information about optical flow in continuous time. Hence, the intuition is that event cameras are ideal sensors for optical flow estimation. In this project, you will dig deep into optical flow estimation from events. We will make use of recent innovations in neural network architectures and insights into event camera models to push the state of the art in the field. Contact us for more details.

Goal: The goal of this project is to develop a deep-learning-based method for dense optical flow estimation from events. A strong background in computer vision and machine learning is required.
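A common first step in learning-based flow estimation is to convert the asynchronous event stream into a spatio-temporal voxel grid using bilinear interpolation in time, so that standard network architectures can consume it. A minimal NumPy sketch (the array layout and normalization here are assumptions, not a prescribed interface):

```python
import numpy as np


def events_to_voxel_grid(events, num_bins, H, W):
    """Discretize an event stream with rows (x, y, t, polarity) into a
    (num_bins, H, W) voxel grid, a standard input representation for
    learning-based optical flow from events."""
    voxel = np.zeros((num_bins, H, W), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps to the continuous bin range [0, num_bins - 1].
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    t0 = np.floor(t).astype(int)
    # Bilinear interpolation in time: split each event's polarity
    # between its two neighboring temporal bins (scatter-add).
    for b, w in ((t0, 1.0 - (t - t0)),
                 (np.clip(t0 + 1, 0, num_bins - 1), t - t0)):
        np.add.at(voxel, (b, y, x), p * w)
    return voxel
```

Variants of this representation differ in how polarity, counts, and timestamps are encoded per bin; exploring such choices is part of the project.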

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Asynchronous Processing for Event-based Deep Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high-dynamic-range robotic applications. Since their output is sparse, traditional algorithms, which are designed for dense inputs such as frames, are not well suited. The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data. Applicants should have experience in C++ and Python deep learning frameworks (TensorFlow or PyTorch), and a strong background in computer vision.

Goal: The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data.
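A classic example of asynchronous, per-event processing is the exponentially decaying time surface: each incoming event touches only its own pixel, and the decay is evaluated lazily when a consumer reads the surface, so no dense per-frame recomputation is needed. A minimal sketch (function names and the decay constant are illustrative assumptions):

```python
import numpy as np


def update_time_surface(surface, last_t, event):
    """Asynchronous per-event update: only the pixel of the incoming
    event (x, y, t, polarity) is modified."""
    x, y, t, p = event
    surface[y, x] = float(p)   # newest event dominates at its pixel
    last_t[y, x] = t           # remember when this pixel last fired
    return surface, last_t


def read_time_surface(surface, last_t, t_now, tau=0.05):
    """Evaluate the exponential decay only when a consumer needs the
    surface, instead of decaying every pixel at a fixed rate."""
    return surface * np.exp(-(t_now - last_t) / tau)
```

The same lazy-update principle underlies asynchronous sparse convolutions, where only network activations affected by new events are recomputed.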

Contact Details: Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Computational Photography and Videography - Available

Description: Computational Photography is a hot topic in computer vision because it finds widespread applications in mobile devices. Traditionally, the problem has been studied using frames from a single camera. Today, mobile devices feature multiple cameras and sensors that can be combined to push the frontier in computational photography and videography. In previous work (https://youtu.be/eomALySSGVU), we have successfully reconstructed high-speed, HDR video from events. In this project, we aim for combining information from a standard and event camera to exploit their complementary nature. Applications range from high-speed, HDR video to deblurring and beyond. Contact us for more details.

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP