Student Projects


How to apply

To apply, please send your CV and your BSc and MSc transcripts by email to all the contacts indicated below the project description. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but cannot find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



A 3D reconstruction algorithm using a stereo pair of event cameras - Available

Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to use a stereo pair of event cameras to obtain a 3D reconstruction of a scene. The student will extend a recent event-based 3D reconstruction approach developed by our lab for monocular event cameras to the case of a pair of stereo event cameras. Applicants should have a good background in computer vision (especially stereo reconstruction techniques), and should be comfortable with C++.


Thesis Type: Semester Thesis / Master Thesis

Contact: Henri Rebecq (rebecq at ifi.uzh.ch)


Building a High-Speed Camera! Learning Image Reconstruction with an Event Camera - Available

Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The output of an event camera is a sparse stream of events that encode only light intensity changes - in other words, a highly compressed version of the visual signal. The goal of this project is to turn an event camera into a high-speed camera by designing an algorithm to recover images from the compressed event stream. Inspired by a recent approach, the goal will be to train a machine learning algorithm (or neural network) to reconstruct an image from the noisy event stream. The first part of the project will consist of acquiring training data, using both simulation and real event cameras. The second part will consist of designing and training a suitable machine learning algorithm to solve the problem. Finally, the algorithm will be compared against state-of-the-art image reconstruction algorithms. The expected candidate should have some background in both machine learning and computer vision (or image processing) in order to undertake this project.
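
As a rough illustration of why image reconstruction from events is possible at all: each event reports a fixed log-intensity step, so simply summing event polarities per pixel already yields the log-image up to an unknown offset. The learning-based approach in this project would go far beyond this naive baseline (handling noise and the unknown initial condition). A minimal sketch, with an assumed (t, x, y, polarity) event tuple format and contrast threshold C:

```python
import numpy as np

def integrate_events(events, shape, C=0.15):
    """Naive brightness reconstruction: each event changes the per-pixel
    log intensity by +/- C, so summing polarities recovers the log-image
    up to the unknown initial condition. events: (t, x, y, polarity)."""
    log_img = np.zeros(shape)
    for _, x, y, pol in events:
        log_img[y, x] += pol * C
    return log_img
```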


Thesis Type: Master Thesis

Contact: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)


Event Camera Characterization - Available

Event cameras such as the Dynamic and Active Pixel Vision Sensor (DAVIS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Despite the successful demonstration of the sensor on several problems in computer vision and robotics, a comprehensive characterization of the sensor for such high-level applications is still missing. The goal of this project is to characterize various aspects of these novel types of sensors, such as: event noise characteristics (distribution and spectral density), contrast threshold (relation to bias settings; variability: spatially, with the pixel, and photometrically, with respect to the scene illumination), non-linearities, etc. Additionally, the images and IMU measurements provided by the DAVIS also require an integrated characterization. A successful completion of the project will lead to a better understanding of the potential, limitations, and impact of these sensors on the design of novel algorithms for computer vision and robotics. The expected candidate should have a background in instrumentation, electrical engineering (to understand the principle of operation of the DAVIS pixels), and random processes. This project involves close collaboration with the Institute of Neuroinformatics (INI) at UZH-ETH.


Thesis Type: Bachelor Thesis / Semester Thesis

Contact: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)


A real-time Event Camera Simulator - Available

Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Recently, a variety of new algorithms to perform various vision tasks, such as fast object detection, visual odometry, or depth estimation using this sensor have emerged. However, the performance of these methods is usually not extensively assessed, due to the lack of ground truth data. To fill this gap, the goal of this project is to extend and improve an existing event camera simulator developed in our lab. The two major objectives of this project are: (i) integrate a real-time rendering engine (e.g., raw OpenGL, Unity, or Unreal Engine) into our simulator in order to provide a real-time simulated event stream, and (ii) implement a realistic Inertial Measurement Unit (IMU) simulator. The expected candidate should have a background in computer graphics and be comfortable programming in C++. Previous experience with 3D software or rendering engines such as Unity or Unreal Engine would be a strong plus.
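
For reference, the core of any event camera simulator is the idealized DVS pixel model: an event is emitted whenever the log intensity at a pixel moves more than a contrast threshold C away from a per-pixel reference level. Below is a minimal, frame-based sketch of this model (simplified and illustrative, not our simulator's code; a realistic simulator would also interpolate event timestamps between frames and emit multiple events for changes larger than C):

```python
import numpy as np

def events_from_log_intensity(log_frames, timestamps, C=0.15):
    """Idealized DVS pixel model: a pixel emits an event whenever its
    log intensity drifts more than the contrast threshold C away from
    a per-pixel reference level, which is then stepped by C.
    Returns a list of (t, x, y, polarity) tuples."""
    ref = log_frames[0].astype(float).copy()
    events = []
    for t, log_img in zip(timestamps[1:], log_frames[1:]):
        for pol in (+1, -1):
            mask = pol * (log_img - ref) >= C
            ys, xs = np.nonzero(mask)
            events += [(t, int(x), int(y), pol) for x, y in zip(xs, ys)]
            ref[mask] += pol * C  # step the reference towards the signal
    return events
```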


Thesis Type: Bachelor Thesis / Semester Thesis

Contact: Henri Rebecq (rebecq at ifi.uzh.ch)


Visual Bundle Adjustment with an Event Camera - Available

Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to improve an existing visual odometry pipeline using an event camera by designing and integrating a visual bundle adjustment module in order to reduce the drift in the odometry pipeline. A good theoretical background in computer vision is necessary to undertake this project. Candidates are also expected to be comfortable with C++.
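
For context, a bundle adjustment module minimizes a sum of squared reprojection errors over camera poses and 3D points. A minimal sketch of that objective follows (evaluation only; the actual optimization would use a nonlinear least-squares solver such as Ceres or g2o, and the function and variable names here are illustrative, not from our pipeline):

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X into the image of a camera with
    intrinsics K and pose (R, t) mapping world to camera coordinates."""
    x_cam = R @ X + t
    u = K @ (x_cam / x_cam[2])  # perspective division, then intrinsics
    return u[:2]

def reprojection_cost(K, poses, points, observations):
    """Bundle adjustment objective: sum of squared reprojection errors.
    observations: list of (camera_index, point_index, measured_pixel)."""
    cost = 0.0
    for ci, pi, uv in observations:
        R, t = poses[ci]
        err = project(K, R, t, points[pi]) - uv
        cost += float(err @ err)
    return cost
```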


Thesis Type: Master Thesis

Contact: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)


Integrated Multi-Camera Calibration Toolbox - Available

Camera calibration is a paramount pre-processing stage of many robotic vision applications such as 3D reconstruction, obstacle avoidance and ego-motion estimation. The goal of this project is to develop a user-friendly, single and multi-camera calibration toolbox adapted to our robotic system. The toolbox will integrate existing calibration software in our group and in other libraries and will provide user-friendly reports of the different stages to assess the quality of the processed dataset, thus speeding up and improving the understanding of the whole sensor calibration stage. The toolbox is expected to handle different camera brands, projection models and calibration patterns. In the multi-sensor scenario, the toolbox is also expected to compute the temporal offsets between the sensors. Special attention will be given to estimation of error measures, parameter uncertainties, detection of inconsistent data and interactive guidance of data acquisition.


Thesis Type: Semester Thesis / Master Thesis

Contact: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)


Hand-Eye Calibration Toolbox - Available

Hand-Eye calibration is a paramount pre-processing stage of many robotic and augmented reality applications, where the knowledge of the relative transformation between different sensors (e.g. a camera and a head-mounted display) is required to have an accurate geometric representation of the scene. The goal of this project is to develop a user-friendly hand-eye calibration toolbox integrated with our robotic system. The toolbox will contain existing and novel hand-eye calibration methods, and it will allow visualizing the results of the different methods in an integrated manner to improve the understanding of the quality of the processed dataset, especially paying attention to error estimates, uncertainties, and the detection of inconsistent data.
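
As background, hand-eye calibration is classically posed as solving A_i X = X B_i for the unknown sensor-to-sensor transform X, given corresponding relative motions A_i and B_i of the two sensors. A minimal sketch of the rotation part, using the standard axis-alignment step (in the spirit of Tsai-Lenz); this is purely illustrative, not the toolbox implementation:

```python
import numpy as np

def rotation_log(R):
    """Angle-axis vector of a rotation matrix (assumes angle in (0, pi))."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / (2 * np.sin(angle))

def handeye_rotation(As, Bs):
    """Estimate the rotation R_X in A_i X = X B_i from relative motions.
    The rotation axes satisfy a_i = R_X b_i; solve with the Kabsch/SVD
    orthogonal Procrustes alignment of the two axis sets."""
    a = np.stack([rotation_log(A) for A in As])  # N x 3 axis vectors
    b = np.stack([rotation_log(B) for B in Bs])
    H = b.T @ a
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```

Given R_X, the translation follows from the linear system (R_A - I) t_X = R_X t_B - t_A stacked over all motion pairs.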


Thesis Type: Semester Thesis / Master Thesis

Contact: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)


Optical Flow Estimation with an Event Camera - Available

Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to use event cameras to compute the optical flow in the image plane induced by either a moving camera in a scene or by moving objects with respect to a static event camera. Several existing methods as well as proposed new ones will be analyzed, implemented and compared. A successful candidate is expected to be familiar with state-of-the-art optical flow methods for standard cameras. This is a project with considerable room for creativity, for example in applying the ideas from low-level vision or ideas driving optical flow methods for standard cameras to the new paradigm of event-based vision. Experience in coding image processing algorithms in C++ is required.


Thesis Type: Master Thesis

Contact: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)


Continuous Structure From Motion (SfM) with an Event Camera - Available

Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to explore the possibilities that continuous structure from motion (SfM) has to offer for event cameras. In the continuous formulation, visual egomotion methods attempt to estimate camera motion and scene parameters (depth of 3D points) from observed local image velocities such as optical flow. This formulation is appropriate for small-baseline displacements, which is the scale at which events are fired by a moving DVS in a static scene. Several ideas from classical and new methods will be taken into account to address the challenges that the fundamentally different output of a DVS poses to the SfM problem. The expected candidate should have good theoretical as well as programming skills to undertake this project.
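
For reference, the continuous SfM formulation builds on the classical differential motion field equation, which predicts the image velocity at a normalized image point from the camera's linear and angular velocity and the point's depth. A minimal sketch of this standard model from the literature (not project code):

```python
import numpy as np

def motion_field(x, y, Z, v, w):
    """Predicted image velocity (optical flow) at normalized image point
    (x, y) with depth Z, for camera linear velocity v and angular
    velocity w. Classical differential egomotion model: the translational
    part scales with inverse depth, the rotational part does not."""
    A = np.array([[-1.0, 0.0, x],
                  [0.0, -1.0, y]])
    B = np.array([[x * y, -(1 + x * x), y],
                  [1 + y * y, -x * y, -x]])
    return (A @ v) / Z + B @ w
```

Continuous SfM inverts this relation: given observed flow at many points, it solves for (v, w) and the inverse depths, which is exactly the small-baseline regime in which a moving DVS fires events.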


Thesis Type: Master Thesis

Contact: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)


Projects related to an international robotics competition - Available

RPG is participating in an international robotics challenge (http://www.mbzirc.com/, https://www.youtube.com/watch?v=oVz2Sp3W468), and we have several work packages available as student projects, involving visual perception, MAV control, and multi-robot collaboration. One sub-challenge is landing a MAV on a moving platform; in another, multiple objects need to be retrieved from a large area by a collaborating group of MAVs.


Thesis Type: Semester Project / Master Thesis

Contact: Michael Gassner (gassner at ifi.uzh.ch)



Drake-ROS Integration for Quadrotor Simulation and Control - Available

Drake is an open-source control, simulation, and analysis library supported by the Toyota Research Institute (TRI) and MIT. With its emphasis on state-of-the-art computational tools for the analysis of systems and the development of nonlinear controllers, it is effective in tackling some of the toughest robotics planning and control problems. One interesting ongoing development effort concerns the interface to the widely used ROS (Robot Operating System).

The proposed student project consists of two parts:

1) Designing and implementing an effective interface between Drake and ROS for the control and simulation of the Robotics and Perception Group (RPG) quadrotors.

2) Implementing an off-board higher level controller (using existing Drake and ROS tools) for trajectory control of the RPG quadrotors with visual feedback.

This project will involve close collaboration with TRI and MIT.


Thesis Type: Semester Project / Master Thesis

Contact: Naveen Kuppuswamy (naveen.kuppuswamy at tri.global), Davide Falanga (falanga at ifi.uzh.ch)



Nonlinear Control for Slung-load Throwing Using Quadrotors - Available

With recent advances in visually guided quadrotors and optimization-based nonlinear control methods, the feasibility of tackling advanced control problems in various applications is increasing. One such interesting application in the search and rescue domain is using quadrotors to carry loads by grabbing onto them with a tether and slinging them towards a target (e.g. https://www.youtube.com/watch?v=08K_aEajzNA).

In this thesis topic, the student will be expected to tackle this challenge by designing an effective nonlinear controller and implementing it on the Robotics and Perception Group (RPG) quadrotor robots; Model Predictive Control (MPC) is one suggested framework for tackling this problem. The controller will be developed using Drake, an open-source simulation, planning, and control library supported by the Toyota Research Institute (TRI) and MIT.


Thesis Type: Master Thesis

Contact: Naveen Kuppuswamy (naveen.kuppuswamy at tri.global), Davide Falanga (falanga at ifi.uzh.ch)



Obstacle avoidance with Intel RealSense - Available

The goal of this project is to make a quadrotor fly around randomly, without the need for any human input. A key issue to solve in order to achieve this is obstacle avoidance.

In this project, you will implement a completely autonomous quadrotor that uses the Intel RealSense sensor for obstacle avoidance. Your first goal will be to achieve reactive obstacle avoidance: The robot will randomly move through open space. Once this is achieved, stretch goals include incorporating the sensor data into a global 3D map and performing intelligent exploration that maximizes the area explored in a given time.


Thesis Type: Semester Thesis / Master Thesis

Contact: Titus Cieslewski (titus at ifi.uzh.ch), Jeff Delmerico (jeffdelmerico at ifi.uzh.ch)



Place recognition from sparse structure - Available

Classical place recognition for visual odometry relies on matching descriptors that are found in the camera images. We have previously proposed a novel approach that instead relies on matching the point structure in 3D. This enables place recognition even under appearance changes, or when images are not available, such as with event cameras.

In this project, you will contribute to developing this novel approach. Possible avenues to develop are: reducing the number of parameters currently needed, e.g. by deriving the descriptor scale automatically from the structure; increasing robustness to appearance changes, e.g. by interpolating structure between points; verifying robustness to viewpoint changes; fusing structural and visual descriptors; you name it. This is a project with a lot of creative potential, and the potential to result in a scientific publication.


Thesis Type: Semester Thesis / Master Thesis

Contact: Titus Cieslewski (titus at ifi.uzh.ch)



Robust and Adaptive Multi-Camera Visual Odometry - Available

Most visual odometry algorithms are designed to work with monocular cameras and/or stereo cameras. One way to improve the robustness of visual odometry is to use more cameras (3 to N). While it is relatively easy to make visual odometry work with multiple cameras for a specific type of configuration, developing an adaptive solution that works with arbitrary camera configurations (i.e., without changing the code) and that is robust to failures (i.e., if one camera fails during the execution, the algorithm can still proceed) is not straightforward. The project aims to develop a robust and adaptive multi-camera visual odometry pipeline based on the existing framework in our lab.



Thesis Type: Semester Thesis / Master Thesis

Contact: Zichao Zhang (zzhang at ifi.uzh.ch)



Study the Motion Limit of Visual Odometry - Available

Visual odometry estimates the motion of a camera from images. It is an important algorithm in robotics and is now widely used. However, when putting visual odometry into real-world robotic applications, we need to understand the limitations of the algorithm. This project aims to address the question: how fast can a robot move while still keeping visual odometry working properly? Specific work will include a theoretical analysis of the visual odometry pipeline and validation by simulation/experiments. It is also possible to perform the study for different camera configurations.


Thesis Type: Semester Thesis / Master Thesis

Contact: Zichao Zhang (zzhang at ifi.uzh.ch)



Learning Motion from Blur - Available

When the camera moves fast, images appear blurry. Motion blur is in general undesired for vision algorithms, but it also encodes motion information. Can we extract this motion information using machine learning algorithms? This would help visual odometry deal with fast motion: for example, when a robot moves too fast, it could switch to the learned method when standard visual odometry fails. The goal of the project is to implement a machine learning algorithm that can recover the camera motion from a single blurry image.


Thesis Type: Semester Thesis / Master Thesis

Contact: Zichao Zhang (zzhang at ifi.uzh.ch)



Smart Feature Selection In Visual Odometry - Available

For most robotic platforms, computational resources are usually limited. Therefore, ideally, algorithms running onboard should adapt to the available computational power. For visual odometry, the number of features largely determines the resources the algorithm needs. The project aims to study the problem of smart feature selection for visual odometry. By using a selected subset of features, we can reduce the required computation without significantly losing accuracy. The student is expected to study how motion estimation is affected by feature selection (e.g., the number of features and different feature locations). The ultimate goal will be to implement a smart feature selection mechanism in our visual odometry framework.


Thesis Type: Semester Thesis / Master Thesis

Contact: Zichao Zhang (zzhang at ifi.uzh.ch)



Learning Depth from Images and IMU Measurements - Available

Many researchers work on using deep convolutional neural networks (CNNs) to estimate depth from a single image. However, the depth information from a single image/camera is ambiguous in scale. Therefore, adding scale information can improve CNN-based depth estimation. In robotics, the inertial measurement unit (IMU) is a commonly used sensor and provides motion information with the correct scale. This project aims to implement a deep learning algorithm that can estimate the scene depth from images and IMU measurements.


Thesis Type: Semester Thesis / Master Thesis

Contact: Zichao Zhang (zzhang at ifi.uzh.ch)



Aerial monocular vision-based ball catching - Available

The goal of this project is to enable vision-based drones to catch a ball thrown by hand without relying on any external sensor or motion capture system. This project will consist of the following work packages, among which the student can choose according to their interests and skills.

1) Development of a real-time trajectory planning algorithm for micro aerial vehicles. The algorithm must be fast enough to run onboard and must take into account both dynamical and perception-related constraints.

2) Monocular vision-based fast ball detection. The first part will consist of studying the literature on fast ball detection; the goal will then be to implement a robust ball detection and tracking algorithm that can run onboard.

3) Ball trajectory estimation with uncertainty propagation. The goal is to predict the motion of the ball based on physical considerations and measurements from the onboard camera.

For the first package, knowledge in control theory and optimization is required; the second one requires knowledge in computer vision; the third, finally, is suitable for students with knowledge in recursive estimation.
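
As an illustration of the third work package: under a simple drag-free ballistic model, the ball's initial position and velocity can be fitted to camera-based position measurements by linear least squares, after which the trajectory can be extrapolated to plan an interception. A minimal sketch with hypothetical names (a complete solution would also propagate measurement uncertainty, e.g. with a recursive filter such as an EKF):

```python
import numpy as np

def fit_ballistic(ts, ps, g=9.81):
    """Least-squares fit of initial position p0 and velocity v0 of a
    ball under gravity from 3D position measurements ps at times ts.
    Model: p(t) = p0 + v0*t + 0.5*g_vec*t^2 with g_vec = (0, 0, -g)."""
    ts = np.asarray(ts, dtype=float)
    ps = np.asarray(ps, dtype=float)
    g_vec = np.array([0.0, 0.0, -g])
    # subtract the known gravity term, then fit a line per axis
    y = ps - 0.5 * np.outer(ts**2, g_vec)
    A = np.stack([np.ones_like(ts), ts], axis=1)  # columns [1, t]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1]  # p0, v0

def predict(p0, v0, t, g=9.81):
    """Extrapolate the fitted trajectory to time t."""
    return p0 + v0 * t + 0.5 * np.array([0.0, 0.0, -g]) * t**2
```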


Thesis Type: Semester Project or Master Thesis, according to the package(s) chosen

Contact: Davide Falanga (falanga at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch), Zichao Zhang (zzhang at ifi.uzh.ch)



Optimal Sensor Placement - Available

The goal of this project is to investigate how sensor type, position, and orientation affect the performance of a visually-guided agile MAV. Part of the project will be designing a photorealistic simulation environment in which to test different camera arrangements on a flying platform, so experience with Blender, Unreal Engine, etc. will be a bonus. The project will consist of evaluating these sensor setups in offline simulations of several tasks (e.g. fast outdoor flight, flying through a forest or through a window), and, time permitting, performing further evaluations in a closed-loop physics simulation. Inspiration for candidate arrangements should come from examples in both the robotics literature and nature.


Thesis Type: Semester Thesis / Master Thesis

Contact: Zichao Zhang (zzhang at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)



Virtual / augmented reality with HTC Vive - Available

The ultimate goal of this project is to provide augmented reality that helps robot operators overlay different data from the robot onto reality (provided by cameras mounted on the Vive). Such data can include created maps, planned trajectories, and so on.

In order to achieve this, you will first start with the simpler task of showing the data in virtual reality. Warning: this project will involve a fair bit of hacking. Previous experience with ROS, RViz, or Unity (or with making barely supported software work on Linux) would be highly appreciated.


Thesis Type: Semester Thesis / Master Thesis

Contact: Titus Cieslewski (titus at ifi.uzh.ch), Davide Falanga (falanga at ifi.uzh.ch)



High-speed visual obstacle avoidance - Available

We are currently evaluating different ways to do obstacle avoidance from camera images, at high speed.

In this project, you will choose an approach or propose a new one and thoroughly investigate it.


Thesis Type: Semester Thesis / Master Thesis

Contact: Titus Cieslewski (titus at ifi.uzh.ch), Zichao Zhang (zzhang at ifi.uzh.ch)



Physical Threshold Detection - Available

In human environments, windows and doors represent thresholds between spaces. For a robot exploring an unknown environment, these portals offer new frontiers, but they can be challenging for a robot to safely traverse. This project deals with designing a system to robustly detect windows/doors/thresholds that a quadrotor could fly through, using cameras and range sensors. This system would need to detect these portals via appearance and geometry, and evaluate the feasibility of traversal with a very low false-positive rate using machine learning techniques and geometric constraints. The ultimate goal is to deploy this system on a quadrotor for a live demo.


Thesis Type: Semester Thesis / Master Thesis

Contact: Jeff Delmerico (jeffdelmerico at ifi.uzh.ch), Michael Gassner (gassner at ifi.uzh.ch)



Active Visual Saliency Mapping with Machine Learning - Available

This project is motivated by a search and rescue scenario wherein a flying robot is searching for a "victim" in an outdoor environment. The goal is to identify and investigate areas of the environment that look different than their surroundings. The robot will use computer vision and machine learning techniques to build a visual saliency map, and will be controlled by a path planner that interactively evaluates this map and chooses the most visually distinct regions to investigate more closely.


Thesis Type: Semester Thesis / Master Thesis

Contact: Jeff Delmerico (jeffdelmerico at ifi.uzh.ch)



A Visual-Inertial Odometry System for Event-based Vision Sensor - Available

Event-based cameras are recent revolutionary sensors with large potential for high-speed and low-power robotic applications. The goal of this project is to develop a visual-inertial odometry pipeline for the Dynamic and Active Pixel Vision Sensor (DAVIS). The system will estimate the pose of the DAVIS using the event stream and IMU measurements delivered by the sensor. Filtering approaches as well as batch optimization will be investigated.


Thesis Type: Master Thesis

Contact: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)