Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Optical Flow Estimation with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to use event cameras to compute the optical flow induced in the image plane either by a camera moving in a scene or by objects moving with respect to a static event camera. Several existing methods, as well as newly proposed ones, will be analyzed, implemented and compared. A successful candidate is expected to be familiar with state-of-the-art optical flow methods for standard cameras. This project leaves considerable room for creativity, for example in transferring ideas from low-level vision, or from optical flow methods for standard cameras, to the new paradigm of event-based vision. Experience in coding image processing algorithms in C++ is required.
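As background for the standard-camera side of the project, the classic Lucas-Kanade method can be sketched in a few lines. The sketch below (function name, patch size, and synthetic test pattern are all illustrative) estimates a single flow vector for a whole patch from the brightness-constancy normal equations:

```python
import numpy as np

def lucas_kanade_single_patch(I1, I2):
    """Estimate one (u, v) flow vector for a whole patch by solving the
    brightness-constancy normal equations (classic Lucas-Kanade)."""
    Iy, Ix = np.gradient(I1)          # axis 0 is y (rows), axis 1 is x (cols)
    It = I2 - I1                      # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)      # (u, v) in pixels

# Synthetic check: shift a smooth pattern by a known sub-pixel amount.
x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
pattern = lambda xx, yy: np.sin(0.3 * xx) * np.cos(0.2 * yy)
dx, dy = 0.4, -0.25                   # ground-truth flow
I1 = pattern(x, y)
I2 = pattern(x - dx, y - dy)          # image content moved by (dx, dy)
u, v = lucas_kanade_single_patch(I1, I2)
print(u, v)                           # close to (0.4, -0.25)
```

Event-based optical flow cannot use this recipe directly, since there are no dense intensity frames to differentiate, which is where the room for creativity lies.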

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Building a High-Speed Camera! Learning Image Reconstruction with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The output of an event camera is a sparse stream of events that encodes only light intensity changes - in other words, a highly compressed version of the visual signal.
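To illustrate the idea, here is a toy model of the event stream and the crudest possible reconstruction (all names, threshold values, and event data below are made up; the learning-based approach of this project is meant to do far better than this baseline):

```python
import numpy as np

# Toy event stream: each event is (x, y, t, polarity), polarity in {+1, -1},
# meaning log-intensity at pixel (x, y) changed by one contrast step C.
C = 0.2                               # contrast threshold (illustrative value)
events = [(3, 5, 0.001, +1), (3, 5, 0.002, +1), (7, 2, 0.003, -1)]

def naive_reconstruction(events, shape, C, log_I0=None):
    """Crude brightness reconstruction: start from a reference log-intensity
    image and add C * polarity for every event (ignores sensor noise)."""
    log_I = np.zeros(shape) if log_I0 is None else log_I0.copy()
    for x, y, t, pol in events:
        log_I[y, x] += pol * C        # row = y, column = x
    return log_I

img = naive_reconstruction(events, (10, 10), C)
print(img[5, 3])                      # two positive events -> 0.4
print(img[2, 7])                      # one negative event -> -0.2
```

In practice event noise and unknown per-pixel thresholds make this naive integration drift quickly, which motivates learning the reconstruction from data.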

Goal: The goal of this project is to turn an event camera into a high-speed camera by designing an algorithm that recovers images from the compressed event stream. Inspired by a recent approach, the project will train a machine learning algorithm (e.g., a neural network) to reconstruct an image from the noisy event stream. The first part of the project will consist of acquiring training data, using both simulation and real event cameras. The second part will consist of designing and training a suitable machine learning algorithm to solve the problem. Finally, the algorithm will be compared against state-of-the-art image reconstruction algorithms. The expected candidate should have some background in both machine learning and computer vision (or image processing).

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

A Visual-Inertial Odometry System for an Event-based Vision Sensor - Available

Description: Event-based cameras are recent revolutionary sensors with large potential for high-speed and low-power robotic applications. The goal of this project is to develop a visual-inertial odometry pipeline for the Dynamic and Active Pixel Vision Sensor (DAVIS). The system will estimate the pose of the DAVIS using the event stream and IMU measurements delivered by the sensor. Both filtering approaches and batch optimization will be investigated. http://www.inilabs.com/products/davis

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Continuous Structure From Motion (SfM) with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to explore the possibilities that continuous structure from motion (SfM) has to offer for event cameras. In the continuous formulation, visual egomotion methods attempt to estimate camera motion and scene parameters (depth of 3D points) from observed local image velocities such as optical flow. This formulation is appropriate for small-baseline displacements, which is the scale at which events are fired by a moving DVS in a static scene. Several ideas from classical and new methods will be taken into account to address the challenges that the fundamentally different output of a DVS poses to the SfM problem. The expected candidate should have good theoretical as well as programming skills to undertake this project.
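The continuous formulation relates local image velocities to camera motion and depth through the classical motion field equations (Longuet-Higgins and Prazdny). A minimal sketch in normalized image coordinates; sign conventions vary across references, and the numerical values are illustrative:

```python
def motion_field(x, y, Z, v, w):
    """Image velocity (x_dot, y_dot) of a point at normalized coordinates
    (x, y) with depth Z, for camera linear velocity v = (vx, vy, vz) and
    angular velocity w = (wx, wy, wz). Classical small-baseline model."""
    vx, vy, vz = v
    wx, wy, wz = w
    x_dot = (x * vz - vx) / Z + x * y * wx - (1 + x**2) * wy + y * wz
    y_dot = (y * vz - vy) / Z + (1 + y**2) * wx - x * y * wy - x * wz
    return x_dot, y_dot

# Pure forward translation: the flow expands radially from the image center.
xdot, ydot = motion_field(x=0.1, y=0.0, Z=2.0, v=(0, 0, 1.0), w=(0, 0, 0))
print(xdot, ydot)   # 0.05 0.0
```

Continuous SfM inverts this relation: given the observed velocities (here, derived from events), recover the motion (v, w) and the depths Z.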

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Hand-Eye Calibration Toolbox - Available

Description: Hand-Eye calibration is a paramount pre-processing stage of many robotic and augmented reality applications, where the knowledge of the relative transformation between different sensors (e.g. a camera and a head-mounted display) is required to have an accurate geometric representation of the scene.

Goal: The goal of this project is to develop a user-friendly hand-eye calibration toolbox integrated with our robotic system. The toolbox will contain existing and novel hand-eye calibration methods, and it will allow the user to visualize the results of the different methods in an integrated manner to improve the understanding of the quality of the processed dataset, with special attention to error estimates, uncertainties, and detection of inconsistent data.
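Hand-eye calibration is commonly posed as solving A_i X = X B_i, where A_i and B_i are corresponding relative motions of the two sensors and X is the unknown transform between them. A sketch of the classic two-step solution on synthetic data (all function names and numerical values are made up for illustration):

```python
import numpy as np

def R_aa(w):
    """Rotation matrix from an axis-angle vector w (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def rot_axis(R):
    """Unit rotation axis of R, read off its skew-symmetric part."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def solve_hand_eye(As, Bs):
    """Two-step solution of A_i X = X B_i: the rotation aligns the motion
    axes (Kabsch/SVD), the translation follows from linear least squares."""
    a = np.array([rot_axis(A[:3, :3]) for A in As])
    b = np.array([rot_axis(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(a.T @ b)              # R_X maps b_i to a_i
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # (R_A - I) t_X = R_X t_B - t_A, stacked over all motion pairs.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    c = np.hstack([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(M, c, rcond=None)[0]
    X = np.eye(4); X[:3, :3] = R; X[:3, 3] = t
    return X

def make_T(w, t):
    T = np.eye(4)
    T[:3, :3] = R_aa(np.asarray(w, float)); T[:3, 3] = t
    return T

# Synthetic check: pick a ground-truth X, generate motions A_i and the
# corresponding B_i = X^-1 A_i X, then recover X from the pairs.
X_true = make_T([0.2, -0.1, 0.3], [0.05, -0.02, 0.10])
As = [make_T([0.3, 0.1, -0.2], [0.1, 0.0, 0.2]),
      make_T([-0.1, 0.4, 0.2], [0.0, 0.3, -0.1]),
      make_T([0.2, -0.3, 0.5], [-0.2, 0.1, 0.0])]
Bs = [np.linalg.inv(X_true) @ A @ X_true for A in As]
X_est = solve_hand_eye(As, Bs)
print(np.allclose(X_est, X_true, atol=1e-8))       # True
```

Real data is noisy, so a practical toolbox would add outlier rejection and report the error estimates and uncertainties mentioned above; this sketch only shows the geometric core.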

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Integrated Multi-Camera Calibration Toolbox - Available

Description: The toolbox to be developed is expected to handle different camera brands, projection models and calibration patterns. In the multi-sensor scenario, it is also expected to compute the temporal offsets between the sensors. Special attention will be given to the estimation of error measures, parameter uncertainties, detection of inconsistent data, and interactive guidance of data acquisition.

Goal: The goal of this project is to develop a user-friendly single- and multi-camera calibration toolbox adapted to our robotic system. The toolbox will integrate existing calibration software from our group and from other libraries, and will provide user-friendly reports of the different stages to assess the quality of the processed dataset, thus speeding up and improving the understanding of the whole sensor calibration stage.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

A real-time Event Camera Simulator - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Recently, a variety of new algorithms have emerged to perform various vision tasks with this sensor, such as fast object detection, visual odometry, or depth estimation. However, the performance of these methods is usually not extensively assessed, owing to the lack of ground truth data. To fill this gap, the goal of this project is to extend and improve an existing event camera simulator developed in our lab.

Goal: The two major objectives of this project are: 1. integrate a real-time rendering engine (e.g., raw OpenGL, Unity, or Unreal Engine) into our simulator in order to provide a real-time simulated event stream; 2. implement a realistic Inertial Measurement Unit (IMU) simulator.
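At the core of such a simulator is the DVS pixel model: a pixel fires an event whenever its log-intensity moves by a contrast threshold away from the last reference level. A toy per-pixel sketch (the threshold value, test data, and linear timestamp interpolation are simplifications of what a realistic simulator would do):

```python
import numpy as np

def events_between_frames(logI_new, ref, t0, t1, C=0.15):
    """Toy DVS model: a pixel fires an event each time its log-intensity
    moves by the contrast threshold C away from its reference level.
    Timestamps are linearly interpolated between the two frame times.
    Returns the events and the updated per-pixel reference image."""
    events = []
    ref = ref.copy()
    for y, x in np.ndindex(logI_new.shape):
        delta = logI_new[y, x] - ref[y, x]
        pol = 1 if delta > 0 else -1
        n = int(abs(delta) // C)              # number of threshold crossings
        for k in range(1, n + 1):
            frac = k * C / abs(delta)         # when the k-th crossing happens
            events.append((x, y, t0 + frac * (t1 - t0), pol))
        ref[y, x] += pol * n * C
    return events, ref

# One pixel brightens by 2.5 thresholds between t = 0 and t = 0.01:
logI0 = np.zeros((2, 2))
logI1 = np.zeros((2, 2)); logI1[0, 1] = 0.375
evts, ref = events_between_frames(logI1, logI0, 0.0, 0.01, C=0.15)
print(evts)   # two positive events at pixel (1, 0)
```

A real-time simulator must additionally render the scene fast enough that inter-frame interpolation like the above remains a good approximation, which is exactly why a high-rate rendering engine is needed.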

The expected candidate should have a background in computer graphics and be comfortable programming in C++. Previous experience with 3D software or rendering engines such as Unity or Unreal Engine is a must.

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch)

Thesis Type: Semester project / Bachelor Thesis

See project on SiROP

Study the Motion Limit of Visual Odometry - Available

Description: Visual odometry estimates the motion of a camera from the images it captures. It is an important algorithm in robotics and has become widely used. However, when putting visual odometry into real-world robotic applications, we need to understand the limitations of the algorithm. This project aims to address the question: how fast can a robot move while keeping its visual odometry working properly?

Goal: Specific work will include the theoretical analysis of the visual odometry pipeline and validation by simulation/experiments. It is also possible to perform the study for different camera configurations.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Robust and Adaptive Multi-Camera Visual Odometry - Available

Description: Most visual odometry algorithms are designed to work with monocular cameras and/or stereo cameras. One way to improve the robustness of visual odometry is to use more cameras (3 to N). While it is relatively easy to make visual odometry work with multiple cameras for a specific type of configuration, developing an adaptive solution that works with arbitrary camera configurations (i.e., without changing the code) and that is robust to failures (i.e., if one camera fails during the execution, the algorithm can still proceed) is not straightforward.

Goal: The project aims to develop a robust and adaptive multi-camera visual odometry pipeline based on the existing framework in our lab.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Learning Depth from Images and IMU Measurements - Available

Description: Many researchers work on using deep convolutional neural networks (CNNs) to estimate depth from a single image. However, depth estimated from a single image/camera is ambiguous in scale. Adding scale information can therefore improve CNN-based depth estimation.

Goal: In robotics, the inertial measurement unit (IMU) is a commonly used sensor that provides motion information at the correct (metric) scale. This project aims to implement a deep learning algorithm that estimates scene depth from images and IMU measurements.
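The role of the IMU can be illustrated with a simple scale-alignment example (all numbers below are made up; a learning-based system would fuse the two modalities far more tightly than this):

```python
import numpy as np

# Monocular vision gives camera translation and depth only up to scale:
t_vision = np.array([0.12, 0.0, 0.05])      # arbitrary (unit-less) units
depth_up_to_scale = np.array([[1.0, 2.0],
                              [1.5, 4.0]])

# Integrating IMU accelerations over the same interval yields a metric
# translation estimate (values invented for illustration):
t_imu_metric = np.array([0.60, 0.0, 0.25])  # meters

# A single global scale factor aligns the two and rescales the depth map.
s = np.linalg.norm(t_imu_metric) / np.linalg.norm(t_vision)
depth_metric = s * depth_up_to_scale
print(s)            # 5.0: the vision estimate was 5x too small
```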

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Smart Feature Selection In Visual Odometry - Available

Description: For most robotic platforms, computational resources are limited. Ideally, algorithms running onboard should therefore adapt to the available computational power. For visual odometry, the number of features largely determines the resources the algorithm needs. By using a well-selected subset of features, we can reduce the required computational resources without significantly losing accuracy.

Goal: The project aims to study the problem of smart feature selection for visual odometry. The student is expected to study how motion estimation is affected by feature selection (e.g., number of features, different feature locations). The ultimate goal will be to implement a smart feature selection mechanism in our visual odometry framework.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Learning Motion from Blur - Available

Description: When a camera moves fast, its images appear blurry. Motion blur is in general undesired for vision algorithms, but it also encodes motion information. Can we extract this motion information using machine learning? This would help visual odometry deal with fast motion: for example, when a robot moves too fast for standard visual odometry, it could switch to the learned method.

Goal: The goal of the project is to implement a machine learning algorithm that recovers the camera motion from a single blurry image.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Physical Threshold Detection - Available

Description: In human environments, windows and doors represent thresholds between spaces. For a robot exploring an unknown environment, these portals offer new frontiers, but they can be challenging for a robot to safely traverse. This project deals with designing a system to robustly detect windows/doors/thresholds that a quadrotor could fly through, using cameras and range sensors. This system would need to detect these portals via appearance and geometry, and evaluate the feasibility of traversal with a very low false-positive rate using machine learning techniques and geometric constraints. The ultimate goal is to deploy this system on a quadrotor for a live demo.

Contact Details: Jeff Delmerico (jeffdelmerico at ifi.uzh.ch), Michael Gassner (gassner at ifi.uzh.ch)

Thesis Type: Semester project / Master Thesis

See project on SiROP

Event Camera Characterization - Available

Description: Event cameras such as the Dynamic and Active Pixel Vision Sensor (DAVIS, http://inilabs.com/products/dynamic-and-active-pixel-vision-sensor/ ) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Despite successful demonstrations of the sensor on several problems in computer vision and robotics, a comprehensive characterization of the sensor for such high-level applications is still missing.

Goal: The goal of this project is to characterize various aspects of these novel sensors, such as: event noise characteristics (distribution and spectral density), contrast threshold (relation to bias settings; variability spatially, across pixels, and photometrically, with respect to the scene illumination), non-linearities, etc. Additionally, the images and IMU measurements provided by the DAVIS also require an integrated characterization. Successful completion of the project will lead to a better understanding of the potential, limitations and impact of these sensors on the design of novel algorithms for computer vision and robotics. The expected candidate should have a background in instrumentation, electrical engineering (to understand the principle of operation of the DAVIS pixels), and random processes. This project involves close collaboration with the Institute of Neuroinformatics (INI) at UZH-ETH.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester project / Bachelor Thesis

See project on SiROP