Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Robust and Adaptive Multi-Camera Visual Odometry - Available

Description: Most visual odometry algorithms are designed to work with monocular cameras and/or stereo cameras. One way to improve the robustness of visual odometry is to use more cameras (3 to N). While it is relatively easy to make visual odometry work with multiple cameras for a specific type of configuration, developing an adaptive solution that works with arbitrary camera configurations (i.e., without changing the code) and that is robust to failures (i.e., if one camera fails during the execution, the algorithm can still proceed) is not straightforward.

Goal: The project aims to develop a robust and adaptive multi-camera visual odometry pipeline based on the existing framework in our lab.
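As a rough illustration of the kind of abstraction such a pipeline needs (a sketch only, not part of our codebase; all names are hypothetical), the camera rig can be modeled so that the estimator never assumes a fixed number of cameras and simply fuses whatever subset of frames arrives:

```python
class CameraRig:
    """Minimal abstraction for an N-camera rig: each camera has an
    extrinsic transform (e.g., a 4x4 T_body_cam), and frames are handled
    per camera, so a failed camera just stops contributing measurements."""

    def __init__(self, extrinsics):
        # extrinsics: dict camera_id -> extrinsic transform of that camera
        self.extrinsics = dict(extrinsics)
        self.active = set(self.extrinsics)

    def report_failure(self, cam_id):
        """Mark a camera as dead; the pipeline keeps running without it."""
        self.active.discard(cam_id)

    def collect(self, frames):
        """Keep only frames from cameras that are still active; the
        estimator then fuses whatever subset is available."""
        return {cid: f for cid, f in frames.items() if cid in self.active}
```
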

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Smart Feature Selection In Visual Odometry - Available

Description: For most robotic platforms, computational resources are limited. Ideally, algorithms running onboard should therefore adapt to the available computational power. In visual odometry, the number of features largely determines the resources the algorithm needs. By using a carefully selected subset of features, we can reduce the required computation without significantly losing accuracy.

Goal: The project aims to study the problem of smart feature selection for visual odometry. The student is expected to study how motion estimation is affected by feature selection (e.g., number of features, different feature locations). The ultimate goal will be to implement a smart feature selection mechanism in our visual odometry framework.
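To give a flavor of the problem, below is a minimal sketch (with hypothetical inputs) of one common baseline: grid-based ("bucketed") selection, which keeps a fixed feature budget while enforcing spatial spread. The project would investigate smarter criteria than this:

```python
import numpy as np

def select_features(keypoints, scores, budget, img_shape, grid=(8, 6)):
    """Grid-based feature selection: walk keypoints from strongest to
    weakest, keeping at most a few per grid cell, until the budget is
    exhausted, so the selected features stay well distributed."""
    h, w = img_shape
    gx, gy = grid
    cell_w, cell_h = w / gx, h / gy
    order = np.argsort(-np.asarray(scores))   # strongest first
    taken = np.zeros((gy, gx), dtype=int)
    selected = []
    for idx in order:
        if len(selected) >= budget:
            break
        x, y = keypoints[idx]
        cx = min(int(x / cell_w), gx - 1)
        cy = min(int(y / cell_h), gy - 1)
        # Allow at most 2 features per cell to enforce spatial spread.
        if taken[cy, cx] < 2:
            taken[cy, cx] += 1
            selected.append(idx)
    return selected
```

A motion-aware criterion would replace the per-cell cap with a measure of how much each feature constrains the pose estimate.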

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Online time offset estimation for visual-inertial systems - Available

Description: Visual-inertial odometry (VIO) has progressed significantly in recent years and finds many real-world applications. One crucial requirement for good performance is a synchronized camera and inertial measurement unit (IMU). However, many low-cost systems lack good synchronization, which limits the use of VIO. As an alternative, the time offset can be estimated in software. Existing methods to estimate the time offset either operate offline or only apply to specific algorithms. A lightweight algorithm that can estimate the camera-IMU offset online would greatly extend the application scenarios of VIO.

Goal: The goal of the project is to develop an efficient and flexible algorithm to estimate the time offset between a camera and an IMU.
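As a baseline for comparison, the offset can be estimated offline by cross-correlating rotation-rate signals from the two sensors. The sketch below illustrates this idea (a simplified illustration assuming uniform sampling, not the online algorithm to be developed):

```python
import numpy as np

def estimate_time_offset(t, gyro_speed, cam_speed, max_shift_s=0.2):
    """Estimate the camera-IMU time offset by cross-correlating the
    angular speed measured by the gyroscope with the rotation rate
    estimated from consecutive camera frames, both resampled onto the
    uniform timeline `t`. Returns the shift (in seconds) that maximizes
    the correlation between gyro_speed(t + shift) and cam_speed(t)."""
    dt = t[1] - t[0]
    max_shift = int(max_shift_s / dt)
    g = gyro_speed - gyro_speed.mean()   # zero-mean for correlation
    c = cam_speed - cam_speed.mean()
    best_shift, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            corr = np.dot(g[s:], c[:len(c) - s])
        else:
            corr = np.dot(g[:s], c[-s:])
        corr /= len(c) - abs(s)          # normalize by overlap length
        if corr > best_corr:
            best_corr, best_shift = corr, s
    return best_shift * dt
```

An online method would instead include the offset as a state in the estimator, but this correlation baseline is useful for validating it.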

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Benchmarking camera control for visual odometry - Available

Description: There are many existing datasets to evaluate the performance of visual odometry algorithms. However, little work has been done on providing a principled way to benchmark camera control (exposure time/gain) algorithms, which have a large impact on the performance of visual odometry. Most current datasets and benchmark tools simply contain images captured with a fixed camera configuration, which is not suitable for this purpose. A proper benchmark tool can fill this gap and will be useful for understanding the strengths and weaknesses of different algorithms.

Goal: The goal of this project is to make use of both synthetic and real data to build a benchmark tool and evaluate the influence of different camera control algorithms on the performance of visual odometry.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Online loop detection and closing - Available

Description: Loop detection and closing techniques help robots recognize previously visited places. They can be used to recover from odometry failures as well as to reduce drift. However, it is still challenging to run such algorithms in real time on an embedded system. The goal of this project is to integrate a loop detection and closing algorithm into our visual odometry frontend. We aim to run the algorithm using only onboard computing resources and enable our robot to localize reliably. The applicant is required to be proficient in C++ programming and have computer vision knowledge.

Goal: The project aims to develop loop detection and closing algorithms suitable for embedded systems.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch) Titus Cieslewski (titus at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Motion-aware camera control - Available

Description: It is well known that a camera needs a proper exposure time and gain to work well in practice. In the case of visual odometry, however, the motion of the camera also needs to be considered: if the exposure time is too long, motion blur will degrade the image. Therefore, the camera control algorithm should also take the camera motion into consideration to optimize the performance of visual odometry.

Goal: The goal of this project is to develop a camera control algorithm that is aware of the camera motion and can adjust the exposure time considering both motion blur and the scene brightness.
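For intuition, a toy version of such a controller might trade exposure time against gain whenever the predicted motion blur (optical-flow speed times exposure time) exceeds a budget. The update rule and all constants below are purely illustrative:

```python
def motion_aware_exposure(brightness_error, exposure_ms, gain,
                          flow_px_per_ms, max_blur_px=2.0, max_gain=16.0):
    """Toy motion-aware auto-exposure rule:
    1. Adjust exposure to drive image brightness toward a target
       (brightness_error > 0 means the image is too dark).
    2. Cap the exposure so that predicted motion blur
       (flow speed * exposure time) stays below max_blur_px.
    3. Compensate brightness lost to the cap by raising the gain,
       keeping total sensitivity (exposure * gain) roughly constant."""
    desired = exposure_ms * (1.0 + 0.5 * brightness_error)
    blur_cap = max_blur_px / max(flow_px_per_ms, 1e-6)
    new_exposure = min(desired, blur_cap)
    new_gain = min(max_gain, gain * desired / new_exposure)
    return new_exposure, new_gain
```

When the camera moves fast, the blur cap dominates and the controller accepts a noisier (high-gain) but sharper image; when it is slow, exposure alone tracks the brightness target.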

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Communication for a (Real) Group of Robots - Available

Description: One of our visions for the future is to deploy a group of robots in an unknown environment and have them create a map of that environment. However, a key obstacle to doing this in the real world is communication. Although there are many theoretical solutions, it is a very common theme in the community that deploying them in practice is hard.

Goal: In this work, you will approach this problem from a strictly practical perspective. You will operate on the concrete case of an experiment where a group of flying robots is to be deployed in the offices of our lab. You will start with the simple approach of a well-placed router, evaluate what we can afford in that scenario, and determine what factors impact reliability. Once that approach is well explored, you will consider more advanced approaches, such as one of the robots providing a wifi hotspot for the others, or using specialized hardware like Zigbee. An interesting sub-project would be to create a communication map ( https://tinyurl.com/y7g5qmfc ) of the offices.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Required skills: Linux, ability to learn autonomously, a sense for the practical.

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Simulating decentralized multi-robot SLAM - Available

Description: We have recently developed decentralized multi-robot visual place recognition (neural network based) and SLAM and demonstrated them on well-known datasets ( http://rpg.ifi.uzh.ch/docs/arXiv17_Cieslewski.pdf ). We want to take this work one step further and deploy it in the real world with a group of quadrotors. Since this is logistically quite an effort, a first step will be to validate the full system (SLAM, but also obstacle avoidance and control) in simulation.

Goal: In this project, you will simulate a scenario where a group of quadrotors explores and maps an unknown environment. We will start with a simplistic simulation and gradually increase its complexity. Axes along which the complexity can be increased: from manual camera placement to using a full control stack, from rendering camera frames in Gazebo to rendering them in a more photorealistic simulator, from random motion with reactive obstacle avoidance to active exploration, from a few robots to many robots, …

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Required skills: Linux, experience in ROS or a very strong ability to learn, C++/Python.

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Trajectory estimation and scene reconstruction from any YouTube video! - Available

Description: We believe that with the right processing, it should be possible to obtain trajectory estimates and scene reconstructions from many of the videos that are already out there on the internet! This could have nice applications: two that we would be quite passionate about are a) visualizing FPV races ( https://www.youtube.com/watch?v=EcLk_uZe33w ) and b), more practical for the community, creating new robotics datasets with little effort.

Goal: Go from YouTube videos to trajectory estimation and scene reconstruction. The more general, the better. Of course, you will start with the simplest possible approach (slow motion, 360° videos, which theoretically need no calibration) and then increase the complexity. While this will likely involve plenty of engineering, hacks and qualitative evaluation, it might, with the right approach, also lead to interesting research (e.g., how to deal with motion blur in tracking, auto-calibration, potential to apply machine learning, etc.).

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Appreciated skills: Linux, ROS, good programming skills, ideally in Matlab or Python. Preference will be given to students who took the Vision Algorithms for Mobile Robots class!

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Pushing hard cases in tag detection with a CNN - Available

Description: Visual Tags such as April or Aruco tags are nowadays detected with a handcrafted algorithm. This algorithm has its limitations in special cases, such as when the tag is far away from the camera, when the tag is partially occluded or when a camera with high distortion is used.

Goal: In this project, you will train a CNN to handle these special cases. We will first brainstorm a meaningful architecture that will allow a CNN to complement classical tag detection in the most effective way. You will then figure out the most effective way to create meaningful training data (hybrid of synthetic and real data?). Finally, you will use that data to train the desired detector.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Required skills: Linux, Python, ability to read C++ code. Desirable skill: Tensorflow or similar.

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Data-Efficient Decentralized Bundle Adjustment (Map Optimization) - Available

Description: In state-of-the-art decentralized mapping methods, optimization (correcting odometry drift) is typically done using pose graph optimization, because a pose graph is a very compact representation. Unfortunately, this compression limits precision and robustness. Bundle Adjustment is a map optimization method for visual maps that is much more precise and robust, but also much more data intensive.

Goal: In this work, you will figure out a way to achieve the superior precision of Bundle Adjustment while minimizing the amount of data that needs to be exchanged between robots in a decentralized setting.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Required skills: Matlab or C++, with a preference for the latter. Desirable: Background in optimization (Nonlinear least squares, Gauss-Newton or similar)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Visual Bundle Adjustment with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to improve an existing event-camera visual odometry pipeline by designing and integrating a visual bundle adjustment module in order to reduce drift. A good theoretical background in computer vision is necessary to undertake this project. The candidate is also expected to be comfortable with C++.

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

A real-time Event Camera Simulator - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Recently, a variety of new algorithms have emerged that use this sensor for various vision tasks, such as fast object detection, visual odometry, or depth estimation. However, the performance of these methods is usually not extensively assessed, due to the lack of ground-truth data. To fill this gap, the goal of this project is to extend and improve an existing event camera simulator developed in our lab.

Goal: The two major objectives of this project are: 1. integrate a real-time rendering engine (raw OpenGL, Unity, Unreal Engine?) into our simulator in order to provide a real-time simulated event stream; 2. implement a realistic Inertial Measurement Unit (IMU) simulator.
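For reference, the core of any event camera simulator is the idealized DVS pixel model: a pixel fires an event whenever its log intensity changes by more than a contrast threshold C. A frame-based sketch of this model (simplified; our simulator and other real ones interpolate between frames to obtain accurate per-event timestamps, which this sketch does not):

```python
import numpy as np

def events_from_frames(frames, timestamps, contrast_threshold=0.15):
    """Idealized DVS model: for each new frame, compare the per-pixel
    log intensity against a reference level; every full threshold
    crossing emits one event (t, x, y, polarity). Each event is stamped
    with the new frame's time (no sub-frame interpolation)."""
    eps = 1e-3  # avoid log(0)
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_img = np.log(frame.astype(np.float64) + eps)
        diff = log_img - log_ref
        n_events = np.floor(np.abs(diff) / contrast_threshold).astype(int)
        ys, xs = np.nonzero(n_events)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.extend([(t, x, y, pol)] * n_events[y, x])
            # Advance the reference level by the "consumed" contrast;
            # sub-threshold residuals stay pending for future frames.
            log_ref[y, x] += pol * n_events[y, x] * contrast_threshold
    return events
```
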

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch). The expected candidate should have a background in computer graphics and be comfortable programming in C++. Previous experience with 3D software or rendering engines such as Unity or Unreal Engine is a must.

Thesis Type: Semester Project / Bachelor Thesis

See project on SiROP

Optical Flow Estimation with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to use event cameras to compute the optical flow in the image plane induced by either a moving camera in a scene or by moving objects with respect to a static event camera. Several existing methods as well as proposed new ones will be analyzed, implemented and compared. A successful candidate is expected to be familiar with state-of-the-art optical flow methods for standard cameras. This is a project with considerable room for creativity, for example in applying the ideas from low-level vision or ideas driving optical flow methods for standard cameras to the new paradigm of event-based vision. Experience in coding image processing algorithms in C++ is required.
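One family of existing methods fits local planes to the "time surface" of events (the per-pixel map of latest event timestamps): the slope of the fitted plane yields the normal flow. A minimal sketch of this idea, with hypothetical inputs (event coordinates and timestamps from a small spatial neighborhood):

```python
import numpy as np

def plane_fit_flow(patch_xs, patch_ys, patch_ts):
    """Local plane fitting on the event time surface: fit
    t = a*x + b*y + c to the event timestamps in a neighborhood.
    The normal flow points along the time-surface gradient (a, b),
    with speed 1 / |(a, b)| (e.g., pixels per second if t is in
    seconds)."""
    A = np.column_stack([patch_xs, patch_ys, np.ones(len(patch_xs))])
    (a, b, c), *_ = np.linalg.lstsq(A, patch_ts, rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return np.zeros(2)           # flat time surface: no measurable motion
    return np.array([a, b]) / g2
```

Note that, as with any local method, only the flow component normal to the moving edge is observable (the aperture problem), which is one of the challenges the project would address.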

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Event Camera Characterization - Available

Description: Event cameras such as the Dynamic and Active Pixel Vision Sensor (DAVIS, http://inilabs.com/products/dynamic-and-active-pixel-vision-sensor/ ) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Despite the successful demonstration of the sensor on several problems in computer vision and robotics, a comprehensive characterization of the sensor for such high-level applications is still missing.

Goal: The goal of this project is to characterize various aspects of these novel sensors, such as: event noise characteristics (distribution and spectral density), contrast threshold (relation to bias settings; variability both spatially, across pixels, and photometrically, with respect to the scene illumination), non-linearities, etc. Additionally, the images and IMU measurements provided by the DAVIS also require an integrated characterization. A successful completion of the project will lead to a better understanding of the potential, limitations and impact of these sensors on the design of novel algorithms for computer vision and robotics. The expected candidate should have a background in instrumentation, electrical engineering (to understand the principle of operation of the DAVIS pixels) and random processes. This project involves close collaboration with the Institute of Neuroinformatics (INI) at UZH-ETH.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester Project / Bachelor Thesis

See project on SiROP

Building a high-speed camera! Learning Image reconstruction with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The output of an event camera is a sparse stream of events that encode only light intensity changes - in other terms, a highly compressed version of the visual signal.

Goal: The goal of this project is to turn an event camera into a high-speed camera, by designing an algorithm to recover images from the compressed event stream. Inspired by a recent approach, you will train a machine learning algorithm (or neural network) to learn how to reconstruct an image from the noisy event stream. The first part of the project will consist of acquiring training data, using both simulation and real event cameras. The second part will consist of designing and training a suitable machine learning algorithm to solve the problem. Finally, the algorithm will be compared against state-of-the-art image reconstruction algorithms. The expected candidate should have some background in both machine learning and computer vision (or image processing).
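As a trivial non-learned baseline against which a learned method could be compared, one can directly integrate the signed contrast steps encoded by the events. This accumulates sensor noise and drift over time, which is precisely what a learned approach should handle better:

```python
import numpy as np

def integrate_events(events, resolution, contrast_threshold=0.15):
    """Naive baseline: under the ideal DVS model, each event means the
    log intensity at its pixel changed by +/- the contrast threshold.
    Summing these steps per pixel and exponentiating gives a (noisy,
    drifting) intensity image relative to the unknown initial frame."""
    h, w = resolution
    log_img = np.zeros((h, w))
    for t, x, y, pol in events:
        log_img[y, x] += pol * contrast_threshold
    return np.exp(log_img)
```
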

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

A Visual-Inertial Odometry System for Event-based Vision Sensor - Available

Description: Event-based cameras are recent revolutionary sensors with large potential for high-speed and low-power robotic applications. The goal of this project is to develop a visual-inertial pipeline for the Dynamic and Active Pixel Vision Sensor (DAVIS). The system will estimate the pose of the DAVIS using the event stream and IMU measurements delivered by the sensor. Filtering approaches as well as batch optimization methods will be investigated. https://youtu.be/bYqD2qZJlxE http://www.inilabs.com/products/davis

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Continuous Structure From Motion (SfM) with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to explore the possibilities that continuous structure from motion (SfM) has to offer for event cameras. In the continuous formulation, visual egomotion methods attempt to estimate camera motion and scene parameters (depth of 3D points) from observed local image velocities such as optical flow. This formulation is appropriate for small-baseline displacements, which is the scale at which events are fired by a moving DVS in a static scene. Several ideas from classical and new methods will be taken into account to address the challenges that the fundamentally different output of a DVS poses to the SfM problem. The expected candidate should have good theoretical as well as programming skills to undertake this project.
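For reference, the continuous formulation relates image velocities to camera motion through the classical motion field equations (Longuet-Higgins/Prazdny). A sketch in the usual normalized-coordinate convention (sign conventions may differ between sources):

```python
import numpy as np

def motion_field(x, y, depth, v, w):
    """Instantaneous image velocity at normalized coordinates (x, y)
    induced by camera linear velocity v = (vx, vy, vz) and angular
    velocity w = (wx, wy, wz), for a scene point at depth Z. The
    translational part depends on depth; the rotational part does not,
    which is what makes depth and translation separable in continuous
    SfM."""
    vx, vy, vz = v
    wx, wy, wz = w
    u_t = (-vx + x * vz) / depth                  # translational part
    v_t = (-vy + y * vz) / depth
    u_r = x * y * wx - (1 + x * x) * wy + y * wz  # rotational part
    v_r = (1 + y * y) * wx - x * y * wy - x * wz
    return np.array([u_t + u_r, v_t + v_r])
```

In the event-based setting, the observed local image velocities on the left-hand side would come from event data (e.g., event-based optical flow) rather than from frame differencing.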

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Hand-Eye Calibration Toolbox - Available

Description: Hand-Eye calibration is a paramount pre-processing stage of many robotic and augmented reality applications, where the knowledge of the relative transformation between different sensors (e.g. a camera and a head-mounted display) is required to have an accurate geometric representation of the scene.

Goal: The goal of this project is to develop a user-friendly hand-eye calibration toolbox integrated with our robotic system. The toolbox will contain existing and novel hand-eye calibration methods, and it will allow the user to visualize the results of the different methods in an integrated manner to improve the understanding of the quality of the processed dataset, paying special attention to error estimates, uncertainties and the detection of inconsistent data.
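For background, classical hand-eye calibration solves AX = XB for the unknown sensor-to-sensor transform X, given pairs of relative motions (A_i, B_i) of the two sensors. A sketch of the rotation part of the standard two-step approach (Tsai-Lenz style: the rotation axes of each pair must satisfy a_i = R_X b_i, so R_X follows from an SVD-based axis alignment):

```python
import numpy as np

def rotation_axis(R):
    """Axis-angle vector (axis scaled by angle) of a rotation matrix,
    read off the skew-symmetric part (valid away from 0 and pi)."""
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * v / (2 * np.sin(angle))

def handeye_rotation(As, Bs):
    """Rotation part of AX = XB: align the two sets of rotation axes
    with the Kabsch/SVD method. As, Bs: lists of 4x4 relative motions
    (at least two pairs with non-parallel rotation axes are needed)."""
    a = np.stack([rotation_axis(A[:3, :3]) for A in As])
    b = np.stack([rotation_axis(B[:3, :3]) for B in Bs])
    H = b.T @ a                                     # 3x3 axis correlation
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # enforce det = +1
    return Vt.T @ D @ U.T
```

The translation part is then obtained from a linear least-squares system once the rotation is known; a toolbox would additionally report residuals and uncertainties to flag inconsistent motion pairs.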

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Integrated Multi-Camera Calibration Toolbox - Available

Description: This project concerns a calibration toolbox that is expected to handle different camera brands, projection models and calibration patterns. In the multi-sensor scenario, the toolbox is also expected to compute the temporal offsets between the sensors. Special attention will be given to the estimation of error measures, parameter uncertainties, detection of inconsistent data, and interactive guidance of data acquisition.

Goal: The goal of this project is to develop a user-friendly, single and multi-camera calibration toolbox adapted to our robotic system. The toolbox will integrate existing calibration software in our group and in other libraries and will provide user-friendly reports of the different stages to assess the quality of the processed dataset, thus speeding up and improving the understanding of the whole sensor calibration stage.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Build an Indoor Visual Positioning System - Available

Description: Visual-inertial odometry/SLAM has come to the level of maturity to be used in real-world applications. In this project, we are interested in using this technology to determine the motion of a robot reliably in an indoor environment, which requires an integration of different algorithms. Ideally, this project will lead to an indoor visual positioning system with potential applications such as augmented reality. This project is a joint work with Huawei.

Goal: The goal of this project is to integrate a state-of-the-art visual-inertial odometry algorithm and a place recognition module into an accurate and robust visual positioning system for a room-like environment. It is desired that the system scales to different computational platforms (i.e., works for both laptops and embedded systems) to suit different applications.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch) Required skills: Linux, C++, hands-on experience in visual odometry/SLAM, experience in ROS is a plus.

Thesis Type: Master Thesis

See project on SiROP