Student Projects


How to apply

To apply, please send your CV and your BSc and MSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Flight Trajectory Modeling for Human-Piloted Drone Racing - Available

Description: In drone racing, human pilots navigate quadrotor drones as quickly as possible through a sequence of gates arranged in a 3D track. There are many possible trajectories linking the gates, among which pilots have to choose. This project aims to identify the most common and most efficient flight trajectories used by human pilots. The student will collect flight trajectory data for various tracks from human pilots using a drone racing simulator. The student will analyze 3D trajectories and motion kinematics to identify trajectories that achieve the fastest lap times most consistently. Finally, the student will compare flight trajectories from human pilots to trajectories from a minimum-time planner used for autonomous navigation. Requirements: Experience in computer vision and machine learning; ability to code in Python/Matlab, Linux, C++, ROS; experience in 3D kinematic analysis is a plus.
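A first analysis step of the kind described, computing lap time and speed statistics from logged trajectory samples, might be sketched as follows. The log format (time plus 3D position tuples) is an illustrative assumption, not the simulator's actual format.

```python
# Minimal sketch: lap-time and kinematic summaries from logged trajectory
# samples. The (t, x, y, z) tuple layout is an assumption for illustration.
import math

def summarize_lap(samples):
    """samples: list of (t, x, y, z) tuples ordered by time."""
    lap_time = samples[-1][0] - samples[0][0]
    speeds = []
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        dist = math.dist(p0, p1)          # Euclidean step length
        speeds.append(dist / (t1 - t0))
    return {"lap_time": lap_time,
            "mean_speed": sum(speeds) / len(speeds),
            "peak_speed": max(speeds)}

# Example: a drone moving along x at a constant 10 m/s for 2 s.
lap = [(0.1 * i, 1.0 * i, 0.0, 0.0) for i in range(21)]
print(summarize_lap(lap))
```

Statistics like these, aggregated over many pilots and laps, would support the comparison of consistency and efficiency across trajectory choices.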

Goal: Extend knowledge about flight trajectory planning in human-piloted drone racing.

Contact Details: Please send your CV and transcripts (bachelor and master) to: Christian Pfeiffer (cpfeiffe (at) ifi (dot) uzh (dot) ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Develop a Research-Grade Flight Simulator for Human-Piloted Drone Racing - Available

Description: The goal of this project is to develop a simulator for research on human-piloted drone racing. The student will integrate an existing drone racing simulator with custom software packages (ROS or Python) for logging drone state in 3D (i.e., position, rotation, velocity, acceleration), camera images, and control commands (i.e., thrust, yaw, pitch, roll). The student will create a custom GUI for changing quadrotor settings (i.e., weight, motor thrust, camera angle, rate profiles) and race track layouts (i.e., number of gates; gate size, position, and static-vs-moving type; track shape and illumination). Finally, the features and performance of the integrated simulator will be compared to existing commercial and research-grade simulators. Requirements: Strong programming skills in Python, C++, and C#; experience with Linux and ROS. Experience in Unity3D or Unreal Engine is a plus.
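The logging component described above could be sketched, independently of ROS, as a small buffer of timestamped state and command samples that serializes to CSV. The field names and CSV layout are assumptions for illustration; a real integration would subscribe to the simulator's topics instead.

```python
# Sketch of a flight logger: buffer timestamped drone state and control
# commands, then dump them to CSV. Field names are illustrative.
import csv, io
from dataclasses import dataclass, asdict

@dataclass
class StateSample:
    t: float
    x: float; y: float; z: float                          # position [m]
    thrust: float; roll: float; pitch: float; yaw: float  # commands

class FlightLogger:
    def __init__(self):
        self.samples = []
    def log(self, sample: StateSample):
        self.samples.append(sample)
    def to_csv(self) -> str:
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=list(asdict(self.samples[0])))
        writer.writeheader()
        for s in self.samples:
            writer.writerow(asdict(s))
        return buf.getvalue()

logger = FlightLogger()
logger.log(StateSample(0.0, 0.0, 0.0, 1.0, 0.6, 0.0, 0.0, 0.0))
logger.log(StateSample(0.1, 0.0, 0.0, 1.2, 0.7, 0.0, 0.1, 0.0))
print(logger.to_csv())
```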

Goal: The developed software package will be used for human-subjects research on first-person-view drone racing.

Contact Details: Please send your CV and transcripts (bachelor and master) to: Christian Pfeiffer (cpfeiffe (at) ifi (dot) uzh (dot) ch), Yunlong Song (song (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

Visual Processing and Control in Human-Piloted Drone Racing - Available

Description: In drone racing, human pilots use visual information from a drone-mounted camera to select the control commands they send to the drone via a remote controller. It is currently unknown how humans process visual information during fast and agile drone flight and how visual processing affects their choice of control commands. To answer these questions, this project will collect eye-tracking and control-command data from human pilots using a drone racing simulator. The student will use statistical modeling and machine learning to investigate the relationship between eye movements, control commands, and drone state. Requirements: Background in computer vision and machine learning, solid programming experience in Python; experience in eye-tracking and human subjects research is a plus (not mandatory).
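One of the statistical analyses mentioned, relating eye movements to control commands, might start as simply as an ordinary least-squares regression. The variables here (gaze yaw driving commanded roll) and the coupling strength are illustrative assumptions on synthetic data, not findings.

```python
# Sketch: regressing a control command on a gaze signal with least squares.
import numpy as np

rng = np.random.default_rng(0)
gaze_yaw = rng.uniform(-0.5, 0.5, size=200)            # rad, synthetic
roll_cmd = 1.8 * gaze_yaw + rng.normal(0, 0.01, 200)   # assumed coupling

# Design matrix with an intercept column.
X = np.column_stack([gaze_yaw, np.ones_like(gaze_yaw)])
(slope, intercept), *_ = np.linalg.lstsq(X, roll_cmd, rcond=None)
print(f"slope={slope:.2f}, intercept={intercept:.3f}")
```

On real pilot data the interesting questions would be which gaze features predict which commands, and at what time lag.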

Goal: Extend knowledge about visual processing and control in human-piloted drone racing.

Contact Details: Please send your CV and transcripts (bachelor and master) to: Christian Pfeiffer (cpfeiffe (at) ifi (dot) uzh (dot) ch), Antonio Loquercio (loquercio (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Reinforcement learning for 3D surgery planning of Femoral Head Reduction Osteotomy (in collaboration with Balgrist hospital) - Available

Description: Morbus Legg-Calvé-Perthes is a paediatric disorder of the lower extremities, causing deformities of the femoral head. Surgical treatment for this bone deformity can be achieved by a procedure known as femoral head reduction osteotomy (FHRO), which involves the resection of a wedge from the femoral head to restore the function of the joint. The preoperative planning of this procedure is a complex three-dimensional (3D) optimization problem involving more than 20 degrees of freedom (DoF), as it comprises the calculation of the surgical cuts and the repositioning of the resected fragment to the desired anatomical position. This process is currently done manually in collaboration between engineers and surgeons.

Goal: In the course of this master thesis, you will help us to improve our current surgery planning methods by developing an approach to predict the repositioning of the fragment and the pose of the cutting planes defining the bone wedge. The objective of this master thesis is to apply deep (reinforcement) learning techniques to automatically find an optimal solution for the preoperative planning of FHRO. We will start by solving a simplified version of the optimization problem, with a reduced DoF involving only the calculation of the bone fragment repositioning, and we will gradually increase the DoF and the complexity of the task. This project is part of a larger framework, which is currently under development in our clinic for optimal surgical outcomes. (The student will mainly work at the Balgrist CAMPUS) **Requirements:** Hands-on experience in reinforcement learning, deep learning. Strong coding skills in Python. Experience in mathematical optimization and spatial transformation is a plus.
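To make the reduced-DoF subproblem concrete: finding the rigid transform that repositions a fragment onto a desired anatomical target has a classical closed-form solution on point sets (the Kabsch algorithm), sketched below on toy 3D points. The thesis would instead learn such mappings under the full, non-closed-form planning problem; the point sets here are illustrative.

```python
# Sketch: closed-form rigid alignment (Kabsch) of a toy "fragment" point set
# onto its target pose. Real planning involves many more DoF than this.
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing ||R @ p_i + t - q_i|| over points."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy fragment: rotate a point cloud 30 degrees about z and translate it.
rng = np.random.default_rng(1)
frag = rng.normal(size=(10, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
target = frag @ R_true.T + np.array([5.0, 0.0, 2.0])

R, t = kabsch(frag, target)
print(np.allclose(R, R_true))  # True: rotation recovered
```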

Contact Details: Yunlong Song (song@ifi.uzh.ch), Joelle Ackermann (joelle.ackermann@balgrist.ch), Prof. Philipp Fuernstahl (philipp.fuernstahl@balgrist.ch)

Thesis Type: Master Thesis

See project on SiROP

Machine Learning for Feature-tracking with Event Cameras - Available

Description: The full project description is visible only to logged-in SiROP users. Follow the "See project on SiROP" link below and log in with your university account to view it.

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Bringing Thermal Cameras into Robotics - Available

Description: Thermographic cameras can capture detailed images regardless of ambient lighting conditions. They use infrared (IR) sensing to map heat variations within the sensor's range and field of view, providing movement detection and hot-spot mapping even in total darkness. The visible range covers wavelengths of approximately 400-700 nanometres (nm), whereas thermographic cameras generally sample thermal radiation from the longwave infrared range (approximately 7,000-14,000 nm), which has great potential in robotics. Thermography images are useful, for example, to identify weak points on power lines, along cables, and on insulators or containers. However, current lightweight thermal cameras remain largely unexplored: their limited pixel resolution (e.g., 32x32 pixels) cannot deliver the sensitivity, resolution, and image quality needed for meaningful applications. This work aims to expand the frontiers of computer vision by using thermographic cameras and investigating their application in robotics, i.e., perception, state estimation, and path planning. The project will combine traditional computer vision techniques with deep-learning approaches to bring thermography images into the field of robotics. Requirements: Background in computer vision and machine learning - Deep learning experience preferable - Excellent programming experience in C++ and Python
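The hot-spot mapping mentioned above can be illustrated on a low-resolution (32x32) thermal frame of the kind current lightweight sensors deliver. The temperature threshold and the synthetic frame are illustrative assumptions.

```python
# Sketch: hot-spot detection on a synthetic 32x32 thermal frame.
import numpy as np

def hot_spots(frame_c, threshold_c=60.0):
    """Return (row, col) indices of pixels above a temperature threshold."""
    return np.argwhere(frame_c > threshold_c)

frame = np.full((32, 32), 20.0)   # ambient ~20 deg C
frame[10:12, 5:7] = 85.0          # simulated hot joint on a power line
spots = hot_spots(frame)
print(len(spots))  # 4 hot pixels
```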

Goal: Perception, state estimation or path planning using thermographic cameras.

Contact Details: Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Probabilistic System Identification of a Quadrotor Platform - Available

Description: Most planning & control algorithms used on quadrotors make use of a nominal model of the platform dynamics to compute feasible trajectories or generate control commands. Such models are derived using first principles and typically cannot fully capture the true dynamics of the system, leading to sub-optimal performance. One appealing approach to overcome this limitation is to use Gaussian Processes for system modeling. Gaussian Process regression has been widely used in supervised machine learning due to its flexibility and inherent ability to describe uncertainty in the prediction. This work investigates the usage of Gaussian Processes for uncertainty-aware system identification of a quadrotor platform. Requirements: - Machine learning experience preferable but not strictly required - Programming experience in C++ and Python
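A minimal version of the idea, a Gaussian Process capturing a residual of the nominal dynamics together with its predictive uncertainty, can be written directly in NumPy with an RBF kernel. The 1D "unmodeled drag" residual, kernel hyperparameters, and data here are illustrative; a real project would likely use a GP library and multi-dimensional state inputs.

```python
# Sketch: GP regression of a residual acceleration as a function of velocity.
import numpy as np

def rbf(A, B, ls=0.5, var=1.0):
    d2 = (A[:, None] - B[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

# Toy "unmodeled drag": residual acceleration vs. velocity, with sensor noise.
rng = np.random.default_rng(0)
v_train = rng.uniform(-2, 2, 30)
a_resid = -0.3 * v_train * np.abs(v_train) + rng.normal(0, 0.01, 30)

noise = 1e-4
K = rbf(v_train, v_train) + noise * np.eye(30)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, a_resid))

v_test = np.array([1.0])
k_star = rbf(v_test, v_train)
mean = k_star @ alpha                      # posterior mean prediction
var = rbf(v_test, v_test) - k_star @ np.linalg.solve(
    L.T, np.linalg.solve(L, k_star.T))    # posterior variance
print(float(mean), float(var))  # mean should be close to the true -0.3
```

The predictive variance is what makes the approach appealing for uncertainty-aware planning and control.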

Goal: Implement an uncertainty-aware model of the quadrotor dynamics, train and evaluate the model on simulated and real data.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based Feature Tracking on an Embedded Platform - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications, such as fast obstacle avoidance. In particular, event cameras can be used to track features or objects in the blind time between two frames which makes it possible to react quickly to changes in the scene.

Goal: In this project we want to deploy an event-based feature tracking algorithm on a resource constrained platform such as a drone. Applicants should have a strong background in C++ programming and low-level vision. Experience with embedded programming is a plus.
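A toy version of the tracking idea, updating a feature location from events arriving in the blind time between frames, is sketched below. The event tuple format, update rule, and window radius are illustrative assumptions; a deployed tracker would be far more careful (and written in C++).

```python
# Sketch: update a feature estimate by following the centroid of nearby
# events. Events are (t, x, y, polarity) tuples; all values are synthetic.
def track_feature(events, feat, radius=3.0):
    x, y = feat
    for _, ex, ey, _ in sorted(events):
        if (ex - x) ** 2 + (ey - y) ** 2 <= radius ** 2:
            # move the estimate a small step toward each nearby event
            x += 0.5 * (ex - x)
            y += 0.5 * (ey - y)
    return x, y

# A feature at (10, 10); events drift toward (12, 10) as the object moves.
events = [(t, 10 + 0.2 * t, 10.0, 1) for t in range(11)]
print(track_feature(events, (10.0, 10.0)))
```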

Contact Details: Daniel Gehrig (dgehrig (at) ifi.uzh.ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

MPC for high speed trajectory tracking - Available

Description: Many model predictive control algorithms exist for quadrotor trajectory tracking, each with its own implementation advantages and disadvantages. This thesis should identify the main factors influencing high-speed/high-precision trajectory tracking, such as model accuracy, aerodynamic force modelling, execution speed, underlying low-level controllers, sampling times and sampling strategies, and noise sensitivity, or even come up with a novel implementation.
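One of the listed influence factors, model accuracy, can be illustrated with a toy rollout comparison: a nominal drag-free 1D model accumulates prediction error against a "true" system with aerodynamic drag, while a model that includes the drag term does not. All parameters are illustrative.

```python
# Sketch: effect of model accuracy on open-loop prediction error (1D).
def rollout(v0, thrust, drag, steps=100, dt=0.01):
    """Forward-Euler velocity rollout with quadratic drag."""
    v = v0
    for _ in range(steps):
        v += dt * (thrust - drag * v * abs(v))
    return v

true_v = rollout(0.0, 10.0, drag=0.05)   # "real" platform with drag
nominal = rollout(0.0, 10.0, drag=0.0)   # drag-free nominal model
modeled = rollout(0.0, 10.0, drag=0.05)  # model including drag
print(abs(nominal - true_v) > abs(modeled - true_v))  # True
```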

Goal: The end goal of the thesis should be a comparison of the influence factors and, based on that, a recommendation for or even implementation of an improved solution.

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Generation of Fast or Time-Optimal Trajectories for Quadrotor Flight - Available

Description: With the rise of complex control and planning methods, quadrotors are capable of executing astonishing maneuvers. While generating trajectories between two known poses or states is relatively simple, planning through multiple waypoints is rather complicated. The most challenging instance of this problem is the task of flying as fast as possible through multiple gates, as done in drone racing. While humans can perform such fast racing maneuvers at extreme speeds of more than 100 km/h, algorithms struggle with even planning such trajectories. Within this thesis, we want to research methods to generate such fast trajectories and work towards a time-optimal planner. This requires prior knowledge of at least some of the following topics: planning for robots, optimization techniques, model predictive control, RRT, and quadrotors or UAVs in general. The tasks will range from problem analysis, approximation, and solution concepts to implementation and testing in simulation with existing software tools.
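A crude baseline for the problem above: for a rest-to-rest straight-line segment under a symmetric acceleration bound, the minimum time is the bang-bang solution t = 2*sqrt(d / a_max). Chaining segments between gates gives a very conservative lap-time lower bound (the real planner must handle nonzero gate-crossing speeds and full quadrotor dynamics). Waypoint coordinates and the acceleration limit are illustrative.

```python
# Sketch: conservative per-segment bang-bang lap time between gates.
import math

def bang_bang_time(d, a_max):
    """Minimum rest-to-rest time over distance d with |a| <= a_max."""
    return 2.0 * math.sqrt(d / a_max)

gates = [(0, 0, 0), (10, 0, 0), (10, 10, 2), (0, 10, 2)]  # toy track
a_max = 20.0  # m/s^2, an assumed acceleration limit
total = sum(bang_bang_time(math.dist(p, q), a_max)
            for p, q in zip(gates, gates[1:]))
print(round(total, 2))
```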

Goal: The goal is to analyse the planning problem, develop approximation techniques, and solve it as close to time-optimally as possible during the thesis.

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Data-Driven Visual Inertial Odometry for Quadrotor Flight - Available

Description: Classical VIO pipelines use geometric information to infer the ego-motion of the camera and couple this information with measurements from the IMU. While these pipelines have shown very good performance in controlled, structured environments, their performance decreases in low-texture or dynamic environments or under high-speed motion. Recent works propose data-driven approaches for camera ego-motion estimation. While such approaches could potentially learn a VIO pipeline end-to-end, their generalizability is not yet good enough for real-world deployment. This work investigates a hybrid VIO pipeline featuring a learned visual frontend. Requirements: - Background in computer vision and machine learning - Deep learning experience preferable but not strictly required - Programming experience in C++ and Python

Goal: Based on results from a previous student project, the goal is to deploy a hybrid VIO pipeline on a quadrotor equipped with a GPU (Jetson TX2).

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch); Philipp Foehn (foehn@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning-Guided MPC Flight - Available

Description: Model predictive control (MPC) is a versatile optimization-based control method that allows constraints to be incorporated directly into the control problem. Its advantages can be seen in its ability to accurately control dynamical systems with large time delays and high-order dynamics. Recent advances in compute hardware make it possible to run MPC even on compute-constrained quadrotors. While model predictive control can deal with complex systems and constraints, it still assumes the existence of a reference trajectory. With this project we aim to guide the MPC toward a feasible reference trajectory using a neural network that predicts an expressive intermediate representation directly from camera images. Such tight coupling of perception and control would make it possible to push the speed limits of autonomous flight through cluttered environments. Requirements: - Machine learning experience (TensorFlow and/or PyTorch) - Experience in MPC preferable but not strictly required - Programming experience in C++ and Python

Goal: Evaluate different intermediate representations for autonomous flight. Implement the learned perception system in simulation and integrate the predictions into an existing MPC pipeline. If possible, deploy on a real system.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch) Philipp Föhn (foehn@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Simulation to Real World Transfer - Available

Description: Recent techniques based on machine learning have enabled robotic systems to perform many difficult tasks, such as manipulation or navigation. These techniques are usually very data-intensive and require simulators to generate enough training data. However, a system trained only in simulation (usually) fails when deployed in the real world. In this project, we will develop techniques to maximally transfer knowledge from simulation to the real world and apply them to real robotic systems.
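One standard sim-to-real technique (not necessarily the one this project will settle on) is domain randomization: physics and sensor parameters are resampled per training episode so a learned policy must cover the real system's parameters. The parameter names and ranges below are illustrative assumptions.

```python
# Sketch: per-episode domain randomization of simulator parameters.
import random

def randomized_sim_params(rng):
    """Sample one episode's simulator parameters (illustrative ranges)."""
    return {
        "mass": rng.uniform(0.7, 1.3),       # kg
        "motor_gain": rng.uniform(0.8, 1.2),
        "latency": rng.uniform(0.0, 0.04),   # s
        "cam_tilt": rng.uniform(-5.0, 5.0),  # deg
    }

rng = random.Random(42)
episodes = [randomized_sim_params(rng) for _ in range(1000)]
masses = [e["mass"] for e in episodes]
print(min(masses) >= 0.7 and max(masses) <= 1.3)  # True
```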

Goal: The project aims to develop techniques based on machine learning to have maximal knowledge transfer between simulated and real world on a navigation task.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Unsupervised Obstacle Detection Learning - Available

Description: Supervised learning is the gold standard algorithm for computer vision tasks like classification, detection, or segmentation. However, for several interesting tasks (e.g., moving object detection, depth estimation, etc.), collecting the large annotated datasets required by the aforementioned algorithms is a very tedious and costly process. In this project, we aim to build a self-supervised depth estimation and segmentation algorithm by embedding classic computer vision principles (e.g., brightness constancy) into a neural network. **Requirements**: Computer vision knowledge; programming experience with Python. Machine learning knowledge is a plus but it is not required.
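The brightness-constancy principle the project would embed in a network can be sketched as a photometric loss: warp one image toward another under a candidate motion and score the intensity difference, so the correct motion yields a lower error with no labels needed. The NumPy warp here is a non-differentiable stand-in for the differentiable warp a network would use; the images and motion are synthetic.

```python
# Sketch: photometric (brightness-constancy) error under a candidate flow.
import numpy as np

def photometric_loss(img0, img1, flow_x):
    """Shift img1 horizontally by integer flow_x and compare with img0."""
    warped = np.roll(img1, shift=flow_x, axis=1)
    return float(np.mean((img0 - warped) ** 2))

rng = np.random.default_rng(0)
img0 = rng.random((8, 8))
img1 = np.roll(img0, shift=2, axis=1)  # scene shifted by 2 px

print(photometric_loss(img0, img1, flow_x=-2) <
      photometric_loss(img0, img1, flow_x=0))  # True
```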

Goal: The goal of this project consists of building a perception system which can learn to detect obstacles without any ground truth annotations.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Target following on nano-scale UAV - Available

Description: Autonomous Unmanned Aerial Vehicles (UAVs) have numerous applications due to their agility and flexibility. However, navigation algorithms are computationally demanding, and it is challenging to run them on board nano-scale UAVs (i.e., a few centimeters in diameter). This project focuses on object tracking (i.e., target following) on such nano-UAVs. To do this, we will first train a Convolutional Neural Network (CNN) with data collected in simulation, and then run the aforementioned network on a parallel ultra-low-power (PULP) processor, enabling flight with on-board sensing and computing only. **Requirements**: Knowledge of Python, C++, and embedded programming. Machine learning knowledge is a plus but not strictly required.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning to Deblur Images with Events - Available

Description: Images suffer from motion blur due to long exposure in poor light conditions or rapid motion. Unlike conventional cameras, event cameras do not suffer from motion blur, because they provide events together with the exact time at which they were triggered. In this project, we will make use of hybrid sensors that provide both conventional images and events, so that we can exploit the advantages of both. By the end of this project you will have developed a great amount of experience in event-based vision, deep learning, and computational photography. Requirements: - Background in computer vision and machine learning - Deep learning experience preferable but not strictly required - Programming experience in C++ and Python

Goal: The goal is to develop an algorithm capable of producing a blur-free image from the captured, blurry image and the events within the exposure time. To this end, synthetic data can be generated by our simulation framework, which is able to generate both synthetic event data and motion-blurred images. This data can be used by machine learning algorithms designed to solve the task at hand. At the end of the project, the algorithm will be adapted to perform optimally with real-world data.
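Why this inversion is possible at all can be shown on a single pixel: within the exposure, the latent intensity at each instant relates to a reference value through the integrated events, so the blurry value (the temporal mean of the latent intensities) can be divided out to recover the sharp value. The contrast threshold and numbers below are illustrative toy values for a noise-free pixel.

```python
# Sketch: event-based deblurring on one noise-free pixel.
import math

c = 0.2                      # contrast threshold (log-intensity per event)
I_ref = 100.0                # sharp intensity at the reference time
event_counts = [0, 1, 2, 3]  # cumulative events at 4 sample times

# Forward model: blur is the temporal mean of the latent intensities.
latents = [I_ref * math.exp(c * n) for n in event_counts]
B = sum(latents) / len(latents)

# Inversion: divide the blurry value by the mean event-integral factor.
I_rec = B / (sum(math.exp(c * n) for n in event_counts) / len(event_counts))
print(round(I_rec, 6))  # 100.0 (sharp value recovered)
```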

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Optimization for Spiking Neural Networks - Available

Description: Spiking neural networks (SNNs) are neural networks that process information with the timing of events/spikes rather than numerical values. Together with event cameras, SNNs show promise to lower both latency and computational burden compared to artificial neural networks. In recent years, researchers have proposed several methods to estimate gradients of SNN parameters in a supervised learning context. In practice, many of these approaches rely on assumptions that might not hold in all scenarios. Requirements: - Background in machine learning; especially deep learning - Good programming skills; experience in CUDA is a plus.
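One widely used family of such gradient estimates is the surrogate gradient: the spike nonlinearity is a step function with zero derivative almost everywhere, so the backward pass substitutes a smooth surrogate. The sigmoid surrogate and its slope parameter below are illustrative choices, not the project's prescribed method.

```python
# Sketch: hard spike in the forward pass, smooth surrogate in the backward.
import math

def spike(v, threshold=1.0):
    """Forward pass: hard threshold on membrane potential v."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass stand-in: derivative of a steep sigmoid around threshold."""
    s = 1.0 / (1.0 + math.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

print(spike(0.9), surrogate_grad(0.9))  # no spike, yet a usable gradient
```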

Goal: In this project we explore state-of-the-art optimization methods for SNNs and their suitability to solve the temporal credit-assignment problem. As a first step, an in-depth evaluation of a selection of algorithms is required. Based on the acquired insights, the prospective student can propose improvements and implement their own method.

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP

Designing a New Event Camera with Events and Images - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high dynamic range robotic applications. They have been successfully applied in many applications, such as high speed video and high speed visual odometry. Due to their high speed and high dynamic range, they are a natural complement to standard cameras.

Goal: The goal of this project is to design a new event camera that combines events and standard images.

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Designing an Event Camera for Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high-dynamic-range robotic applications. They have been successfully applied in many applications, such as high speed video and high speed visual odometry. Recently, many new event cameras with ever higher spatial and temporal resolution have been commercialized. However, these developments steadily increase the computational requirements of downstream algorithms, increasing the necessary bandwidth and reducing the time available to process events. In this work we want to find out how important these design parameters are for deep learning applications. Applicants should have experience in coding image processing algorithms in C++ and experience with learning frameworks in Python such as TensorFlow or PyTorch.
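The basic ablation tool such a study needs, re-quantizing an event stream's spatial resolution so the same network can be trained at several effective resolutions, can be sketched in a few lines. The event tuple format and values are illustrative.

```python
# Sketch: spatial downsampling of an event stream by an integer factor.
def downsample_events(events, factor):
    """events: (t, x, y, polarity); merge pixels in factor x factor blocks."""
    return [(t, x // factor, y // factor, p) for t, x, y, p in events]

events = [(0.01, 100, 63, 1), (0.02, 101, 62, -1), (0.03, 7, 5, 1)]
print(downsample_events(events, 2))
# [(0.01, 50, 31, 1), (0.02, 50, 31, -1), (0.03, 3, 2, 1)]
```

Temporal resolution can be degraded analogously by quantizing timestamps, giving the second axis of the study.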

Goal: The goal of this project is to find out how important the design parameters of event cameras, such as spatial and temporal resolution, are for deep learning applications.

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Antonio Loquercio (antonilo (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Learning 3D Reconstruction using an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. In particular, they have been used to generate high speed video and for high speed visual odometry. In this project we want to explore the possibility of using an event camera for asynchronous 3D reconstruction with very high temporal resolution. These properties are critical in applications such as fast obstacle avoidance and fast mapping. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore a learning-based 3D reconstruction method with an event camera.

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Collaboration / Master Thesis

See project on SiROP

Learning an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high dynamic range robotic applications. They have been successfully applied in many applications, such as high speed video and high speed visual odometry. In spite of this success, the exact operating principle of event cameras, that is, how events are generated from a given visual signal and how noise is generated, is not well understood. In this work we want to explore new techniques for modelling the generation of events in an event camera, which would have wide implications for existing techniques. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.
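The idealized generation model that such work would refine can be stated in a few lines: a pixel emits an event each time its log intensity moves one contrast threshold away from the level stored at the last event. The threshold and signal below are illustrative; real sensors add noise, latency, and refractory effects, which is exactly the gap this project targets.

```python
# Sketch: idealized per-pixel event generation from a log-intensity signal.
import math

def generate_events(log_intensity, threshold=0.2):
    events = []
    ref = log_intensity[0]          # level at the last emitted event
    for t, L in enumerate(log_intensity):
        while L - ref >= threshold:     # brightness increased enough
            ref += threshold
            events.append((t, +1))
        while ref - L >= threshold:     # brightness decreased enough
            ref -= threshold
            events.append((t, -1))
    return events

signal = [math.log(100 + 10 * t) for t in range(5)]  # brightening pixel
print(generate_events(signal))  # [(3, 1)]
```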

Goal: The goal of this project is to explore new techniques for modelling an event camera.

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Asynchronous Processing for Event-based Deep Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Since their output is sparse, traditional algorithms, which are designed for dense inputs such as frames, are not well suited. The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data. Applicants should have experience in C++ and Python deep learning frameworks (TensorFlow or PyTorch) and a strong background in computer vision.

Goal: The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data.

Contact Details: Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Pushing hard cases in tag detection with a CNN - Available

Description: Visual Tags such as April or Aruco tags are nowadays detected with a handcrafted algorithm. This algorithm has its limitations in special cases, such as when the tag is far away from the camera, when the tag is partially occluded or when a camera with high distortion is used.

Goal: In this project, you will train a CNN to handle these special cases. We will first brainstorm a meaningful architecture that will allow a CNN to complement classical tag detection in the most effective way. You will then figure out the most effective way to create meaningful training data (hybrid of synthetic and real data?). Finally, you will use that data to train the desired detector.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), APPLY VIA EMAIL, ATTACH CV AND TRANSCRIPT! Required skills: Linux, Python, ability to read C++ code. Desirable skill: Tensorflow or similar.

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Teach and Aggressive Repeat - Available

Description: When we think of robot path planning, we often think of fitting optimal trajectories into dense 3D maps. This requires high-quality 3D maps in the first place, which are often hard to obtain. An alternative approach, called Teach and Repeat, is to retrace previously traversed paths. Teach and Repeat maps are easier to create, as no globally consistent pose estimate is required. They can also be very compact, as the environment only needs to be sampled at sparse, visually salient locations. In this project, you will do Teach and Repeat with a twist: try to fly the repeat as fast as possible.

Goal: Start by building a basic teach and repeat based on existing components. Then, start increasing the repeat speed. Find out what the limitations are. Perceptual limitations like motion blur? If so, can this be solved with event cameras ( https://goo.gl/itzpJN ) ? Or is it avoiding collisions, as potentially tight maneuvers from the slow teach phase cannot be repeated at high velocities? You will most likely start with deployment on a real quadrotor very soon.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), APPLY VIA EMAIL, ATTACH CV AND TRANSCRIPT (also Bachelor)! Required skills: Linux, C++, ROS. Students who took the Vision Algorithms for Mobile Robots class are at an advantage.

Thesis Type: Master Thesis

See project on SiROP