Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Perception Aware Model Predictive Control for Autonomous Power Line Tracking - Available

Description: Classical power line inspection and maintenance are dangerous, costly, and time consuming. Drones could mitigate the risk for humans and minimize the cost, to the direct benefit of the power line infrastructure. Coupling perception and path planning with control has become increasingly popular in aerial vehicles. This project will continue to investigate vision-based navigation approaches that tightly couple perception and control in order to satisfy the drone dynamics and compute feasible trajectories with respect to input saturation. This involves further developing our research on perception-aware Model Predictive Control (MPC) for quadrotors and solving the challenging aspects of a power line inspection scenario. A perception-aware MPC approach would ideally improve the aerial vehicle's behavior during an approach maneuver.

Goal: Pursue research on a unified control and planning approach that integrates action and perception objectives. The approach must cope with challenging environmental conditions and with the morphology of the target, which imposes several points of interest on the optimizer.
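
As a minimal, hypothetical illustration (not the lab's actual controller), the sketch below shows how a perception objective, here keeping an assumed point of interest near the image center, could be added to a standard quadrotor tracking cost inside an MPC stage cost. The camera is assumed body-aligned, and all weights and variable names are placeholders.

```python
import numpy as np

def stage_cost(p, v, R_wb, u, p_ref, v_ref, p_poi, K,
               Q_p=np.eye(3), Q_v=np.eye(3), R_u=0.1 * np.eye(4), w_perc=1.0):
    """Hypothetical MPC stage cost: trajectory tracking + control effort +
    a perception term that keeps a point of interest p_poi (e.g., a power-line
    fixture) close to the image center. p, v: position/velocity; R_wb:
    body-to-world rotation; u: inputs; K: 3x3 pinhole intrinsics."""
    e_p, e_v = p - p_ref, v - v_ref
    cost = e_p @ Q_p @ e_p + e_v @ Q_v @ e_v + u @ R_u @ u

    # Perception objective: project the point of interest into the
    # (body-aligned) camera and penalize its offset from the principal point.
    p_c = R_wb.T @ (p_poi - p)
    if p_c[2] > 1e-3:                        # point lies in front of the camera
        uv = (K @ (p_c / p_c[2]))[:2]
        e_uv = uv - K[:2, 2]                 # pixel error w.r.t. image center
        cost += w_perc * float(e_uv @ e_uv)
    return float(cost)
```

In a real perception-aware MPC this term would be optimized over the whole horizon, so the solver trades off tracking accuracy against target visibility.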

Contact Details: Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch) and Yunlong Song (song@ifi.uzh.ch)

Thesis Type: Bachelor Thesis / Master Thesis

See project on SiROP

Power-Line Dataset for Autonomous Drone Inspection - Available

Description: Classical power line inspection and maintenance are dangerous, costly, and time consuming. Drones could mitigate the risk for humans and minimize the cost, to the direct benefit of the infrastructure. Several sensing modalities have already been tested (e.g., RGB, LiDAR), giving drones the ability to operate in unstructured environments. Sensor fusion is a popular technique to get the best of each sensor for autonomous navigation. Benchmarking perception strategies is a key part of solid and robust algorithm development before final deployment on the system. However, the lack of relevant and accurate multi-sensor data makes the Verification and Validation (V&V) of perception algorithms difficult. The goal of this project is to deliver the first multi-sensor power-line inspection dataset for drones, with complementary sensory data and ground truth. Requirements: Background in robotics and autonomous systems – drone navigation preferable – excellent programming skills in C++ and Python – knowledge of ROS and robotic middleware – passion for robotics and engineering in general – Linux

Goal: Release an open-access dataset for the evaluation of perception pipelines for autonomous drones. The goal is to establish a solid benchmark for autonomous drone inspection of power lines. The following modalities are considered for the dataset: absolute depth information, RGB images, event-based camera data, thermography, inertial measurements, and ground-truth positioning.
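
For illustration only, one metric such a benchmark could report is the absolute trajectory error (ATE) of an estimated trajectory against the ground-truth positioning. The sketch below assumes the two trajectories are already time-synchronized; it is not part of the dataset tooling itself.

```python
import numpy as np

def absolute_trajectory_error(p_est, p_gt):
    """RMSE between estimated and ground-truth positions (both Nx3,
    time-synchronized) after a least-squares rigid alignment (Kabsch)."""
    mu_e, mu_g = p_est.mean(axis=0), p_gt.mean(axis=0)
    E, G = p_est - mu_e, p_gt - mu_g
    H = E.T @ G                                   # 3x3 cross-covariance (est -> gt)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # rotation aligning estimate to GT
    p_aligned = E @ R.T + mu_g
    return float(np.sqrt(np.mean(np.sum((p_aligned - p_gt) ** 2, axis=1))))
```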

Contact Details: Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch)

Thesis Type: Bachelor Thesis / Master Thesis

See project on SiROP

nanoSVO: Visual Inertial Odometry on a NanoPi - Available

Description: Classical VIO pipelines use geometric information to infer the ego-motion of the camera and couple this information with measurements from an IMU. VIO is an established technology used nowadays in many embedded applications such as space exploration, drones, and VR/AR goggles. A natural next step is to run the VIO pipeline on a dedicated embedded computer in order to free the main computer for high-level perception and complex tasks. This project is about solving the challenging tasks of compiling, deploying, and testing the SVO algorithm on a NanoPi computer integrated on a quadrotor.

Goal: Generate a cross-compilation pipeline for our cutting-edge visual-inertial odometry (SVO), with real field testing on drones. The target platform is an ARM Cortex-A quad-core family of processors. Continuous Integration (CI) of the cross-compilation pipeline is also desired.

Contact Details: Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch) and Thomas Laengle (tlaengle@ifi.uzh.ch)

Thesis Type: Semester Project / Bachelor Thesis

See project on SiROP

Informative feature selection for multi-camera SLAM - Available

Description: Adding more cameras increases the robustness and accuracy of SLAM systems, but demands more computational resources. The computational cost generally grows with the number of cameras and the number of features per camera. However, not all features contribute equally to pose estimation. We therefore look into feature selection that samples, from all available features, only those that are relevant for SLAM. Thus we are able to gain performance, in terms of time, without compromising accuracy. Selecting features for a general multi-camera setup is not an easy problem: e.g., we need to consider the distance and distribution of the landmarks and the time for which the landmarks can be tracked/are visible (which is coupled with the motion of the robot). In this project, we will explore different informative feature selection methods and evaluate their performance on a multi-camera system consisting of up to 10 cameras.

Goal: The goal is to design information-theoretic feature sampling methods that ensure efficient use of computational resources while preserving the accuracy of the SLAM system.
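
As a toy illustration of the information-theoretic idea (not a full SLAM pipeline), the sketch below greedily selects the feature subset that maximizes the log-determinant of the accumulated Fisher information of the camera pose. The 2x6 measurement Jacobians are assumed to be provided by the multi-camera projection model.

```python
import numpy as np

def greedy_logdet_selection(jacobians, k, prior_info=None):
    """Greedily pick k features whose measurement Jacobians (each 2x6, pixel
    residual w.r.t. the 6-DoF pose) maximize log det of the accumulated pose
    information matrix. Toy sketch with dense recomputation at every step."""
    info = np.eye(6) * 1e-6 if prior_info is None else prior_info.copy()
    selected, remaining = [], list(range(len(jacobians)))
    for _ in range(min(k, len(jacobians))):
        gains = [np.linalg.slogdet(info + jacobians[i].T @ jacobians[i])[1]
                 for i in remaining]
        best = remaining[int(np.argmax(gains))]
        info += jacobians[best].T @ jacobians[best]
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage with random stand-in Jacobians:
# J = [np.random.randn(2, 6) for _ in range(200)]
# keep = greedy_logdet_selection(J, k=30)
```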

Contact Details: Manasi Muglikar (muglikar(at)ifi(dot)uzh(dot)ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Data-Driven Quadrotor Simulation - Available

Description: Current quadrotor simulators used in research only model the simplified dynamics of quadrotors and typically don’t account for aerodynamic effects encountered at high speeds. To push the limits of fast autonomous flight, access to a simulator that accurately models the platform in those regimes is very important. Having access to the largest OptiTrack space in Zurich, the goal of this thesis is to record a dataset of very agile maneuvers and use it to accurately identify the quadrotor platform. After analyzing different identification methods, the best-performing candidate is used to build an accurate quadrotor dynamics model that is then integrated into a simulator. Applicants should have strong experience in C++ and Python programming and a robotics background.

Goal: Collect a dataset of high-speed maneuvers in the OptiTrack space and identify the quadrotor platform. Use this model to create a fast and accurate quadrotor simulation. Verify the simulator by comparing it to real-world data.
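
A hedged sketch of one possible identification step: fitting a diagonal linear drag model to the residual between measured accelerations and the nominal thrust model via least squares. The linear-drag structure and the variable layout are illustrative assumptions, not the method prescribed by the thesis.

```python
import numpy as np

def fit_linear_drag(acc_meas, thrust_acc, v_body):
    """Fit a diagonal linear drag model  a_residual ~ -d * v_body  per axis
    by least squares. acc_meas, thrust_acc, v_body: Nx3 arrays in the body
    frame, gravity already removed from acc_meas. Returns the coefficients d."""
    residual = acc_meas - thrust_acc            # unmodeled aerodynamic part
    d = np.empty(3)
    for axis in range(3):
        v = v_body[:, axis]
        d[axis] = -float(v @ residual[:, axis]) / float(v @ v)
    return d

# In the simulator, the identified model would add  -d * v_body  to the
# nominal rigid-body acceleration.
```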

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch), Yunlong Song (song@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Reinforcement learning for 3D surgery planning of Femoral Head Reduction Osteotomy (in collaboration with Balgrist hospital) - Available

Description: Morbus Legg-Calvé-Perthes is a paediatric disorder of the lower extremities, causing deformities of the femoral head. Surgical treatment for this bone deformity can be achieved by a procedure known as femoral head reduction osteotomy (FHRO), which involves the resection of a wedge from the femoral head to restore the function of the joint. The preoperative planning of this procedure is a complex three-dimensional (3D) optimization problem involving more than 20 degrees of freedom (DoF), as it comprises the calculation of the surgical cuts and the repositioning of the resected fragment to the desired anatomical position. This process is currently done manually in collaboration between engineers and surgeons.

Goal: In the course of this master thesis, you will help us improve our current surgery planning methods by developing an approach to predict the repositioning of the fragment and the pose of the cutting planes defining the bone wedge. The objective of this master thesis is to apply deep (reinforcement) learning techniques to automatically find an optimal solution for the preoperative planning of FHRO. We will start by solving a simplified version of the optimization problem, with a reduced DoF involving only the calculation of the bone fragment repositioning, and we will gradually increase the DoF and the complexity of the task. This project is part of a bigger framework, which is currently under development in our clinic for optimal surgical outcomes. (The student will mainly work at the Balgrist CAMPUS.) **Requirements:** Hands-on experience in reinforcement learning and deep learning. Strong coding skills in Python. Experience in mathematical optimization and spatial transformations is a plus.
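
Purely as a hypothetical sketch of how the reduced-DoF sub-problem (fragment repositioning only) could be framed for reinforcement learning, the environment skeleton below lets an agent apply bounded 6-DoF pose increments and rewards improvements of a surrogate anatomical fit score. The class name, action scaling, and scoring function are all placeholders.

```python
import numpy as np

class FragmentRepositionEnv:
    """Toy RL environment for the reduced-DoF planning problem: the agent
    incrementally adjusts the 6-DoF pose (translation and Euler angles) of
    the resected fragment; the reward is the improvement of an externally
    supplied anatomical fit score."""

    def __init__(self, fit_score_fn, max_steps=50):
        self.fit_score_fn = fit_score_fn       # maps 6-DoF pose -> scalar fit
        self.max_steps = max_steps

    def reset(self):
        self.pose = np.zeros(6)                # start from the initial pose
        self.steps = 0
        self.score = self.fit_score_fn(self.pose)
        return self.pose.copy()

    def step(self, action):
        # Small, bounded pose increments keep the search stable.
        self.pose += 0.5 * np.clip(action, -1.0, 1.0)
        new_score = self.fit_score_fn(self.pose)
        reward = new_score - self.score        # reward = fit improvement
        self.score = new_score
        self.steps += 1
        done = self.steps >= self.max_steps
        return self.pose.copy(), float(reward), done, {}
```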

Contact Details: Yunlong Song (song@ifi.uzh.ch), Ackermann Joelle (joelle.ackermann@balgrist.ch), Prof. Philipp Fuernstahl (philipp.fuernstahl@balgrist.ch)

Thesis Type: Master Thesis

See project on SiROP

Learning features for efficient deep reinforcement learning - Available

Description: The study of end-to-end deep learning in computer vision has mainly focused on developing useful object representations for image classification, object detection, or semantic segmentation. Recent work has shown that it is possible to learn temporally and geometrically aligned keypoints given only videos, and that object keypoints learned in an unsupervised manner can be useful for efficient control and reinforcement learning.

Goal: The goal of this project is to find out whether it is possible to learn useful features or intermediate representations for controlling mobile robots at high speed. For example, can we use the Transporter (a neural network architecture) to find useful features in an autonomous car racing environment? If so, can we use these features to discover an optimal control policy via deep reinforcement learning? **Required skills:** Python/C++, reinforcement learning, and deep learning skills.
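
As a sketch of the kind of keypoint bottleneck used in Transporter-style architectures, the spatial softmax below turns K heatmap channels produced by any CNN into K (x, y) keypoints that a downstream controller or RL policy could consume. PyTorch and the tensor layout are assumptions; this is not the Transporter implementation itself.

```python
import torch
import torch.nn.functional as F

def spatial_softmax_keypoints(heatmaps):
    """heatmaps: (B, K, H, W) raw CNN outputs. Returns (B, K, 2) keypoint
    coordinates in [-1, 1], computed as the softmax-weighted expectation over
    pixel locations (the low-dimensional bottleneck used for control)."""
    b, k, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.view(b, k, h * w), dim=-1).view(b, k, h, w)
    ys = torch.linspace(-1.0, 1.0, h, device=heatmaps.device)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmaps.device)
    exp_y = (probs.sum(dim=3) * ys).sum(dim=2)    # expectation over rows
    exp_x = (probs.sum(dim=2) * xs).sum(dim=2)    # expectation over columns
    return torch.stack([exp_x, exp_y], dim=-1)
```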

Contact Details: Yunlong Song (song@ifi.uzh.ch) and Titus Cieslewski (titus at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Reinforcement Learning for Drone Racing - Available

Description: In drone racing, human pilots navigate quadrotor drones as quickly as possible through a sequence of gates arranged in a 3D track. Inspired by the impressive flight performance of human pilots, the goal of this project is to train a deep sensorimotor policy that can complete a given track as fast as possible. To this end, the policy directly predicts low-level control commands from noisy odometry data. Provided with an in-house drone simulator, the student investigates state-of-the-art reinforcement learning algorithms and reward designs for the task of drone racing. The ultimate goal is to outperform human pilots on a simulated track. Applicants should have strong experience in C++ and Python programming. A reinforcement learning and robotics background is required.

Goal: Find the fastest possible trajectory through a drone racing track using reinforcement learning. Investigate different reward formulations for the task of drone racing. Compare the resulting trajectory with other trajectory planning methods, e.g., model-based path planning algorithms or optimization-based algorithms.
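
As one hedged example of a reward design the project could start from (not a prescribed solution), the per-step reward below combines progress toward the next gate with a bonus for passing it; all constants and the gate parametrization are illustrative.

```python
import numpy as np

def progress_reward(p_prev, p_curr, gate_center, gate_normal,
                    pass_bonus=10.0, gate_radius=1.0):
    """Toy per-step drone-racing reward: reduction of the distance to the next
    gate, plus a bonus when the drone crosses the gate plane near its center."""
    progress = (np.linalg.norm(gate_center - p_prev)
                - np.linalg.norm(gate_center - p_curr))
    reward = float(progress)

    # Gate-passing check: did we cross the gate plane close to its center?
    s_prev = float(np.dot(p_prev - gate_center, gate_normal))
    s_curr = float(np.dot(p_curr - gate_center, gate_normal))
    if s_prev < 0.0 <= s_curr and np.linalg.norm(p_curr - gate_center) < gate_radius:
        reward += pass_bonus
    return reward
```

Alternative formulations (e.g., rewarding progress along a reference path or penalizing body rates) are exactly what the project is meant to compare.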

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

Embedded systems development with NVIDIA Jetson TX2 for fast drone flying - Available

Description: The TX2 is a powerful computational unit with two Denver 64-bit CPUs plus a quad-core ARM Cortex-A57 complex and an NVIDIA Pascal™ architecture GPU. We use image processing and IMU data to deploy our machine learning algorithms in real-life experiments. This makes our robots fly autonomously without relying on external communication. In a first iteration, the objective is to have a fully functional connector board including power management, USB OTG, USB 3.0, two UARTs (one serial port), Ethernet, and a CSI camera connector. In a second iteration we are redesigning the hardware, integrating our own Obvio board (time-synchronized IMU and RGB) and our own flight controller (integration of an ARM STM32 microcontroller).

Goal: Test and verify the existing prototype in collaboration with our lab engineers. Create a second iteration that integrates a microcontroller for time synchronization of an IMU and a camera, and integrate our custom-built flight controller using D-Shot. Applicants should have a solid understanding of the Linux device tree for embedded ARM cores, some experience with PCB software (KiCad/Eagle), and solid knowledge of communication protocols (UART, USB, SPI, Ethernet).

Contact Details: Manuel Sutter (Systems Engineer BSc) msutter (at) ifi (dot) uzh (dot) ch

Thesis Type: Internship / Bachelor Thesis

See project on SiROP

Learning 3D Reconstruction using an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. In particular, they have been used to generate high-speed video and for high-speed visual odometry. In this project we want to explore the possibility of using an event camera for asynchronous 3D reconstruction with very high temporal resolution. These properties are critical in applications such as fast obstacle avoidance and fast mapping. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore a learning-based 3D reconstruction method with an event camera.
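
For context, learning-based event pipelines usually first convert the asynchronous event stream into a tensor that a network can consume. The sketch below builds a simple time-binned voxel grid, one common choice in the literature; the representation actually used in the project is left open.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """events: (N, 4) array with columns (t, x, y, polarity in {-1, +1}).
    Returns a (num_bins, H, W) grid where each event deposits its polarity
    into the temporal bin it falls into (nearest-bin variant)."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 0]
    x, y = events[:, 1].astype(int), events[:, 2].astype(int)
    p = events[:, 3]
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9)        # normalize time to [0, 1]
    bins = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    np.add.at(grid, (bins, y, x), p)                      # scatter-add polarities
    return grid
```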

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Collaboration / Master Thesis

See project on SiROP

Learning an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high dynamic range robotic applications. They have been successfully applied in many applications, such as high-speed video generation and high-speed visual odometry. In spite of this success, the exact operating principle of event cameras, that is, how events are generated from a given visual signal and how noise arises, is not well understood. In this work we want to explore new techniques for modelling the generation of events in an event camera, which would have wide implications for existing techniques. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore new techniques for modelling an event camera.
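
For background, the idealized model usually assumed in the literature (and a natural baseline for this project) triggers an event whenever the log intensity at a pixel changes by a contrast threshold C since the pixel's last event. The minimal frame-based simulation below follows that model while ignoring noise, refractory periods, and multiple events per frame interval, which are exactly the effects a better model would have to capture.

```python
import numpy as np

def ideal_events_from_frames(frames, timestamps, C=0.2, eps=1e-3):
    """frames: list of HxW intensity images; timestamps: matching times.
    Emits (t, x, y, polarity) tuples whenever the log-intensity change at a
    pixel exceeds the contrast threshold C (idealized, noise-free model)."""
    log_ref = np.log(frames[0] + eps)            # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        diff = np.log(frame + eps) - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= C)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, pol))
            log_ref[y, x] += pol * C             # advance reference by one threshold
    return events
```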

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Asynchronous Processing for Event-based Deep Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Since their output is sparse, traditional algorithms, which are designed for dense inputs such as frames, are not well suited. The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data. Applicants should have experience in C++ and Python deep learning frameworks (TensorFlow or PyTorch), and have a strong background in computer vision.

Goal: The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data.

Contact Details: Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Flight Trajectory Modeling for Human-Piloted Drone Racing - Available

Description: In drone racing, human pilots navigate quadrotor drones as quickly as possible through a sequence of gates arranged in a 3D track. There is a large number of possible trajectories for linking gates among which pilots have to choose. This project aims to identify the most common and most efficient flight trajectories used by human pilots. The student will collect flight trajectory data for various tracks from human pilots using a drone racing simulator. The student will analyze 3D trajectories and motion kinematics to identify trajectories that achieve the fastest lap times most consistently. Finally, the student will compare flight trajectories from human pilots to trajectories from a minimum-time planner used for autonomous navigation. Requirements: Experience in computer vision and machine learning; ability to code in Python/Matlab, Linux, C++, ROS; experience in 3D kinematic analysis is a plus.

Goal: Extend knowledge about flight trajectory planning in human-piloted drone racing.
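
A small hedged sketch of the kind of analysis meant above: given lap times extracted from logged gate-passing timestamps, rank pilots (or line choices) by speed and consistency. The data layout is an assumption.

```python
import numpy as np

def lap_time_stats(lap_times_per_pilot):
    """lap_times_per_pilot: dict mapping pilot id -> list of lap times (s).
    Returns (pilot, mean, std, best) tuples sorted by mean lap time; the
    standard deviation serves as a simple consistency measure."""
    stats = []
    for pilot, laps in lap_times_per_pilot.items():
        laps = np.asarray(laps, dtype=float)
        stats.append((pilot, float(laps.mean()), float(laps.std()), float(laps.min())))
    return sorted(stats, key=lambda s: s[1])

# Example: lap_time_stats({"pilot_a": [21.3, 20.9, 22.0], "pilot_b": [19.8, 23.5, 20.1]})
```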

Contact Details: Please send your CV and transcripts (bachelor and master) to: Christian Pfeiffer (cpfeiffe (at) ifi (dot) uzh (dot) ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Develop a Research-Grade Flight Simulator for Human-Piloted Drone Racing - Available

Description: The goal of this project is to develop a simulator for research on human-piloted drone racing. The student will integrate an existing drone racing simulator with custom software packages (ROS or Python) for logging drone state in 3D (i.e., position, rotation, velocity, acceleration), camera images, and control commands (i.e., thrust, yaw, pitch, roll). The student will create a custom GUI for changing quadrotor settings (i.e., weight, motor thrust, camera angle, rate profiles) and race track layouts (i.e. size, position, and static-vs-moving type, number of gates, track type and illumination). Finally, the features and performance of the integrated simulator will be compared to existing commercial and research-grade simulators. Requirements: Strong programming skills in Python, C++, C#, experience with Linux, ROS. Experience in Unity3D or Unreal Engine is a plus.

Goal: The developed software package will be used for human-subjects research on first-person-view drone racing.

Contact Details: Please send your CV and transcripts (bachelor and master) to: Christian Pfeiffer (cpfeiffe (at) ifi (dot) uzh (dot) ch), Yunlong Song (song (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

Visual Processing and Control in Human-Piloted Drone Racing - Available

Description: In drone racing, human pilots use visual information from a drone-mounted camera for selecting control commands they send to the drone via a remote controller. It is currently unknown how humans process visual information during fast and agile drone flight and how visual processing affects their choice of control commands. To answer these questions, this project will collect eye-tracking and control-command data from human pilots using a drone racing simulator. The student will use statistical modeling and machine learning to investigate the relationship between eye movements, control commands, and drone state. Requirements: Background in computer vision and machine learning, solid programming experience in Python; experience in eye-tracking and human subjects research is a plus (not mandatory).

Goal: Extend knowledge about visual processing and control in human-piloted drone racing.

Contact Details: Please send your CV and transcripts (bachelor and master) to: Christian Pfeiffer (cpfeiffe (at) ifi (dot) uzh (dot) ch), Antonio Loquercio (loquercio (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Machine Learning for Feature-tracking with Event Cameras - Available

Description: The full description of this project has limited visibility. To read it, open the project on SiROP via the link below and log in with your university account (or create one). If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based Feature Tracking on an Embedded Platform - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications, such as fast obstacle avoidance. In particular, event cameras can be used to track features or objects in the blind time between two frames which makes it possible to react quickly to changes in the scene.

Goal: In this project we want to deploy an event-based feature tracking algorithm on a resource constrained platform such as a drone. Applicants should have a strong background in C++ programming and low-level vision. Experience with embedded programming is a plus.

Contact Details: Daniel Gehrig (dgehrig (at) ifi.uzh.ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

MPC for high speed trajectory tracking - Available

Description: Many algorithms exist for model predictive control (MPC) for trajectory tracking on quadrotors, and equally many implementation advantages and disadvantages can be listed. This thesis should identify the main factors influencing high-speed/high-precision trajectory tracking, such as model accuracy, aerodynamic force modelling, execution speed, underlying low-level controllers, sampling times and sampling strategies, and noise sensitivity, or even come up with a novel implementation.

Goal: The end goal of the thesis should be a comparison of the influence factors and, based on that, a recommendation for, or even an implementation of, an improved solution.

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Generation of Fast or Time-Optimal Trajectories for Quadrotor Flight - Available

Description: With the rise of complex control and planning methods, quadrotors are capable of executing astonishing maneuvers. While generating trajectories between two known poses or states is relatively simple, planning through multiple waypoints is rather complicated. The most demanding instance of this problem is the task of flying as fast as possible through multiple gates, as done in drone racing. While humans can perform such racing maneuvers at extreme speeds of more than 100 km/h, algorithms struggle even with planning such trajectories. Within this thesis, we want to research methods to generate such fast trajectories and work towards a time-optimal planner. This requires prior knowledge of at least some of the following topics: planning for robots, optimization techniques, model predictive control, RRT, and quadrotors or UAVs in general. The tasks will range from problem analysis, approximation, and solution concepts to implementation and testing in simulation with existing software tools.

Goal: The goal is to analyse the planning problem, develop approximation techniques, and solve it as close to time-optimally as possible within the thesis.
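
For intuition about the structure of the problem (and only as a hedged point-mass simplification), the sketch below computes a bang-bang lower bound on the rest-to-rest time between two waypoints under a per-axis acceleration limit. The actual planner must additionally respect the coupled quadrotor dynamics, bounded thrust direction, and free gate-crossing states, which is what makes the problem hard.

```python
import numpy as np

def bang_bang_time_lower_bound(p_start, p_end, a_max):
    """Rest-to-rest minimum time for a point mass with per-axis acceleration
    limit a_max: accelerate over half the distance, decelerate over the rest.
    Only a lower bound for the true quadrotor problem."""
    d = np.abs(np.asarray(p_end, dtype=float) - np.asarray(p_start, dtype=float))
    t_axis = 2.0 * np.sqrt(d / a_max)       # per-axis bang-bang time
    return float(t_axis.max())              # the slowest axis dominates

# Example: bang_bang_time_lower_bound([0, 0, 0], [10, 5, 2], a_max=20.0)
```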

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Simulation to Real World Transfer - Available

Description: Recent techniques based on machine learning have enabled robotic systems to perform many difficult tasks, such as manipulation or navigation. These techniques are usually very data-intensive and require simulators to generate enough training data. However, a system trained only in simulation (usually) fails when deployed in the real world. In this project, we will develop techniques to maximally transfer knowledge from simulation to the real world, and apply them to real robotic systems.

Goal: The project aims to develop machine learning techniques that achieve maximal knowledge transfer between the simulated and the real world on a navigation task.
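
One common ingredient of sim-to-real transfer, shown here only as a hedged sketch, is domain randomization: resampling simulator parameters (dynamics, sensing, visuals) at every episode so that the learned policy cannot overfit to a single simulated world. The attribute names and ranges below are placeholders; in practice they would map onto the simulator's own configuration interface.

```python
import random

def randomize_simulation(sim):
    """Resample a few simulator parameters for the next episode (domain
    randomization). `sim` is any object exposing these hypothetical fields."""
    sim.mass = random.uniform(0.65, 0.85)                  # kg, roughly +/- 15%
    sim.motor_time_constant = random.uniform(0.02, 0.06)   # s
    sim.drag_coefficient = random.uniform(0.0, 0.6)
    sim.imu_noise_std = random.uniform(0.00, 0.05)
    sim.camera_exposure_gain = random.uniform(0.5, 2.0)
    sim.scene_texture_id = random.randrange(100)           # randomize appearance
    return sim

# Training-loop idea: call randomize_simulation(sim) at every environment reset.
```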

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Unsupervised Obstacle Detection Learning - Available

Description: Supervised learning is the gold-standard approach to computer vision tasks like classification, detection, or segmentation. However, for several interesting tasks (e.g., moving object detection, depth estimation), collecting the large annotated datasets required by the aforementioned algorithms is a very tedious and costly process. In this project, we aim to build a self-supervised depth estimation and segmentation algorithm by embedding classic computer vision principles (e.g., brightness constancy) into a neural network. **Requirements**: Computer vision knowledge; programming experience with Python. Machine learning knowledge is a plus but is not required.

Goal: The goal of this project consists of building a perception system which can learn to detect obstacles without any ground truth annotations.
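
A hedged sketch of the brightness-constancy idea mentioned above: a photometric reconstruction loss that warps a source image into the target view using a predicted depth map and relative pose, so that the network can be supervised without any labels. PyTorch, pinhole intrinsics K, and the tensor shapes are assumptions, and obstacle segmentation would build on top of this signal.

```python
import torch
import torch.nn.functional as F

def photometric_loss(img_tgt, img_src, depth_tgt, T_src_tgt, K):
    """Brightness-constancy loss: back-project target pixels with the predicted
    depth, move them into the source frame with the relative pose T_src_tgt
    (B,4,4), project with intrinsics K (3,3), sample the source image there,
    and compare with the target image. Images: (B,3,H,W); depth: (B,1,H,W)."""
    b, _, h, w = img_tgt.shape
    device = img_tgt.device

    # Homogeneous pixel grid of the target view.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=device, dtype=torch.float32),
        torch.arange(w, device=device, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(1, 3, -1)

    # Back-project to 3D in the target frame, then transform to the source frame.
    cam = torch.linalg.inv(K).unsqueeze(0) @ pix * depth_tgt.view(b, 1, -1)
    cam_h = torch.cat([cam, torch.ones(b, 1, h * w, device=device)], dim=1)
    cam_src = (T_src_tgt @ cam_h)[:, :3]

    # Project into the source image and sample it at the warped locations.
    proj = K.unsqueeze(0) @ cam_src
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,
                        2 * uv[:, 1] / (h - 1) - 1], dim=-1).view(b, h, w, 2)
    img_warped = F.grid_sample(img_src, grid, align_corners=True)
    return (img_warped - img_tgt).abs().mean()
```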

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Target following on nano-scale UAV - Available

Description: Autonomous Unmanned Aerial Vehicles (UAVs) have numerous applications due to their agility and flexibility. However, navigation algorithms are computationally demanding, and it is challenging to run them on board nano-scale UAVs (i.e., a few centimeters in diameter). This project focuses on object tracking (i.e., target following) on such nano-UAVs. To do this, we will first train a Convolutional Neural Network (CNN) with data collected in simulation, and then run the aforementioned network on a parallel ultra-low-power (PULP) processor, enabling flight with on-board sensing and computing only. **Requirements**: Knowledge of Python, C++, and embedded programming. Machine learning knowledge is a plus but is not strictly required.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning to Deblur Images with Events - Available

Description: Images suffer from motion blur due to long exposure times in poor light conditions or due to rapid motion. Unlike conventional cameras, event cameras do not suffer from motion blur, because they provide events together with the exact time at which they were triggered. In this project, we will make use of hybrid sensors that provide both conventional images and events, so that we can exploit the advantages of both. By the end of this project you will have gained substantial experience in event-based vision, deep learning, and computational photography. Requirements: - Background in computer vision and machine learning - Deep learning experience preferable but not strictly required - Programming experience in C++ and Python

Goal: The goal is to develop an algorithm capable of producing a blur-free image from the captured blurry image and the events within the exposure time. To this end, synthetic data can be generated with our simulation framework, which is able to generate both synthetic event data and motion-blurred images. This data can be used by machine learning algorithms designed to solve the task at hand. At the end of the project, the algorithm will be adapted to perform optimally on real-world data.
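
As background (a hedged sketch on idealized, noise-free data, not the project's learning-based method), the relation exploited by event-based deblurring is that the blurry image is the temporal average of latent sharp images, each of which differs from the sharp image at a reference time by the exponentiated sum of events. The code below applies this relation per pixel.

```python
import numpy as np

def deblur_with_events(blurry, events, t_start, t_end, c=0.2, num_steps=100):
    """Recover a sharp image at t_start from a blurry image and the events
    within the exposure [t_start, t_end], using the idealized relation
    B = L(t_start) * mean_t exp(c * E(t)), where E(t) is the per-pixel sum of
    event polarities from t_start to t. blurry: HxW; events: (N,4) with
    columns (t, x, y, polarity in {-1, +1}); c is the contrast threshold."""
    h, w = blurry.shape
    binned = np.zeros((num_steps, h, w), dtype=np.float32)
    if len(events) > 0:
        t = events[:, 0]
        x, y = events[:, 1].astype(int), events[:, 2].astype(int)
        k = np.clip(((t - t_start) / (t_end - t_start) * num_steps).astype(int),
                    0, num_steps - 1)
        np.add.at(binned, (k, y, x), events[:, 3])
    # E(t) at each step is the cumulative event sum; average the exposure factor.
    exposure_factor = np.exp(c * np.cumsum(binned, axis=0)).mean(axis=0)
    return blurry / np.maximum(exposure_factor, 1e-6)
```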

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Optimization for Spiking Neural Networks - Available

Description: Spiking neural networks (SNNs) are neural networks that process information with the timing of events/spikes rather than with numerical values. Together with event cameras, SNNs show promise for lowering both latency and computational burden compared to conventional artificial neural networks. In recent years, researchers have proposed several methods to estimate gradients of SNN parameters in a supervised learning context. In practice, many of these approaches rely on assumptions that might not hold in all scenarios. Requirements: - Background in machine learning, especially deep learning - Good programming skills; experience in CUDA is a plus.

Goal: In this project we explore state-of-the-art optimization methods for SNNs and their suitability to solve the temporal credit-assignment problem. As a first step, an in-depth evaluation of a selection of algorithms is required. Based on the acquired insights, the prospective student can propose improvements and implement their own method.
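
One family of methods such an evaluation would likely cover is surrogate gradients, where the non-differentiable spike threshold is replaced by a smooth pseudo-derivative in the backward pass. The minimal PyTorch sketch below uses a fast-sigmoid-style surrogate with an illustrative scale; it is one common choice among several, not a recommendation.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; fast-sigmoid surrogate derivative
    in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, scale=10.0):
        ctx.save_for_backward(membrane_potential)
        ctx.scale = scale
        return (membrane_potential > 0).float()          # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (ctx.scale * v.abs() + 1.0) ** 2   # smooth pseudo-derivative
        return grad_output * surrogate, None             # no gradient for `scale`

# Usage inside a leaky integrate-and-fire neuron (conceptual):
# spikes = SurrogateSpike.apply(v_membrane - v_threshold)
```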

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP