Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available: if you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Bayesian Optimization for Racing Aerial Vehicle MPC Tuning - Available

Description: In recent years, model predictive control, one of the most popular methods for controlling constrained systems, has benefited from advances in learning methods. Many applications, e.g., autonomous drone racing and autonomous car racing, have shown the potential of the cross-fertilization between the two fields. Most research effort has been dedicated to learning and improving the model dynamics; controller tuning, however, despite its crucial importance, has received far less attention.

Goal: The objective of this project is to implement an auto-tuning, learning-based algorithm for Model Predictive Contouring Control (MPCC) in a racing-drone setting. The controller will learn how to tune its weights using specialized Bayesian optimization algorithms [1] that can explore the high-dimensional space of controller parameters. The learning algorithm will first be tested in simulation and then validated with hardware experiments on a racing aerial vehicle. Your project will include:
- A literature review of the current state of the art in Bayesian optimization for controller tuning [1] and in MPCC [2]
- Implementation of the state-of-the-art algorithms identified in the previous step
- Development of a tailored automatic adaptation of the controller parameters based on Bayesian optimization
- Simulation of the developed algorithms on a racing drone
- Tests of the algorithms on a real racing drone
The thesis will be a collaboration between the UZH Robotics and Perception Group and the ETH IDSC Intelligent Control Systems group.
[1] Fröhlich, Lukas P., Melanie N. Zeilinger, and Edgar D. Klenske. "Cautious Bayesian optimization for efficient and scalable policy search." Learning for Dynamics and Control. PMLR, 2021.
[2] A. Romero, S. Sun, P. Foehn, and D. Scaramuzza, "Model predictive contouring control for time-optimal quadrotor flight," IEEE Trans. Robot., doi: 10.1109/TRO.2022.3173711.
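As a rough illustration of the closed-loop tuning problem at the heart of the project, the sketch below shows the structure of a sample-efficient tuning loop. Everything here is a placeholder, not project code: the synthetic `lap_time` objective stands in for a real controller rollout, and the random proposal stands in for a Bayesian-optimization acquisition step.

```python
import random

def lap_time(weights):
    """Placeholder for the closed-loop evaluation: in the real project this
    would run the MPCC controller (in simulation or on the drone) with the
    given weights and return the measured lap time. Here it is a synthetic
    noisy bowl, minimised at contour_w = 8, progress_w = 2."""
    contour_w, progress_w = weights
    return (contour_w - 8.0) ** 2 + (progress_w - 2.0) ** 2 + random.gauss(0.0, 0.01)

def tune(n_iters=50, bounds=((0.0, 20.0), (0.0, 10.0))):
    """Skeleton of the tuning loop. A real implementation would replace the
    random proposal below with a Bayesian-optimization acquisition step
    (e.g. expected improvement under a Gaussian-process surrogate fitted
    to `history`)."""
    best_w, best_t = None, float("inf")
    history = []
    for _ in range(n_iters):
        # <- acquisition function would pick the next candidate here
        w = tuple(random.uniform(lo, hi) for lo, hi in bounds)
        t = lap_time(w)
        history.append((w, t))
        if t < best_t:
            best_w, best_t = w, t
    return best_w, best_t
```

The point of Bayesian optimization is to make each expensive `lap_time` evaluation count, which matters when every sample is a real flight.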

Contact Details: Angel Romero Aguilar (roagui@ifi.uzh.ch), Andrea Carron (carrona@ethz.ch), Kim Wabersich (wkim@ethz.ch)

Thesis Type: Master Thesis

See project on SiROP

Generating High-Speed Video with Event Cameras - Available

Description: Event cameras have shown amazing capabilities in slowing down video, as demonstrated in our previous work TimeLens (https://www.youtube.com/watch?v=dVLyia-ezvo). This is because, compared to standard cameras, event cameras capture only a highly compressed representation of the visual signal, and do so with high dynamic range and very low latency. It is this signal that can be decoded into intermediate frames. In this project we want to push the limits of what is possible with such a method and explore new extensions.

Goal: In this project we want to explore new extensions of video frame interpolation using an event camera.

Contact Details: Daniel Gehrig (dgehrig (at) ifi.uzh.ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high-dynamic-range robotic applications. They have been successfully applied in many settings, such as high-speed video and high-speed visual odometry. In spite of this success, the exact operating principle of event cameras, that is, how events and noise are generated from a given visual signal, is not well understood. In this work we want to explore new techniques for modelling the generation of events in an event camera, which would have wide implications for existing techniques. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore new techniques for modelling an event camera.
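For intuition, the textbook idealised event-generation model for a single pixel can be sketched in a few lines. This is the simplified model the project would go beyond; the function name and interface are illustrative only.

```python
import math

def simulate_events(timestamps, intensities, contrast_threshold=0.2):
    """Idealised single-pixel event generation: an event is emitted every
    time the log-intensity moves by the contrast threshold C away from the
    reference level set at the last event. Real sensors deviate from this
    model (noise, refractory period, threshold mismatch), which is
    precisely what a learned sensor model would capture."""
    events = []
    ref = math.log(intensities[0])
    for t, i in zip(timestamps[1:], intensities[1:]):
        log_i = math.log(i)
        # emit one event per threshold crossing since the last reference
        while abs(log_i - ref) >= contrast_threshold:
            polarity = 1 if log_i > ref else -1
            ref += polarity * contrast_threshold
            events.append((t, polarity))
    return events
```

A brightness step of 0.5 in log-intensity with C = 0.2, for example, yields two positive events under this model.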

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Asynchronous Processing for Event-based Deep Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high-dynamic-range robotic applications. Since their output is sparse, traditional algorithms, which are designed for dense inputs such as frames, are not well suited. The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data. Applicants should have experience in C++ and Python deep learning frameworks (TensorFlow or PyTorch) and a strong background in computer vision.

Goal: The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse, asynchronous event data.

Contact Details: Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Deep Learning for Model Predictive Contouring Control - Available

Description: Model Predictive Contouring Control (MPCC) has been shown to achieve very good results in the task of time-optimal multi-waypoint flight. MPCC methods have the freedom to select the optimal states of the system at runtime, dropping the need for a computationally expensive reference trajectory. Our recent work shows that MPCC can achieve better lap times than state-of-the-art planning-plus-tracking approaches, and that the method can run in real time.

Goal: One of the extra benefits of the MPCC approach is that there are only two relevant parameters to be tuned in the cost function: contour weight and progress weight. In this project, we aim to exploit the low dimensionality of this tuning parameter space and apply learning techniques to find a mapping from a high-level task (track waypoints in a certain order in minimum time, for example) to MPCC tuning parameters.
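To make the two tuning parameters concrete, here is a simplified 2D sketch of an MPCC-style stage cost; the full formulation also includes input and regularization terms omitted here, and the function signature is illustrative, not the project's actual code.

```python
def mpcc_stage_cost(pos, path_point, path_tangent, progress_rate,
                    contour_weight=1.0, progress_weight=1.0):
    """Simplified 2D MPCC stage cost: penalise the contour (lateral) error
    to the reference path and reward progress along it. contour_weight and
    progress_weight are the two tuning parameters discussed above."""
    ex, ey = pos[0] - path_point[0], pos[1] - path_point[1]
    tx, ty = path_tangent            # unit tangent of the path at path_point
    lag = ex * tx + ey * ty          # error component along the path (lag error)
    contour = -ex * ty + ey * tx     # error component orthogonal to the path
    return contour_weight * contour ** 2 + lag ** 2 - progress_weight * progress_rate
```

Raising contour_weight keeps the drone closer to the path; raising progress_weight trades path accuracy for speed, which is why a learned mapping from task to these two weights is attractive.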

Contact Details: Please send your CV and transcripts (bachelor and master) to Angel Romero (roagui AT ifi DOT uzh DOT ch) and Yunlong Song (song AT ifi DOT uzh DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Localization techniques for drone racing - Available

Description: For fast and agile flight, most approaches require precise knowledge of the metric state. In contrast to a classical SLAM setting, drone racing offers additional features. In this project, we want to evaluate and compare different strategies for localization in this drone-racing scenario. The following classes of methods could be investigated:
- classic feature-based SLAM
- learned features with a classic SLAM pipeline
- learning-based localization
- filtering-based approaches
Requirements:
- Machine learning experience (TensorFlow and/or PyTorch)
- Programming experience in C++ and Python

Goal: The goal of the project is to gain a detailed understanding of which method is best suited for (a) real-time localization and (b) offline post-processing of the data.

Contact Details: Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch), Giovanni Cioffi ( cioffi (at) ifi (dot) uzh (dot) ch)

Thesis Type: Master Thesis

See project on SiROP

Visual Odometry with a New Unprecedented Event Camera - Available

Goal: In this project, you will explore and implement new algorithms for visual odometry (VO) with a new prototype event camera with unprecedented performance. This new and unexplored sensor has a high potential to improve upon existing works in the field of VO. Together with us, you will also collaborate closely with our industry partner. You should have prior programming experience and completed at least one course in computer vision.

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Daniel Gehrig [dgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based Structured Light for Depth Sensing and Material Differentiation in Forest Canopies - Available

Description: Forest canopies provide unique challenges for robotic sensors. Fine foliage and branches increase sensor noise, and inconsistent illumination and photometric effects reduce sensor accuracy, resulting in poor performance of conventional depth sensors. Event-based structured light approaches have demonstrated promising results, utilizing the low latency, high dynamic range, and high temporal resolution of event cameras to provide accurate and high-speed depth sensing.

Goal: The goal of the project is to evaluate and characterize an existing event-based structured light system for depth estimation in cluttered and dense forest canopies. Next, material differentiation (e.g. leaves or branches) should be developed by utilizing the different available wavelengths of the light projector and correlating the resulting event intensity response to an expected spectral response.

Contact Details: Please submit a CV, transcript, and short motivation to Christian Geckeler (cgeckeler (at) ethz (dot) ch) and Manasi Muglikar (muglikar (at) ifi (dot) uzh (dot) ch)

Thesis Type: Master Thesis

See project on SiROP

Study on the effects of camera resolution in Visual Odometry - Available

Description: Visual Odometry (VO) algorithms have gone beyond academic research and are now widely used in the real world. Robotics and AR/VR applications, among many others, rely on VO to estimate the ego motion of the camera. Hardware and software co-design is key to developing accurate and robust algorithms. In this project, we will investigate how design choices at the hardware level affect VO performance. In particular, we will study how the camera resolution affects the accuracy and robustness of some of the state-of-the-art VO pipelines. We believe that the results of this project will help academic research and companies in the hardware and software co-design of VO solutions and expand the use of VO algorithms in commercial products.

Goal: Get familiar with VO pipelines and simulation tools. Generate a high-resolution dataset including different camera motions. Benchmark some of the state-of-the-art VO pipelines on this dataset as well as real-world ones. We look for students with strong programming (C++ preferred) and computer vision (ideally have taken Prof. Scaramuzza's class) background.

Contact Details: Giovanni Cioffi (cioffi@ifi.uzh.ch), Manasi Muglikar (muglikar@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Deep line detection for autonomous navigation - Available

Description: Detecting lines is necessary for robust autonomous navigation in a number of different scenarios, such as autonomous driving and drone inspection. Most works in autonomous driving detect lines from a segmentation of the entire scene. Although these approaches achieve high accuracy, they are computationally demanding; consequently, they cannot be used on resource-constrained platforms such as quadrotors. In contrast, computationally cheaper methods rely on object-detection capabilities to detect the lines, but they do not exploit prior knowledge of the line shape and parameterization. In this project, inspired by state-of-the-art deep networks designed for object detection, we will design a lightweight algorithm that exploits prior knowledge of the line parameterization to achieve robust detections. The goal of the project is to develop an approach that is lightweight and runs in different scenarios, e.g., drone inspection and autonomous driving. A successful thesis will lead to the deployment of the developed algorithm on a real quadrotor platform for power-line inspection. Requirements:
- Hands-on experience with deep learning
- Passion for robotics
- Programming skills in Python and deep learning frameworks (e.g., PyTorch and/or TensorFlow)

Goal: In this project we will develop a learning-based robust line detection algorithm for autonomous navigation. Nice-to-have: deployment on a real robotic platform.

Contact Details: Giovanni Cioffi [cioffi (at) ifi (dot) uzh (dot) ch], Daniel Gehrig [dgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project

See project on SiROP

Data-driven Keypoint Extractor for Event Data - Available

Description: Neuromorphic cameras exhibit several amazing properties, such as robustness to HDR scenes, high temporal resolution, and low power consumption. Thanks to these characteristics, event cameras are used for camera pose estimation under fast motion in challenging scenes. A common technique for camera pose estimation is the extraction and tracking of keypoints on the image plane. In the case of event cameras, most existing keypoint extraction methods are handcrafted. As a promising new direction, this project tackles keypoint extraction in a data-driven fashion, based on recent advances in frame-based keypoint extractors.

Goal: The project aims to develop a data-driven keypoint extractor, which computes interest points in event data. Based on the current advances of learned keypoint extractors for traditional frames, the approach will leverage neural network architectures to extract and describe keypoints in an event stream. The student should have prior programming experience in a deep learning framework and completed at least one course in computer vision.
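One practical question in such a project is how to feed the asynchronous event stream to a network designed for image-like inputs. A common choice is a spatio-temporal voxel grid; the sketch below is one possible representation under that assumption, not the one prescribed by the project.

```python
def events_to_voxel_grid(events, n_bins, height, width):
    """Accumulate events (t, x, y, polarity) into a spatio-temporal voxel
    grid with bilinear interpolation along time -- a common way to present
    an event stream to a CNN-style keypoint network."""
    grid = [[[0.0] * width for _ in range(height)] for _ in range(n_bins)]
    t0, t1 = events[0][0], events[-1][0]
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single timestamp
    for t, x, y, p in events:
        tn = (t - t0) / span * (n_bins - 1)   # normalised temporal coordinate
        lo = int(tn)
        hi = min(lo + 1, n_bins - 1)
        w_hi = tn - lo
        # split the polarity between the two neighbouring time bins
        grid[lo][y][x] += p * (1.0 - w_hi)
        grid[hi][y][x] += p * w_hi
    return grid
```

The resulting n_bins x height x width tensor preserves coarse timing information while being a fixed-size input that standard learned keypoint extractors can consume.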

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Mathias Gehrig [mgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Domain Transfer between Events and Frames - Available

Description: Over the last years, a vast collection of frame-based datasets has been created for countless tasks. In comparison, event-based datasets represent only a tiny fraction of the available datasets. It is therefore highly promising to use labelled frame datasets to train event-based networks, as current data-driven approaches heavily rely on labelled data.

Goal: In this project, the student extends current advances from the unsupervised domain adaptation (UDA) literature for traditional frames to event data, in order to transfer multiple tasks from frames to events. The approach should be validated on several tasks (segmentation, object detection, etc.) in challenging environments (night, highly dynamic scenes) to highlight the benefits of event cameras. As several deep learning methods are used as tools for the task transfer, a strong background in deep learning is required. If you are interested, we are happy to provide more details.

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Daniel Gehrig [dgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Vision-Based MPC Control - Available

Description: Model predictive control (MPC) is a versatile optimization-based control method that allows the incorporation of constraints directly into the control problem. The advantages of MPC can be seen in its ability to accurately control dynamic systems that include large time delays and high-order dynamics. Recent advances in compute hardware allow running MPC even on compute-constrained quadrotors. While model predictive control can deal with complex systems and constraints, it still assumes the existence of a reference trajectory. With this project, we aim to find a tight coupling of perception and control that would allow pushing the speed limits of autonomous flight through cluttered environments. Requirements:
- Machine learning experience (TensorFlow and/or PyTorch)
- Experience in MPC preferable but not strictly required
- Programming experience in C++ and Python

Goal: Implement the learned perception system in simulation and integrate the predictions into an existing MPC pipeline. If possible, deploy on a real system.

Contact Details: Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch), Drew Hanover (hanover (at) ifi (dot) uzh (dot) ch)

Thesis Type: Master Thesis

See project on SiROP

Learned Low-Level Controller - Available

Description: Typical drone control pipelines consist of a high- and a low-level controller, where the outer loop sends high-level commands such as desired velocities (VEL). Alternatively, the outer controller can send collective-thrust-and-body-rate (CTBR) commands to the low-level controller. The latter then computes the motor commands based on the current state of the drone and the reference signal provided by the outer loop. It is well known that collective thrust and body rate commands are more suitable for agile flight. In this project we investigate whether the advantage of the CTBR control strategy can be offset by a learned low-level controller that takes velocity commands as input. Requirements:
- Machine learning experience (TensorFlow and/or PyTorch)
- Programming experience in C++ and Python

Goal: Develop and deploy (simulation and, optionally, real world) a neural network controller that controls the drone using only linear-velocity commands as an input. This controller should be suitable for agile flight.

Contact Details: Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch), Drew Hanover ( hanover (at) ifi (dot) uzh (dot) ch)

Thesis Type: Master Thesis

See project on SiROP

3D reconstruction with event cameras - Available

Description: Event cameras are bio-inspired sensors that offer several advantages, such as low latency, high speed, and high dynamic range, for tackling challenging scenarios in computer vision. Research on structure from motion and multi-view stereo with images has produced many compelling results, in particular accurate camera tracking and sparse reconstruction. Active sensors with standard cameras, like the Kinect, have been used for dense scene reconstruction. Accurate and efficient reconstruction using event-camera setups is still an unexplored topic. This project will focus on solving the problem of 3D reconstruction using active perception with event cameras.

Goal: The goal is to develop a system for accurate mapping of complex and arbitrary scenes using depth acquired by an event-camera setup. We seek a highly motivated student with the following minimum qualifications:
- Excellent coding skills in Python and C++
- At least one course in computer vision (multiple view geometry)
- Strong work ethic
- Excellent communication and teamwork skills
Preferred qualifications:
- Experience with machine learning
Contact us for more details.

Contact Details: Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based depth estimation - Available

Description: Depth estimation plays an important role in many computer vision and robotics applications, such as augmented reality, navigation, or industrial inspection. Structured light (SL) systems estimate depth by actively projecting a known pattern on the scene and observing with a camera how light interacts (i.e., deforms and reflects) with the surfaces of the objects. This project will focus on event-based depth estimation using structured light systems. The resulting approach would make structured light systems suitable for generating high-speed scans.

Goal: The goal is to develop a system for computing 3D depth maps with event cameras. The preferred candidate should have knowledge of computer vision and strong programming skills in Python and C++.
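The geometry underlying such a system is standard triangulation; the sketch below shows the two ingredients for a scanning structured-light setup. It is a simplified illustration (rectified projector-camera pair, idealised column-sweeping projector), and the function names are illustrative.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified projector-camera triangulation: depth Z = f * B / d,
    with focal length f in pixels, baseline B in meters, disparity d in
    pixels between the camera pixel and the matched projector column."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def projector_column_from_time(t_event, t_scan_start, scan_period, n_columns):
    """For a projector that sweeps the scene column by column, the event
    timestamp encodes which projector column lit up the pixel; together
    with the pixel's own column this yields the disparity used above."""
    phase = ((t_event - t_scan_start) % scan_period) / scan_period
    return int(phase * n_columns)
```

The high temporal resolution of events is what makes the time-to-column mapping precise, and hence what makes high-speed scans feasible.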

Contact Details: Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Sensor Fusion for Drone Racing - Available

Description: In autonomous drone racing, a quadrotor drone must pass through a sequence of gates as fast as possible. The main challenge is to accurately estimate the drone’s state from onboard sensors, such as cameras and IMUs.

Goal: The goal of this project is to implement a sensor fusion pipeline on real hardware and to evaluate the state estimation performance in drone racing in the real world. Requirements: Strong background in robotics, machine learning, and computer vision. ROS, C++, and Python skills. Experience with quadrotor hardware and drone flight is a plus.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Drew Hanover (hanover AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learned Perception for Drone Racing - Available

Description: In autonomous racing, a quadrotor drone must pass through a sequence of gates as fast as possible. A major challenge is detecting and identifying relevant objects - such as racing gates and obstacles - using an onboard camera, even under challenging lighting conditions and motion blur.

Goal: The goal of this project is to use machine learning to develop a robust object detector for drone racing and to evaluate its performance for vision-based autonomous drone racing. Requirements: Strong background in machine learning, computer vision, and robotics. Pytorch, Tensorflow, Python, and ROS skills. Experience with quadrotor hardware and drone flight is a plus.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Yunlong Song (song AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Deep Learning for Vision-Based State Estimation in Drone Racing - Available

Description: Drone pilots use eye movements to extract relevant visual information from a first-person video stream. How eye movements affect drone state estimation and piloting behavior is poorly understood.

Goal: The goal of this study is to investigate the relationship of eye gaze, optical flow, and drone state using deep learning and statistical modeling. The student will be provided with a large dataset of eyetracking and optical flow data of human pilots in a drone race. The student will use statistical methods (e.g., general linear mixed models), machine learning (e.g., LSTM, deep learning), and data visualization techniques to clarify the relationship between optical flow, eye gaze, and piloting behavior in various drone racing maneuvers. Requirements: Strong programming skills in Python or Matlab. Background in machine learning and statistics. Previous experience with optical flow and eyetracking is a plus.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Yunlong Song (song AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Vision for human-piloted Drone Racing - Available

Description: Human drone pilots use a single First-Person-View camera to gather information about the drone’s pose and the immediate visual environment. This project aims to identify which type(s) of visual information are necessary and minimally sufficient for performing a drone racing task successfully. The student will investigate the effects of active vs restrained eye movements, large vs narrow camera field of view, and central vs peripheral vision on the flight performance of professional pilots using a high-quality drone racing simulator. The student will learn about experiment design and data collection in human subjects (i.e., eye tracking, control commands, drone state, video frames) and statistical analyses of these behavioral and physiological time-series data.

Goal: The goal is to conduct a human-subjects experiment to identify the effects of active vs. restrained eye movements, large vs. narrow camera field of view, and central vs. peripheral vision on flight performance in drone racing. Requirements: Strong Python or Matlab skills; Interest in human-subjects research; Pytorch, Eye tracking, and drone flight experience is a plus but not strictly necessary.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning to calibrate an event camera - Available

Description: Camera calibration is an important prerequisite for 3D computer vision tasks. Calibration techniques currently used for event cameras require a special calibration target with a blinking pattern. This project focuses on developing a toolkit to calibrate an event camera using deep learning methods. The project will build on state-of-the-art deep learning techniques for events and evaluate them on the camera-calibration task.

Goal: The goal of this project is to develop and evaluate deep learning tools for event-camera calibration.

Contact Details: Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch , Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Vision-based Dynamic Obstacle Avoidance - Available

Description: Dynamic obstacle avoidance is a grand challenge in vision-based drone navigation. The classical mapping-planning-control pipeline can struggle when facing dynamic objects. Learning-based systems, such as end-to-end neural network policies, are gaining popularity in robotics for handling dynamic objects, thanks to their strong performance and their versatility with high-dimensional state representations. In particular, deep reinforcement learning allows optimizing neural network policies via trial and error, forgoing the need for demonstrations.

Goal: The goal is to develop an autonomous vision-based navigation system that can avoid dynamic obstacles using deep reinforcement learning. Applicants should have strong experience in C++ and Python programming. A background in reinforcement learning and robotics is required.

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based Vision for Autonomous Driving - Available

Description: Billions of dollars are spent each year to bring autonomous vehicles closer to reality. One of the remaining challenges is the design of reliable algorithms that work in a diverse set of environments and scenarios. At the core of this problem is the choice of sensor setup. Ideally, there is a certain redundancy in the setup while each sensor should also excel at a certain task. Sampling-based sensors (e.g. LIDAR, standard cameras) are today's essential building blocks of autonomous vehicles. However, they typically oversample far-away structure (e.g. a building 200 meters away) and undersample close structure (e.g. a fast bike crossing in front of the car). Thus, they enforce a trade-off between sampling frequency and computational budget. Unlike sampling-based sensors, event cameras capture changes in their field of view with precise timing and do not record redundant information. As a result, they are well suited for highly dynamic scenarios such as driving on roads. There are also other benefits, such as a very high dynamic range, unmatched by standard cameras.

Goal: Event-based vision is a fast-growing field in need of high-quality datasets. In this project, we explore the utility of event cameras in an autonomous-car scenario. In order to achieve this, a high-quality driving dataset will be created that incorporates not only common sensors such as standard cameras, GPS, IMU, and possibly LIDAR, but also state-of-the-art event cameras. You will be collaborating with a research division of Volkswagen to join expertise in event-based vision and autonomous driving for high-quality results. We seek a highly motivated student with the following minimum qualifications:
- Experience with programming microcontrollers, or motivation to acquire it quickly
- Good coding skills in Python and C++
- At least one course in computer vision
- Strong work ethic
Preferred qualifications:
- Background in robotics and experience with ROS
- Experience with deep learning
- Experience with event-based vision

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch) Please add CV + transcripts (Bachelor and Master)

Thesis Type: Master Thesis

See project on SiROP

Computational Photography and Videography - Available

Description: Computational Photography is a hot topic in computer vision because it finds widespread applications in mobile devices. Traditionally, the problem has been studied using frames from a single camera. Today, mobile devices feature multiple cameras and sensors that can be combined to push the frontier in computational photography and videography. In previous work (https://youtu.be/eomALySSGVU), we have successfully reconstructed high-speed, HDR video from events. In this project, we aim for combining information from a standard and event camera to exploit their complementary nature. Applications range from high-speed, HDR video to deblurring and beyond. Contact us for more details.

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Optimization for Spiking Neural Networks - Available

Description: Spiking neural networks (SNNs) are closely inspired by the extremely efficient computation of brains. Unlike artificial neural networks, they process information using the precise timing of events/spikes. Together with event cameras, SNNs promise both lower latency and a lower computational burden than artificial neural networks. In recent years, researchers have proposed several methods to estimate gradients of SNN parameters in a supervised-learning context. In practice, many of these approaches rely on assumptions with unknown consequences for the learning process. Requirements:
- Background in machine learning, especially deep learning
- Good programming skills; experience in CUDA is a plus

Goal: In this project we aim to establish a principled framework for gradient-based optimization of spiking neural networks. As a first step, we evaluate recently proposed methods on real-world-relevant tasks. Next, we extend previous work to take into account previously ignored properties of spiking networks. Finally, the new approach will be compared to previous methods for validation. If progress allows, we will apply the approach to robotics and computer vision problems to demonstrate real-world applicability.
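To see why gradient estimation is the crux: the spike nonlinearity has a derivative that is zero almost everywhere, so standard backpropagation stalls. A widely used workaround is a "surrogate gradient", sketched below; the sigmoid-derivative surrogate and the steepness value are one common choice among several, not a prescription of the project.

```python
import math

def spike(v, threshold=1.0):
    """Forward pass: hard threshold (Heaviside), whose true derivative is
    zero almost everywhere and undefined at the threshold."""
    return 1.0 if v >= threshold else 0.0

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """One popular workaround: in the backward pass, replace d(spike)/dv
    with the derivative of a steep sigmoid centered at the threshold.
    The choice of surrogate shape and steepness beta is exactly the kind
    of under-examined assumption this project would study."""
    s = 1.0 / (1.0 + math.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)
```

In a deep-learning framework this pair would be wrapped in a custom autograd function, with `spike` in the forward pass and `surrogate_grad` substituted in the backward pass.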

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch; Daniel Gehrig, dgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Efficient Learning-aided Visual Inertial Odometry - Available

Description: Recent works have shown that deep learning (DL) techniques are beneficial for visual inertial odometry (VIO). Different ways to include DL in VIO have been proposed: end-to-end learning from images to poses, replacing one or more blocks of a standard VIO pipeline with learning-based solutions, and including learning inside a model-based VIO block. The project will start with a study of the current literature on learning-based VIO/SLAM algorithms and an evaluation of how, where, and when DL is beneficial for VIO/SLAM. We will use the results of this evaluation to enhance a current state-of-the-art VIO pipeline with DL, focusing our attention on algorithm efficiency at inference time. The developed learning-aided VIO pipeline will be compared to existing state-of-the-art model-based algorithms, with a focus on robustness, and deployed on embedded platforms (NVIDIA Jetson TX2 or Xavier).

Goal: Enhance standard VIO algorithms with DL techniques to improve robustness. Benchmark the proposed algorithm against existing state-of-the-art model-based VIO algorithms. Deploy the proposed algorithm on embedded platforms. We are looking for students with a strong computer vision background who are familiar with common DL software tools (for example, PyTorch or TensorFlow).
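As background for the "model-based block" that learning could replace or augment: the IMU propagation step of a VIO pipeline integrates accelerometer and gyroscope readings between camera frames. A minimal first-order version (purely illustrative; real pipelines use bias estimation and on-manifold integration) looks like:

```python
import numpy as np

def imu_propagate(p, v, R, acc, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """One model-based IMU integration step: position p, velocity v,
    rotation R (body-to-world), body-frame accelerometer and gyro readings."""
    a_w = R @ acc + g                      # world-frame acceleration
    p = p + v * dt + 0.5 * a_w * dt**2
    v = v + a_w * dt
    # first-order rotation update via the skew-symmetric gyro matrix
    wx = np.array([[0.0, -gyro[2], gyro[1]],
                   [gyro[2], 0.0, -gyro[0]],
                   [-gyro[1], gyro[0], 0.0]])
    R = R @ (np.eye(3) + wx * dt)
    return p, v, R
```

A learning-aided variant of this block could, for example, predict corrections to the integrated state from raw inertial data; which block to target is exactly the design question the literature study would answer.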

Contact Details: Giovanni Cioffi (cioffi@ifi.uzh.ch), Manasi Muglikar (muglikar@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

State Estimation for Drone Racing - Available

Description: In drone racing, human pilots navigate quadrotor drones as fast as possible through a sequence of gates. To match the performance of world-class pilots with an autonomous system, the main challenge is to accurately estimate the drone's state from onboard sensors. This project aims to implement a state estimation pipeline for autonomous racing on real hardware.

Goal: This project has three goals: First, extend an existing communication protocol (ROS) with sensor readings from the low-level flight controller (Betaflight). Second, integrate the sensor readings in an existing state estimation pipeline and validate its performance against ground truth data (real-world motion capture). Third, demonstrate state estimation for closed-loop control using an available flight stack.
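For orientation, the second goal (fusing flight-controller sensor readings and validating against motion capture) typically builds on a Kalman-filter-style estimator. The toy sketch below (a generic linear constant-velocity filter, not the lab's existing pipeline) shows the predict/update structure such an integration plugs into:

```python
import numpy as np

def ekf_predict(x, P, dt, q=0.1):
    """Constant-velocity prediction; state x = [px, py, pz, vx, vy, vz]."""
    F = np.eye(6)
    F[:3, 3:] = np.eye(3) * dt      # position integrates velocity
    Q = np.eye(6) * q * dt          # simple process noise
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, r=0.01):
    """Position measurement update, e.g. from motion capture or vision."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    S = H @ P @ H.T + np.eye(3) * r
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

New sensor readings from the Betaflight flight controller would enter as additional prediction inputs or measurement updates in the same pattern.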

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Giovanni Cioffi (cioffi AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project

See project on SiROP

Deep Learning for Estimation using New Sensors in our Drones - Available

Description: Quadcopter platforms have been gaining popularity in recent years due to their maneuverability and uncomplicated design. Recent advances in hardware components, such as motors and electronic speed controllers (ESCs), unlock different possible extensions of the classical quadcopter design to gain new capabilities. In this project, we aim to modify the current design of our platform to build a quadcopter equipped with new sensing modalities. This will allow new ways of performing control, modeling and even state estimation.

Goal: The student will first modify our current drone design by integrating the new sensors into the platform. The student will extend the existing low-level flight controller, written in C, to read these new sensors via UART. The student can then investigate potential improvements to state estimation by combining a first-principles model and learning-based approaches to propagate the estimated state forward in time. As a proof of concept, the student would test this new drone design in the task of drone racing and compare the performance with state-of-the-art controllers and estimation pipelines developed within the lab. Applicants should have a strong background in classical control and estimation techniques, programming in C, and a good understanding of nonlinear dynamic systems. Additional experience in signal processing and machine learning and being comfortable operating in a hands-on environment are highly desired.
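The "first-principles plus learning" propagation mentioned above is commonly structured as a physical model with a learned residual on top. A minimal point-mass sketch (the residual function stands in for a trained network; names and the split are illustrative, not the lab's implementation):

```python
import numpy as np

def nominal_step(p, v, thrust_w, dt, g=np.array([0.0, 0.0, -9.81])):
    """First-principles propagation: mass-normalized world-frame thrust plus gravity."""
    a = thrust_w + g
    return p + v * dt + 0.5 * a * dt**2, v + a * dt

def hybrid_step(p, v, thrust_w, dt, residual_fn):
    """Propagate with the physical model, then add a learned correction.
    `residual_fn` is a placeholder for a network predicting unmodeled
    accelerations (aerodynamic drag, rotor effects) from the state."""
    p, v = nominal_step(p, v, thrust_w, dt)
    dv = residual_fn(np.concatenate([v, thrust_w]))  # residual acceleration
    return p, v + dv * dt
```

The appeal of this split is that the physical model guarantees sensible behavior where data is scarce, while the residual absorbs effects the model misses.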

Contact Details: Please send your CV and transcripts (bachelor and master), and any projects you have worked on that you find interesting to Angel Romero (roagui AT ifi DOT uzh DOT ch) and Drew Hanover (hanover AT ifi DOT uzh DOT ch)

Thesis Type: Master Thesis

See project on SiROP

Reinforcement Learning for Drone Racing - Available

Description: In drone racing, human pilots navigate quadrotor drones as quickly as possible through a sequence of gates arranged in a 3D track. Inspired by the impressive flight performance of human pilots, the goal of this project is to train a deep sensorimotor policy that can complete a given track as fast as possible. To this end, the policy directly predicts low-level control commands from noisy odometry data. Provided with an in-house drone simulator, the student investigates state-of-the-art reinforcement learning algorithms and reward designs for the task of drone racing. The ultimate goal is to outperform human pilots on a simulated track. Applicants should have strong experience in C++ and Python programming. A background in reinforcement learning and robotics is required.

Goal: Find the fastest possible trajectory through a drone racing track using reinforcement learning. Investigate different reward formulations for the task of drone racing. Compare the resulting trajectory with other trajectory planning methods, e.g., model-based path planning algorithms or optimization-based algorithms.
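As one concrete example of the reward formulations to be investigated: a common racing design combines dense progress toward the next gate with a sparse gate-passing bonus and a crash penalty. The sketch below is a hypothetical baseline (the weights and the 0.5 m gate radius are made-up values), not the project's prescribed reward:

```python
import numpy as np

def progress_reward(pos, prev_pos, gates, gate_idx, crash,
                    w_prog=1.0, r_pass=10.0, r_crash=-10.0):
    """Dense progress toward the next gate + sparse pass bonus + crash penalty.
    Returns the step reward and the (possibly advanced) gate index."""
    target = gates[gate_idx]
    # progress = reduction in distance to the next gate during this step
    prog = np.linalg.norm(prev_pos - target) - np.linalg.norm(pos - target)
    r = w_prog * prog
    passed = np.linalg.norm(pos - target) < 0.5  # hypothetical gate radius [m]
    if passed:
        r += r_pass
    if crash:
        r += r_crash
    return r, gate_idx + int(passed)
```

Comparing such dense shaping against sparse (lap-time-only) rewards is exactly the kind of study the project calls for, since shaping strongly affects both learning speed and the aggressiveness of the resulting trajectories.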

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

Multi-agent Drone Racing via Self-play and Reinforcement Learning - Available

Description: Drone racing requires human pilots not only to complete a given race track in minimum time, but also to compete with other pilots through strategic blocking, or to overtake opponents during extreme maneuvers. Single-player RL allows autonomous agents to achieve near-time-optimal performance in time-trial racing. While highly competitive in that setting, such a training strategy cannot generalize to the multi-agent scenario. An important step towards artificial general intelligence (AGI) is versatility: the capability of discovering novel skills via self-play and self-supervised autocurricula. In this project, we tackle multi-agent drone racing via self-play and reinforcement learning.

Goal: Create a multi-agent drone racing system in which agents discover novel racing skills and compete against each other. Applicants should have strong experience in C++ and Python programming. A background in reinforcement learning and robotics is required.
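For intuition, self-play training typically pits the learner against a pool of frozen snapshots of its own past policies, so the opponent distribution grows harder as the learner improves. A schematic loop (the `train_step` and `snapshot` callables are placeholders for the RL update and policy-copy routines; pool size and snapshot interval are arbitrary):

```python
import random

def self_play_training(train_step, snapshot, n_iters=1000,
                       pool_max=20, snap_every=100):
    """Sketch of a self-play loop with a bounded pool of past-policy opponents."""
    pool = [snapshot()]                 # seed the pool with the initial policy
    for it in range(n_iters):
        opponent = random.choice(pool)  # uniform sampling; prioritized schemes also work
        train_step(opponent)            # one RL update against this opponent
        if (it + 1) % snap_every == 0:  # periodically freeze a copy of the learner
            pool.append(snapshot())
            pool[:] = pool[-pool_max:]  # keep only the most recent snapshots
    return pool
```

The autocurriculum emerges because each snapshot encodes skills the learner had to invent to beat its predecessors, e.g. blocking lines or overtaking maneuvers in the racing setting.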

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch), Drew Hanover (hanover (at) ifi (dot) uzh (dot) ch).

Thesis Type: Master Thesis

See project on SiROP