Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Autonomous Drone Navigation via Learning from YouTube Videos - Available

Description: The evolving landscape of large vision and language models, paired with the untapped availability of unlabeled internet data, presents exciting new opportunities for training robotic policies. Inspired by how humans learn, this project aims to explore the possibility of learning flight patterns, obstacle avoidance, and navigation strategies by simply watching drone flight videos available on YouTube. State-of-the-art methods for processing and encoding videos, as well as unsupervised training techniques, will be designed and evaluated during the project. Applicants should have a strong background in machine learning and computer vision, as well as proficiency in Python programming. Familiarity with deep learning frameworks such as PyTorch is desirable.

Goal: Investigate the feasibility and effectiveness of using large vision models along with self-supervised learning techniques to teach drones to navigate autonomously by analyzing YouTube videos. Develop a prototype system capable of learning from online videos and demonstrate its effectiveness in simulated and real-world environments.
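For a flavor of a possible starting point, below is a minimal behavior-cloning sketch that pairs a frozen pretrained image encoder with a small action head. The pseudo-actions are hypothetical stand-ins for labels that would, in practice, have to be recovered from the videos themselves (e.g., via structure from motion); the head dimensions are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

encoder = resnet18(weights=ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()            # reuse the backbone as a frame encoder
encoder.eval().requires_grad_(False)  # keep the pretrained features frozen

policy_head = nn.Sequential(          # small head trained by behavior cloning
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 4),                # e.g., collective thrust + body rates
)

frames = torch.rand(8, 3, 224, 224)   # a batch of video frames
pseudo_actions = torch.rand(8, 4)     # hypothetical pseudo-labels from the videos

features = encoder(frames)            # (8, 512) frozen embeddings
loss = nn.functional.mse_loss(policy_head(features), pseudo_actions)
loss.backward()                       # only the head receives gradients
```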

Contact Details: Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Marco Cannici (cannici AT ifi DOT uzh DOT ch) and Angel Romero (roagui AT ifi DOT uzh DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Efficient Neural Scene Reconstruction with Event Cameras - Available

Description: Building upon the success of learning-based methods in scene reconstruction and synthesis, this project aims to advance the field by enhancing the efficiency and speed of existing formulations in the context of event cameras. While learning-based methods have already showcased the potential of event cameras in neural scene reconstruction, they often require extensive training to achieve top-quality results. This project seeks to address this limitation by leveraging the sparse nature of events to accelerate the training of radiance fields.

Goal: The primary objective of this project is to explore innovative strategies for neural scene reconstruction using event cameras, with a focus on optimizing training and inference speed. Applicants with a background in programming (Python/Matlab) and computer vision, as well as familiarity with machine learning frameworks (PyTorch), are encouraged to apply.
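For context, event supervision for radiance fields typically builds on the contrast-threshold event model: a pixel fires an event whenever its log intensity changes by a threshold C. The following is a minimal sketch of such a loss; render() is a hypothetical stand-in for a differentiable radiance-field renderer, and the toy usage only keeps the example self-contained.

```python
import torch

def event_loss(render, pose_t0, pose_t1, event_count, C=0.25):
    """event_count: signed per-pixel sum of event polarities in [t0, t1]."""
    L0 = torch.log(render(pose_t0) + 1e-6)   # rendered log intensity at t0
    L1 = torch.log(render(pose_t1) + 1e-6)   # rendered log intensity at t1
    # Contrast-threshold model: L1 - L0 should match C * (#pos - #neg events).
    return torch.mean((L1 - L0 - C * event_count) ** 2)

# Toy usage with a dummy "renderer" standing in for the radiance field:
params = torch.rand(16, 6, requires_grad=True)
render = lambda pose: torch.sigmoid(params @ pose)          # 16 "pixels"
loss = event_loss(render, torch.rand(6), torch.rand(6), torch.zeros(16))
loss.backward()
```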

Contact Details: Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Marco Cannici (cannici AT ifi DOT uzh DOT ch) and Manasi Muglikar (muglikar AT ifi DOT uzh DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Foundation Models for Event-based Segmentation - Available

Description: In the field of event-based vision, the key challenge lies in efficiently processing the asynchronous stream of data generated by event-based sensors. These sensors, inspired by the biological mechanisms of the human retina, capture the dynamics of a scene with high temporal resolution and low latency. The project proposes to work on foundation models for Event-based Segmentation. This approach is aimed at mitigating the challenges posed by the scarcity of labeled data in event-based vision. The project will focus on creating models capable of understanding and segmenting complex visual scenes by using novel learning methodologies. This innovative methodology has the potential to significantly expand the capabilities of event-based vision systems, particularly in dynamic and unstructured environments.

Goal: The primary goal of this project is to design, implement, and validate foundation models (CLIP, SAM) for Event-based Segmentation. The joint use of both foundation models will also be explored. Applicants should have a solid machine learning background, strong programming skills (Python, C++), and experience with frameworks such as PyTorch or JAX.
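As an illustration of one possible zero-shot baseline, the sketch below converts an event stream into a frame-like polarity histogram and runs SAM's automatic mask generator on it. It assumes Meta's segment_anything package is installed and a ViT-B checkpoint file is available locally; the random events are placeholders.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def events_to_frame(x, y, p, H, W):
    """Accumulate events into a 2-channel polarity histogram, then an RGB image."""
    img = np.zeros((H, W, 3), dtype=np.float32)
    np.add.at(img[..., 0], (y[p > 0], x[p > 0]), 1.0)    # positive events -> red
    np.add.at(img[..., 2], (y[p <= 0], x[p <= 0]), 1.0)  # negative events -> blue
    img = img / max(img.max(), 1.0)                      # normalize to [0, 1]
    return (255 * img).astype(np.uint8)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
masks = SamAutomaticMaskGenerator(sam).generate(
    events_to_frame(x=np.random.randint(0, 640, 10000),
                    y=np.random.randint(0, 480, 10000),
                    p=np.random.choice([-1, 1], 10000), H=480, W=640))
```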

Contact Details: Nikola Zubic (zubic@ifi.uzh.ch), Manasi Muglikar (muglikar@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

What can Large Language Models offer to Event-based Vision? - Available

Description: Event-based vision algorithms process visual changes asynchronously, akin to how biological visual systems function, while large language models (LLMs) specialize in parsing and generating human-like text. This project aims to explore the intersection of LLMs and event-based vision, leveraging the unique capabilities of each domain to create a symbiotic framework. By marrying the strengths of both technologies, the initiative aims to develop a novel, more robust paradigm that excels in challenging conditions.

Goal: The primary objective is to devise methodologies that synergize the capabilities of LLMs with Event-Based Vision systems. We intend to address identified shortcomings in existing paradigms by leveraging the inferential strengths of LLMs. Rigorous evaluations will be conducted to validate the efficacy of the integrated system under various challenging conditions.

Contact Details: Nikola Zubic (zubic@ifi.uzh.ch), Nico Messikommer (nmessi@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Hybrid Spiking-Deep Neural Network System for Efficient Event-Based Vision Processing - Available

Description: Event cameras are innovative sensors that capture changes in a scene dynamically, unlike standard cameras that capture images at fixed intervals. They detect pixel-level brightness changes, providing high temporal resolution and low latency. This results in efficient data processing and reduced power consumption, typically just 1 mW. Spiking Neural Networks (SNNs) process information as discrete events or spikes, mimicking the brain's neural activity. They differ from standard neural networks (NNs), which process information continuously. SNNs are highly efficient in power consumption and well-suited for event-driven data from event cameras. In collaboration with SynSense, this project aims to integrate the rapid processing capabilities of SNNs with the advanced analytic powers of deep neural networks. By distilling higher-level features from raw event data, we aim to significantly reduce the volume of events needing further processing by traditional NNs, improving data quality and transmission efficiency. The system will be tested on computer vision tasks such as object detection and tracking, gesture recognition, and high-speed motion estimation.

Goal: The primary goal is to develop a hybrid system that combines Spiking Neural Networks (SNNs) and deep neural networks to process event data efficiently at the sensor level. We will demonstrate its versatility and effectiveness in various computer vision tasks. Rigorous testing in simulation will assess the impact on data quality and processing efficiency, followed by deployment on real hardware to evaluate real-world performance.
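For illustration, the leaky integrate-and-fire (LIF) dynamics at the heart of SNNs can be sketched in a few lines. This is a toy reference implementation of the standard LIF update, not the SynSense hardware model; decay and threshold values are illustrative.

```python
import torch

def lif_forward(inputs, decay=0.9, v_th=1.0):
    """inputs: (T, N) input currents over T time steps; returns (T, N) spikes."""
    v = torch.zeros(inputs.shape[1])
    spikes = []
    for x_t in inputs:                 # step-by-step, event-driven update
        v = decay * v + x_t            # leaky membrane integration
        s = (v >= v_th).float()        # fire when the threshold is crossed
        v = v - s * v_th               # soft reset by threshold subtraction
        spikes.append(s)
    return torch.stack(spikes)

spikes = lif_forward(torch.rand(100, 32))   # 100 steps, 32 neurons
print(spikes.mean())                        # average firing rate
```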

Contact Details: Nikola Zubic (zubic AT ifi DOT uzh DOT ch), Marco Cannici (cannici AT ifi DOT uzh DOT ch)

Thesis Type: Master Thesis

See project on SiROP

Gaussian Splatting Visual Odometry - Available

Description: Recent works have shown that Gaussian Splatting (GS) is a compact and accurate map representation. Thanks to these properties, GS maps are appealing for SLAM systems. However, recent works that include GS maps in SLAM struggle with map-to-frame tracking. In this project, we will investigate the potential of GS maps in visual odometry (VO). The goal is to achieve robust map-to-frame tracking. We will benchmark our solution against feature-based and direct tracking baselines. This project will be done in collaboration with Meta.

Goal: The goal is to investigate the use of Gaussian Splatting maps in visual-inertial systems. We are looking for students with strong backgrounds in programming (C++ preferred), computer vision (ideally having taken Prof. Scaramuzza's class), and robotics.
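A minimal sketch of map-to-frame tracking as photometric pose optimization is shown below. The toy renderer is a hypothetical stand-in for a differentiable Gaussian Splatting rasterizer, included only to keep the example self-contained; the loop itself (render, compare, update pose) is the structure of interest.

```python
import torch

gs_map = torch.rand(100, 6)                  # toy stand-in for Gaussian parameters
def render_gs(gs, pose):                     # stand-in differentiable renderer
    return torch.sigmoid(gs @ pose).reshape(10, 10)

frame = render_gs(gs_map, torch.tensor([0.1, -0.2, 0.3, 0.0, 0.1, 0.2]))

pose = torch.zeros(6, requires_grad=True)    # 6-DoF pose estimate to refine
opt = torch.optim.Adam([pose], lr=1e-2)
for _ in range(200):                         # map-to-frame tracking loop
    opt.zero_grad()
    loss = torch.mean((render_gs(gs_map, pose) - frame) ** 2)  # photometric error
    loss.backward()
    opt.step()
```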

Contact Details: Giovanni Cioffi, cioffi (at) ifi (dot) uzh (dot) ch, Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

IMU-centric Odometry for Drone Racing and Beyond - Available

Description: Our recent work has shown that it is possible to estimate the state of a racing drone using only a low-grade IMU. This project will build upon our previous work and extend its applicability to scenarios beyond racing. To achieve this goal, we will investigate an "unconventional" way of using camera images inside the odometry pipeline. The developed VIO pipeline will be compared to existing state-of-the-art model-based algorithms, with a focus on application to agile flight in the wild, and deployed on embedded platforms (Nvidia Jetson TX2 or Xavier).

Goal: Development of an IMU-centric odometry algorithm and benchmarking against state-of-the-art VIO methods. A successful thesis will lead to the deployment of the proposed odometry algorithm on a real drone platform. We are looking for students with strong backgrounds in programming (C++ preferred), computer vision (ideally having taken Prof. Scaramuzza's class), and robotics. Hardware experience (running code on robotic platforms) is preferred.
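For reference, the IMU-only propagation at the core of such a pipeline looks roughly as follows. This is a simple Euler-integration sketch with a Rodrigues rotation update; real systems use on-manifold preintegration, bias estimation, and fusion with other sensors, all omitted here.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def propagate(p, v, R, acc, gyro, dt):
    """Integrate position p, velocity v, rotation R with one IMU sample."""
    # Rotation update via the exponential map of the gyro increment.
    w = gyro * dt
    angle = np.linalg.norm(w)
    if angle > 1e-8:
        k = w / angle
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    R = R @ dR
    a_world = R @ acc + GRAVITY              # body-frame accel to world frame
    p = p + v * dt + 0.5 * a_world * dt**2   # constant-acceleration integration
    v = v + a_world * dt
    return p, v, R
```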

Contact Details: Giovanni Cioffi [cioffi (at) ifi (dot) uzh (dot) ch], Jiaxu Xing [jixing (at) ifi (dot) uzh (dot) ch]

Thesis Type: Master Thesis

See project on SiROP

Navigating on Mars - Available

Description: The first-ever Mars helicopter, Ingenuity, ended its mission after flying over texture-poor terrain on which RANSAC was not able to find inliers (https://spectrum.ieee.org/mars-helicopter-ingenuity-end-mission). Navigating the Martian terrain poses significant challenges due to its unique and often featureless landscape, compounded by factors such as dust storms, lack of distinct textures, and extreme environmental conditions. The absence of prominent landmarks and the homogeneity of the surface can severely disrupt optical navigation systems, leading to decreased accuracy in localization and path planning.

Goal: This project aims to address these challenges by developing a navigation system that is resilient to Mars' sparse features and dust interference, employing advanced computational techniques to enhance environmental perception and autonomy.

Contact Details: Manasi Muglikar muglikar (at) ifi (dot) uzh (dot) ch, Giovanni Cioffi cioffi (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Data-driven Event Generation from Images - Available

Description: Event cameras represent a significant advancement in imaging technology, capturing scenes based on changes in light intensity rather than at fixed intervals. This project aims to address the challenge of limited event-based datasets by generating synthetic events from traditional frame-based data. By employing data-driven deep learning techniques, we plan to create high-fidelity artificial events that closely mimic real-world occurrences, reducing the gap between simulated and actual event data.

Goal: In this project, the student applies current state-of-the-art deep learning models for image generation to create artificial events from standard frames. In the scope of the project, the student will obtain a deep understanding of event cameras to generate realistic events. Since multiple state-of-the-art deep learning methods will be explored, a good background in deep learning is required. If you are interested, we are happy to provide more details.
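For context, the model-based baseline that a learned generator would aim to improve upon is the per-pixel contrast-threshold model used by simulators such as ESIM; a minimal sketch (the threshold C and the random frames are placeholders):

```python
import numpy as np

def frames_to_events(I0, I1, C=0.2):
    """Return per-pixel signed event counts predicted between frames I0 and I1."""
    dL = np.log(I1 + 1e-6) - np.log(I0 + 1e-6)   # log-intensity change
    n = np.fix(dL / C)                           # full threshold crossings per pixel
    return n.astype(np.int32)                    # sign encodes event polarity

events = frames_to_events(np.random.rand(480, 640), np.random.rand(480, 640))
```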

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Marco Cannici [cannici (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Domain Transfer between Events and Frames for Motor Policies - Available

Description: Recent robotics breakthroughs mainly use motor policies trained in simulation to perform impressive maneuvers in the real world. This project seeks to capitalize on the high-temporal resolution of event cameras to enhance the robustness of motor policies by integrating event data as a sensor modality. However, current methods for generating events in simulation are inefficient, requiring the rendering of multiple frames at a high frame rate. The primary goal of this project is to develop a shared embedding space for events and frames, enabling training on simulated frames and deployment on real-world event data. The project offers opportunities to test the proposed approach on various robotic platforms, such as quadrotors and miniature cars, depending on the project's progress.

Goal: Participants will build upon the foundations laid by previous student projects (published at ECCV22) and leverage insights from the Unsupervised Domain Adaptation (UDA) literature to transfer motor policies from frames to events. The project will involve validating the approach in simulation, with potential real-world experiments conducted in our drone arena. Emphasis will be placed on demonstrating the advantages of event cameras in challenging environments, such as low-light conditions and highly dynamic scenes. Given the use of various deep learning methods for task transfer, a strong background in deep learning is essential for prospective participants. If you are interested, we are happy to provide more details.
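One possible ingredient for such a shared embedding space is a symmetric contrastive (InfoNCE) loss on paired frame/event features, sketched below; the encoders, feature dimension, and pairing strategy are open design choices, and the random embeddings are placeholders.

```python
import torch
import torch.nn.functional as F

def info_nce(z_frame, z_event, tau=0.07):
    """Pull matching frame/event embeddings together, push others apart."""
    z_f = F.normalize(z_frame, dim=1)
    z_e = F.normalize(z_event, dim=1)
    logits = z_f @ z_e.t() / tau                  # similarity of all pairs
    labels = torch.arange(z_f.shape[0])           # matching pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

loss = info_nce(torch.randn(64, 128), torch.randn(64, 128))
```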

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Jiaxu Xing [jixing (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Data-driven Keypoint Extractor for Event Data - Available

Description: Neuromorphic cameras, characterized by their robustness to High Dynamic Range (HDR) scenes, high-temporal resolution, and low power consumption, have paved the way for innovative applications in camera pose estimation, particularly for fast motions in challenging environments. This project focuses on enhancing camera pose estimation by exploring a data-driven approach to keypoint extraction, leveraging recent advancements in frame-based keypoint extraction techniques. To achieve this, the project aims to integrate a Visual Odometry (VO) pipeline to provide real-time feedback in an online fashion.

Goal: The primary objective of this project is to develop a data-driven keypoint extractor capable of identifying interest points in event data. Building upon insights from a previous student project (submitted to CVPR23), participants will harness neural network architectures to extract keypoints within an event stream. Furthermore, the project will involve adapting existing Visual Odometry (VO) algorithms to work with the developed keypoint extractor and tracker. Prospective students should possess prior programming experience in a deep learning framework and have completed at least one course in computer vision. This project offers an exciting opportunity to contribute to the cutting-edge intersection of neuromorphic imaging and computer vision. If you're ready to delve into the realm of data-driven keypoint extraction and its application in camera pose estimation, we're excited to provide further details.

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Giovanni Cioffi [cioffi (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

HDR NeRF: Neural Scene Reconstruction in Low Light - Available

Description: Implicit scene representations, particularly Neural Radiance Fields (NeRF), have significantly advanced scene reconstruction and synthesis, surpassing traditional methods in creating photorealistic renderings from sparse images. However, the potential of integrating these methods with advanced sensor technologies that measure light at the granularity of single photons, such as single-photon avalanche diodes (SPADs), remains largely unexplored. These sensors, known for their exceptional low-light sensitivity and high dynamic range, could address the limitations of current NeRF implementations in challenging lighting conditions, offering a novel approach to neural-based scene reconstruction.

Goal: This project aims to pioneer the integration of SPAD sensors with neural-based scene reconstruction frameworks, specifically focusing on enhancing Neural Radiance Fields. The primary objective is to investigate how photon-derived data can be utilized to improve scene reconstruction fidelity, depth accuracy, and rendering quality under diverse lighting conditions. By extending NeRF to incorporate event-based data from SPADs, we anticipate a significant leap in the performance of neural scene synthesis methodologies, particularly in challenging environments where traditional sensors falter.
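For background, the standard NeRF compositing step that a SPAD-aware variant would extend is sketched below: densities and colors sampled along a ray are alpha-composited with the accumulated transmittance (all tensor sizes are illustrative).

```python
import torch

def composite(sigma, c, delta):
    """sigma: (S,) densities, c: (S, 3) colors, delta: (S,) spacings -> RGB (3,)."""
    alpha = 1.0 - torch.exp(-sigma * delta)               # per-sample opacity
    # Exclusive cumulative product: transmittance before each sample.
    T = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10]), 0)[:-1]
    return torch.sum((T * alpha)[:, None] * c, dim=0)     # transmittance-weighted sum

rgb = composite(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.03))
```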

Contact Details: Manasi Muglikar muglikar (at) ifi (dot) uzh (dot) ch, Marco Cannici cannici (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP

Low Latency Occlusion-aware Object Tracking - Available

Description: In this project, we will develop a low-latency object tracker that is robust to occlusions. Three main paradigms exist in the literature for object tracking: tracking-by-detection, tracking-by-regression, and tracking-by-attention. We will start with a thorough literature review to evaluate how current solutions meet our end goal of being fast and robust to occlusions. Starting from the conclusions of this study, we will design a novel tracker that can achieve our goal. In addition to RGB images, we will investigate other sensor modalities such as inertial measurement units and event cameras. This project is done in collaboration with Meta.

Goal: Develop a low-latency object tracker that is robust to occlusions. We are looking for students with a strong computer vision background who are familiar with common deep learning tools (for example, PyTorch or TensorFlow).

Contact Details: Giovanni Cioffi [cioffi (at) ifi (dot) uzh (dot) ch], Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based occlusion removal - Available

Description: Unwanted camera occlusions, such as debris, dust, raindrops, and snow, can severely degrade the performance of computer-vision systems. Dynamic occlusions are particularly challenging because of the continuously changing pattern. This project aims to leverage the unique capabilities of event-based vision sensors to address the challenge of dynamic occlusions. By improving the reliability and accuracy of vision systems, this work could benefit a wide range of applications, from autonomous driving and drone navigation to environmental monitoring and augmented reality.

Goal: The goal of this project is to develop an advanced computational framework capable of identifying and eliminating dynamic occlusions from visual data in real-time, utilizing the high temporal resolution of event-based vision sensors.

Contact Details: Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch, Nico Messikommer nmessi (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Foundation models for vision-based reinforcement learning - Available

Description: Vision-based reinforcement learning (RL) is less sample-efficient and more complex to train than state-based RL because the policy is learned directly from raw image pixels rather than from the robot state. In comparison to state-based RL, vision-based policies need to learn some form of visual perception or image understanding from scratch, which makes them considerably harder to learn and to generalize. Foundation models trained on vast datasets have shown promising potential in producing feature representations that are useful for a large variety of downstream tasks. In this project, we investigate the capabilities of such models to provide robust feature representations for learning control policies. We plan to study how different feature representations affect the exploration behavior of RL policies, the resulting sample complexity, and the generalization and robustness to out-of-distribution samples. This will include training different RL policies on various robotics tasks using various intermediate feature representations.

Goal: Study the effect of feature representations from different foundation models on learning robotic control tasks with deep RL and imitation learning.
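A minimal sketch of the intended experimental plumbing: a policy head over a frozen, swappable encoder, so that features from different foundation models can be compared under an otherwise identical RL setup. The dummy encoder and all dimensions below are placeholders; the surrounding RL algorithm (PPO, SAC, ...) is unchanged.

```python
import torch
import torch.nn as nn

class FeaturePolicy(nn.Module):
    def __init__(self, encoder, feat_dim, act_dim):
        super().__init__()
        self.encoder = encoder.eval().requires_grad_(False)  # frozen features
        self.actor = nn.Sequential(nn.Linear(feat_dim, 256), nn.Tanh(),
                                   nn.Linear(256, act_dim))

    def forward(self, obs):
        with torch.no_grad():
            z = self.encoder(obs)         # representation is held fixed
        return torch.tanh(self.actor(z))  # action in [-1, 1]

# Any encoder with a known output dim can be dropped in; a dummy stand-in here:
dummy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 384))
policy = FeaturePolicy(dummy_encoder, feat_dim=384, act_dim=4)
action = policy(torch.rand(1, 3, 64, 64))
```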

Contact Details: Elie Aljalbout [aljalbout (AT) ifi (DOT) uzh (DOT) ch], Jiaxu Xing [jixing (AT) ifi (DOT) uzh (DOT) ch], Ismail Geles [geles (AT) ifi (DOT) uzh (DOT) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Sim-to-real transfer of event-camera-based RL policies - Available

Description: This project aims to develop and evaluate drone navigation policies using event-camera inputs, focusing on the challenges of transferring these policies from simulated environments to the real world. Event cameras, known for their high temporal resolution and dynamic range, offer unique advantages over traditional frame-based cameras, particularly in high-speed and low-light conditions. However, the sim-to-real gap (differences between simulated environments and the real world) poses significant challenges for the direct application of learned policies. In this project, we will try to understand the sim-to-real gap for event cameras and how this gap influences downstream control tasks, such as flying in the dark, dynamic obstacle avoidance, and object catching. This will include learning representations for event data (ideally while reducing the sim-to-real domain gap) and training navigation policies using either reinforcement or imitation learning methods.

Goal: Train drone navigation policies on various tasks in simulation using event-based images and transfer them to the real-world.

Contact Details: Elie Aljalbout [aljalbout (AT) ifi (DOT) uzh (DOT) ch], Marco Cannici [cannici (AT) ifi (DOT) uzh (DOT) ch], Ismail Geles [geles (AT) ifi (DOT) uzh (DOT) ch]

Thesis Type: Master Thesis

See project on SiROP

Offline-to-Online (model-based) Reinforcement Learning Transfer and Finetuning for Vision-based Robot Control - Available

Description: Vision-based reinforcement learning (RL) is often sample-inefficient and computationally very expensive. One way to bootstrap the learning process is to leverage offline interaction data. However, this approach faces significant challenges, including out-of-distribution (OOD) generalization and neural network plasticity. The goal of this project is to explore methods for transferring offline policies to the online regime in a way that alleviates the OOD problem. By initially training the robot's policies offline, the project seeks to leverage existing robot interaction data to bootstrap the learning of new policies. The focus is on overcoming domain shift problems and exploring innovative ways to fine-tune the model and policy using online interactions, effectively bridging the gap between offline and online learning. This advancement would enable us to efficiently leverage offline data (e.g., from human or expert-agent demonstrations or previous experiments) for training vision-based robotic policies. This could involve (but is not limited to) developing methods for uncertainty estimation and handling, domain adaptation for model-based RL, pessimism (during offline training), and curiosity (during fine-tuning) in RL methods.

Goal: Develop methods for transferring control policies learned offline to the online inference/finetuning regime.

Contact Details: Elie Aljalbout [aljalbout (AT) ifi (DOT) uzh (DOT) ch]

Thesis Type: Master Thesis

See project on SiROP

Hierarchical reinforcement learning for 3D object navigation tasks - Available

Description: This project aims to simplify the learning process for new drone control tasks by leveraging a pre-existing library of skills through reinforcement learning (RL). The primary objective is to define a skill library that includes both established drone controllers and new ones learned from offline data (skill discovery). Instead of teaching a drone to fly from scratch for each new task, the project focuses on bootstrapping the learning process with these pre-existing skills. For instance, if a drone needs to search for objects in a room, it can utilize its already-acquired flying skills. A high-level policy will be trained to determine which low-level skill to deploy and how to parameterize it, thus streamlining the adaptation to new tasks. This approach promises to enhance efficiency and effectiveness in training drones for a variety of complex control tasks by building on foundational skills. In addition, it facilitates training multi-task policies for drones.

Goal: Develop a hierarchical RL framework that leverages a library of low-level skills. The latter can be either learned using interaction, discovered from offline data, or designed.
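The two-level structure could look roughly like the sketch below, where the high-level policy outputs a skill index plus continuous parameters, and the skill library holds interchangeable callables (learned, discovered, or hand-designed controllers). All names and dimensions here are illustrative.

```python
import torch
import torch.nn as nn

class HighLevelPolicy(nn.Module):
    def __init__(self, obs_dim, n_skills, param_dim):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.skill_logits = nn.Linear(128, n_skills)   # which skill to run
        self.skill_params = nn.Linear(128, param_dim)  # how to parameterize it

    def forward(self, obs):
        h = self.trunk(obs)
        skill = torch.distributions.Categorical(logits=self.skill_logits(h)).sample()
        return skill, self.skill_params(h)

skills = [lambda p: ("hover", p), lambda p: ("goto", p)]   # toy skill library
policy = HighLevelPolicy(obs_dim=16, n_skills=len(skills), param_dim=3)
idx, params = policy(torch.rand(16))
print(skills[idx](params))   # dispatch the chosen, parameterized skill
```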

Contact Details: Elie Aljalbout [aljalbout (AT) ifi (DOT) uzh (DOT) ch], Angel Romero [roagui (AT) ifi (DOT) UZH (DOT) CH]

Thesis Type: Master Thesis

See project on SiROP

Meta-model-based RL for adaptive flight control - Available

Description: Drone dynamics can change significantly during flight due to variations in load, battery levels, and environmental factors such as wind conditions. These dynamic changes can adversely affect the drone's performance and stability, making it crucial to develop adaptive control strategies. The aim of this research is to develop and evaluate a meta model-based reinforcement learning (RL) framework to address these variable dynamics. By integrating dynamic models that account for these variations and employing meta-learning techniques, the proposed method seeks to enhance the adaptability and performance of drones in dynamic environments. The project will involve learning dynamic models for the drone, implementing a meta model-based RL framework, and evaluating its performance in both simulated and real-world scenarios, aiming for improved stability, efficiency, and task performance compared to existing RL approaches and traditional control methods. Successful completion of this project will contribute to the advancement of autonomous drone technology, offering robust and efficient solutions for various applications.

Goal: Develop methods for meta model-based RL to handle variable drone dynamics.

Contact Details: Elie Aljalbout [aljalbout (AT) ifi (DOT) uzh (DOT) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Develop an RL environment for GoKart racing - Available

Description: The student will develop an RL training environment that can train an agent to race on a race track. This environment will support different RL algorithms (PPO, SAC, etc.). The student will first start with building the environment itself, including the track and a potential ‘car’ with its basic dynamics. After this, the student will develop a reward function that is able to take the car to its limits of handling, similar to what we have achieved with our drones.

Goal: At the end of the project, the created environment should be able to train a car agent that races time-optimally through a track.
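A minimal sketch of what such an environment could look like, using the Gymnasium API and a kinematic bicycle model. The track geometry and the time-optimal reward design are the actual work of the project; the progress reward and all constants below are placeholders.

```python
import numpy as np
import gymnasium as gym

class GoKartEnv(gym.Env):
    def __init__(self, wheelbase=1.05, dt=0.05):
        self.wheelbase, self.dt = wheelbase, dt
        self.action_space = gym.spaces.Box(np.array([-0.4, -3.0]),
                                           np.array([0.4, 3.0]))   # steer, accel
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(4,))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = np.zeros(4)              # x, y, heading, speed
        return self.state.copy(), {}

    def step(self, action):
        steer, accel = action
        x, y, th, v = self.state
        x += v * np.cos(th) * self.dt         # kinematic bicycle update
        y += v * np.sin(th) * self.dt
        th += v / self.wheelbase * np.tan(steer) * self.dt
        v = max(v + accel * self.dt, 0.0)
        self.state = np.array([x, y, th, v])
        reward = v * self.dt                  # placeholder: reward track progress
        return self.state.copy(), reward, False, False, {}
```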

Contact Details: The project will be supervised by both Professor Emilio Frazzoli and Professor Davide Scaramuzza's groups. The experiments will take place at the Winterthur testing ground, utilizing our fleet of autonomous racing karts. Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Angel Romero (roagui AT ifi DOT uzh DOT ch), Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch), Ismail Geles (geles AT ifi DOT uzh DOT ch), Jiaxu Xing (jixing AT ifi DOT uzh DOT ch) and Maurilio di Cicco (mdicicco AT ethz DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

From Floorplan to Flight - Available

Description: Drone racing is considered a proxy task for many real-world applications, including search-and-rescue missions. In such an application, doorframes, corridors, and other features of the environment could be used as “gates” the drone needs to pass through. Relevant information on the layout could be extracted from a floor plan of the environment in which the drone is tasked to operate autonomously. To be able to train such navigation policies, the first step is to simulate the environment.

Goal: This project aims to develop a simulation of environments that procedurally generates corridors and doors based on an input floor plan. We will compare model-based approaches (placing objects according to heuristics/rules) with learning-based approaches, which directly generate the model based on the floor plan. Requirements:
- Machine learning experience (PyTorch)
- Excellent programming skills in C++ and Python
- 3D modeling experience (CAD, Blender) is a plus

Contact Details: Leonard Bauersfeld (bauersfeld@ifi.uzh.ch), Marco Cannici (cannici@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Vision-based End-to-End Flight with Obstacle Avoidance - Available

Description: Recent progress in drone racing enables end-to-end vision-based drone racing, directly from images to control commands, without explicit state estimation. In this project, we address the challenge of unforeseen obstacles and changes to the racing environment. The goal is to develop a control policy that can race through a predefined track but is robust to minor track layout and gate placement changes. Additionally, the policy should avoid obstacles that are placed on the racetrack, mimicking real-world applications where unforeseen obstacles can be present at any time. Requirements:
- Machine learning experience (PyTorch)
- Excellent programming skills in C++ and Python

Contact Details: Leonard Bauersfeld (bauersfeld@ifi.uzh.ch), Ismail Geles (geles@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Event-based Particle Image Velocimetry - Available

Description: When drones are operated in industrial environments, they are often flown in close proximity to large structures, such as bridges, buildings or ballast tanks. In those applications, the interactions of the induced flow produced by the drone’s propellers with the surrounding structures are significant and pose challenges to the stability and control of the vehicle. A common methodology to measure the airflow is particle image velocimetry (PIV). Here, smoke and small particles suspended in the surrounding air are tracked to estimate the flow field. In this project, we aim to leverage the high temporal resolution of event cameras to perform smoke-PIV, overcoming the main limitation of frame-based cameras in PIV setups. Applicants should have a strong background in machine learning and programming with Python/C++. Experience in fluid mechanics is beneficial but not a hard requirement.

Goal: The goal of the project is to develop and successfully demonstrate a PIV method in the real world.

Contact Details: Leonard Bauersfeld (bauersfeld@ifi.uzh.ch), Koen Muller (kmuller@ethz.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Vision-based Navigation in Dynamic Environment via Reinforcement Learning - Available

Description: In this project, the goal is to develop a vision-based policy that enables autonomous navigation in complex, cluttered environments. The learned policy should enable the robot to effectively reach a designated target based on visual input while safely avoiding encountered obstacles. Some of the use cases for this approach will be to ensure a safe landing on a moving target in a cluttered environment or to track a moving target in the wild. Applicants should have a solid understanding of reinforcement learning, machine learning experience (PyTorch), and programming experience in C++ and Python.

Goal: Develop such a policy based on an existing reinforcement learning pipeline. Extend the training environment adapted for the task definition. The approach will be demonstrated and validated both in simulated and real-world settings.

Contact Details: Jiaxu Xing (jixing@ifi.uzh.ch), Leonard Bauersfeld (bauersfeld@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Learning Rapid UAV Exploration with Foundation Models - Available

Description: In this project, our objective is to efficiently explore unknown indoor environments using UAVs. Recent research has demonstrated significant success in integrating foundation models with robotic systems. Leveraging these foundation models, the drone will employ semantic relationships learned from large-scale real-world data to actively explore and navigate through unknown environments. While most prior research has focused on ground-based robots, this project aims to investigate the potential of integrating foundation models with aerial robots to introduce more agility and flexibility. Applicants should have a solid understanding of mobile robot navigation, machine learning experience (PyTorch), and programming experience in C++ and Python.

Goal: Develop such a framework in simulation and conduct a comprehensive evaluation and analysis. If feasible, deploy such a model in a real-world environment.

Contact Details: Jiaxu Xing (jixing@ifi.uzh.ch), Nico Messikommer (nmessi@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-camera-based navigation for planetary landing in collaboration with the European Space Agency - Available

Description: Event-based cameras have remarkable advantages in challenging robotics conditions involving high dynamic range and very fast motion. These are exactly the conditions a spacecraft encounters during descent onto celestial bodies such as Mars or the Moon, where abrupt changes in illumination and fast dynamics relative to the ground can affect vision-based navigation systems relying on standard cameras. In this work, we want to design novel spacecraft navigation methods for descent and landing phases, exploiting the power efficiency and sparsity of event cameras. The project is in collaboration with the European Space Agency at the European Space Research and Technology Centre (ESTEC) in Noordwijk (NL). We are looking for students with strong programming (Python/Matlab) and computer vision backgrounds. Additionally, knowledge of machine learning frameworks (PyTorch, TensorFlow) is required.

Goal: Help build the next generation of high-speed event camera-based spacecraft navigation in challenging illumination conditions.

Contact Details: Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Marco Cannici (cannici AT ifi DOT uzh DOT ch), Nikola Zubic (zubic AT ifi DOT uzh DOT ch)

Thesis Type: Master Thesis

See project on SiROP

Reinforcement Learning for Go-Kart Racing - Available

Description: Model-free reinforcement learning (RL) approaches, which do not rely on explicit models, have showcased remarkable superiority over classical planning and control strategies. This advantage is attributed to their advanced exploration capabilities, enabling them to efficiently discover new optimal trajectories. Leveraging RL, our aim is to create an autonomous racing system capable of swiftly learning optimal racing strategies and navigating tracks more effectively (faster) than traditional methods and human drivers.

Goal: The primary objective of this project is to design and implement an RL-based system capable of autonomously racing a real go-kart around a track. Specifically, we aim to achieve the following goals:
1. Create a realistic simulation environment that accurately captures the dynamics of the autonomous go-kart platform, including its sensor readings and its interactions with the racing track.
2. Implement and train RL algorithms to learn optimal racing trajectories and braking points to maximize lap-time performance (no overtaking policies will be explored in this phase).
3. Deploy the RL algorithm on the real platform.
4. Design an experimental campaign to evaluate the autonomous agent's performance compared to classical planning and control strategies and human drivers.
Required background and knowledge:
- ROS, Python (C++ is a plus)
- RL (simulation, deployment)
- Sensor modeling and system identification
- Hands-on experience with real robots and RL

Contact Details: The project will be supervised by both Professor Emilio Frazzoli and Professor Davide Scaramuzza's groups. The experiments will take place at the Winterthur testing ground, utilizing our fleet of autonomous racing karts. Interested candidates should send their CV, transcripts (bachelor and master), and descriptions of relevant projects to Angel Romero (roagui AT ifi DOT uzh DOT ch), Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch), Ismail Geles (geles AT ifi DOT uzh DOT ch), Jiaxu Xing (jixing AT ifi DOT uzh DOT ch) and Maurilio di Cicco (mdicicco AT ethz DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Inverse Reinforcement Learning from Expert Pilots - Available

Description: Drone racing demands split-second decisions and precise maneuvers. However, training drones for such races relies heavily on hand-crafted reward functions. These methods require significant human effort in design choices and limit the flexibility of learned behaviors. Inverse Reinforcement Learning (IRL) offers a promising alternative. IRL allows an AI agent to learn a reward function by observing expert demonstrations. Imagine an AI agent analyzing recordings of champion drone pilots navigating challenging race courses. Through IRL, the agent can infer the implicit factors that contribute to success in drone racing, such as speed and agility.

Goal: We want to explore the application of Inverse Reinforcement Learning (IRL) for training RL agents performing drone races or FPV freestyle to develop methods that extract valuable knowledge from the actions and implicit understanding of expert pilots. This knowledge will then be translated into a robust reward function suitable for autonomous drone flights.

Contact Details: Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Elie Aljalbout [aljalbout (at) ifi (dot) uzh (dot) ch], Angel Romero [roagui (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Language-guided Drone Control - Available

Description: Imagine controlling a drone with simple, natural language instructions like “fly through the gap” or “follow that red car”: this is the vision behind language-guided drone control. However, translating natural language instructions into precise drone maneuvers presents a unique challenge. Drones operate in a dynamic environment, requiring real-time interpretation of user intent and the ability to adapt to unforeseen obstacles.

Goal: This project focuses on developing a novel system for language-guided drone control using recent advances in Vision Language Models (VLMs). Our goal is to bridge the gap between human language and drone actions. We aim to create a system that can understand natural language instructions, translate them into safe and efficient flight instructions, and control the drone accordingly, making it accessible to a wider range of users and enabling more intuitive human-drone interaction.

Contact Details: Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Elie Aljalbout [aljalbout (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Reinforcement Learning for Drone Maneuvers from Human Preferences - Available

Description: Traditionally, training drones for specific maneuvers relies on pre-defined reward functions meticulously crafted by domain experts. This approach limits the flexibility of learned behaviors and requires significant human effort. Additionally, defining reward functions for complex maneuvers like obstacle avoidance or acrobatics can be challenging. Recent works have demonstrated the effectiveness of utilizing human preferences for significant efficiency gains and for fine-tuning complex models, such as Large Language Models (LLMs). This approach allows a model to incorporate human feedback into its learned behavior.

Goal: This project aims to find novel methods for training drones to perform difficult maneuvers (e.g., obstacle avoidance or aerial acrobatics) with minimal human supervision and without pre-defined reward functions. We propose leveraging human preferences to guide the learning process, allowing the drone to learn desirable behaviors directly from human feedback.
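For reference, the standard preference-learning objective (a Bradley-Terry model over trajectory segments, as used in RLHF) is sketched below. The network size, segment length, and feature dimension are illustrative; in practice the segments would encode drone states and actions.

```python
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def preference_loss(seg_a, seg_b, a_preferred):
    """seg_*: (T, 10) state-action segments; a_preferred: 1.0 if A was chosen."""
    r_a = reward_net(seg_a).sum()        # total predicted reward of segment A
    r_b = reward_net(seg_b).sum()
    p_a = torch.sigmoid(r_a - r_b)       # Bradley-Terry preference probability
    return -(a_preferred * torch.log(p_a) + (1 - a_preferred) * torch.log(1 - p_a))

loss = preference_loss(torch.rand(50, 10), torch.rand(50, 10), torch.tensor(1.0))
loss.backward()                          # trains the reward net from preferences
```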

Contact Details: Ismail Geles [geles (at) ifi (dot) uzh (dot) ch], Angel Romero [roagui (at) ifi (dot) uzh (dot) ch], Jiaxu Xing [jixing (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

gpuFlightmare: High-Performance GPU-Based Physics Simulation and Image Rendering for Flying Robots - Available

Description: gpuFlightmare is a next-generation GPU-accelerated framework designed to enhance the capabilities of Flightmare, a CPU-based physics simulation tool. By transitioning to GPU processing, this project addresses two main limitations of the existing system: the inability to scale simulations to larger, more complex environments and the slow image rendering speeds that hinder efficient policy training for flying robots.

Goal: The goal of gpuFlightmare is to provide a more efficient and effective platform for developing and testing vision-based navigation policies. By improving simulation and rendering speeds, the project will facilitate faster iterations of policy training and validation, making it a valuable tool for researchers and developers in the field of aerial robotics.

Contact Details: Yunlong Song (song@ifi.uzh.ch), Nico Messikommer (nmessi@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Autonomous Flight Using A Camera - Available

Description: In First-Person View (FPV) drone flying, professional pilots demonstrate remarkable skill, navigating through complex environments with precision and flair. The essence of FPV flight lies not just in efficiency or speed, but in the "cool" factor: the ability to perform dynamic, agile maneuvers that captivate and impress. This project explores the challenge of capturing this "coolness" factor in optimization, enabling the development of an autonomous flight system capable of replicating the nuanced flight patterns of expert human pilots. Our research focuses on formulating these advanced maneuvers and implementing them through a vision-based system, allowing drones to autonomously navigate through cluttered spaces like forests with the same level of skill and style as their human counterparts.

Goal: To create a sophisticated autonomous FPV flight system that integrates advanced computer vision and control algorithms, enabling drones to autonomously execute complex, human-like maneuvers in cluttered and dynamically changing environments.

Contact Details: Yunlong Song (song@ifi.uzh.ch), Nico Messikommer (nmessi@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP