Student Projects


How to apply

To apply, please send your CV and your BSc and MSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Reinforcement Learning for Offboard Control of a Racing Drone - Available

Description: Autonomous drone racing using offboard control is very challenging because only a limited amount of sensor data is available to the offboard computer. Offboard control, however, allows using extremely lightweight drones that can reach much higher performance than heavier drones relying on onboard control. This project has two goals: first, implement and test a communication interface between a ROS-based flight stack and a C/Python-based codebase for low-latency, high-bandwidth communication between the drone and an offboard computer; second, use reinforcement learning to develop a policy that can successfully fly an offboard-controlled racing drone in the real world. Requirements: A strong background in robotics and machine learning is required, as well as ROS, C++, and Python skills. Experience with quadrotor hardware and drone flight is a plus.
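As a rough sketch of what the first goal's communication layer might involve, the snippet below packs and unpacks fixed-size binary state and command messages; the message layout, the 13-float state, and the field order are illustrative assumptions, not the lab's actual protocol:

```python
import struct

# Hypothetical fixed-size wire format for the drone <-> offboard link.
# Field contents and order are assumptions for illustration only.
STATE_FMT = "<13f"  # position (3), attitude quaternion (4), lin. vel. (3), ang. vel. (3)
CMD_FMT = "<4f"     # collective thrust + three body rates

def pack_command(thrust, wx, wy, wz):
    """Serialize a control command into a compact binary payload."""
    return struct.pack(CMD_FMT, thrust, wx, wy, wz)

def unpack_state(payload):
    """Deserialize a state message received from the flight stack."""
    return struct.unpack(STATE_FMT, payload)
```

Fixed-size messages like these could then be exchanged over a UDP socket to keep latency low, with a ROS node on the drone side translating them to and from flight-stack topics.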

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch), Angel Romero (roagui AT ifi DOT uzh DOT ch), and Yunlong Song (song AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Benchmarking Algorithms for Autonomous Drone Racing - Available

Description: Drone racing is an increasingly popular sport, and many world-class pilots use simulators to practice their skills. This project aims to deploy control algorithms for autonomous drone racing in the Liftoff Drone Racing simulator (https://www.liftoff-game.com/), collect data from autonomous and human-piloted flights, and perform a benchmark comparison between the two. The student will implement a control interface between the Liftoff simulator and an existing ROS-based flight stack, and collect and analyze flight performance data. Requirements: Strong programming experience in ROS, C++, and Python. Experience in Unity3D is a plus.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Study on the effects of camera resolution in Visual Odometry - Available

Description: Visual Odometry (VO) algorithms have gone beyond academic research and are now widely used in the real world. Robotics and AR/VR applications, among many others, rely on VO to estimate the ego-motion of the camera. Hardware and software co-design is key to developing accurate and robust algorithms. In this project, we will investigate how design choices at the hardware level affect VO performance. In particular, we will study how the camera resolution affects the accuracy and robustness of state-of-the-art VO pipelines. We believe that the results of this project will help academic research and companies in the hardware and software co-design of VO solutions and expand the use of VO algorithms in commercial products.

Goal: Get familiar with VO pipelines and simulation tools. Generate a high-resolution dataset including different camera motions. Benchmark state-of-the-art VO pipelines on this dataset as well as on real-world ones. We look for students with a strong programming (C++ preferred) and computer vision background (ideally having taken Prof. Scaramuzza's class).
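A benchmark of this kind typically reports the absolute trajectory error (ATE) of each pipeline at each resolution. A minimal sketch of that metric, assuming the estimated and ground-truth trajectories are already time-associated and aligned:

```python
import numpy as np

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error between estimated and
    ground-truth positions (N x 3 arrays). Assumes the two trajectories
    are already time-associated and expressed in the same frame."""
    err = np.asarray(est) - np.asarray(gt)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))
```

Running this on the same sequence rendered at several resolutions gives one accuracy curve per VO pipeline.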

Contact Details: Giovanni Cioffi (cioffi@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Perception Aware Minimum-time Planning in Cluttered Environments - Available

Description: Autonomous drone racing requires planning trajectories that pass through a sequence of gates as fast as possible to beat other competitors. However, planning high-speed trajectories in obstacle-cluttered environments for dynamic systems like drones is a challenging problem. Moreover, the planning algorithms should support autonomous vision-based flight by creating trajectories suited to the onboard camera used for both state estimation and gate detection. The goal of this project is to develop planning algorithms for drones that consider known cluttered environments and plan perception-aware minimum-time trajectories. Applicants should have strong experience in C++ and experience with ROS.

Contact Details: Please send your CV and transcripts (bachelor and master) to Robert Penicka (penicka AT ifi DOT uzh DOT ch) and Giovanni Cioffi (cioffi AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Building and controlling a negative thrust quadcopter - Available

Description: Quadcopter platforms have been gaining popularity in recent years due to their maneuverability and uncomplicated design. Recent advances in hardware components, such as motors and electronic speed controllers (ESCs), unlock different possible extensions of the classical quadcopter design to gain new capabilities. In this project, we aim to modify the current design of our platform to build a quadcopter that is able to generate thrust in both positive and negative directions by changing the rotation direction of the motors on the fly. This will provide the quadcopter with new ways of performing complex maneuvers that have not been seen before.

Goal: The student will first modify our current drone design to include motors and ESCs that support changes in rotation direction. Then, the student will design a control architecture, based on the existing ones, that takes into account the new input space. As a proof of concept, the student will test this new drone design by performing highly agile maneuvers and comparing them with state-of-the-art platforms and algorithms.

Contact Details: Please send your CV and transcripts (bachelor and master) to Angel Romero (roagui AT ifi DOT uzh DOT ch) and Leonard Bauersfeld (bauersfeld AT ifi DOT uzh DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Deep Learning for Model Predictive Contouring Control - Available

Description: Model Predictive Contouring Control (MPCC) has been shown to achieve very good results in the task of time-optimal multi-waypoint flight. MPCC methods have the freedom to select the optimal states of the system at runtime, removing the need for a computationally expensive reference trajectory. Our recent work shows that MPCC can achieve better lap times than state-of-the-art planning-plus-tracking approaches, and that the method can run in real time.

Goal: An additional benefit of the MPCC approach is that only two relevant parameters need to be tuned in the cost function: the contour weight and the progress weight. In this project, we aim to exploit the low dimensionality of this tuning parameter space and apply learning techniques to find a mapping from a high-level task (for example, passing track waypoints in a certain order in minimum time) to MPCC tuning parameters.
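For context, the contour weight penalizes one of the two components into which MPCC splits the tracking error. A 2D sketch of that standard decomposition (symbols follow generic MPCC notation, not a specific codebase):

```python
import numpy as np

def contour_lag_errors(p, p_ref, t_ref):
    """Split the error of position p w.r.t. the reference point p_ref
    (with unit tangent t_ref) into a lag component along the path and a
    signed contour component orthogonal to it (2D case)."""
    p, p_ref, t_ref = map(np.asarray, (p, p_ref, t_ref))
    e = p - p_ref
    e_lag = float(e @ t_ref)                              # along-path error
    e_contour = float(t_ref[0] * e[1] - t_ref[1] * e[0])  # signed lateral error
    return e_lag, e_contour
```

In the MPCC cost, the contour weight multiplies the squared contour error, while the progress weight rewards advancement of the path parameter.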

Contact Details: Please send your CV and transcripts (bachelor and master) to Angel Romero (roagui AT ifi DOT uzh DOT ch) and Yunlong Song (song AT ifi DOT uzh DOT ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Online Replanning for Autonomous Drone Racing - Available

Description: Online trajectory replanning is necessary for deploying autonomous drones in racing environments that are only partially known or that contain moving obstacles. In drone racing, the goal is to minimize the time to fly the drone through a race track. Therefore, the resulting trajectories have to exploit the full actuation of the quadrotor to minimize trajectory duration while remaining feasible for the quadrotor dynamics. Furthermore, the planning algorithm has to be computationally lightweight to enable fast online replanning for collision avoidance. The goal of this project is to pursue research on online trajectory replanning for quadrotors with a minimum-time objective. Applicants should have experience in C++ and ROS.

Contact Details: Please send your CV and transcripts (bachelor and master) to Robert Penicka (penicka AT ifi DOT uzh DOT ch) and Angel Romero (roagui AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Planning for Multi-player Competitive Drone Racing - Available

Description: Drone racing is a competitive sport where pilots have to maneuver their platforms through a race track while avoiding and overtaking other drones. Therefore, autonomous drones have to plan with respect to other competitors while flying the track in minimum time. This requires predicting opponent drones’ trajectories and planning one’s own trajectories to avoid collisions with them. Moreover, the planner has to consider overtaking strategies and plan such maneuvers to beat the other players. The goal of this project is to pursue research on planning for multi-player competitive drone racing. Applicants should have experience in C++ and ROS.

Contact Details: Please send your CV and transcripts (bachelor and master) to Robert Penicka (penicka AT ifi DOT uzh DOT ch) and Yunlong Song (song AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Tracking with Spiking Neural Networks and Event Cameras - Available

Description: This project aims at developing a camera tracking approach with sparse input (events from event cameras) and sparse computation (spiking neural networks). Conventional approaches are built on visual-inertial odometry using image and IMU data. Ideally, such an approach would process image data at high frequency for maximum accuracy. However, this is not attainable on resource-constrained devices such as mobile phones or wearables. Event data, in combination with spiking neural networks, can overcome this trade-off by leveraging sparse computation by design. To achieve this goal, we will first investigate ego-motion tracking for rotational motion and subsequently investigate 6-DoF ego-motion tracking. This project will be done in collaboration with Synsense (https://www.synsense-neuromorphic.com) and will benefit from their experience as well as our own prior work in this research space.

Contact Details: Mathias Gehrig, mgehrig (at) ifi.uzh.ch

Thesis Type: Master Thesis

See project on SiROP

Learning to calibrate an event camera - Available

Description: Camera calibration is an important prerequisite for 3D computer vision tasks. Calibration techniques currently used for event cameras require a special calibration target with a blinking pattern. This project focuses on developing a toolkit to calibrate an event camera using deep learning methods. The project will build on state-of-the-art deep learning techniques for events and evaluate them on the camera calibration task.

Goal: The goal of this project is to develop and evaluate deep learning tools for event camera calibration.

Contact Details: Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch , Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Event-based Feature Tracking on an Embedded Platform - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high-dynamic-range robotic applications, such as fast obstacle avoidance. In particular, event cameras can track features or objects in the blind time between two frames, which makes it possible to react quickly to changes in the scene.

Goal: In this project we want to deploy an event-based feature tracking algorithm on a resource constrained platform such as a drone. Applicants should have a strong background in C++ programming and low-level vision. Experience with embedded programming is a plus.

Contact Details: Daniel Gehrig (dgehrig (at) ifi.uzh.ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

Generating High-Speed Video with Event Cameras - Available

Description: Event cameras have shown amazing capabilities in slowing down video as was shown in our previous work, TimeLens (https://www.youtube.com/watch?v=dVLyia-ezvo). This is because, compared to standard cameras, event cameras only capture a highly compressed representation of the visual signal, and do this with high dynamic range and very low latency. It is this signal that can be decoded into intermediate frames. In this project we want to push the limits of what is possible using such a method and explore new extensions.

Goal: In this project we want to explore new extensions of video frame interpolation using an event camera.

Contact Details: Daniel Gehrig (dgehrig (at) ifi.uzh.ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high-dynamic-range robotic applications. They have been successfully applied in many settings, such as high-speed video and high-speed visual odometry. In spite of this success, the exact operating principle of event cameras, that is, how events and noise are generated from a given visual signal, is not well understood. In this work we want to explore new techniques for modelling the generation of events in an event camera, which would have wide implications for existing techniques. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore new techniques for modelling an event camera.
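As a starting point, the commonly used idealized model emits an event whenever the per-pixel log intensity moves by a contrast threshold C from its level at the last event. A single-pixel sketch of that textbook model (real sensors add noise, refractory periods, and per-pixel threshold mismatch, which is exactly what this project would aim to capture):

```python
def generate_events(log_intensity, timestamps, C=0.2):
    """Idealized single-pixel event generation: emit ON/OFF events each
    time the log intensity crosses the contrast threshold C relative to
    the reference level set at the last event."""
    events = []
    ref = log_intensity[0]
    for L, t in zip(log_intensity[1:], timestamps[1:]):
        while L - ref >= C:       # brightness increased by at least C
            ref += C
            events.append((t, +1))
        while ref - L >= C:       # brightness decreased by at least C
            ref -= C
            events.append((t, -1))
    return events
```

For example, a log-intensity step of 0.45 with C = 0.2 yields two ON events at the step's timestamp.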

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Deep Learning for Vision-Based State Estimation in Drone Racing - Available

Description: Drone pilots use eye movements to extract relevant visual information from a first-person video stream. How eye movements affect drone state estimation and piloting behavior is poorly understood. The goal of this study is to investigate the relationship between eye gaze, optical flow, and drone state using deep learning and statistical modeling. The student will be provided with a large dataset of eye-tracking and optical-flow data of human pilots in a drone race. The student will use statistical methods (e.g., general linear mixed models), machine learning (e.g., LSTMs, deep learning), and data visualization techniques to clarify the relationship between optical flow, eye gaze, and piloting behavior in various drone racing maneuvers. Requirements: Strong programming skills in Python or Matlab. Background in machine learning and statistics. Previous experience with optical flow and eye tracking is a plus.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Yunlong Song (song AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Hardware and Software Codesign for Visual-Inertial Odometry - Available

Description: Visual-inertial odometry algorithms have matured in the last decade, but they still rely on high-quality data and struggle with fast motion. While industry spends vast resources on software and hardware co-design, robotics research often relies on generic off-the-shelf cameras and inertial measurement units, which typically timestamp their data from unsynchronized clocks and therefore suffer from timestamp drift and sensor-to-sensor timestamp offsets. To overcome these shortcomings, we are developing OBVIO, a sensor board to democratize robust visual-inertial odometry. OBVIO provides global-shutter VGA images at a rate of 100 Hz and inertial acceleration and angular velocity measurements at a rate of 500 Hz. The data is accurately timestamped from a single onboard clock, which is synchronized with the host computer’s clock through a Kalman filter.

Goal: The goal of the project is to first carry out a thorough review of the current PCB design (choice of camera type, communication protocols, architecture, etc.) in order to identify the bottlenecks present in the implementation. Then, the student will extend the current design with different solutions, including changes to the PCB design and the firmware, on both the sensor board and the host computer. Finally, the student will compare the performance of the sensor board by running current state-of-the-art VIO algorithms, and test the board in real-time applications, such as vision-based drone racing research. Requirements: Experience with PCB design, strong programming skills in C and C++, and familiarity with Linux environments. Previous experience with visual-inertial odometry is a plus.
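To illustrate the clock-synchronization idea, here is a minimal two-state (offset and drift) Kalman filter sketch; the state layout and the noise values are placeholder assumptions, not OBVIO's actual design or tuning:

```python
import numpy as np

class ClockSyncKF:
    """Toy Kalman filter tracking host-to-sensor clock offset (s) and
    drift rate (s/s) from noisy offset measurements. Noise parameters
    are arbitrary placeholders for illustration."""

    def __init__(self, q_offset=1e-8, q_drift=1e-10, r_meas=1e-6):
        self.x = np.zeros(2)                  # [offset, drift]
        self.P = np.eye(2)
        self.Q = np.diag([q_offset, q_drift])
        self.R = r_meas

    def step(self, dt, measured_offset):
        # Predict: offset advances by drift * dt.
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        # Update with a scalar offset measurement.
        H = np.array([1.0, 0.0])
        y = measured_offset - H @ self.x
        S = H @ self.P @ H + self.R
        K = self.P @ H / S
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
        return self.x[0]
```

Fed with periodic offset measurements (e.g., from timestamp exchanges with the host), the filter smooths jitter and exposes the drift rate explicitly.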

Contact Details: Please send your CV and transcripts (bachelor and master) to Angel Romero (roagui AT ifi DOT uzh DOT ch), Philipp Foehn (foehn AT ifi DOT uzh DOT ch) and Thomas Laengle (tlaengle AT ifi DOT uzh DOT ch)

Thesis Type: Master Thesis

See project on SiROP

Perception Aware Model Predictive Control for Autonomous Power Line Tracking - Available

Description: Classical power line inspection and maintenance are dangerous, costly, and time consuming. Drones could mitigate the risk for humans and minimize the cost, to the direct benefit of the power line infrastructure. Coupling perception and path planning with control has become increasingly popular in aerial vehicles. This project will investigate vision-based navigation approaches that tightly couple perception and control in order to satisfy the drone dynamics and compute trajectories that are feasible with respect to input saturation. This involves further developing our research on perception-aware Model Predictive Control (MPC) for quadrotors, solving the challenging aspects of a power line inspection scenario. A perception-aware MPC approach would ideally improve the aerial vehicle's behavior during an approach maneuver.

Goal: Pursue research on a unified control and planning approach that integrates action and perception objectives. We look for students with a strong computer vision background and hands-on control experience with MPC. This work involves a final demonstration with field-testing results in challenging conditions (e.g., HDR, high speed).

Contact Details: Giovanni Cioffi (cioffi@ifi.uzh.ch), Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Data-Driven Quadrotor Simulation - Available

Description: Current quadrotor simulators used in research only model the simplified dynamics of quadrotors and typically do not account for aerodynamic effects encountered at high speeds. To push the limits of fast autonomous flight, access to a simulator that accurately models the platform in those regimes is very important. Having access to the largest OptiTrack space in Zurich, the goal of this thesis is to record a dataset of very agile maneuvers and use it to accurately identify the quadrotor platform. After analyzing different identification methods, the best-performing candidate is used to build an accurate quadrotor dynamics model that is then integrated into a simulator. Applicants should have strong experience in C++ and Python programming and a robotics background.

Goal: Collect a dataset of high-speed maneuvers in the OptiTrack space and identify the quadrotor platform. Use this model to create a fast and accurate quadrotor simulation. Verify the simulator by comparing it to real-world data.
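One of the simplest identification baselines such a comparison could include is a least-squares fit of a diagonal linear drag model to the residual between measured and nominal accelerations; this is a deliberately simple sketch, not one of the methods the project would necessarily settle on:

```python
import numpy as np

def fit_linear_drag(vel, acc_residual):
    """Least-squares fit of a diagonal linear drag model
    a_res ~ -diag(d) * v from logged body velocities and residual
    accelerations (both N x 3 arrays). Returns the three drag
    coefficients d."""
    d = np.zeros(3)
    for i in range(3):
        v = vel[:, i]
        d[i] = -(v @ acc_residual[:, i]) / (v @ v)
    return d
```

More elaborate candidates (e.g., polynomial or learned aerodynamic models) would be fitted and cross-validated on the same logged data.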

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch), Yunlong Song (song@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Continuous-time Online Visual-Inertial Odometry for Fast Flights - Available

Description: The continuous-time (CT) trajectory representation in visual-inertial odometry (VIO) has the advantage of facilitating the fusion of asynchronous, and potentially time-shifted, camera and IMU measurements. This is beneficial when a hardware-synchronized sensor suite is not available. CT-VIO introduces a prior encoding the smoothness of the trajectory to be estimated. This prior can help the pose estimation of fast-flying drones, whose trajectories are expected to be smooth. Temporal basis functions, e.g., B-splines, are the most common choice for CT-VIO / CT-SLAM. Recent works have proposed algorithms to speed up the computation of the spline derivatives. Other efficient spline functions, such as the Hermite spline, also exist. In this project, we will start by studying different spline functions in terms of efficiency. We will select the best candidate and use it to develop an efficient CT-VIO algorithm that runs online on resource-constrained quadrotors.

Goal: Develop an efficient CT-VIO algorithm capable of running online on our quadrotors (the target platform is the Nvidia Jetson TX2). Benchmark the proposed algorithm against existing state-of-the-art VIO algorithms. We look for students with a strong computer vision and programming background (C++ preferred). This work involves a final demonstration of the proposed CT-VIO algorithm in a closed-loop controller to track fast drone trajectories.
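For reference, a uniform cubic B-spline (the typical CT-VIO choice on Euclidean states; CT-VIO papers usually use the cumulative variant on SE(3)) is evaluated from four consecutive control points with the standard basis matrix:

```python
import numpy as np

def bspline_position(ctrl, u):
    """Evaluate a uniform cubic B-spline at normalized time u in [0, 1)
    given 4 consecutive control points (4 x D array). Standard basis
    matrix form for uniform knots."""
    M = np.array([[1, 4, 1, 0],
                  [-3, 0, 3, 0],
                  [3, -6, 3, 0],
                  [-1, 3, -3, 1]]) / 6.0
    b = np.array([1.0, u, u**2, u**3]) @ M   # the four basis weights
    return b @ np.asarray(ctrl, dtype=float)
```

Velocity and acceleration follow analytically by differentiating the monomial vector, which is exactly where the derivative speed-ups studied in this project come into play.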

Contact Details: Giovanni Cioffi (cioffi@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Efficient Learning-aided Visual Inertial Odometry - Available

Description: Recent works have shown that deep learning (DL) techniques are beneficial for visual-inertial odometry (VIO). Different ways to include DL in VIO have been proposed: end-to-end learning from images to poses, replacing one or more blocks of a standard VIO pipeline with learning-based solutions, and including learning within a model-based VIO block. The project will start with a study of the current literature on learning-based VIO/SLAM algorithms and an evaluation of how, where, and when DL is beneficial for VIO/SLAM. We will use the results of this evaluation to enhance a current state-of-the-art VIO pipeline with DL, focusing our attention on algorithm efficiency at inference time. The developed learning-aided VIO pipeline will be compared to existing state-of-the-art model-based algorithms, with a focus on robustness, and deployed on embedded platforms (Nvidia Jetson TX2 or Xavier).

Goal: Enhance standard VIO algorithms with DL techniques to improve robustness. Benchmark the proposed algorithm against existing state-of-the-art standard VIO algorithms. Deploy the proposed algorithm on embedded platforms. We look for students with a strong computer vision background who are familiar with common software tools used in DL (for example, PyTorch or TensorFlow).

Contact Details: Giovanni Cioffi (cioffi@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Probabilistic System Identification of a Quadrotor Platform - Available

Description: Most planning and control algorithms used on quadrotors make use of a nominal model of the platform dynamics to compute feasible trajectories or generate control commands. Such models are derived from first principles and typically cannot fully capture the true dynamics of the system, leading to sub-optimal performance. One appealing approach to overcome this limitation is to use Gaussian Processes for system modeling. Gaussian Process regression has been widely used in supervised machine learning due to its flexibility and inherent ability to describe uncertainty in the prediction. This work investigates the use of Gaussian Processes for uncertainty-aware system identification of a quadrotor platform. Requirements: machine learning experience (preferable but not strictly required); programming experience in C++ and Python.

Goal: Implement an uncertainty-aware model of the quadrotor dynamics, train and evaluate the model on simulated and real data.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning-Guided MPC Flight - Available

Description: Model predictive control (MPC) is a versatile optimization-based control method that makes it possible to incorporate constraints directly into the control problem. The advantages of MPC can be seen in its ability to accurately control dynamical systems that include large time delays and high-order dynamics. Recent advances in compute hardware allow running MPC even on compute-constrained quadrotors. While model predictive control can deal with complex systems and constraints, it still assumes the existence of a reference trajectory. With this project we aim to guide the MPC to a feasible reference trajectory by using a neural network that directly predicts an expressive intermediate representation from camera images. Such tight coupling of perception and control would make it possible to push the speed limits of autonomous flight through cluttered environments. Requirements: machine learning experience (TensorFlow and/or PyTorch); experience in MPC (preferable but not strictly required); programming experience in C++ and Python.

Goal: Evaluate different intermediate representations for autonomous flight. Implement the learned perception system in simulation and integrate the predictions into an existing MPC pipeline. If possible, deploy on a real system.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch), Philipp Föhn (foehn@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Deep learning based motion estimation from events - Available

Description: Optical flow estimation is the mainstay of dynamic scene understanding in robotics and computer vision. It finds application in SLAM, dynamic obstacle detection, computational photography, and beyond. However, extracting optical flow from frames is hard due to the discrete nature of frame-based acquisition. Instead, events from an event camera indirectly provide information about optical flow in continuous time. Hence, the intuition is that event cameras are the ideal sensors for optical flow estimation. In this project, you will dig deep into optical flow estimation from events. We will make use of recent innovations in neural network architectures and insights into event camera models to push the state of the art in the field. Contact us for more details.

Goal: The goal of this project is to develop a deep-learning-based method for dense optical flow estimation from events. A strong background in computer vision and machine learning is required.

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based depth estimation - Available

Description: Depth estimation plays an important role in many computer vision and robotics applications, such as augmented reality, navigation, or industrial inspection. Structured light (SL) systems estimate depth by actively projecting a known pattern on the scene and observing with a camera how light interacts (i.e., deforms and reflects) with the surfaces of the objects. This project will focus on event-based depth estimation using structured light systems. The resulting approach would make structured light systems suitable for generating high-speed scans.

Goal: The goal is to develop a system that estimates 3D depth maps with event cameras. The preferred candidate should have knowledge of computer vision and strong programming skills in Python and C++.
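Once the projected pattern has been decoded into per-pixel correspondences, depth follows from standard triangulation for a rectified projector-camera pair, exactly as in rectified stereo; a one-line sketch:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified triangulation: depth = f * b / d, with the focal length
    in pixels, the projector-camera baseline in meters, and the
    disparity in pixels."""
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel disparity with a 500-pixel focal length and a 10 cm baseline corresponds to a depth of 5 m; the event-based part of the project lies in decoding the high-speed pattern into these disparities.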

Contact Details: Manasi Muglikar, muglikar (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Vision-based Dynamic Obstacle Avoidance - Available

Description: Dynamic obstacle avoidance is a grand challenge in vision-based drone navigation. The classical mapping-planning-control pipeline might have difficulties when facing dynamic objects. Learning-based systems, such as end-to-end neural network policies, are gaining popularity in robotics for dealing with dynamic objects, due to their strong performance and their versatility in handling high-dimensional state representations. In particular, deep reinforcement learning allows for optimizing neural network policies via trial and error, forgoing the need for demonstrations.

Goal: The goal is to develop an autonomous vision-based navigation system that can avoid dynamic obstacles using deep reinforcement learning. Applicants should have strong experience in C++ and Python programming. A reinforcement learning and robotics background is required.

Contact Details: Yunlong Song (song (at) ifi (dot) uzh (dot) ch), Antonio Loquercio (loquercio (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Eyetracking Toolbox for Drone Racing Research - Available

Description: Eye tracking is one of the most important tools in human-machine interaction research. In our lab, we use eye tracking to study visual attention in drone racing pilots and to develop assistive technology for drone pilots. State-of-the-art eye-tracker software lacks the ability to automatically calibrate the eye tracker, compute and visualize data-quality metrics, extract features of interest, process data in parallel, and stream data in real time to ROS software. The goal of this project is thus to extend the pupil-labs codebase (https://github.com/pupil-labs/pupil) with a custom toolbox that solves the aforementioned tasks. The student will implement the toolbox in Python and C++, compare its performance to state-of-the-art eye-tracker software, and develop a demonstrator for real-time drone racing research applications. Requirements: Strong programming skills in Python and C++. Previous experience with ROS and eye tracking is a plus.

Contact Details: Please send your CV and transcripts (bachelor and master) to Christian Pfeiffer (cpfeiffe AT ifi DOT uzh DOT ch) and Manasi Muglikar (muglikar AT ifi DOT uzh DOT ch).

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Data-driven Keypoint Extractor for Event Data - Available

Description: Neuromorphic cameras exhibit several remarkable properties, such as robustness to HDR scenes, high temporal resolution, and low power consumption. Thanks to these characteristics, event cameras are applied to camera pose estimation for fast motions in challenging scenes. A common technique for camera pose estimation is the extraction and tracking of keypoints on the image plane. In the case of event cameras, most existing keypoint extraction methods are handcrafted. As a promising new direction, this project tackles keypoint extraction in a data-driven fashion based on recent advances in frame-based keypoint extractors.

Goal: The project aims to develop a data-driven keypoint extractor, which computes interest points in event data. Based on the current advances of learned keypoint extractors for traditional frames, the approach will leverage neural network architectures to extract and describe keypoints in an event stream. The student should have prior programming experience in a deep learning framework and completed at least one course in computer vision.

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Mathias Gehrig [mgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Domain Transfer between Events and Frames - Available

Description: Over the last few years, a vast collection of frame-based datasets has been assembled for countless tasks. In comparison, event-based datasets represent only a tiny fraction of the available datasets. Since current data-driven approaches rely heavily on labelled data, it is therefore highly promising to use labelled frame datasets to train event-based networks.

Goal: In this project, the student will extend recent advances from the unsupervised domain adaptation (UDA) literature for traditional frames to event data, in order to transfer multiple tasks from frames to events. The approach should be validated on several tasks (segmentation, object detection, etc.) in challenging environments (night, highly dynamic scenes) to highlight the benefits of event cameras. As several deep learning methods are used as tools for the task transfer, a strong background in deep learning is required. If you are interested, we are happy to provide more details.
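One widely used building block in the UDA literature is the gradient reversal layer from domain-adversarial training (DANN): the forward pass is the identity, while the backward pass negates and scales the gradient, so the shared feature extractor is pushed to produce features the domain classifier cannot tell apart. A framework-agnostic numpy sketch (class name and interface are our own; in practice this would be a custom autograd op in the chosen deep learning framework):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; flips the gradient sign in the backward pass."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task loss and domain-confusion loss

    def forward(self, x):
        return x

    def backward(self, grad_output):
        # The domain classifier minimizes its loss; the reversed gradient makes
        # the upstream feature extractor maximize it instead.
        return -self.lam * grad_output
```

Inserted between the feature extractor and a domain classifier trained on (labelled) frames and (unlabelled) events, this is one way to encourage domain-invariant features.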

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Daniel Gehrig [dgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Efficient Asynchronous Event-based CNN Processing - Available

Description: One of the amazing properties of event cameras is the high temporal resolution of the visual signal, which is in the range of microseconds. As a result, event cameras do not suffer from motion blur and can capture information in highly dynamic scenes, such as shooting a bullet at a gnome (https://www.youtube.com/watch?v=eomALySSGVU). This makes them extremely promising in critical applications such as autonomous driving. However, it remains challenging to efficiently process the sparse event stream to achieve low latency in vision algorithms.

Goal: In this project, we seek either to port existing event-based networks to achieve low-latency inference or to extend a current approach to make it more efficient. The student should be strongly self-motivated and curious about tackling research challenges in a principled way. Good Python programming skills in at least one deep learning framework are a must. Please contact us for more details.
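The key observation exploited by asynchronous event-based networks is that a single event changes only one input pixel, so only the outputs whose receptive field contains that pixel need to be recomputed, instead of re-running the whole layer. A dense-numpy sketch of this recomputation rule for one convolution layer (valid padding, stride 1; function names are illustrative, not from an existing codebase):

```python
import numpy as np

def conv2d(inp, kernel):
    """Dense valid-mode 2D cross-correlation (reference implementation)."""
    kh, kw = kernel.shape
    oh, ow = inp.shape[0] - kh + 1, inp.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(inp[i:i + kh, j:j + kw] * kernel)
    return out

def update_on_event(out, inp, kernel, y, x, delta):
    """Apply an event that changes inp[y, x] by `delta`, patching only the
    affected outputs instead of recomputing the whole feature map."""
    inp[y, x] += delta
    kh, kw = kernel.shape
    # Output (i, j) depends on inp[y, x] iff i <= y <= i + kh - 1, same for j.
    for i in range(max(0, y - kh + 1), min(out.shape[0], y + 1)):
        for j in range(max(0, x - kw + 1), min(out.shape[1], x + 1)):
            out[i, j] += delta * kernel[y - i, x - j]
    return out
```

For a k×k kernel, one event touches at most k² outputs, which is where the latency savings of sparse, asynchronous processing come from; stacking layers propagates the same sparsity through the network.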

Contact Details: Nico Messikommer [nmessi (at) ifi (dot) uzh (dot) ch], Daniel Gehrig [dgehrig (at) ifi (dot) uzh (dot) ch]

Thesis Type: Semester Project / Master Thesis

See project on SiROP

GPU-based Simulator for Event-Based Computer Vision - Available

Description: This project aims at implementing a fast GPU-based simulator to generate event data with corresponding ground truth. The bread and butter of modern computer vision are large, high-quality datasets on which neural networks can be trained. Many such datasets are available for image- or video-based computer vision research. This is not the case, however, for researchers working with event cameras: event camera datasets are still rather small, which limits the performance of data-centric approaches. The goal of this project is to address this problem with a simple but fast event camera simulator. Based on our previous work and experience in this direction, we will create a framework for generating realistic event data for low-level computer vision tasks on high-end GPUs. If you are interested in this project, contact us for more details.

Requirements:
* Strong self-motivation and curiosity for tackling research challenges
* Excellent programming skills (Python, C++)
* Experience in CUDA programming preferred
* At least one course or project in both computer vision and machine learning
* An outstanding academic record is preferred but may be compensated by a strong background related to this project
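The standard event-generation model such simulators implement (e.g. the lab's earlier ESIM work) fires an event at a pixel whenever the log intensity has changed by more than a contrast threshold C since the last event at that pixel. A per-pixel numpy sketch of this rule between two frames, with timestamps linearly interpolated along the brightness ramp (names and the `ref` bookkeeping are illustrative; a GPU implementation would parallelize exactly this per-pixel loop):

```python
import numpy as np

def generate_events(log_prev, log_curr, ref, t0, t1, C=0.2):
    """Emit (t, x, y, polarity) events where |log_curr - ref| crosses threshold C.

    `ref` holds, per pixel, the log intensity at the last emitted event and is
    updated in place, so it carries state across successive frame pairs.
    """
    events = []
    h, w = log_curr.shape
    for y in range(h):
        for x in range(w):
            delta = log_curr[y, x] - ref[y, x]
            pol = 1 if delta > 0 else -1
            # One event per threshold crossing (several if the change is large).
            while abs(log_curr[y, x] - ref[y, x]) >= C:
                ref[y, x] += pol * C
                # Interpolate the crossing time on the linear brightness ramp.
                frac = (ref[y, x] - log_prev[y, x]) / (
                    log_curr[y, x] - log_prev[y, x] + 1e-12)
                frac = float(np.clip(frac, 0.0, 1.0))
                events.append((t0 + frac * (t1 - t0), x, y, pol))
    return sorted(events)
```

Realistic simulators add noise, per-pixel threshold variation, and refractory periods on top of this ideal model, and render the input frames at a very high frame rate so the linear-ramp assumption holds.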

Contact Details: Mathias Gehrig [mgehrig (at) ifi (dot) uzh (dot) ch], Daniel Gehrig [dgehrig (at) ifi (dot) uzh (dot) ch]. Please send your CV and transcripts (bachelor and master).

Thesis Type: Master Thesis

See project on SiROP