Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Reinforcement learning for 3D surgery planning of Femoral Head Reduction Osteotomy (in collaboration with Balgrist hospital) - Available

Description: Morbus Legg-Calvé-Perthes is a paediatric disorder of the lower extremities, causing deformities of the femoral head. Surgical treatment for this bone deformity can be achieved by a procedure known as femoral head reduction osteotomy (FHRO), which involves the resection of a wedge from the femoral head to restore the function of the joint. The preoperative planning of this procedure is a complex three-dimensional (3D) optimization problem involving more than 20 degrees of freedom (DoF), as it comprises the calculation of the surgical cuts and the repositioning of the resected fragment to the desired anatomical position. This process is currently done manually in collaboration between engineers and surgeons.

Goal: In the course of this master thesis, you will help us improve our current surgery planning methods by developing an approach to predict the repositioning of the fragment and the pose of the cutting planes defining the bone wedge. The objective of this master thesis is to apply deep (reinforcement) learning techniques to automatically find an optimal solution for the preoperative planning of FHRO. We will start by solving a simplified version of the optimization problem, with a reduced DoF involving only the calculation of the bone fragment repositioning, and we will gradually increase the DoF and the complexity of the task; a toy sketch of this reduced formulation is given below. This project is part of a bigger framework, which is currently under development in our clinic for optimal surgical outcomes. (The student will mainly work at the Balgrist CAMPUS) **Requirements:** Hands-on experience in reinforcement learning and deep learning. Strong coding skills in Python. Experience in mathematical optimization and spatial transformations is a plus.
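
To make the reduced-DoF starting point concrete, here is a minimal, purely illustrative sketch of how the fragment-repositioning subproblem could be posed as a reinforcement-learning environment (the class name, state/action definitions, and reward are hypothetical placeholders, not the clinic's actual planning framework):

```python
import numpy as np

class FragmentRepositionEnv:
    """Toy environment: the agent nudges a 6-DoF bone-fragment pose
    (x, y, z, roll, pitch, yaw) towards a surgeon-defined target pose.
    Everything here is illustrative only."""

    def __init__(self, target_pose, max_steps=100):
        self.target = np.asarray(target_pose, dtype=np.float64)  # desired 6-DoF pose
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.pose = np.zeros(6)   # pathological starting pose (origin in this toy setup)
        self.steps = 0
        return self.pose.copy()

    def step(self, action):
        # action: small 6-DoF increment, clipped to keep each correction plausible
        self.pose += np.clip(np.asarray(action, dtype=np.float64), -0.01, 0.01)
        self.steps += 1
        error = np.linalg.norm(self.pose - self.target)
        reward = -error           # dense reward: negative remaining pose error
        done = error < 1e-3 or self.steps >= self.max_steps
        return self.pose.copy(), reward, done, {}
```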

Contact Details: Yunlong Song (song@ifi.uzh.ch), Joelle Ackermann (joelle.ackermann@balgrist.ch), Prof. Philipp Fuernstahl (philipp.fuernstahl@balgrist.ch)

Thesis Type: Master Thesis

See project on SiROP

Machine Learning for Feature-tracking with Event Cameras - Available

Description: This project is set to limited visibility by its publisher. To see the project description, log in at SiROP: click the "Open this project..." link below and sign in with your university login (or create an account). If your affiliation is not created automatically, please follow these instructions: http://bit.ly/sirop-affiliate

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Developing a platform for multi-camera SLAM (collaboration with Sony) - Available

Description: Stereo cameras are used in SLAM pipelines to provide robustness and accurate scale even under challenging motions such as pure rotations. Integrating IMU measurements with a stereo camera allows for accurate tracking performance in SLAM, especially when the platform undergoes fast motion. There have been successful commercial applications of visual SLAM with stereo sensors; for example, the Skydio R1 drone uses 12 cameras for navigation to ensure robustness. In this project, we will develop a hardware-synchronized system of stereo cameras and an IMU to process the camera images from multiple stereo pairs and the IMU measurements on board in real time.

Goal: The goal of this project is to develop a pipeline that can simultaneously record images from multiple stereo cameras and an IMU and perform some basic image processing on board in real time. Applicants should have experience designing real-time systems, experience with ROS, and a strong background in C/C++. Experience with designing drivers for embedded platforms is a plus.
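
As a rough illustration of the recording side, the sketch below time-aligns one stereo pair with message_filters and buffers IMU messages in rospy (topic names and rates are placeholder assumptions; the actual system would rely on hardware triggering and would likely be written in C++):

```python
#!/usr/bin/env python
# Illustrative sketch only: subscribe to one stereo pair and the IMU, and
# approximately time-align the two image streams with message_filters.
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu

def stereo_callback(left, right):
    # Images arrive (approximately) synchronized; basic processing would go here.
    rospy.loginfo_throttle(1.0, "stereo pair at t=%.6f" % left.header.stamp.to_sec())

def imu_callback(msg):
    pass  # buffer IMU measurements for later processing

if __name__ == "__main__":
    rospy.init_node("multi_stereo_recorder")
    left_sub = message_filters.Subscriber("/cam0/image_raw", Image)
    right_sub = message_filters.Subscriber("/cam1/image_raw", Image)
    sync = message_filters.ApproximateTimeSynchronizer(
        [left_sub, right_sub], queue_size=10, slop=0.005)
    sync.registerCallback(stereo_callback)
    rospy.Subscriber("/imu0", Imu, imu_callback, queue_size=200)
    rospy.spin()
```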

Contact Details: Manasi Muglikar (muglikar(at)ifi(dot)uzh(dot)ch)

Thesis Type: Semester Project

See project on SiROP

Bringing Thermal Cameras into Robotics - Available

Description: Thermographic cameras can capture detailed images regardless of ambient lighting conditions. They use infrared (IR) sensing technology to map heat variations within the sensor's range and field of view, providing movement detection and hot-spot mapping even in total darkness. The visible range covers wavelengths of approximately 400 – 700 nanometres (nm), whereas thermographic cameras generally sample thermal radiation from the long-wave infrared range (approximately 7,000 – 14,000 nm), which has great potential in robotics. Thermographic images are useful for identifying weak points on power lines, along cables, and on insulators or containers. However, current lightweight thermal cameras remain largely unexplored and are limited in pixel resolution (e.g., 32x32 pixels), which prevents them from delivering the sensitivity, resolution, and image quality needed for meaningful applications. This work aims to expand the frontiers of computer vision by using thermographic cameras and investigating their application in robotics, e.g., perception, state estimation, and path planning. The project will combine traditional computer vision techniques with deep-learning approaches to bring thermographic images into the field of robotics. Requirements: Background in computer vision and machine learning - Deep learning experience preferable - Excellent programming experience in C++ and Python

Goal: Perception, state estimation or path planning using thermographic cameras.

Contact Details: Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Super Resolve Event-based Imaging - Available

Description: Event cameras are bio-inspired vision sensors that work radically differently from conventional cameras. Instead of capturing intensity images at a fixed rate, event cameras measure changes of intensity asynchronously at the time they occur. This results in a stream of events, which encode the time, location, and polarity (sign) of brightness changes. They have a very high dynamic range (140 dB versus 60 dB), do not suffer from motion blur, and provide measurements with a latency as low as one microsecond. Event cameras are a viable alternative (or complement) to conventional cameras in conditions that are challenging for the latter. This student work will investigate super-resolution techniques to process a stream of events, with applications to drones and/or autonomous driving scenarios. Super-resolved intensity imaging generates a visually high-resolution (HR) output from a low-resolution (LR) input. This inverse problem is ill-posed, since multiple HR solutions can map to any LR input. Convolutional neural networks are a promising approach to single-image super resolution (SISR), e.g., the super-resolution convolutional neural network (SRCNN). This project will explore how such techniques can be applied to a stream of events and investigate different network structures that balance performance and speed. Requirements: Background in computer vision and machine learning - Deep learning experience preferable - Excellent programming experience in C++ and Python

Goal: Generate a high-resolution (HR) image from a low-resolution (LR) stream of events.
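
For orientation, here is a minimal PyTorch sketch of an SRCNN-style network applied to an event frame; representing the event stream as a two-channel frame of positive/negative event counts is an assumption made purely for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EventSRCNN(nn.Module):
    """SRCNN-style network (9-1-5 layer sizes, as in the original SRCNN),
    here fed with an event frame instead of an intensity image."""

    def __init__(self, channels=2, scale=2):
        super().__init__()
        self.scale = scale
        self.feat = nn.Conv2d(channels, 64, kernel_size=9, padding=4)  # feature extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)                    # non-linear mapping
        self.rec = nn.Conv2d(32, channels, kernel_size=5, padding=2)   # reconstruction

    def forward(self, x):
        # SRCNN operates on an input that is first upsampled to the target resolution
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        x = F.relu(self.feat(x))
        x = F.relu(self.map(x))
        return self.rec(x)

# lr_events: low-resolution event frame, e.g. per-pixel positive/negative event counts
lr_events = torch.rand(1, 2, 32, 32)
hr_pred = EventSRCNN()(lr_events)  # -> (1, 2, 64, 64)
```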

Contact Details: Javier Hidalgo-Carrió (jhidalgocarrio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Probabilistic System Identification of a Quadrotor Platform - Available

Description: Most planning & control algorithms used on quadrotors make use of a nominal model of the platform dynamics to compute feasible trajectories or generate control commands. Such models are derived using first principles and typically cannot fully capture the true dynamics of the system, leading to sub-optimal performance. One appealing approach to overcome this limitation is to use Gaussian Processes for system modeling. Gaussian Process regression has been widely used in supervised machine learning due to its flexibility and inherent ability to describe uncertainty in the prediction. This work investigates the usage of Gaussian Processes for uncertainty-aware system identification of a quadrotor platform. Requirements: - Machine learning experience preferable but not strictly required - Programming experience in C++ and Python

Goal: Implement an uncertainty-aware model of the quadrotor dynamics, train and evaluate the model on simulated and real data.
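
A minimal sketch of the idea, assuming the GP models the residual between measured and nominal-model accelerations (the choice of input features and the use of scikit-learn here are illustrative assumptions, not the project's prescribed toolchain):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder data: in practice X would come from flight logs and y_residual would be
# the difference between measured acceleration and the nominal model's prediction.
X = np.random.randn(500, 4)          # e.g. [v_x, v_y, v_z, collective thrust]
y_residual = np.random.randn(500)    # measured_accel - nominal_model_accel

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y_residual)

# The predictive standard deviation is what makes the model uncertainty-aware.
mean, std = gp.predict(np.random.randn(10, 4), return_std=True)
```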

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based Feature Tracking on an Embedded Platform - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications, such as fast obstacle avoidance. In particular, event cameras can be used to track features or objects in the blind time between two frames, which makes it possible to react quickly to changes in the scene.

Goal: In this project we want to deploy an event-based feature tracking algorithm on a resource constrained platform such as a drone. Applicants should have a strong background in C++ programming and low-level vision. Experience with embedded programming is a plus.

Contact Details: Daniel Gehrig (dgehrig (at) ifi.uzh.ch), Elia Kaufmann (ekaufmann (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project

See project on SiROP

MPC for high speed trajectory tracking - Available

Description: Many model predictive control algorithms exist for quadrotor trajectory tracking, and equally many implementation advantages and disadvantages can be listed. This thesis should identify the main factors influencing high-speed/high-precision trajectory tracking, such as model accuracy, modelling of aerodynamic forces, execution speed, underlying low-level controllers, sampling times and sampling strategies, and noise sensitivity, or even come up with a novel implementation.

Goal: The end goal of the thesis is a comparison of these influence factors and, based on that, a recommendation for or even an implementation of an improved solution.

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Generation of Fast or Time-Optimal Trajectories for Quadrotor Flight - Available

Description: With the rise of complex control and planning methods, quadrotors are capable of executing astonishing maneuvers. While generating trajectories between two known poses or states is relatively simple, planning through multiple waypoints is rather complicated. The prime example of this problem is the task of flying as fast as possible through multiple gates, as done in drone racing. While humans can perform such fast racing maneuvers at extreme speeds of more than 100 km/h, algorithms struggle even with planning such trajectories. Within this thesis, we want to research methods to generate such fast trajectories and work towards a time-optimal planner. This requires prior knowledge of at least some of the following topics: planning for robots, optimization techniques, model predictive control, RRT, and quadrotors or UAVs in general. The tasks range from problem analysis, approximation, and solution concepts to implementation and testing in simulation with existing software tools.

Goal: The goal is to analyse the planning problem, develop approximation techniques, and solve it as close to time-optimally as possible during the thesis.

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Data-Driven Visual Inertial Odometry for Quadrotor Flight - Available

Description: Classical VIO pipelines use geometric information to infer the ego-motion of the camera and couple this information with measurements from the IMU. While these pipelines have shown very good performance in controlled, structured environments, their performance decreases when applied in low-texture or dynamic environments or when applied to high-speed motion. Recent works propose the usage of data-driven approaches for camera ego-motion estimation. While such approaches could potentially learn a VIO pipeline end-to-end, their generalizability is not good enough for real-world deployment. This work investigates the usage of a hybrid VIO pipeline featuring a learned visual frontend. Requirements: - Background in computer vision and machine learning - Deep learning experience preferable but not strictly required - Programming experience in C++ and Python

Goal: Based on results from a previous student project, the goal is to deploy a hybrid VIO pipeline on a quadrotor equipped with a GPU (Jetson TX2).

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch); Philipp Foehn (foehn@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning-Guided MPC Flight - Available

Description: Model predictive control (MPC) is a versatile optimization-based control method that allows constraints to be incorporated directly into the control problem. The advantages of MPC can be seen in its ability to accurately control dynamical systems that include large time delays and high-order dynamics. Recent advances in compute hardware allow MPC to run even on compute-constrained quadrotors. While model predictive control can deal with complex systems and constraints, it still assumes the existence of a reference trajectory. With this project we aim to guide the MPC towards a feasible reference trajectory by using a neural network that predicts an expressive intermediate representation directly from camera images. Such tight coupling of perception and control would allow us to push the speed limits of autonomous flight through cluttered environments. Requirements: - Machine learning experience (TensorFlow and/or PyTorch) - Experience in MPC preferable but not strictly required - Programming experience in C++ and Python

Goal: Evaluate different intermediate representations for autonomous flight. Implement the learned perception system in simulation and integrate the predictions into an existing MPC pipeline. If possible, deploy on a real system.
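
To illustrate the receding-horizon structure that the learned representation would feed into, here is a deliberately simple sketch for a 1D double integrator (a stand-in system; the lab's actual quadrotor MPC and its solver are not shown here):

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.05, 20                       # step size and horizon length
x0 = np.array([0.0, 0.0])              # current state: [position, velocity]
x_ref = np.array([1.0, 0.0])           # reference state to track

def rollout(u, x):
    states = []
    for a in u:                         # simple Euler integration of the dynamics
        x = np.array([x[0] + dt * x[1], x[1] + dt * a])
        states.append(x)
    return np.array(states)

def cost(u):
    xs = rollout(u, x0)
    state_cost = np.sum((xs - x_ref) ** 2)   # tracking error over the horizon
    input_cost = 0.1 * np.sum(u ** 2)        # control-effort penalty
    return state_cost + input_cost

# Input constraints enter directly as bounds on the decision variables.
res = minimize(cost, np.zeros(N), bounds=[(-2.0, 2.0)] * N)
u_apply = res.x[0]                      # apply only the first input, then re-solve
```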

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch) Philipp Föhn (foehn@ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Simulation to Real World Transfer - Available

Description: Recent machine learning techniques have enabled robotic systems to perform many difficult tasks, such as manipulation or navigation. These techniques are usually very data-intensive and require simulators to generate enough training data. However, a system trained only in simulation (usually) fails when deployed in the real world. In this project, we will develop techniques to maximally transfer knowledge from simulation to the real world, and apply them to real robotic systems.

Goal: The project aims to develop machine-learning-based techniques for maximal knowledge transfer between the simulated and the real world on a navigation task.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Unsupervised Obstacle Detection Learning - Available

Description: Supervised learning is the gold-standard approach for solving computer vision tasks like classification, detection, or segmentation. However, for several interesting tasks (e.g., moving object detection, depth estimation, etc.), collecting the large annotated datasets required by the aforementioned algorithms is a very tedious and costly process. In this project, we aim to build a self-supervised depth estimation and segmentation algorithm by embedding classic computer vision principles (e.g., brightness constancy) into a neural network. **Requirements**: Computer vision knowledge; programming experience with Python. Machine learning knowledge is a plus but is not required.

Goal: The goal of this project is to build a perception system that can learn to detect obstacles without any ground-truth annotations.
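
As a pointer to what "embedding brightness constancy into a neural network" can look like in practice, here is a minimal PyTorch sketch of a photometric loss that warps one image towards another with a predicted flow field and penalizes the remaining intensity difference (the flow-based formulation and channel conventions are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def photometric_loss(img1, img2, flow):
    """Brightness-constancy loss. Shapes: images (B, C, H, W), flow (B, 2, H, W)
    with channel 0 = horizontal and channel 1 = vertical displacement in pixels."""
    b, _, h, w = img1.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(b, -1, -1, -1)
    coords = grid + flow                              # where each pixel is sampled from
    # normalize to [-1, 1] for grid_sample, which expects (B, H, W, 2) with x first
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    warped = F.grid_sample(img2, torch.stack((coords_x, coords_y), dim=-1),
                           align_corners=True)
    return (img1 - warped).abs().mean()

loss = photometric_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64),
                        torch.zeros(1, 2, 64, 64))
```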

Contact Details: Antonio Loquercio, _loquercio@ifi.uzh.ch_

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Target following on nano-scale UAV - Available

Description: Autonomous Unmanned Aerial Vehicles (UAVs) have numerous applications due to their agility and flexibility. However, navigation algorithms are computationally demanding, and it is challenging to run them on board nano-scale UAVs (i.e., a few centimeters in diameter). This project focuses on object tracking (i.e., target following) on such nano-UAVs. To do this, we will first train a Convolutional Neural Network (CNN) with data collected in simulation, and then run the aforementioned network on a parallel ultra-low-power (PULP) processor, enabling flight with on-board sensing and computing only. **Requirements**: Knowledge of Python, C++, and embedded programming. Machine learning knowledge is a plus but is not strictly required.

Contact Details: Antonio Loquercio, _loquercio@ifi.uzh.ch_

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning to Deblur Images with Events - Available

Description: Images suffer from motion blur due to long exposure times in poor light conditions or rapid motion. Unlike conventional cameras, event cameras do not suffer from motion blur, because they provide events together with the exact time at which they were triggered. In this project, we will make use of hybrid sensors that provide both conventional images and events, so that we can exploit the advantages of both. By the end of this project you will have gained substantial experience in event-based vision, deep learning, and computational photography. Requirements: - Background in computer vision and machine learning - Deep learning experience preferable but not strictly required - Programming experience in C++ and Python

Goal: The goal is to develop an algorithm capable of producing a blur-free image from the captured blurry image and the events within the exposure time. To this end, synthetic data can be generated by our simulation framework, which is able to generate both synthetic event data and motion-blurred images. This data can be used by machine learning algorithms designed to solve the task at hand. At the end of the project, the algorithm will be adapted to perform optimally with real-world data.
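
The link between blur and events can be made concrete with a simple brightness model: the latent log-intensity at time t is the start-of-exposure image plus the contrast threshold times the per-pixel sum of event polarities up to t, and the blurry frame is the average of these latent frames over the exposure. The sketch below uses this simplified, noise-free model to synthesize a blurry frame from a sharp frame and events, in the spirit of the synthetic-data generation mentioned above; the event format and threshold value are assumptions:

```python
import numpy as np

def synthesize_blur(sharp, events, exposure, c=0.2, n_samples=50):
    """Average latent frames reconstructed from events over the exposure time.
    `events` is an (N, 4) array of (t, x, y, polarity) with t in [0, exposure]."""
    h, w = sharp.shape
    log_l0 = np.log(sharp + 1e-6)
    latent_sum = np.zeros_like(sharp)
    for t in np.linspace(0.0, exposure, n_samples):
        e_count = np.zeros((h, w))
        sel = events[events[:, 0] <= t]
        np.add.at(e_count, (sel[:, 2].astype(int), sel[:, 1].astype(int)), sel[:, 3])
        latent_sum += np.exp(log_l0 + c * e_count)   # latent frame at time t
    return latent_sum / n_samples

# Toy usage with a random image and random events
sharp = np.random.rand(64, 64)
events = np.column_stack([np.random.rand(1000) * 0.01,          # timestamps
                          np.random.randint(0, 64, 1000),        # x
                          np.random.randint(0, 64, 1000),        # y
                          np.random.choice([-1.0, 1.0], 1000)])  # polarity
blurry = synthesize_blur(sharp, events, exposure=0.01)
```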

Contact Details: Mathias Gehrig (mgehrig at ifi.uzh.ch); Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Optimization for Spiking Neural Networks - Available

Description: Spiking neural networks (SNNs) are neural networks that process information with the timing of events/spikes rather than with numerical values. Together with event cameras, SNNs promise both lower latency and a lower computational burden compared to conventional artificial neural networks. In recent years, researchers have proposed several methods to estimate gradients of SNN parameters in a supervised learning context. In practice, many of these approaches rely on assumptions that might not hold in all scenarios. Requirements: - Background in machine learning, especially deep learning - Good programming skills; experience in CUDA is a plus.

Goal: In this project we explore state-of-the-art optimization methods for SNNs and their suitability to solve the temporal credit-assignment problem. As a first step, an in-depth evaluation of a selection of algorithms is required. Based on the acquired insights, the prospective student can propose improvements and implement their own method.
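
One family of such methods replaces the non-differentiable spike with a smooth surrogate in the backward pass. Below is a minimal PyTorch sketch of this pattern (the fast-sigmoid surrogate and its slope are illustrative choices, not a recommendation of any particular published method):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate derivative in the
    backward pass, so that the SNN can be trained with backpropagation."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate: derivative peaks at the threshold, decays away from it
        surrogate = 1.0 / (1.0 + 10.0 * (v - ctx.threshold).abs()) ** 2
        return grad_output * surrogate, None   # no gradient w.r.t. the threshold

v = torch.randn(8, requires_grad=True)
spikes = SurrogateSpike.apply(v, 1.0)
spikes.sum().backward()   # gradients flow despite the non-differentiable spike
print(v.grad)
```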

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP

Learning features for efficient deep reinforcement learning - Available

Description: The study of end-to-end deep learning in computer vision has mainly focused on developing useful object representations for image classification, object detection, or semantic segmentation. Recent work has shown that it is possible to learn temporally and geometrically aligned keypoints given only videos, and that object keypoints learned in an unsupervised manner can be useful for efficient control and reinforcement learning.

Goal: The goal of this project is to find out whether it is possible to learn useful features or intermediate representations for controlling mobile robots at high speed. For example, can we use the Transporter (a neural network architecture) to find useful features in an autonomous car racing environment? If so, can we use these features to discover an optimal control policy via deep reinforcement learning? **Required skills:** Python/C++, reinforcement learning, and deep learning skills.

Contact Details: Yunlong Song (song@ifi.uzh.ch) and Titus Cieslewski ( titus at ifi.uzh.ch )

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Building a simulator for quadrotors - Available

Description: Building a simulator that combines a photorealistic image rendering engine with the ROS framework could greatly help the robotics research community develop algorithms. For example, two popular open-source simulators, CARLA and AirSim, both support ROS.

Goal: The goal of this project is to develop a quadrotor simulator that combines our current photorealistic image rendering engine with our existing sensor simulators, e.g., the UAV dynamics simulator and the event camera simulator, which were developed with ROS. You will first familiarize yourself with our existing simulator, which already has all the basic components and is ready to run. Then, you will have to figure out the most efficient way of organizing the code such that it runs fast; for example, how to retrieve images from the rendering engine and save them. You will be required to write C++ code and compile the simulator in ROS.
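
The final integration is expected in C++, but the basic data flow (pull a rendered frame, stamp it, publish it as a ROS image) can be sketched in a few lines of rospy; `render_frame()` and the topic name are placeholders standing in for the rendering engine's actual API:

```python
#!/usr/bin/env python
# Illustrative sketch of the image hand-off between the renderer and ROS.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

def render_frame():
    # placeholder for the photorealistic engine's frame-retrieval call
    return (np.random.rand(480, 640, 3) * 255).astype(np.uint8)

if __name__ == "__main__":
    rospy.init_node("render_bridge")
    pub = rospy.Publisher("/sim/camera/image_raw", Image, queue_size=1)
    bridge = CvBridge()
    rate = rospy.Rate(30)          # target frame rate of the simulated camera
    while not rospy.is_shutdown():
        msg = bridge.cv2_to_imgmsg(render_frame(), encoding="rgb8")
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
        rate.sleep()
```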

Contact Details: Yunlong Song (song AT ifi DOT uzh DOT ch). Attach CV and transcripts. Programming skills: C/C++ and ROS.

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Designing a New Event Camera with Events and Images - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high dynamic range robotic applications. They have been successfully applied in many applications, such as high speed video and high speed visual odometry. Due to their high speed and high dynamic range, combining their output with standard images is a promising direction.

Goal: The goal of this project is to design a new event camera that combines events and standard images.

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Designing an Event Camera for Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high-dynamic-range robotic applications. They have been successfully applied in many applications, such as high speed video and high speed visual odometry. Recently, many new event cameras have been commercialized with higher and higher spatial resolutions and high temporal resolution. However, these developments steadily increase the computational requirements for downstream algorithms, increasing the necessary bandwidth and reducing the time available to process events. In this work we want to find out how important these design parameters are for deep learning applications. Applicants should have experience in coding image processing algorithms in C++ and experience with learning frameworks in Python such as TensorFlow or PyTorch.

Goal: The goal of this project is to find out how important the design parameters of event cameras, such as spatial and temporal resolution, are for deep learning applications.
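
One simple way to study these parameters is to convert the event stream into a space-time voxel grid whose spatial size and number of temporal bins can be varied before feeding it to a network. The sketch below builds such a grid in NumPy (the (t, x, y, polarity) event layout and the polarity-accumulation scheme are assumptions made for illustration):

```python
import numpy as np

def event_voxel_grid(events, height, width, n_bins):
    """Accumulate event polarities into an (n_bins, height, width) grid;
    `height`, `width`, and `n_bins` are exactly the spatial/temporal design
    parameters under study. `events` is an (N, 4) array of (t, x, y, polarity)."""
    t, x, y, p = (events[:, 0], events[:, 1].astype(int),
                  events[:, 2].astype(int), events[:, 3])
    grid = np.zeros((n_bins, height, width))
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)   # normalize time to [0, 1]
    bins = np.minimum((t_norm * n_bins).astype(int), n_bins - 1)
    np.add.at(grid, (bins, y, x), p)
    return grid

# Halving the spatial resolution is then just a matter of rescaling the coordinates.
events = np.column_stack([np.sort(np.random.rand(1000)),
                          np.random.randint(0, 640, 1000),
                          np.random.randint(0, 480, 1000),
                          np.random.choice([-1, 1], 1000)]).astype(float)
full_res = event_voxel_grid(events, 480, 640, n_bins=5)
half_res = event_voxel_grid(np.column_stack([events[:, 0], events[:, 1] // 2,
                                             events[:, 2] // 2, events[:, 3]]),
                            240, 320, n_bins=5)
```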

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Antonio Loquercio (antonilo (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Learning 3D Reconstruction using an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. In particular, they have been used to generate high speed video and for high speed visual odometry. In this project we want to explore the possibility of using an event camera to do asynchronous 3D reconstruction with very high temporal resolution. These properties are critical in applications such as fast obstacle avoidance and fast mapping. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore a learning-based 3D reconstruction method with an event camera.

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Collaboration / Master Thesis

See project on SiROP

Learning an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with a lot of potential for high-speed and high dynamic range robotic applications. They have been successfully applied in many applications, such as high speed video and high speed visual odometry. In spite of this success, the exact operating principle of event cameras, that is, how events are generated from a given visual signal and how noise is generated, is not well understood. In this work we want to explore new techniques for modelling the generation of events in an event camera, which would have wide implications for existing techniques. Applicants should have a background in C++ programming and low-level vision. In addition, familiarity with learning frameworks such as PyTorch or TensorFlow is required.

Goal: The goal of this project is to explore new techniques for modelling an event camera.
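
As a baseline for what "modelling an event camera" means, here is a sketch of the idealized generation model that is commonly used as a starting point: a pixel fires an event whenever its log-intensity has changed by more than a contrast threshold since its last event. Noise, per-pixel threshold variation, and multiple events per frame interval are deliberately ignored, and the frame-based input is an assumption for illustration:

```python
import numpy as np

def generate_events(frames, timestamps, contrast_threshold=0.15):
    """Convert a sequence of intensity frames into (t, x, y, polarity) events
    using the idealized, noise-free log-intensity threshold model."""
    log_ref = np.log(frames[0] + 1e-6)            # per-pixel reference log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_frame = np.log(frame + 1e-6)
        diff = log_frame - log_ref
        fired = np.abs(diff) >= contrast_threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, x, y, np.sign(diff[y, x])))
        log_ref[fired] = log_frame[fired]         # reset reference where events fired
    return events

frames = [np.random.rand(32, 32) for _ in range(10)]
ev = generate_events(frames, timestamps=np.linspace(0.0, 0.1, 10))
```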

Contact Details: Daniel Gehrig (dgehrig (at) ifi (dot) uzh (dot) ch), Mathias Gehrig (mgehrig (at) ifi (dot) uzh (dot) ch)

Thesis Type: Semester Project / Internship / Master Thesis

See project on SiROP

Asynchronous Processing for Event-based Deep Learning - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Since their output is sparse, traditional algorithms, which are designed for dense inputs such as frames, are not well suited. The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse asynchronous data from events. Applicants should have experience in C++ and Python deep learning frameworks (TensorFlow or PyTorch), and have a strong background in computer vision.

Goal: The goal of this project is to explore ways to adapt existing deep learning algorithms to handle sparse asynchronous data from events.

Contact Details: Daniel Gehrig (dgehrig at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Pushing hard cases in tag detection with a CNN - Available

Description: Visual tags such as AprilTag or ArUco tags are nowadays detected with handcrafted algorithms. These algorithms have limitations in special cases, such as when the tag is far away from the camera, partially occluded, or seen by a camera with high distortion.

Goal: In this project, you will train a CNN to handle these special cases. We will first brainstorm a meaningful architecture that will allow a CNN to complement classical tag detection in the most effective way. You will then figure out the most effective way to create meaningful training data (hybrid of synthetic and real data?). Finally, you will use that data to train the desired detector.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), APPLY VIA EMAIL, ATTACH CV AND TRANSCRIPT! Required skills: Linux, Python, ability to read C++ code. Desirable skill: Tensorflow or similar.

Thesis Type: Semester Project / Bachelor Thesis / Master Thesis

See project on SiROP

Teach and Aggressive Repeat - Available

Description: When we think of robot path planning, we often think of fitting optimal trajectories into dense 3D maps. This requires high quality 3D maps in the first place, which are often hard to obtain. An alternative approach, called Teach and Repeat, is to retrace previously traversed paths. Teach and Repeat maps are easier to create, as no globally consistent pose estimate is required. They can also be very compact, as the environment only needs to be sampled at sparse, visually salient locations. In this project, you will do T&R with a twist: Try to fly the repeat as fast as possible.

Goal: Start by building a basic teach and repeat based on existing components. Then, start increasing the repeat speed. Find out what the limitations are. Perceptual limitations like motion blur? If so, can this be solved with event cameras ( https://goo.gl/itzpJN ) ? Or is it avoiding collisions, as potentially tight maneuvers from the slow teach phase cannot be repeated at high velocities? You will most likely start with deployment on a real quadrotor very soon.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), APPLY VIA EMAIL, ATTACH CV AND TRANSCRIPT (also Bachelor)! Required skills: Linux, C++, ROS. Students who took the Vision Algorithms for Mobile Robots class are at an advantage.

Thesis Type: Master Thesis

See project on SiROP

Decentralized Visual Map Building - Available

Description: In state-of-the-art decentralized mapping methods, optimization (correcting odometry drift) is typically done using pose graph optimization, because a pose graph is a very compact representation. Unfortunately, this data compression results in limitations in precision and robustness. Bundle Adjustment is a map optimization method for visual maps which is much more precise and robust, but also much more data-intensive.

Goal: In this work, you will figure out a way to achieve the superior precision of Bundle Adjustment while minimizing the amount of data that needs to be exchanged between robots in a decentralized setting.
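
For reference, the core of Bundle Adjustment is a nonlinear least-squares problem over camera poses and 3D points that minimizes reprojection error. Below is a deliberately stripped-down sketch (identity rotations, unit focal length, randomly generated data) intended only to show the structure of the problem, not the decentralized formulation this project targets:

```python
import numpy as np
from scipy.optimize import least_squares

n_cams, n_pts = 3, 20
cams_gt = np.random.randn(n_cams, 3) * 0.1                  # ground-truth camera translations
pts_gt = np.random.randn(n_pts, 3) + np.array([0.0, 0.0, 5.0])

def project(pts, cam_t):
    p = pts - cam_t                    # world -> camera (rotation fixed to identity)
    return p[:, :2] / p[:, 2:3]        # pinhole projection with unit focal length

obs = np.stack([project(pts_gt, c) for c in cams_gt])        # observed image points

def residuals(params):
    cams = params[:n_cams * 3].reshape(n_cams, 3)
    pts = params[n_cams * 3:].reshape(n_pts, 3)
    pred = np.stack([project(pts, c) for c in cams])
    return (pred - obs).ravel()        # reprojection errors for all observations

# start from a perturbed initial guess and jointly refine poses and points
x0 = np.concatenate([cams_gt.ravel() + 0.05 * np.random.randn(n_cams * 3),
                     pts_gt.ravel() + 0.05 * np.random.randn(n_pts * 3)])
result = least_squares(residuals, x0)  # nonlinear least-squares refinement
```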

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), APPLY VIA EMAIL, ATTACH CV AND TRANSCRIPT! Required skills: Matlab or C++, with a preference for the latter. Desirable: Background in optimization (Nonlinear least squares, Gauss-Newton or similar)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning minimal representations of places - Available

Description: Place recognition and 6DoF localization have a wide range of applications, whether in robot autonomy, VR/AR, or navigation interfaces. Given sensor readings (we focus on images), the goal is to establish the position and orientation of a robot/device with respect to a previously recorded map. Recently, this has generally been solved with a mixture of machine learning and geometry (NetVLAD, SuperPoint, LF-NET, PoseNet). Our focus in particular will be to solve this problem with a minimal representation.

Goal: Given query agent A and map agent B, have B establish a pose of A within its map, with minimal data transmission from A to B. We have a couple of ideas on how to solve this (see our most recent publication on this: https://arxiv.org/abs/1811.10681 ), but you are encouraged to bring your own ideas to the table.

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), APPLY VIA EMAIL, ATTACH CV AND TRANSCRIPT (also Bachelor)! Preferred skills: Linux, Python, “Vision Algorithms for Mobile Robots” class or equivalent, TensorFlow/PyTorch or equivalent.

Thesis Type: Master Thesis

See project on SiROP

High-Performance Simulation of Spiking Neural Network on GPUs - Available

Description: One major complication in research on biologically inspired spiking neural networks (SNNs) is simulation performance on conventional hardware (CPU/GPU). Computation in SNNs is dominated by operations on sparse tensors, but this potential benefit is usually ignored to save development time. However, exploiting sparsity could be beneficial for scaling the simulation of SNNs to larger datasets. Requirements: - Experience with deep learning frameworks (e.g. TensorFlow or PyTorch) - Excellent programming skills and experience in CUDA

Goal: In this project, you will leverage sparse computation to develop high-performance simulations of SNNs that can be used for optimization. This will help to scale experiments and drastically improve results obtained by SNNs.
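
The core idea can be illustrated in a few lines of PyTorch: spike activity is mostly zeros, so storing it as a sparse tensor and using sparse matrix products can cut memory and compute (the layer sizes and 2% firing rate below are arbitrary illustrative values):

```python
import torch

n_neurons, batch = 1024, 64
dense_spikes = (torch.rand(batch, n_neurons) < 0.02).float()  # ~2% of neurons fire
weights = torch.randn(n_neurons, n_neurons)

sparse_spikes = dense_spikes.to_sparse()                       # COO representation
out_sparse = torch.sparse.mm(sparse_spikes, weights)           # sparse x dense product
out_dense = dense_spikes @ weights                             # dense reference

print(torch.allclose(out_sparse, out_dense, atol=1e-4))        # same result, less work
```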

Contact Details: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project

See project on SiROP