Student Projects


How to apply

To apply, please send your CV and your MSc and BSc transcripts by email to all the contacts indicated below the project description. Do not apply on SiROP. Since Prof. Davide Scaramuzza is affiliated with ETH, there is no organizational overhead for ETH students. Custom projects are occasionally available. If you would like to do a project with us but could not find an advertised project that suits you, please contact Prof. Davide Scaramuzza directly to ask for a tailored project (sdavide at ifi.uzh.ch).


Upon successful completion of a project in our lab, students may also have the opportunity to get an internship at one of our numerous industrial and academic partners worldwide (e.g., NASA/JPL, University of Pennsylvania, UCLA, MIT, Stanford, ...).



Exploring Adaptive Control Methods for High-Performance Quadrotor Control - Available

Description: Standard control algorithms require a very accurate model of the system to be controlled in order to take full advantage of the system's dynamic capabilities. However, such an accurate model is often not available (e.g., due to time-varying parameters or hard-to-model effects such as aerodynamic drag). One way to overcome this problem is to apply adaptive nonlinear control, where model uncertainties and unknowns are learned while the system is being controlled. The goal of this project is to explore both classical and recent adaptive control methods for the in-flight identification of a quadrotor's physical parameters (e.g., mass, inertia, thrust coefficients, drag) and the adaptation of its controller.
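To illustrate the basic idea of concurrent identification and control, here is a minimal sketch of online parameter adaptation for a hypothetical 1-D vertical thrust model a = u/m - g. The model, mass value, gain, and thrust command are illustrative assumptions, not part of the project; real adaptive laws (e.g. MRAC or L1 adaptive control) are considerably more involved.

```python
# Gradient-based online estimation of an unknown mass from
# acceleration prediction errors (noise-free toy example).
G = 9.81          # gravitational acceleration [m/s^2]
M_TRUE = 0.75     # true (unknown) quadrotor mass [kg], hypothetical

def adapt_mass(m_hat, u, a_meas, gamma=0.002):
    """One adaptation step: shrink the acceleration prediction error
    e = a_pred - a_meas by adjusting the mass estimate."""
    a_pred = u / m_hat - G
    e = a_pred - a_meas
    grad = e * (-u / m_hat ** 2)   # d a_pred / d m_hat (chain rule)
    return m_hat - gamma * grad

m_hat = 1.0                        # deliberately wrong initial guess
for _ in range(2000):
    u = 10.0                       # constant thrust command [N]
    a_meas = u / M_TRUE - G        # noise-free "measurement"
    m_hat = adapt_mass(m_hat, u, a_meas)
```

After enough iterations the estimate converges to the true mass; in flight, the controller would use the current estimate at every step.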

Contact Details: Please send your CV and transcript to: Dario Brescianini, brescianini (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

An Open-Source Real-Time Event Camera Simulator for Robot Applications - Available

Description: Event cameras are revolutionary sensors that work radically differently from standard cameras. Instead of capturing intensity images at a fixed rate, event cameras measure per-pixel intensity changes asynchronously at the time they occur [1]. Since these sensors are expensive and not readily available, we have recently published an open-source event camera simulator [2] that allows simulating arbitrary 3D camera motions in 3D scenes. The goal of this project is to close the loop and integrate the event camera simulator into a real-time robot simulation framework, such that different robotic platforms with event camera sensors can be simulated and react to the measured events. [1]: D. Scaramuzza, Tutorial on Event-based Vision for High-Speed Robotics, http://www.rit.edu/kgcoe/iros15workshop/papers/IROS2015-WASRoP-Invited-04-slides.pdf [2]: H. Rebecq, D. Gehrig, D. Scaramuzza, ESIM: an Open Event Camera Simulator, Conference on Robot Learning, 2018
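The core principle of such a simulator fits in a few lines: an event is emitted whenever the log intensity of a pixel crosses a contrast threshold since the last event. The sketch below is a simplified, single-pixel illustration (the threshold value and function name are assumptions; ESIM itself handles full image sequences, noise, and adaptive sampling):

```python
import math

def events_from_signal(times, intensities, C=0.2):
    """Emit (timestamp, polarity) events whenever the log intensity
    moves by the contrast threshold C since the last event."""
    ref = math.log(intensities[0])   # log intensity at the last event
    events = []
    for t, intensity in zip(times[1:], intensities[1:]):
        log_i = math.log(intensity)
        while log_i - ref >= C:      # brightness increased by >= C
            ref += C
            events.append((t, +1))
        while ref - log_i >= C:      # brightness decreased by >= C
            ref -= C
            events.append((t, -1))
    return events

# Doubling the intensity (log change ~0.693) with C = 0.2
# produces three positive events.
events = events_from_signal([0.0, 1.0], [1.0, 2.0])
```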

Contact Details: Please send your CV and transcript to: Dario Brescianini, brescianini (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Bachelor Thesis

See project on SiROP

Self-Supervised Learning for Robotic Navigation - Available

Description: While there are massive labeled data sets available for computer vision researchers, they cannot directly be applied to robotics. However, robots can interact with their environment and learn about the consequences of their actions. Your task is to formalize this abstract notion such that the robot can learn representations for navigation in a self-supervised fashion. Depending on your progress you will use the learned representation to facilitate reinforcement learning of navigation tasks.

Contact Details: Please send your CV and transcript to: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP

Deep Visual Odometry - Available

Description: Classical VIO pipelines use geometric information to infer the ego-motion of the camera and couple this information with measurements from the IMU. While these pipelines have shown very good performance in controlled, structured environments, their performance degrades in low-texture or dynamic environments and during high-speed motion. Recent works propose data-driven approaches for camera ego-motion estimation. This exploratory work investigates the usability of a learned odometry pipeline for quadrotor flight.

Goal: Implement a data-driven visual-inertial odometry pipeline. Test (in simulation, maybe real world) if quadrotor flight is possible with the learned pose estimate.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Event-based Feature Tracking using Deep Learning - Available

Description: Contemporary visual odometry pipelines cannot cope with very aggressive manoeuvres performed by agile aerial robots. One possible way forward is improving the front-end (feature detection and tracking) of these pipelines. To achieve this, your task is to explore a data-centered approach to enable feature detection and tracking in challenging conditions using only events from an event camera.

Contact Details: Please send your CV and transcript to: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP

Implementation of Feature Detector, Tracker, Matcher on CUDA - Available

Description: The goal of this thesis is to implement a feature detector (e.g., Harris, FAST, SUSAN, LoG, DoG), descriptor (e.g., SURF, BRISK, BRIEF, ORB, FREAK) and matcher/tracker using the NVIDIA CUDA framework for high-efficiency, real-time execution on an NVIDIA Jetson TX2 computer. The focus should be on exploiting new computational architectures arising from machine learning, such as fast convolutions and GPU-aided computation.

Goal: The thesis should provide a comparison of execution speed with respect to execution on an ARM CPU. The end goal is a clean implementation for future use in VIO pipelines, with the possibility of a theoretical contribution if any improvements to the state-of-the-art algorithms are found.
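As a CPU reference for what the CUDA kernels would compute, a minimal Harris corner response can be sketched as follows (the 3x3 box filter and the synthetic test image are illustrative choices, not the thesis setup):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response from central-difference gradients and a
    3x3 box-filtered structure tensor. A CUDA implementation would
    parallelize the same per-pixel arithmetic."""
    iy, ix = np.gradient(img.astype(float))   # d/drow, d/dcol

    def box3(a):  # 3x3 box filter via zero padding
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[r:r + h, c:c + w] for r in range(3) for c in range(3))

    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0            # bright square with corners at (4, 4), ...
response = harris_response(img)
```

On this image the response is positive at the square's corners, negative along its edges, and zero in flat regions, which is exactly the behavior a detector thresholds on.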

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Fly like a human pilot - Available

Description: Today, autonomous drones can fulfill complex tasks by estimating their 6DoF pose using visual-inertial odometry. However, state-of-the-art VIO methods rely on strong assumptions about the environment and sensors to work properly. If any of these assumptions is violated in practice, performance degrades rapidly, and VIO systems may fail completely, leading to a certain crash of the drone. Human pilots, on the other hand, can control drones with astonishing agility through complex race tracks without the need for an accurate or globally consistent state estimate. Could an autonomous drone do the same?

Goal: In simulation, teach a drone to perform simple navigation tasks only conditioned on the image from a forward facing camera.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Sampling-Based High-Performance Motion Planner for Autonomous Drone Racing - Available

Description: Drone racing is a fast-growing sport where human pilots steer a small drone through a race circuit using only an onboard camera. More recently, several robotic labs and tech companies around the world have started to compete in autonomous drone races (i.e., without any human input) due to the many interesting research challenges. We have recently developed different drone racing approaches exploiting both state-of-the-art machine learning and control methods to complete a drone racing circuit as fast as possible [1,2] and have won the 2018 IROS Autonomous Drone Race [3]. In order to increase the flight speed and benchmark current drone racing approaches, we are interested in computing time-optimal trajectories through race circuits. The goal of this project is to implement a sampling-based motion planner to find the time-optimal trajectory through a race circuit, and possibly extend the algorithm to cope with disturbances in real time. [1] E. Kaufmann et al., Deep Drone Racing: Learning Agile Flight in Dynamic Environments, Conference on Robot Learning, 2018 [2] E. Kaufmann et al., Beauty and the Beast: Optimal Methods Meet Learning for Drone Racing, submitted to International Conference on Robotics and Automation, 2019 [3] http://rise.skku.edu/iros2018racing/index.php/team-gate-time-rank/
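For intuition, the simplest member of this family of planners is a plain geometric RRT. The sketch below ignores obstacles, dynamics, and time-optimality, which are exactly what the project would add on top (workspace bounds, step size, and goal bias are illustrative assumptions):

```python
import math
import random

def rrt(start, goal, steer=0.5, iters=4000, goal_tol=0.5, seed=1):
    """Minimal 2-D RRT: grow a tree by steering the nearest node a
    fixed step toward random samples, with a 10% goal bias."""
    random.seed(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        s = goal if random.random() < 0.1 else \
            (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], s))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), s)
        if d == 0.0:
            continue
        new = (nx + steer * (s[0] - nx) / d, ny + steer * (s[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:           # walk back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
```

A time-optimal racing planner would instead sample in the drone's state space, check dynamic feasibility of each edge, and minimize traversal time rather than geometric path length.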

Contact Details: Please send your CV and transcript to: Dario Brescianini, brescianini (at) ifi (dot) uzh (dot) ch; Philipp Foehn, foehn (at) ifi (dot) uzh (dot) ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Deep Learning Framework for Event Data - Available

Description: Write your own machine learning framework from the ground up, designed especially for event-based data. During this project you will acquire a deep understanding of each component, including:
- time series data storage and pre-processing
- neural network design
- automatic differentiation
- optimization
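As a taste of the automatic-differentiation component, a scalar reverse-mode sketch fits in a few lines (a real framework would operate on tensors and run the backward pass over an explicit topological ordering):

```python
class Var:
    """Toy reverse-mode autodiff node (scalar values only)."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate d(output)/d(self) into .grad via the chain rule."""
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x, y = Var(3.0), Var(4.0)
z = x * y + x            # z = xy + x, so dz/dx = y + 1, dz/dy = x
z.backward()
```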

Contact Details: Please send your CV, transcript and link to any related work to: Mathias Gehrig, mgehrig (at) ifi (dot) uzh (dot) ch

Thesis Type: Master Thesis

See project on SiROP

MPC for high speed trajectory tracking - Available

Description: Many algorithms exist for model predictive control of quadrotor trajectory tracking, and equally many implementation advantages and disadvantages can be listed. This thesis should identify the main factors influencing high-speed/high-precision trajectory tracking, such as model accuracy, aerodynamic force modelling, execution speed, underlying low-level controllers, sampling times and sampling strategies, and noise sensitivity, or even come up with a novel implementation.

Goal: The end goal of the thesis should be a comparison of the influence factors and, based on that, a recommendation or even an implementation of an improved solution.
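The unconstrained core of a linear MPC step can be sketched with a finite-horizon backward Riccati recursion; everything below (the double-integrator model, weights, horizon) is an illustrative assumption, and a full MPC would additionally enforce input/state constraints by solving a QP each cycle:

```python
import numpy as np

def riccati_gains(A, B, Q, R, N):
    """Finite-horizon LQR via backward Riccati recursion; the returned
    gains[0] is the feedback to apply in a receding-horizon loop."""
    P, gains = Q.copy(), []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # 1-D double integrator
B = np.array([[0.0], [dt]])
Q, R = np.eye(2), np.array([[0.1]])

x = np.array([[1.0], [0.0]])            # start 1 m off the reference
for _ in range(100):                    # receding-horizon loop
    K = riccati_gains(A, B, Q, R, N=20)[0]
    x = A @ x + B @ (-K @ x)            # apply only the first gain
```

In the thesis setting, the factors listed above (model accuracy, sampling time, low-level controller) all enter through A, B, dt, and the horizon N.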

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Analysis of Task Scheduling in a Visual Inertial Odometry Pipeline - Available

Description: Many VIO pipelines exist nowadays, but few are thoroughly optimized. Most pipelines execute a series of tasks, such as feature processing, inertial preintegration, optimization, estimate update and marginalization, in a fixed sequence. These pipelines normally run at low, fixed measurement rates with varying execution times. By parallelizing the tasks and changing the underlying optimization schemes, the execution speed of such a pipeline could be greatly increased.

Goal: This thesis should focus on finding the bottlenecks and parallelizing the VIO tasks in a first step. Once this is sufficiently achieved, the underlying optimization or filter method could be abstracted to work more efficiently.

Contact Details: Philipp Föhn (foehn at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learn Depth from Multiple Input Modalities - Available

Description: Classical depth sensors work by computing the disparity from two images recorded from cameras with known relative pose. These systems typically fail when used in low-texture scenes. Learning-based approaches to infer depth from single images have already shown promising results. Mobile robots often feature sensors that provide sparse depth information. This project aims to combine both input modalities (RGB frames + sparse depth information) in a learned way.

Goal: Combine both RGB and (sparse) depth data from an RGBD sensor to regress a high-quality depth image.

Contact Details: Elia Kaufmann (ekaufmann@ifi.uzh.ch)

Thesis Type: Semester Project

See project on SiROP

Smart Feature Selection In Visual Odometry - Available

Description: For most robotic platforms, computational resources are limited. Ideally, algorithms running onboard should therefore adapt to the available computational power. For visual odometry, the number of features largely determines the computational resources the algorithm needs. By using a well-selected subset of features, we can reduce the required computation without a significant loss of accuracy.

Goal: The project aims to study the problem of smart feature selection for visual odometry. The student is expected to study how motion estimation is affected by feature selection (e.g., number of features, different feature locations). The ultimate goal will be to implement a smart feature selection mechanism in our visual odometry framework.
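One simple selection scheme, sketched below, greedily keeps the highest-scoring features subject to a minimum spacing so the subset stays spread over the image. The (x, y, score) tuple layout and the spacing criterion are illustrative assumptions; real selection schemes would also weigh track length, depth, and observability:

```python
def select_features(features, k, min_dist2=25.0):
    """Greedily pick up to k features in descending score order,
    rejecting any feature closer than sqrt(min_dist2) pixels to an
    already-selected one."""
    selected = []
    for f in sorted(features, key=lambda f: -f[2]):      # by score, desc
        if all((f[0] - s[0]) ** 2 + (f[1] - s[1]) ** 2 >= min_dist2
               for s in selected):
            selected.append(f)
        if len(selected) == k:
            break
    return selected

# Two strong features sit 2 px apart; spacing suppresses the weaker one.
feats = [(10, 10, 0.9), (12, 10, 0.8), (40, 40, 0.5), (80, 20, 0.4)]
chosen = select_features(feats, k=3)
```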

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Online time offset estimation for visual-inertial systems - Available

Description: Visual-inertial odometry (VIO) has progressed significantly in recent years and is used in many real-world applications. One of the crucial requirements for good performance is a synchronized camera and inertial measurement unit. However, many low-cost systems do not have good synchronization, which limits the use of VIO. As an alternative, the time offset can be estimated in software. Existing methods to estimate the time offset either operate offline or only apply to specific algorithms. A lightweight algorithm that can estimate the camera-IMU offset online would greatly extend the application scenarios of VIO.

Goal: The goal of the project is to develop an efficient and flexible algorithm to estimate the time offset between a camera and an IMU.
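The classic offline baseline the project would improve upon is cross-correlation of a camera-derived motion signal with the gyroscope signal. The sketch below recovers a known 0.2 s delay between two synthetic signals (signal shapes and sampling rate are assumptions for illustration):

```python
import numpy as np

def estimate_time_offset(sig_a, sig_b, dt):
    """Estimate the delay of sig_b relative to sig_a (both sampled at
    interval dt) as the lag maximizing their cross-correlation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(a) - 1)
    return lag * dt

# Simulated rotation-rate signals: the "IMU" lags the "camera" by 0.2 s.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
cam = np.sin(2 * np.pi * 0.5 * t)
imu = np.sin(2 * np.pi * 0.5 * (t - 0.2))
offset = estimate_time_offset(cam, imu, dt)
```

An online method would instead update the offset estimate incrementally inside the estimator, rather than over a whole batch.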

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Safe Reinforcement Learning for Robotics - Available

Description: Reinforcement Learning (RL) has recently emerged as a technique to let robots learn from their own experience. Current RL methods are very data-intensive and require a robot to fail many times before actually accomplishing its goal. However, some systems, such as flying robots, must respect safety constraints during learning and/or deployment. While maximizing performance, safe RL methods aim to minimize the number of system failures and the overall risk.

Goal: During this project, we will develop machine learning based techniques to let a (real) drone learn to fly nimbly through gaps and gates, while minimizing the risk of critical failures and collisions.

Contact Details: **Antonio Loquercio** loquercio@ifi.uzh.ch

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Analysis of the impact of the state estimate degradation on closed-loop flight - Available

Description: Onboard vision is one of the most common sensing modalities for autonomous quadrotors. However, a state estimate obtained from onboard vision can be intermittent, noisy, and delayed.

Goal: The goal of this project is to experimentally evaluate the impact of degraded vision-based state estimation on the closed-loop performance of a quadrotor for different tasks.

Contact Details: Davide Falanga (falanga@ifi.uzh.ch), ATTACH CV AND TRANSCRIPT!

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning morphology for a morphing quadrotor - Available

Description: Morphing quadrotors are an increasingly hot topic in the field of micro aerial vehicles. One of the open questions is to find the optimal morphology to execute a given task.

Goal: The goal of this project is to allow a morphing quadrotor to learn from raw sensory data (e.g. images, inertial measurements, etc) which morphology should be adopted to improve the performance of a given task.

Contact Details: Davide Falanga (falanga@ifi.uzh.ch), ATTACH CV AND TRANSCRIPT!

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Model Predictive Control for a morphing quadrotor - Available

Description: Morphing endows quadrotors with the capability to achieve task-specific morphologies without compromising their performance during nominal flight conditions. The ability of changing their morphology can therefore widely broaden the spectrum of tasks that quadrotors can execute.

Goal: The goal of this project is to leverage model predictive control to allow a morphing quadrotor to optimally execute a given task while suitably adapting its morphology to its requirements.

Contact Details: Davide Falanga (falanga@ifi.uzh.ch), ATTACH CV AND TRANSCRIPT!

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Automatic Hyperparameter Optimization - Available

Description: The algorithms and sensors used in mobile robotics are typically configured offline by an expert user, specifically for a given task. Unfortunately, manual parameter tuning is a tedious and error-prone task. Automated algorithm configuration tools can find the best parameter configurations without (or with less) manual tuning, simplifying user intervention.

Goal: During this project, we will develop machine learning based techniques to develop a meta-algorithm to automatically tune parameters for (several) computer vision tasks.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Learning to Plan - Available

Description: Recent techniques based on machine learning have enabled robotic systems to perform many difficult tasks, such as manipulation or navigation. While networks are demonstrably good at reactive tasks, such as obstacle avoidance, they mostly fail to plan long-term actions for more complicated tasks (e.g., navigating through a maze). In this project, we will develop techniques to let a robot reason over time and plan actions to achieve high-level goals.

Goal: The project aims to develop techniques based on machine learning to let a robot plan actions to achieve high-level goals.

Contact Details: Antonio Loquercio (loquercio@ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Benchmarking camera control for visual odometry - Available

Description: There are many existing datasets to evaluate the performance of visual odometry algorithms. However, little work has been done on providing a principled way to benchmark the performance of camera control (exposure time/gain) algorithms, which have a large impact on the performance of visual odometry. Most current datasets/benchmark tools simply contain images captured with a fixed camera configuration, which is not suitable for this purpose. A proper benchmark tool can fill this gap and will be useful for understanding the strengths and weaknesses of different algorithms.

Goal: The goal of this project is to make use of both synthetic and real data to build a benchmark tool and evaluate the influence of different camera control algorithms on the performance of visual odometry.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Simulating decentralized multi-robot SLAM - Available

Description: We have recently developed decentralized multi-robot visual place recognition (neural network based) and SLAM and demonstrated them on well-known datasets ( http://rpg.ifi.uzh.ch/docs/arXiv17_Cieslewski.pdf ). We want to take this work one step further and deploy it in the real world with a group of quadrotors. Since this is quite an effort, logistically, a first step will be to simulate the full system (SLAM, but also obstacle avoidance and control) in simulation.

Goal: In this project, you will simulate a scenario where a group of quadrotors explores and maps an unknown environment. We will start with a simplistic simulation and gradually increase its complexity. Axes along which the complexity can be increased: from manual camera placement to using a full control stack, from rendering camera frames in Gazebo to rendering them in a more photorealistic simulator, from random motion with reactive obstacle avoidance to active exploration, from a few robots to many robots, …

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Required skills: Linux, experience in ROS or a very strong ability to learn, C++/Python.

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Trajectory estimation and scene reconstruction from any YouTube video! - Available

Description: We believe that with the right processing, it should be possible to obtain trajectory estimation and scene reconstruction from many of the videos that are already out there on the internet! This could have nice applications: two approaches that we would be quite passionate about would be a) to visualize FPV races ( https://www.youtube.com/watch?v=EcLk_uZe33w ) and b), more practical for the community, to create new robotics datasets with little effort.

Goal: Go from YouTube videos to trajectory estimation and scene reconstruction. The more general, the better. Of course, you will start with the simplest possible approach: slow-motion, 360° videos (which theoretically require no calibration), then increase the complexity. While this will likely involve plenty of engineering, hacks and qualitative evaluation, it might, with the right approach, also lead to interesting research (e.g., how to deal with motion blur in tracking, auto-calibration, potential to apply machine learning, etc.).

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT! Appreciated skills: Linux, ROS, good programming skills, ideally in Matlab or Python. Prefer students who took the Vision Algorithms for Mobile Robots class!

Thesis Type: Master Thesis

See project on SiROP

Improve Visual-Inertial Odometry With Deep Learning - Available

Description: Modern visual-inertial odometry (VIO) systems can provide accurate and robust pose and velocity estimate using the combination of cameras and IMUs. However, state-of-the-art VIO methods rely on strong assumptions about the environment and sensors to work properly. If any of the assumptions is violated in practice, the performance will degrade rapidly, and VIO systems may fail completely. This greatly damages the robustness of VIO systems and causes problems for many real-world applications.

Goal: The goal of this project is to use machine learning methods to improve the robustness and performance of VIO in situations where conventional methods would fail.

Contact Details: Zichao Zhang (zzhang at ifi.uzh.ch) Required skills: Linux, ROS and C++. Experience with VO/VIO/machine learning is a plus.

Thesis Type: Semester Project / Master Thesis

See project on SiROP

“Mapless” exploration in 3D - Available

Description: In exploration, a robot creates a map of a previously unknown environment. Typically, the goal is to cover all of the reachable space. Think of a robot deployed in a disaster area, tasked with finding all survivors (search and rescue). The problem with most current exploration algorithms is that they assume perfect pose estimates. In reality, robots relying on on-board pose estimators will always produce an estimate with some error/drift. Recently, we showed in 2D that exploration is possible without self-consistency in the map. But can we also do it in 3D?

Goal: Explore an unknown space in 3D, relying only on visual-inertial odometry (with drift) and basic place recognition (but no loop closure/map optimization). Start in simulation, then possibly deploy in the real world (quad equipped with depth sensor).

Contact Details: Titus Cieslewski ( titus at ifi.uzh.ch ), ATTACH CV AND TRANSCRIPT (also Bachelor)! Required skills: Linux, ROS. Ideally C++ and Python, but if you know only one you should also be fine.

Thesis Type: Master Thesis

See project on SiROP

Optical Flow Estimation with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The goal of this project is to use event cameras to compute the optical flow in the image plane induced by either a moving camera in a scene or by moving objects with respect to a static event camera. Several existing methods as well as proposed new ones will be analyzed, implemented and compared. A successful candidate is expected to be familiar with state-of-the-art optical flow methods for standard cameras. This is a project with considerable room for creativity, for example in applying the ideas from low-level vision or ideas driving optical flow methods for standard cameras to the new paradigm of event-based vision. Experience in coding image processing algorithms in C++ is required.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch), Henri Rebecq (rebecq at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Event Camera Characterization - Available

Description: Event cameras such as the Dynamic and Active-pixel Vision Sensor (DAVIS, http://inilabs.com/products/dynamic-and-active-pixel-vision-sensor/ ) are recent sensors with large potential for high-speed and high dynamic range robotic applications. Despite the successful demonstration of the sensor on several problems in computer vision and robotics, a comprehensive characterization of the sensor for such high-level applications is still missing.

Goal: The goal of this project is to characterize various aspects of these novel sensors, such as: event noise characteristics (distribution and spectral density), contrast threshold (relation to bias settings; variability spatially, with the pixel, and photometrically, with respect to the scene illumination), non-linearities, etc. Additionally, the images and IMU measurements provided by the DAVIS require an integrated characterization. A successful completion of the project will lead to a better understanding of the potential, limitations and impact of these sensors on the design of novel algorithms for computer vision and robotics. The expected candidate should have a background in instrumentation, electrical engineering (to understand the principle of operation of the DAVIS pixels) and random processes. This project involves close collaboration with the Institute of Neuroinformatics (INI) at UZH-ETH.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester Project / Bachelor Thesis

See project on SiROP

Building a high-speed camera! Learning Image reconstruction with an Event Camera - Available

Description: Event cameras such as the Dynamic Vision Sensor (DVS) are recent sensors with large potential for high-speed and high dynamic range robotic applications. The output of an event camera is a sparse stream of events that encode only light intensity changes - in other words, a highly compressed version of the visual signal.

Goal: The goal of this project is to turn an event camera into a high-speed camera by designing an algorithm to recover images from the compressed event stream. Inspired by a recent approach, the project will train a machine learning algorithm (or neural network) to learn how to reconstruct an image from the noisy event stream. The first part of the project will consist of acquiring training data, using both simulation and real event cameras. The second part will consist of designing and training a suitable machine learning algorithm to solve the problem. Finally, the algorithm will be compared against state-of-the-art image reconstruction algorithms. The expected candidate should have some background in both machine learning and computer vision (or image processing) in order to undertake this project.
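The naive, non-learned baseline is direct event integration: each event moves the log intensity of its pixel by one signed contrast step. The sketch below assumes a known, constant threshold for a single pixel, which real sensors violate due to noise and threshold variability - precisely the motivation for a learned reconstruction:

```python
import math

def integrate_events(events, log_i0=0.0, C=0.2):
    """Accumulate signed contrast steps C per (timestamp, polarity)
    event, then return the reconstructed (linear) intensity."""
    log_i = log_i0
    for _, polarity in events:
        log_i += polarity * C
    return math.exp(log_i)

# Three positive and one negative event from an initial intensity of 1.0.
intensity = integrate_events([(0.1, +1), (0.2, +1), (0.3, +1), (0.4, -1)])
```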

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

A Visual-Inertial Odometry System for an Event-based Vision Sensor - Available

Description: Event-based cameras are recent revolutionary sensors with large potential for high-speed and low-power robotic applications. The goal of this project is to develop a visual-inertial pipeline for the Dynamic and Active-pixel Vision Sensor (DAVIS). The system will estimate the pose of the DAVIS using the event stream and IMU measurements delivered by the sensor. Filtering approaches as well as batch optimization methods will be investigated. https://youtu.be/bYqD2qZJlxE http://www.inilabs.com/products/davis

Contact Details: Henri Rebecq (rebecq at ifi.uzh.ch), Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Master Thesis

See project on SiROP

Hand-Eye Calibration Toolbox - Available

Description: Hand-Eye calibration is a paramount pre-processing stage of many robotic and augmented reality applications, where the knowledge of the relative transformation between different sensors (e.g. a camera and a head-mounted display) is required to have an accurate geometric representation of the scene.

Goal: The goal of this project is to develop a user-friendly hand-eye calibration toolbox integrated with our robotic system. The toolbox will contain existing and novel hand-eye calibration methods, and it will allow the user to visualize the results of the different methods in an integrated manner to improve the understanding of the quality of the processed dataset, paying special attention to error estimates, uncertainties and the detection of inconsistent data.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP

Integrated Multi-Camera Calibration Toolbox - Available

Description: The toolbox is expected to handle different camera brands, projection models and calibration patterns. In the multi-sensor scenario, the toolbox is also expected to compute the temporal offsets between the sensors. Special attention will be given to estimation of error measures, parameter uncertainties, detection of inconsistent data and interactive guidance of data acquisition.

Goal: The goal of this project is to develop a user-friendly, single and multi-camera calibration toolbox adapted to our robotic system. The toolbox will integrate existing calibration software in our group and in other libraries and will provide user-friendly reports of the different stages to assess the quality of the processed dataset, thus speeding up and improving the understanding of the whole sensor calibration stage.

Contact Details: Guillermo Gallego (guillermo.gallego at ifi.uzh.ch)

Thesis Type: Semester Project / Master Thesis

See project on SiROP