We are hiring multiple talented Postdocs in AI!


Come build the future of robotics with us!


The mission of the Robotics and Perception Group is to research the fundamental challenges of robotics and computer vision that will benefit all of humanity, and we welcome people with diverse perspectives and backgrounds; our team spans many nationalities and ages. We have several fully funded Postdoc positions in:

  • Neural control of drones
  • Reinforcement Learning
  • Neural SLAM
  • Event cameras
  • Semantic scene understanding
  • Computational photography

These positions will contribute to the project areas described below.

SLAM, Scene Understanding, and Computational Photography with Event Cameras



The goal of this project is to fuse event cameras, standard cameras, and other sensors to enhance the image quality of standard image sensors as well as to develop novel Simultaneous Localization And Mapping (SLAM) algorithms with unprecedented performance.

Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras: a very high dynamic range (HDR), no motion blur, and microsecond latency. However, because their output is asynchronous, traditional vision algorithms cannot be applied directly, and new algorithms must be developed to take advantage of them.

Concretely, we will develop new vision algorithms that fuse event cameras, standard cameras, and inertial sensors to improve the perception capabilities of modern imaging systems (e.g., image quality, semantic understanding, and SLAM) regardless of camera speed and illumination conditions.

If you would like to know more about our current research in this area, check out our related research page on event cameras.

This project is funded by a top-tier company.


Neural Control of Drones


The goal of this project is to make autonomous drones fly better than human pilots by using only onboard cameras and computation.

Current commercial drones are completely blind: they navigate using GPS or a human pilot, which prevents their use for search-and-rescue operations in complex environments (e.g., exploring an earthquake-damaged building). Autonomous drone navigation based on onboard sensors and computation has made tremendous progress over the last decade. However, today's autonomous drones are still far from human pilot performance in such scenarios in terms of speed, versatility, and robustness. Speed is particularly important: since battery life is unlikely to improve in the coming years, we need to make drones faster so that they can accomplish more in a given time. To do so, we need faster algorithms and sensors.

We recently made world news with our research on Deep Drone Acrobatics (RSS 2020 Best Paper Award Honorable Mention; watch the impressive video!), Deep Drone Racing (Best Systems Paper Award at both RSS 2020 and CoRL 2018; watch the video), and Dodging Fast Moving Objects with Event Cameras (paper on the cover of Science Robotics 2020; watch the impressive video!).

The goal of this project is to develop algorithms and sensors that will make autonomous drones faster and more robust in complex scenarios. One of the (many) demonstrators of this project is an Alpha Pilot drone that will beat a human pilot at a drone racing competition.

If you would like to know more about our current research in this area, check out our related research pages.

This project is funded by the European Research Council Consolidator Grant (ERC-CoG) under the European Union's Horizon 2020 Research and Innovation Programme (Grant agreement No. 864042). Press release.


Autonomous Inspection and Maintenance of Power Lines



The goal of this project is to enable autonomous drones equipped with manipulators to replace or assist humans in the inspection and maintenance of large-scale power lines.

The inspection of the European electrical power system requires accurate tracking of thousands of kilometers of lines. Currently, this long-range inspection task is performed with manned helicopters, which is costly, lengthy, and risky. Commercial drones are limited by battery life and require human pilots for such complex operations.

The ambitious goal of this project is to assist or even replace humans in such large-scale operations, which include, among other things: thermographic fault detection, insulator fault detection, track-clearance inspection, 3D mapping of power infrastructure, and computation of distances to buildings and vegetation.

This project is part of a large European consortium, called Aerial-Core, with 15 partners (project website). The goal of Aerial-Core is to develop control, planning, perception, and aerial manipulation algorithms to perform such inspection and maintenance tasks completely autonomously. Two types of drones will be investigated: fixed-wing and quadrotor drones.

The role of the Robotics and Perception Group is to develop control, planning, and perception algorithms for perching or landing on cables to enable battery recharging (recharging from high-voltage cables will be developed by the University of Southern Denmark), as well as perception with thermal cameras, standard cameras, event cameras, and lidar.

This project is funded by the European Union's Horizon 2020 Research and Innovation Programme ICT10-2019-2020 under Grant Agreement No. 871479: Aerial-Core. Aerial-Core project website.


Benefits of working with us


  • The position is fully funded.
  • PhD student and Postdoc positions in Switzerland are regular jobs with social benefits (e.g., a pension plan!).
  • You will get a very competitive salary and access to excellent research facilities (motion capture, 3D printing, large flying arenas (indoors and outdoors), and electronic and machine workshops).
  • Excellent work atmosphere with many social events, such as ski trips, hikes, lab dinners, and lab retreats (check out our photo gallery).
  • Weekly visits and talks by international researchers from renowned research labs or companies.
  • Collaboration with other top researchers in both Switzerland and abroad.
  • Zurich is regularly ranked among the top cities in the world for quality of life (link).
  • Switzerland is considered the Silicon Valley of Robotics (link).
  • Robotics papers coming from Switzerland each year collect the highest number of citations (normalized by country's population) at all major international robotics conferences.



Who we are


Our research lab, called the Robotics and Perception Group, belongs to two departments: the Dept. of Informatics of the University of Zurich and the Dept. of Neuroinformatics of the University of Zurich and ETH Zurich.

Our researchers have received numerous prestigious awards, such as the recent European Research Council (ERC) Consolidator Grant (2 million Euros), an IEEE Robotics Early Career Award, several industry awards (Google, Qualcomm, Kuka, Intel), and paper awards (the full list of our awards can be found here).

Our former researchers now occupy prestigious positions at top-tier companies; some have become professors, while others have founded successful spinoffs (e.g., Fotokite and Zurich-Eye (today Facebook Zurich), which developed the visual-inertial position tracking technology used in Oculus Quest (read more)).

The research carried out in our lab has received extensive media coverage. Most recently, we were in the news worldwide (New York Times, BBC, Neue Zürcher Zeitung, La Repubblica, etc.) for our work on deep drone acrobatics, deep drone racing, deep learning for autonomous drone navigation, and agile navigation of quadrotors using standard cameras or event-based cameras.

An up-to-date list of our current research projects is here. For videos, please check out our YouTube channel. For press coverage of our research, please check out our media page.


Your Skills


  • A Master's or PhD degree in computer engineering, computer science, mechanical engineering, robotics, physics, aerodynamics, or related fields
  • A strong passion for computer vision, robotics, mathematics, programming and abstract thinking
  • Excellent written and spoken English skills
  • Very strong C++ and Python skills
  • Strong experience with robotic systems and/or aerial robots
  • Background knowledge in any of the following: control, path planning, aerodynamics, state estimation, computer vision, numerical optimization
  • Additionally for Postdocs:
    • Excellent track record (publications in high-impact-factor conferences and journals)
    • Proven theoretical and practical experience in solving complex control or computer vision problems and implementing them efficiently.

Familiarity with tools such as ROS, TensorFlow, OpenCV, and Git is desirable.


Starting Date


As soon as possible. The evaluation of received applications started on June 1, 2020, and will continue until the positions are filled. Check here for updates.



How to apply


PhD candidates: APPLY HERE

Postdocs: APPLY HERE

IMPORTANT: Support letters are not required at this stage, but if you already have them, feel free to upload them in the application form. If you are selected for an on-site interview in our lab, support letters will be requested.

IMPORTANT


For questions, please contact Prof. Davide Scaramuzza at careersrpg (AT) ifi (DOT) uzh (DOT) ch (please do not use his private email for inquiries about these positions). Please do not send inquiries asking whether your CV fits any of the positions; if you are unsure, just apply, as you have nothing to lose. Applications sent directly by email rather than through the web form will not be considered. If your evaluation is positive, you will be contacted; otherwise, you will not hear back.