We are hiring multiple talented PhD students and Postdocs in AI!
Be part of a lab where breakthroughs really happen!
Are you ready to take your research career to the next level? We are seeking highly motivated PhD students and postdoctoral researchers to join our lab and make a significant impact on real-world challenges! Our researchers have earned numerous prestigious awards, and our alumni have gone on to lead teams at top-tier companies, become professors, and establish successful startups. Our work has received global recognition, with coverage in The Guardian, The New York Times, Forbes, The Economist, and the BBC, highlighting achievements such as outperforming world-champion drone racing pilots and advancing high-speed navigation with event cameras. If you are passionate about advancing research, technology, and innovation, apply now to join a team that is shaping the future.
The mission of the Robotics and Perception Group is to research the fundamental challenges of robotics and computer vision that will benefit all of humanity, and we want people with diverse perspectives and backgrounds. Our team includes various nationalities, genders, and ages.
We have several fully funded positions for PhD students and Postdocs in:
- Vision-based Robot Learning with Foundation Models
- Low-Energy Vision with Event Cameras
- Vision-Based Localization for Autonomous Inspection
- Robust Control in Confined Spaces
Deadline for Applications and Starting Date
The closing deadline for applications is November 30, 2024, but we have already started screening applications and will continue until the positions are filled.
Thus, we encourage you to apply as soon as possible.
Starting date: as soon as possible.
Vision-based Robot Learning with Foundation Models
Navigating complex unstructured environments requires a deep understanding of the robot's surroundings. Where is it safe to navigate?
What actions are too risky to take? How can the robot quickly adapt from experience collected during operation?
To answer these questions, novel algorithms that combine machine learning, control, and vision are required.
Autonomous navigation based on onboard sensors and computation has made tremendous progress over the last decade (e.g., the NASA Mars helicopter, Tesla Autopilot, Boston Dynamics' Atlas).
However, the performance of these systems is still far from that of humans in terms of agility, versatility, and robustness:
greater agility would let a robot increase its productivity, greater versatility would let it adapt to new environments and continually learn from new data, and greater robustness would let it succeed at any task.
This PhD will be done within the EU project AGILEFLIGHT.
The goal of the PhD will be to research deep learning algorithms (imitation learning, reinforcement learning, differentiable simulation) combined with foundation models to train end-to-end sensorimotor policies that map visual inputs directly to control commands and navigate robots better than humans.
Experimental platforms range from flying robots to legged robots and cars.
Your research will revolve around learning from data collected online, learning from offline data (e.g., YouTube videos), and generalist, multi-task learning (learning behaviors that generalize to multiple tasks and scenarios).
The specific topic will be decided with your PhD advisor, Prof. Davide Scaramuzza.
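To give a concrete flavor of what "end-to-end" means here, below is a minimal sketch in PyTorch of a sensorimotor policy trained by behavior cloning. This is our own simplified illustration, not the lab's actual architecture; the layer sizes, input resolution, command dimension, and loss are all assumptions.

```python
# Minimal sketch (illustrative, not the lab's architecture): an end-to-end
# sensorimotor policy mapping a camera image directly to low-level control
# commands (e.g., collective thrust and body rates for a quadrotor).
import torch
import torch.nn as nn

class SensorimotorPolicy(nn.Module):
    def __init__(self, num_commands: int = 4):
        super().__init__()
        # Convolutional encoder: compresses the visual input into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head: maps visual features to control commands.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_commands),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))

# One imitation-learning (behavior cloning) step on placeholder data:
# the policy regresses the commands of an expert, e.g., a privileged planner.
policy = SensorimotorPolicy()
images = torch.randn(8, 3, 128, 128)   # batch of camera frames
expert_commands = torch.randn(8, 4)    # expert labels (placeholder)
loss = nn.functional.mse_loss(policy(images), expert_commands)
loss.backward()
```

In practice, such a policy could also be fine-tuned with reinforcement learning or trained through a differentiable simulator; the cloning step above is only the simplest entry point.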
If you want to know more about our current research in this area, check out these papers:
- Learning Quadruped Locomotion Using Differentiable Simulation: PDF
- Champion-level Drone Racing using Deep Reinforcement Learning (published in Nature and featured on the cover): PDF, YouTube
- Learning High-Speed Flight in the Wild: PDF, YouTube
- Deep Drone Acrobatics: PDF, YouTube
- Learning Minimum-Time Flight in Cluttered Environments: PDF, YouTube
- A Benchmark Comparison of Learned Control Policies for Agile Quadrotor Flight: PDF, YouTube
- Our general research on autonomous drones,
- Our research on drone racing,
- Our research on agile flight.
Low-Energy Vision with Event Cameras
The goal of this PhD is to fuse event cameras with standard cameras and inertial sensors to reduce
the energy consumption and latency of future computer vision algorithms (SLAM and/or object tracking) for autonomous vehicles and mobile devices (e.g., AR/VR).
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames.
They offer significant advantages over standard cameras, namely a very high dynamic range (HDR), no motion blur,
microsecond latency, and low bandwidth.
However, because the output is asynchronous, traditional vision algorithms cannot be applied, and new algorithms must be
developed to take advantage of them.
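To make the data format tangible, here is a minimal sketch, assuming a generic event representation of (x, y, timestamp, polarity) tuples; the resolution, windowing, and format are illustrative and not tied to any specific sensor driver.

```python
# Each event is (x, y, t, polarity), emitted asynchronously whenever a
# pixel's log-brightness changes. Accumulating events over a time window
# is one of the simplest ways to interface the asynchronous stream with
# frame-based vision algorithms.
import numpy as np

WIDTH, HEIGHT = 346, 260  # e.g., a DAVIS346-like sensor resolution

def events_to_frame(events: np.ndarray, t_start: float, t_end: float) -> np.ndarray:
    """Accumulate events with timestamps in [t_start, t_end) into a signed frame.

    events: array of shape (N, 4) with columns (x, y, t, polarity in {-1, +1}).
    """
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    mask = (events[:, 2] >= t_start) & (events[:, 2] < t_end)
    for x, y, _, p in events[mask]:
        frame[int(y), int(x)] += p  # +1: brightness increase, -1: decrease
    return frame

# A 10 ms window yields ~100 "frames" per second, but the window can shrink
# to microseconds in fast scenes: the sensor itself imposes no frame rate.
events = np.array([[10, 20, 0.0005, 1], [11, 20, 0.0012, -1]])
frame = events_to_frame(events, 0.0, 0.01)
```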
If you are interested in learning more about our current research in this area, check out our related research page on event cameras.
Also, check out these papers for more details:
- Low Latency Automotive Vision with Event Cameras: PDF, YouTube
- Data-driven Feature Tracking for Event Cameras: PDF, YouTube
- TimeLens: Event-based Video Frame Interpolation: PDF, YouTube
- Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios: PDF, YouTube
- Dynamic Obstacle Avoidance for Quadrotors with Event Cameras: PDF, YouTube
Vision-Based Localization for Autonomous Inspection
Inspection of enclosed marine structures, such as ballast water tanks and complex ship compartments,
represents a challenging and critical task in extreme environments. Within these confined spaces, traditional
GPS-based navigation is rendered impractical, making vision-based localization the sole viable option.
Localization is particularly challenging in such environments due to the prevailing darkness and the intricate, repetitive structural patterns.
This PhD aims to advance vision-based algorithms for the localization of unmanned aerial systems (UAS)
in industrial environments. You will investigate multi-sensor fusion strategies, combining event cameras, LiDARs, and
inertial sensors, for robust and precise position estimation, ensuring the UAS can navigate with confidence.
In addition to improving visual odometry in low-light situations, the envisioned localization system should also
be able to register the drone's position against pre-existing maps and semantic information. This critical capability will
enable the UAS to comprehensively understand its surroundings and make accurate decisions during complex inspection tasks.
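To make the fusion idea concrete, below is a minimal loosely-coupled Kalman-filter skeleton: high-rate inertial measurements propagate the state, and lower-rate visual position fixes correct it. This is a generic textbook sketch, not the lab's method; the constant-velocity model, state layout, and noise values are illustrative assumptions.

```python
# Generic loosely-coupled fusion skeleton (illustrative assumptions only).
import numpy as np

dt = 0.005                                    # e.g., a 200 Hz IMU rate
x = np.zeros(6)                               # state: position (3) + velocity (3)
P = np.eye(6)                                 # state covariance
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # constant-velocity motion model
Q = 1e-3 * np.eye(6)                          # process noise
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # vision measures position only
R = 1e-2 * np.eye(3)                          # measurement noise

def predict(accel: np.ndarray) -> None:
    """Propagate the state with an IMU acceleration measurement."""
    global x, P
    x = F @ x
    x[3:] += accel * dt
    P = F @ P @ F.T + Q

def update(vision_pos: np.ndarray) -> None:
    """Correct the state with a vision-based position fix."""
    global x, P
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (vision_pos - H @ x)
    P = (np.eye(6) - K @ H) @ P
```

A tightly-coupled system would instead fuse raw features (or events) inside the estimator, which is where much of the research challenge lies.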
This PhD will be done within the EU project AutoASSESS, which is positioned to bring
about a transformative shift in the inspection of expansive and enclosed marine structures, surpassing the limits of current technology.
This compelling endeavor provides you with the opportunity to develop a robust and resilient drone localization system tailored for extreme industrial settings.
Our key partners in this project are DTU (Prof. Evangelos Boukas), NTNU (Prof. Kostas Alexis), TUM (Prof. Stefan Leutenegger), U. of Twente (Prof. Antonio Franchi), and the company Flyability.
If you are interested in learning more about our current research in this area, check out our related research page on event cameras.
Also, check out these papers for more details:
- HDVIO: Improving Localization and Disturbance Estimation with Hybrid Dynamics VIO: PDF, YouTube
- Dynamic Obstacle Avoidance for Quadrotors with Event Cameras: PDF, YouTube
- Powerline Tracking with Event Cameras: PDF, YouTube
- Learning High-Speed Flight in the Wild: PDF, YouTube
Robust Control in Confined Spaces
Navigating the labyrinthine confines of ballast water tanks within ships for inspection purposes poses an
extraordinary technological challenge. These tanks feature very small entrances through which the drone must skillfully navigate.
The drone's mission is two-fold: swiftly map the expansive tank interiors while exhibiting both agility and resilience to collisions.
The challenge is for the drone to operate autonomously within these confined spaces, relying solely on onboard sensors for navigation and localization.
This PhD position focuses on pioneering advanced control algorithms tailored for drones with exceptional maneuverability.
Your research will revolve around real-time trajectory planning algorithms that allow drones to adapt their course dynamically in
response to unforeseen obstacles or environmental changes. Recent advances in reinforcement learning-based control algorithms will be at
the heart of this endeavor. While machine-learning-based drone control has seen significant advancements, as exemplified
by the AI-based drone system Swift that defeated humans in the sport of drone racing, it still lags behind human pilots in terms of robustness,
especially in challenging industrial settings. Additionally, you will be able to contribute to the integration of event cameras,
with their rapid obstacle detection capabilities, for collision avoidance, as well as to resilient vision-based onboard localization algorithms.
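As a toy illustration of the receding-horizon idea behind such real-time replanning (entirely our own sketch; the repulsive-nudge planner below is a hypothetical placeholder, not the lab's algorithm):

```python
# Receding-horizon loop: at every control step the drone re-plans a short
# horizon from its current state, so newly detected obstacles are folded
# into the very next plan.
import numpy as np

def plan_horizon(state: np.ndarray, goal: np.ndarray, obstacles: list,
                 horizon: int = 20) -> np.ndarray:
    """Hypothetical planner: `horizon` waypoints from state toward goal.

    Here, a straight line nudged away from obstacles; a real system would
    solve a trajectory optimization or query a learned policy instead.
    """
    waypoints = np.linspace(state, goal, horizon)
    for obs in obstacles:
        d = waypoints - obs
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        waypoints += 0.5 * d / np.maximum(dist, 0.1) ** 2  # repulsive nudge
    return waypoints

state, goal = np.zeros(3), np.array([10.0, 0.0, 2.0])
for step in range(200):                       # control loop
    obstacles = []                            # filled from onboard perception
    waypoints = plan_horizon(state, goal, obstacles)
    state = waypoints[1]                      # execute only the first segment
    if np.linalg.norm(goal - state) < 0.1:
        break
```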
This PhD will be done within the EU project AutoASSESS, which is poised to revolutionize the inspection
of large and enclosed marine structures in extreme environments beyond the capabilities of current technology. It is an exciting opportunity
for you to push the boundaries of drone control, enabling drones to excel in complex industrial environments.
If you are interested in learning more about our current research in this area, check out our related research page on event cameras.
Also, check out these papers for more details:
- Champion-level Drone Racing using Deep Reinforcement Learning: PDF, YouTube
- Learning High-Speed Flight in the Wild: PDF, YouTube
- Time-Optimal Planning for Quadrotor Waypoint Flight: PDF, YouTube
- Learning Minimum-Time Flight in Cluttered Environments: PDF, YouTube
Benefits of working with us
- We do world-class research: two of our recent papers were published in Nature: PDF, YouTube, PDF, YouTube
- We were the first to demonstrate an AI beating the best human in the physical world (previous AI wins in chess, Dota, StarCraft, Gran Turismo, and Go were achieved in board games or simulation): PDF, YouTube.
- Our students and postdocs have won many international awards and paper awards; check out the full list here.
- Whatever you do with us, you will do well: all our former students and postdocs have landed at great companies or universities (MIT, Berkeley, Google, Facebook, Microsoft, Skydio).
- Our research is often featured in the world news (The Guardian, The New York Times, Forbes, The Economist, BBC, etc.). Full list here.
- The position is fully funded and is a regular job with social benefits (e.g., a pension plan, accident insurance).
- You will get a very competitive salary and access to world-class research facilities (one of the world's largest motion capture arenas, 3D printing, electronic and machine workshops, and world-class GPU infrastructure).
- Excellent work atmosphere with many social events, such as ski trips, hikes, lab dinners, and lab retreats (check out our photo gallery).
- Regular visits and talks by international researchers from renowned research labs or companies.
- Collaboration with other top researchers in both Switzerland and abroad.
- Zurich is regularly ranked in the top cities in the world for quality of life (link).
- Switzerland is considered the Silicon Valley of Robotics (link).
- Each year, robotics papers from Switzerland collect the highest number of citations (normalized by the country's population) at all major international robotics conferences.
Who we are
We belong to two departments: the
Dept. of Informatics of the University of Zurich (which is our primary affiliation) and the
Dept. of Neuroinformatics of the University of Zurich and ETH Zurich.
Our researchers have received numerous prestigious awards, such as the recent IEEE Kiyo Tomiyasu Technical Field Award (only 3 roboticists have received this award so far),
the European Research Council (ERC) Consolidator Grant (2 million Euros), an IEEE Robotics Early Career Award, several
industry awards (Google, Qualcomm, Kuka, Intel), and paper awards (the full list of our awards can be found here).
Our former researchers now occupy prestigious positions at top-tier companies, have become professors, or have founded successful spinoffs, e.g., Fotokite; Zurich-Eye (today Meta Zurich), which developed the visual-inertial SLAM technology used in Meta Quest (read more); and SUIND, which develops agricultural drones.
The research carried out in our lab has received extensive media coverage, including newspapers (The Guardian, The New York Times, Forbes, The Economist, BBC, Neue Zürcher Zeitung, La Repubblica, El Pais) and TV stations (Discovery Channel, ZDF, SRF, RAI, SuperQuark, ARTE, Canal+, etc.), for our work on
drone racing outperforming the world-champion drone racing pilots (The Guardian),
event cameras (The Economist),
learning high-speed navigation in unknown environments (Forbes),
deep drone racing (The New York Times),
and teaching drones to fly by imitating cars and bikes (IEEE Spectrum).
An up-to-date list of our current research projects is here. For videos, please check out our
YouTube channel.
For press coverage of our research, please check out our media page.
Your Skills
- A Master's or PhD degree in computer engineering, computer science, mechanical engineering, robotics, physics, aerodynamics, or related fields
- A strong passion for computer vision, robotics, mathematics, programming, and abstract thinking
- Excellent written and spoken English skills
- Very strong C++ and Python skills
- Strong experience with robotic systems and/or aerial robots
- Background knowledge in any of the following: deep learning, reinforcement learning, control, path planning, aerodynamics, state estimation, computer vision, numerical optimization
- Additionally, for Postdocs:
  - Excellent track record (publications in high-impact conferences and journals)
  - Proven theoretical and practical experience in solving complex robotics or computer vision problems and implementing the solutions efficiently
Familiarity with tools such as ROS, PyTorch, TensorFlow, OpenCV, and Git is desirable.
How to apply
PhD candidates: APPLY HERE
Postdocs: APPLY HERE
IMPORTANT: Support letters are not required at this stage, but if you have them already, feel free to upload them in the application form.
If you are selected for an on-site interview in our lab, support letters will be requested.
IMPORTANT
For questions, please contact Prof. Davide Scaramuzza at careersrpg (AT) ifi (DOT) uzh (DOT) ch (please do not use his private email for inquiries about these positions). Please do not send inquiries asking whether your CV fits any of the positions; if you are unsure, just apply, as you have nothing to lose. Applications sent directly by email rather than through the web form will not be considered. You will be contacted only in case of positive feedback; if the feedback is not positive, you will not hear back.