Medical Robotics

Medical robotics integrates advanced sensing, machine learning, and real-time visualization to enhance precision, efficiency, and accessibility in healthcare. Our research spans SLAM-based navigation for visually impaired individuals, vision-enabled wearable robotic systems for assistive tasks, smartphone-based patient measurement, and augmented reality for surgical navigation, with the goal of delivering adaptive, patient-centered robotic solutions for diverse medical needs.


SLAM for Visually Impaired People: A Survey

In recent decades, several assistive technologies have been developed to improve the ability of blind and visually impaired (BVI) individuals to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in developing these assistive technologies. We present the first systematic literature review of 54 recent studies on SLAM-based solutions for blind and visually impaired people, focusing on literature published from 2017 onward. This review explores various localization and mapping techniques employed in this context. We systematically identified and categorized diverse SLAM approaches and analyzed their localization and mapping techniques, sensor types, computing resources, and machine-learning methods. We discuss the advantages and limitations of these techniques for blind and visually impaired navigation. Moreover, we examine the major challenges described across studies, including practical challenges and considerations that affect usability and adoption. Our analysis also evaluates the effectiveness of these SLAM-based solutions in real-world scenarios and user satisfaction, providing insights into their practical impact on BVI mobility. The insights derived from this review identify critical gaps and opportunities for future research activities, particularly in addressing the challenges presented by dynamic and complex environments. We explain how SLAM technology offers the potential to improve the ability of visually impaired individuals to navigate effectively. Finally, we present future opportunities and challenges in this domain.
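The survey categorizes SLAM-based solutions along several axes: localization and mapping technique, sensor suite, computing resources, and use of machine learning. A minimal sketch of such a taxonomy is shown below; all entries and field values are invented placeholders for illustration, not data from the survey.

```python
# Hypothetical taxonomy sketch for categorizing SLAM-based BVI navigation
# aids along the survey's axes. Entries are illustrative, not real systems.
from dataclasses import dataclass

@dataclass
class SlamSolution:
    name: str
    approach: str   # e.g. "visual SLAM", "visual-inertial", "LiDAR"
    sensors: tuple  # sensor suite carried by the user
    compute: str    # e.g. "smartphone", "wearable SoC", "offboard server"
    uses_ml: bool   # whether learned components (detection, semantics) are used

CATALOG = [
    SlamSolution("example-A", "visual-inertial", ("RGB camera", "IMU"), "smartphone", True),
    SlamSolution("example-B", "LiDAR", ("2D LiDAR",), "wearable SoC", False),
    SlamSolution("example-C", "visual SLAM", ("stereo camera",), "offboard server", True),
]

def filter_by(catalog, **criteria):
    """Return solutions matching all given attribute=value criteria."""
    return [s for s in catalog
            if all(getattr(s, k) == v for k, v in criteria.items())]

print([s.name for s in filter_by(CATALOG, uses_ml=True)])
```

A structure like this makes cross-cutting questions (e.g. which smartphone-based systems rely on learned components) a simple query over the catalog.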


References

Marziyeh Bamdad, Davide Scaramuzza, Alireza Darvishy, "SLAM for Visually Impaired People: A Survey," IEEE Access, 2024. PDF


Wearable robots for the real world need vision

(A) In a vision-based grasp assistance system, the user might wear glasses with a camera and a robotic glove that augments grasp forces. The system can use machine learning-based image processing to classify the target object and infer the likely task the user wants to accomplish. In the example shown here, the system recognizes a full glass of water and infers that the user intends to take a drink. The system then selects a wrap grasp tailored to the size of the glass and closes the hand when vision indicates that the fingers surround the glass (10). (B) A lower-limb assistance system can integrate wearable sensors and vision to expand the range of assistance that can be provided. In this representative example, a vision system detects a staircase in the user's path. The system uses inertial measurement units to detect heel strikes and estimates which footfall will be the first on a raised step. The wearable robot controller then triggers extra assistance torque to help raise the user's center of gravity, with precise timing of the assistance adjusted by EMG signals indicating the user's leg muscle activation.
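The lower-limb example in panel (B) amounts to a simple fusion rule: IMU heel strikes segment the gait, a vision flag marks an upcoming staircase, and the controller boosts assistance torque on the footfall predicted to land on the first raised step. The sketch below illustrates that logic only; it is not the authors' controller, and the threshold, torque values, and strike-count prediction are invented for illustration.

```python
# Illustrative sketch of the vision/IMU fusion logic described in the
# caption. Threshold and torque values are invented, not from the paper.

def detect_heel_strikes(accel_z, threshold=2.5):
    """Indices where vertical acceleration (g units) crosses the threshold
    upward -- a crude heel-strike detector on IMU data."""
    strikes = []
    for i in range(1, len(accel_z)):
        if accel_z[i - 1] < threshold <= accel_z[i]:
            strikes.append(i)
    return strikes

def assistance_schedule(accel_z, stair_ahead, strikes_until_step=2,
                        base_torque=5.0, boost_torque=12.0):
    """Assign a torque command (N*m, invented values) to each heel strike,
    boosting the strike predicted to be the first on a raised step."""
    schedule = []
    for n, idx in enumerate(detect_heel_strikes(accel_z)):
        boost = stair_ahead and n == strikes_until_step
        schedule.append((idx, boost_torque if boost else base_torque))
    return schedule

# Synthetic vertical-acceleration trace with three heel-strike spikes.
trace = [1.0, 1.1, 3.0, 1.0, 1.0, 3.1, 1.0, 1.0, 3.2, 1.0]
print(assistance_schedule(trace, stair_ahead=True))
```

In the paper's description, EMG signals would further refine the timing of the boosted assistance; that refinement is omitted here for brevity.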


References

Letizia Gionfrida, Daekyum Kim, Davide Scaramuzza, Dario Farina, Robert D. Howe, "Wearable robots for the real world need vision," Science Robotics, 2024. PDF


A smartphone application to determine body length for body weight estimation in children: a prospective clinical trial

The aim of this study was to test the feasibility and accuracy of a smartphone application to measure the body length of children using the integrated camera and to evaluate the subsequent weight estimates. A prospective clinical trial of children aged 0-<13 years admitted to the emergency department of the University Children's Hospital Zurich. The primary outcome was to validate the length measurement by the smartphone application "Optisizer". The secondary outcome was to correlate the virtually calculated ordinal categories based on the length measured by the app to the categories based on the real length. The third and independent outcome was the comparison of the different weight estimations by physicians, nurses, parents and the app. For all 627 children, the Bland Altman analysis showed a bias of -0.1% (95% CI -0.3-0.2%) comparing real length and length measured by the app. Ordinal categories of real length were in excellent agreement with categories virtually calculated based upon app length (kappa = 0.83, 95% CI 0.79-0.86). Children's real weight was underestimated by physicians (-3.3, 95% CI -4.4 to -2.2%, p < 0.001), nurses (-2.6, 95% CI -3.8 to -1.5%, p < 0.001) and parents (-1.3, 95% CI -1.9 to -0.6%, p < 0.001) but overestimated by categories based upon app length (1.6, 95% CI 0.3-2.8%, p = 0.02) and categories based upon real length (2.3, 95% CI 1.1-3.5%, p < 0.001). Absolute weight differences were lowest, if estimated by the parents (5.4, 95% CI 4.9-5.9%, p < 0.001). This study showed the accuracy of length measurement of children by a smartphone application: body length determined by the smartphone application is in good agreement with the real patient length. Ordinal length categories derived from app-measured length are in excellent agreement with the ordinal length categories based upon the real patient length. The body weight estimations based upon length corresponded to known data and limitations. 
The precision of body weight estimations by paediatric physicians and nurses was comparable and not different from length-based estimations. In this non-emergency setting, parental weight estimation was significantly better than all other means of estimation (paediatric physicians, nurses, and length-based estimations) in terms of precision and absolute difference.
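The agreement analysis reported above is a Bland-Altman comparison of percentage differences. A minimal sketch of that style of analysis is shown below; the weights are invented example values, not study data.

```python
# Minimal Bland-Altman sketch for percentage weight-estimation error,
# the style of analysis the study reports. Data below are synthetic.
import math

def bland_altman_percent(estimates, truths):
    """Bias and 95% limits of agreement for percent differences,
    100 * (estimate - true) / mean(estimate, true)."""
    diffs = [100.0 * (e - t) / ((e + t) / 2.0)
             for e, t in zip(estimates, truths)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

est   = [10.2, 14.8, 20.5, 25.1, 31.0]  # estimated weights (kg, invented)
truth = [10.0, 15.0, 20.0, 25.0, 30.0]  # measured weights (kg, invented)
bias, lo, hi = bland_altman_percent(est, truth)
print(f"bias {bias:.1f}%, limits of agreement [{lo:.1f}%, {hi:.1f}%]")
```

A bias near zero with narrow limits of agreement is what "good agreement" means in the abstract's -0.1% result for app-measured versus real length.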


References

O. Wetzel, A. R. Schmidt, M. Seiler, D. Scaramuzza, B. Seifert, D. R. Spahn, P. Stein, "A smartphone application to determine body length for body weight estimation in children: a prospective clinical trial," Journal of Clinical Monitoring and Computing, June 2017. PDF


Pedicle screw navigation using surface digitization on the Microsoft HoloLens

In spinal fusion surgery, imprecise placement of pedicle screws can result in poor surgical outcome or may seriously harm a patient. Patient-specific instruments and optical systems have been proposed for improving precision through surgical navigation compared to freehand insertion. However, existing solutions are expensive and cannot provide in situ visualizations. Recent technological advancement enabled the production of more powerful and precise optical see-through head-mounted displays for the mass market. The purpose of this laboratory study was to evaluate whether such a device is sufficiently precise for the navigation of lumbar pedicle screw placement. A novel navigation method, tailored to run on the Microsoft HoloLens, was developed. It comprises capturing the intraoperatively reachable surface of vertebrae to achieve registration and tool tracking with real-time visualizations, without the need for intraoperative imaging. For both surface sampling and navigation, 3D printable parts, equipped with fiducial markers, were employed. Accuracy was evaluated within a self-built setup based on two phantoms of the lumbar spine. Computed tomography (CT) scans of the phantoms were acquired to carry out preoperative planning of screw trajectories in 3D. A surgeon placed the guiding wire for the pedicle screw bilaterally on ten vertebrae guided by the navigation method. Postoperative CT scans were acquired to compare trajectory orientation (3D angle) and screw insertion points (3D distance) with respect to the planning. First promising results under laboratory conditions indicate that precise lumbar pedicle screw insertion can be achieved by combining HoloLens with our proposed navigation method. As a next step, cadaver experiments need to be performed to confirm the precision on real patient anatomy.
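The two accuracy metrics named in the abstract, trajectory orientation error (3D angle) and insertion-point error (3D distance), reduce to simple vector geometry. The sketch below shows the computation with invented example values in millimetres; it is not the study's evaluation code.

```python
# Sketch of the two accuracy metrics compared against the preoperative
# plan: 3D angle between planned and achieved trajectory directions, and
# 3D distance between insertion points. Example values are invented.
import math

def angle_deg(u, v):
    """3D angle (degrees) between two direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp against rounding error
    return math.degrees(math.acos(c))

def distance_mm(p, q):
    """Euclidean 3D distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

planned_dir    = (0.0, 0.0, 1.0)    # planned screw axis (unitless direction)
achieved_dir   = (0.05, 0.0, 1.0)   # achieved axis from postoperative CT
planned_entry  = (10.0, 5.0, 0.0)   # planned insertion point (mm)
achieved_entry = (11.0, 5.5, 0.0)   # achieved insertion point (mm)

print(f"trajectory error: {angle_deg(planned_dir, achieved_dir):.2f} deg")
print(f"entry-point error: {distance_mm(planned_entry, achieved_entry):.2f} mm")
```

Both metrics assume planned and postoperative CT coordinates are expressed in a common frame, which is what the surface-digitization registration step provides.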


References

Florentin Liebmann, Simon Roner, Marco von Atzigen, Davide Scaramuzza, Reto Sutter, Jess Snedeker, Mazda Farshad, Philipp Furnstahl, "Pedicle screw navigation using surface digitization on the Microsoft HoloLens," International Journal of Computer Assisted Radiology and Surgery, 2019. PDF