Master Thesis MSTR-2021-83

Bibliography: Krauthausen, Fabian Raphael: Robotic surgery training in AR: multimodal record and replay.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Master Thesis No. 83 (2021).
145 pages, English.

Robotic surgery enhances surgeon capabilities and lowers patient risk compared to traditional minimally invasive surgery. However, learning robotic surgery can be challenging for novice surgeons. On the one hand, existing physical approaches are time-consuming for expert surgeons, as they require experts to supervise the training process. On the other hand, existing virtual approaches do not provide a physical experience for novice surgeons, resulting in lower skill transfer to real surgeries. To overcome these challenges and to combine the advantages of both approaches, a mixed approach with augmented reality could be used. Such an approach could relieve expert surgeons from supervising training processes and provide novice surgeons with physical experiences that improve skill transfer to real surgeries.

In this work, we develop a multimodal record and replay platform in augmented reality for robotic surgery training using the Intuitive da Vinci Surgical System. With this platform, we aim to investigate whether multimodal record and replay in augmented reality is a beneficial approach for training in robotic surgery. Moreover, we aim to explore different concepts for motion guidance and surgical skill evaluation in augmented reality. The developed platform allows expert surgeons to record surgical procedures with multiple modalities and novice surgeons to replay and train with the resulting multimodal recordings, including visual, haptic, and auditory feedback. Visual feedback is provided by recording the left and right camera streams of a stereo endoscope and directly overlaying the recorded videos on the surgeon's stereoscopic view. Haptic feedback is achieved by recording left and right robotic instrument vibrations with accelerometers attached to the instruments and replaying the recorded vibrations on voice coil actuators mounted to the surgeon handles. Finally, auditory feedback is enabled by recording verbal explanations of a procedure and playing back the recordings on the surgeon speakers. The platform also incorporates motion guidance concepts in the form of ghost tools. For the development of the platform, we used ROS and OpenCV with C++ and Python.

With this platform, we conducted an exploratory study with three chief surgeons to evaluate the concepts of the developed platform and to further explore the influence of different modalities and motion guidance concepts. For this purpose, we manufactured a box trainer with two standardized tasks. Further, we placed a force sensor below the task board and used the previously attached accelerometers on the robotic instruments to explore possibilities for the quantitative evaluation of surgical skill performances. In the study, participants were first asked to record performances for the two tasks, without and with verbal explanations. Next, they were asked to replay and train with their multimodal recordings. After each phase, they were asked to evaluate their experience and to provide suggestions for improvement.

Overall, we found that multimodal record and replay in augmented reality is a promising approach for robotic surgery training. In terms of modalities, we found that chief surgeons generally prefer a combination of visual and auditory feedback. Regarding visual feedback, the surgeons prefer a direct overlay while replaying, as the focus should be on the recorded performance. In contrast, a corner view is preferred while training, as surgeons should focus on their own performance. Regarding auditory feedback, the surgeons found verbal explanations beneficial in the early stages of training. In terms of motion guidance, our results showed that visual cues are preferred only during challenging sub-tasks of surgical procedures, with no preference expressed between ghost tools and trajectories.
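The platform itself is built on ROS, whose timestamped messages would naturally handle the alignment of the three feedback streams. Purely as an illustration of the underlying idea, the following ROS-free Python sketch (all names are ours, not the thesis's) shows how timestamped modality tracks could be recorded and later replayed in sync:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class ModalityTrack:
    """Timestamped samples of one modality (video frames,
    vibration samples, or audio chunks)."""
    name: str
    timestamps: list = field(default_factory=list)  # seconds, ascending
    samples: list = field(default_factory=list)

    def record(self, t: float, sample) -> None:
        self.timestamps.append(t)
        self.samples.append(sample)

    def sample_at(self, t: float):
        """Latest sample recorded at or before playback time t."""
        i = bisect.bisect_right(self.timestamps, t) - 1
        return self.samples[i] if i >= 0 else None

class Recording:
    """A bundle of modality tracks sharing one clock."""
    def __init__(self):
        self.tracks = {}

    def add_track(self, name: str) -> ModalityTrack:
        self.tracks[name] = ModalityTrack(name)
        return self.tracks[name]

    def replay_at(self, t: float) -> dict:
        """Synchronized snapshot of all modalities at time t."""
        return {n: tr.sample_at(t) for n, tr in self.tracks.items()}

# Demo with made-up data: a 25 fps video track and a 100 Hz vibration track.
rec = Recording()
video = rec.add_track("stereo_video")
haptic = rec.add_track("vibration")
video.record(0.00, "frame0")
video.record(0.04, "frame1")
haptic.record(0.00, 0.1)
haptic.record(0.01, 0.3)
print(rec.replay_at(0.02))  # {'stereo_video': 'frame0', 'vibration': 0.3}
```

In the actual platform, ROS message timestamps and playback tools would take over this bookkeeping; the sketch only makes the hold-last-sample synchronization explicit.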
In the future, we aim to use the recordings of the chief surgeons to conduct a user study with novice surgeons to assess the effects of the developed training platform on their learning curve.
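The abstract does not specify which quantities were derived from the force-sensor and accelerometer signals. As an illustration only, two metrics common in the motor-skill literature, signal energy (RMS) and a jerk-based smoothness proxy, could be computed like this (function names and thresholds are hypothetical, not taken from the thesis):

```python
import math

def rms(samples):
    """Root-mean-square of a signal, e.g. force readings in newtons.
    Assumes a non-empty list of numbers."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def mean_abs_jerk(positions, dt):
    """Mean absolute jerk (third derivative of position) as a
    smoothness proxy; assumes uniform sampling and >= 4 samples."""
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return sum(abs(j) for j in jerk) / len(jerk)

# A constant 2 N contact force has RMS 2; perfectly linear motion has zero jerk.
print(rms([2.0, 2.0, 2.0, 2.0]))                          # 2.0
print(mean_abs_jerk([0.0, 1.0, 2.0, 3.0, 4.0], dt=0.01))  # 0.0
```

Lower applied force and lower jerk are generally associated with more skilled performance, which is why such signals are candidates for the quantitative evaluation the study explores.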

Department(s): University of Stuttgart, Institute of Visualisation and Interactive Systems, Visualisation and Interactive Systems
Supervisor(s): Sedlmair, Prof. Michael; Kuchenbecker, Ph.D. Katherine
Entry date: April 11, 2022
Publication: Computer Science