I See Through... You
Summary
In recent years, rapid advancements in extended reality (XR) hardware and software have enabled it to become a mainstream consumer-level technology. Interest from the medical community became evident given the potential XR has shown in other fields (e.g., industrial applications). In particular, surgical navigation based on augmented reality (AR) has evolved into a major research and development topic, with developed systems ultimately intended to assist during interventions in the operating room (OR). In this thesis, we developed and assessed AR approaches for surgical navigation, focusing on the challenges of image-to-patient alignment and depth perception as a first step towards clinical implementation.
Chapter 1 places AR for surgical navigation in the context of computer-assisted interventions. It briefly introduces, from a medical point of view, medical imaging, surgical planning, and surgical navigation, and, from a technological point of view, XR, virtual reality (VR), and AR.
Chapter 2 investigates AR surgical navigation for craniomaxillofacial surgery. Through a systematic review, it provides insights into the developed systems, the hardware, the software, and the surgical outcomes reported in the selected papers. It points out the challenges and shortcomings of the selected studies and collects the reported advantages of AR for surgery.
In Chapter 3, an image-to-patient alignment approach was developed and assessed for the Microsoft HoloLens 2 (HL2), a head-mounted display (HMD). The approach is an outside-in alignment and tracking technique, i.e., one supported by an external electromagnetic tracking system (EMTS). Chapter 3 proposes a multimodal marker approach in which a hybrid marker, trackable by both the HL2 and the EMTS, was created and calibrated to enable real-time projection of 3-dimensional (3D) models in the surgeon’s view. The system was assessed systematically as well as through various user studies.
Chapter 4 similarly proposes an image-to-patient alignment using the HL2. The approach in this chapter is an inside-out one, relying only on the HMD’s sensors to perform a one-time initial alignment. Chapter 4 presents a method to generate synthetic data of subjects with surgical landmarks attached to their heads, used to train a deep learning object detection model to locate these landmarks. The deep learning model was able to detect the landmarks on real images acquired by the HL2 and enabled, through a Perspective-n-Point (PnP) approach, the projection of the 3D model onto the corresponding printed model.
It was clear from the previous studies that accurately perceiving the depth of virtual objects with optical see-through (OST) HMDs is difficult. In a surgical context, however, it is important to provide a visualization adequate for the surgical task performed (e.g., drilling). Such a visualization should improve overall performance, especially accuracy. To this end, Chapter 5 proposes a new visualization paradigm for needle insertion or drilling tasks to improve instrument placement when using OST HMDs intraoperatively. The approach was assessed through a user study, from which it was concluded that instrument visualization in OST HMDs is needed and that extending surgical instruments with virtual objects provides more depth cues.
The multimodal marker approach was assessed through two phantom studies demonstrating two potential surgical applications for AR: the insertion of a navigated catheter simulating an external ventricular drain (EVD) placement (Chapter 6), and the delineation of cranial sutures on infant phantoms to plan a craniosynostosis surgery (Chapter 7).
In Chapter 6, an emphasis was placed on AR visualization modes. In particular, different visualizations were examined, both in terms of the device used (smartphone, HL2) and the method of visualization (2-dimensional (2D) or 3D). Their efficacy for catheter insertion into the ventricle cavity was compared to identify the most suitable device and mode of visualization for such an intervention. The study recommended the use of 3D approaches with HMDs because of the accurate insertions demonstrated and the high confidence and strong preference expressed by the volunteers. Chapter 6 thereby encourages surgeons to opt for 3D HMD-based surgical navigation instead of 2D AR approaches.
Chapter 7 studies whether AR can provide navigational support for identifying cranial sutures on infants’ skulls. Currently, this procedure is performed free-hand, which can lead to large errors. Through a user study, volunteers were asked to delineate AR-projected sutures on 3D-printed skulls; the results showed higher accuracy compared to free-hand techniques.
From Chapter 6 and Chapter 7, we can conclude that AR has the potential to be used in surgical navigation but requires further development and validation steps to become widespread in future surgical workflows. These aspects are discussed further in Chapter 8, which sheds light on the contributions, the limitations, and future perspectives.