Jean-Marie NORMAND
13 September 2022
PhD Thesis Offer: Health of the Future, Centrale Nantes


FROM MEDICAL IMAGING TO AUGMENTED REALITY FOR SURGICAL APPLICATIONS

Axis: Health of the Future (Santé du futur)

Supervision:

HASCOËT Jean-Yves, GeM, RMP, UMR CNRS 6183, Ecole Centrale de Nantes

VIDAL Luciano, GeM, RMP, UMR CNRS 6183, Ecole Centrale de Nantes

NORMAND Jean-Marie, AAU, CRENAU, UMR CNRS 1563, Ecole Centrale de Nantes

FRIBOURG Rebecca, AAU, CRENAU, UMR CNRS 1563, Ecole Centrale de Nantes

Collaborators:

CHINESTA Francisco, PIMM, CNRS UMR 8006, Arts et Métiers – ENSAM, Paris

HASCOËT Nicolas, PIMM, CNRS UMR 8006, Arts et Métiers – ENSAM, Paris

CRENN Vincent, Department of Orthopedic Surgery and Traumatology, CHU Nantes

Hosting:

- City: Nantes (France)

- Type of institution: University Campus / Research Laboratory

- Name of hosting institution: Ecole Centrale de Nantes

Keywords:

- Artificial Intelligence, Medical Imaging, 3D Reconstruction, Augmented Reality, Surgical Assistance.

Start of the PhD:

- From October 2022

Context

This thesis project aims to propose a complete pipeline from medical images to Augmented Reality (AR) in order to assist surgeons during operations. To achieve this, we must be able to extract 3D information from medical images and integrate it onto the patient in AR during the operation, in real time, robustly and precisely, while allowing the surgeon to interact with it.

There are many scientific barriers. Medical images can be of several types (MRI, CT scans, X-rays, etc.) and must be aligned with each other so that the different kinds of information they contain (e.g., organs, bones, tumors, vascularization) can be merged and integrated into a global 3D model with a consistent scale and an established registration. This global 3D model of information extracted from medical images will be built semi-automatically by the GeM RPM and PIMM teams. In addition, it is essential to segment this global model according to the nature of the information (organs, bones, vascular tissues, etc.) so that surgeons can choose the parts to be visualized in AR. By integrating AI techniques into the analysis of medical images for the semi-automated segmentation of organs and tumors, we will be able to optimize this segmentation for the creation of models via 3D printing (anatomical structures, surgical cutting guides, and models for 3D visualization in virtual and augmented reality).

Regarding AR, there are several challenges. The first consists of automatically detecting the organs, bones, etc. of the patient during the operation, in order to register the 3D model and display it robustly, quickly, and accurately in AR; to address this obstacle, the objective is to rely on deep learning methods. A second challenge concerns the use of the AR application during the operation: the interaction of the surgeon with the application, but also the automatic or semi-automatic management of the elements to be displayed or hidden according to the surgeon's activity.
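As a minimal illustration of the segmentation step described above, the sketch below thresholds a synthetic CT-like volume into a binary mask and labels its connected components so that individual structures could be selected for visualization. The threshold value and the synthetic volume are illustrative assumptions, not part of the actual project pipeline (which targets learned, semi-automated segmentation).

```python
import numpy as np
from scipy import ndimage

def segment_by_threshold(volume, threshold):
    """Binary segmentation of a 3D volume by intensity threshold,
    followed by connected-component labelling so that individual
    structures (e.g. separate bones) can be picked for display."""
    mask = volume > threshold
    labels, n_components = ndimage.label(mask)
    return labels, n_components

# Synthetic 3D "scan": background noise plus two bright blobs
# standing in for bony structures (illustrative values only).
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, size=(32, 32, 32))
vol[4:10, 4:10, 4:10] = 1.0
vol[20:26, 20:26, 20:26] = 1.0

labels, n = segment_by_threshold(vol, threshold=0.5)
print(n)  # → 2 (the two disconnected blobs)
```

A real pipeline would replace the fixed threshold with a trained segmentation network (e.g. U-Net [1] or V-Net [2]), but the output contract is the same: a labelled volume from which per-structure models can be extracted.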

Positioning

Artificial Intelligence (AI), Augmented Reality (AR) and Virtual Reality (VR) are all transforming the practice of medicine by providing new, innovative, and effective methods for analyzing and interacting with data such as digital medical images. AI now plays a key role in the analysis of medical images (e.g., classification, 2D segmentation and identification [1], 3D volumetric image segmentation [2], 2D image reconstruction [3]), but also in automatically recreating 3D models from those images [4] (e.g., providing accurate 3D models of organs and/or tumors [5]). AR and VR play an essential role in the current and future training of health professionals such as surgeons [6] and other actors in the medical world, including medical students [7]. In surgical education, AR and VR have enhanced teaching and learning experiences and provided opportunities for distance teaching, participation, and collaboration between surgical teams worldwide.

Information from medical imaging, such as computed tomography (CT), magnetic resonance angiography, and magnetic resonance imaging (MRI), is crucial for applying AR and VR in a medical and surgical context. Indeed, thanks to AR/VR capabilities for real-time in-situ visualization, these technologies could be used not only for preoperative planning, surgical training, and education, but also during actual surgery [6, 8, 9, 10, 11]. During operations, surgeons could benefit from the display of the information contained in medical images obtained preoperatively. To best assist surgeons, this information must be presented in situ, i.e., directly on the patient, and well localized both in position and orientation: this is the role of AR. However, for such technology to be used during real-time surgery, it must be able to recognize specific organs, tissues, or other anatomical characteristics, which can be achieved by prior training of deep learning networks [1, 4] and parametric models [12, 13, 14].
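The in-situ localization "in position and orientation" mentioned above reduces, in its simplest rigid form, to estimating a rotation and translation between the preoperative model and intraoperative observations. A minimal least-squares sketch using the Kabsch algorithm (SVD-based), under the simplifying assumption that corresponding landmark points are already available (in the actual project they would come from the detected anatomical features):

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch algorithm): find the
    rotation R and translation t minimising ||R @ p + t - q|| over
    corresponding point pairs (p, q). Correspondences are assumed known."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Illustrative landmarks on a preoperative 3D model...
model_pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
# ...observed intraoperatively after a known rotation and translation.
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.],
                   [np.sin(angle),  np.cos(angle), 0.],
                   [0., 0., 1.]])
t_true = np.array([0.1, -0.2, 0.3])
observed = model_pts @ R_true.T + t_true

R_est, t_est = rigid_register(model_pts, observed)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

On noise-free correspondences the estimate recovers the ground-truth pose exactly; real intraoperative data would add noise, outlier rejection, and possibly non-rigid deformation models [13, 14] on top of this core step.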

This thesis project aims to extract and merge the information contained in preoperative images, and then to propose solutions for the automatic detection of the patient's organs or bones, which will serve as "anchor points" for the display of digital information to the surgeon during the operation. Assistance to the surgeon is at the center of this project, and the thesis will also aim to propose innovations in the way surgeons interact in AR during operations.

In this project, our goal is to use AR to guide the user during a surgical procedure in order to improve the safety and efficacy of such procedures.

The information provided to the surgeon can be of different sorts: anatomical delineation, display of invisible (e.g., tumorous) tissues, virtual cutting guides, etc.

To do so, we need to offer an end-to-end digital pipeline: an automatic digital chain originating from medical images that are analyzed and processed (segmented, classified) to finally produce accurate 3D models of the organs and tumors [15]. We aim to use these 3D models in two ways: (i) for 3D printing (organs, tumors, etc.) usable for medical training, and (ii) as references for the tracking required for real-time visualization in Augmented Reality [16, 17].
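The step from a segmentation mask to a 3D model can be sketched very roughly: extract the surface voxels of a binary mask as a point cloud, which could then feed a meshing stage (e.g., marching cubes) for 3D printing or tracking. The mask below is synthetic and the approach deliberately simplified compared to the pipeline described above.

```python
import numpy as np

def surface_voxels(mask):
    """Return coordinates of the surface voxels of a binary 3D mask:
    voxels belonging to the object that have at least one
    face-adjacent neighbour outside it."""
    padded = np.pad(mask, 1)                      # avoid border effects
    interior = np.ones_like(padded, dtype=bool)
    # A voxel is interior only if all six face neighbours are set.
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(padded, shift, axis=axis)
    interior &= padded
    surface = padded & ~interior
    # Undo the padding offset before returning coordinates.
    return np.argwhere(surface) - 1

# Synthetic segmentation mask: a solid 5x5x5 cube.
mask = np.zeros((9, 9, 9), dtype=bool)
mask[2:7, 2:7, 2:7] = True

pts = surface_voxels(mask)
print(len(pts))  # 5**3 - 3**3 = 98 surface voxels
```

For a watertight printable mesh one would typically run marching cubes on the mask instead of keeping raw surface voxels; the point-cloud form shown here is merely the simplest representation that both the printing and the tracking stages can start from.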

The objectives of this PhD thesis are as follows:

● Work on a fully automated pipeline for the processing of digital medical images to generate accurate digital twins of organs/tumors.

● Use 3D-printed organs/tissues to develop new CNN models or adapt existing ones, and train them to perform the robust, real-time tracking required for AR visualization and interaction.

● Develop experimental scenarios and physical models, with the help of surgeons, that mimic a real surgical procedure in AR. Based on these scenarios, develop interaction and visualization techniques to be tested in AR experiments using AR hardware, e.g., a Microsoft HoloLens 2.
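The AR display in the last objective ultimately amounts to projecting registered 3D anchor points into the device's camera image. A minimal pinhole-camera sketch follows; the intrinsics and pose are made-up illustrative values, not HoloLens 2 calibration data.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D world points to pixel coordinates with a pinhole
    camera model: x = K (R X + t), followed by perspective division."""
    cam = points_world @ R.T + t          # world -> camera frame
    uvw = cam @ K.T                       # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]       # divide by depth

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
R = np.eye(3)                  # camera aligned with the world axes
t = np.array([0., 0., 2.])     # anchors two metres in front of the camera

anchors = np.array([[0., 0., 0.],     # registered model origin
                    [0.1, 0., 0.]])   # a point 10 cm to its right
pixels = project_points(anchors, K, R, t)
print(pixels)  # the origin lands on the principal point (320, 240)
```

In practice the pose (R, t) would come from the tracking stage and be re-estimated every frame, which is why the robustness and speed requirements above are central to the thesis.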

Required skills

- Development of Augmented Reality applications

- Unity

- Knowledge of Artificial Intelligence (optional)

- Interest in 3D modelling (optional)

Contact:

Luciano VIDAL: luciano.vidal@ec-nantes.fr

Jean-Yves HASCOËT: jean-yves.hascoet@ec-nantes.fr

Jean-Marie NORMAND: jean-marie.normand@ec-nantes.fr

Rebecca FRIBOURG: rebecca.fribourg@ec-nantes.fr

Bibliography

[1] Dong H., Yang G., Liu F., Mo Y., Guo Y. (2017) Automatic Brain Tumor Detection and Segmentation Using U-Net Based Fully Convolutional Networks. In: Valdés Hernández M., González-Castro V. (eds) Medical Image Understanding and Analysis. MIUA 2017. Communications in Computer and Information Science, vol 723. Springer, Cham. https://doi.org/10.1007/978-3-319-60964-5_44

[2] F. Milletari, N. Navab and S. Ahmadi, "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation," in 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 2016, pp. 565-571. doi: 10.1109/3DV.2016.79

[3] A. J. Reader, G. Corda, A. Mehranian, C. d. Costa-Luis, S. Ellis and J. A. Schnabel, "Deep Learning for PET Image Reconstruction," in IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 5, no. 1, pp. 1-25, Jan. 2021, doi: 10.1109/TRPMS.2020.3014786.

[4] J. Chen, Z. Wan, J. Zhang, W. Li, Y. Chen, Y. Li, Y. Duan, Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet, Computer Methods and Programs in Biomedicine, Volume 200, 2021, 105878, ISSN 0169-2607, https://doi.org/10.1016/j.cmpb.2020.105878.

[5] González, David, Alberto García-González, Francisco Chinesta, and Elías Cueto. 2020. "A Data-Driven Learning Method for Constitutive Modeling: Application to Vascular Hyperelastic Soft Tissues" Materials 13, no. 10: 2319. https://doi.org/10.3390/ma13102319

[6] Halabi O., Balakrishnan S., Dakua S.P., Navab N., Warfa M. (2020) Virtual and Augmented Reality in Surgery. In: Doorsamy W., Paul B., Marwala T. (eds) The Disruptive Fourth Industrial Revolution. Lecture Notes in Electrical Engineering, vol 674. Springer, Cham. https://doi.org/10.1007/978-3-030-48230-5_11

[7] Amelia Jiménez-Sánchez, Diana Mateus, Sonja Kirchhoff, Chlodwig Kirchhoff, Peter Biberthaler, Nassir Navab, Miguel A. González Ballester, Gemma Piella. Curriculum learning for improved femur fracture classification: Scheduling data with prior knowledge and uncertainty, Medical Image Analysis, Volume 75, 2022, 102273, ISSN 1361-8415, https://doi.org/10.1016/j.media.2021.102273.

[8] Lu L., Wang H., Liu P., Liu R., Zhang J., Xie Y., Liu S., Huo T., Xie M., Wu X. and Ye Z. (2022) Applications of Mixed Reality Technology in Orthopedics Surgery: A Pilot Study. Front. Bioeng. Biotechnol. 10:740507. doi: 10.3389/fbioe.2022.740507

[9] Sánchez-Margallo J. A., Plaza de Miguel C., Fernández Anzules R. A. and Sánchez-Margallo F. M. (2021) Application of Mixed Reality in Medical Training and Surgical Planning Focused on Minimally Invasive Surgery. Front. Virtual Real. 2:692641. doi: 10.3389/frvir.2021.692641

[10] Vidal L., Crenn, Hascoët J.-Y. (2020) Lauréat de la Bourse NEXT Cluster Fame 2020: Projet 3D Bone Print « Gestion numérique et Biofabrication pour la reconstruction osseuse ».

[11] Rauch M., Hascoët J.-Y., Vidal L. (2021) Additive Manufacturing challenges and opportunities: from Naval and Aeronautics parts to Biomanufacturing applications. Int. Conf. 3dPrintech, Frankfurt (Germany), Dec 2021.

[12] Giacomo Quaranta, Eberhard Haug, Jean Louis Duval, Elias Cueto, and Francisco Chinesta, "Parametric numerical solutions of additive manufacturing processes," AIP Conference Proceedings 2113, 100007 (2019). https://doi.org/10.1063/1.5112640

[13] Niroomandi S., González D., Alfaro I., Bordeu F., Leygue A., Cueto E., Chinesta F. (2013) Real-time simulation of biological soft tissues: a PGD approach. Int J Numer Method Biomed Eng. 2013 May;29(5):586-600. doi: 10.1002/cnm.2544.

[14] Kugler M., Hostettler A., Soler L., Borzacchiello D., Chinesta F., George D., Rémond Y. (2017) Numerical simulation and identification of macroscopic vascularised liver behaviour: Case of indentation tests. Biomed Mater Eng. 2017;28(s1):S107-S111. doi: 10.3233/BME-171631.

[15] Vidal L., Kampleitner C., Krissian S., Brennan M.Á., Hoffmann O., Raymond Y., Maazouz Y., Ginebra M.P., Rosset P., Layrolle P. (2020) Regeneration of segmental defects in metatarsus of sheep with vascularized and customized 3D-printed calcium phosphate scaffolds. Sci Rep. 2020 Apr 27;10(1):7068. doi: 10.1038/s41598-020-63742-w.

[16] A. Badías, I. Alfaro, D. González, F. Chinesta, E. Cueto. (2018) Reduced order modeling for physically-based augmented reality. Computer Methods in Applied Mechanics and Engineering, Elsevier, 2018, 10.1016/j.cma.2018.06.011

[17] Vidal L., Hascoët N., Kavrakova T., Arduengo Garcia J., Chinesta F., Hascoët J.-Y. Machine Learning-based predictive model for printability and shape fidelity in Biofabrication. International Conference on Biofabrication, September 2022, Pisa, Italy.

Documents
sujet_thèse_AR_surgery_final.pdf