Good practices in AI/ML for Ultrasound Fetal Brain Imaging Synthesis
Harvey Mannering, Sofia Miñano, and Miguel Xochicale
Medical image datasets for AI and ML methods must be diverse to generalise well to unseen data (i.e. diagnoses, diseases, pathologies, scanners, demographics, etc.). However, there are few public ultrasound fetal imaging datasets, owing to insufficient amounts of clinical data, patient privacy concerns, the rare occurrence of abnormalities, and the limited number of experts available for data collection and validation. To address these challenges in ultrasound medical imaging, Miguel will discuss two proposed generative adversarial network (GAN)-based models, a diffusion-super-resolution-GAN and a transformer-based-GAN, for synthesising fetal ultrasound brain image planes from one public dataset. He will also present an AI and ML workflow aligned with the FDA's good machine learning practices, together with methods for image quality assessment (e.g., visual Turing tests and FID scores). Finally, he will present a simple prototype on GitHub with Google Colab notebooks and guidelines for training it on the Myriad cluster, and discuss applications of medical image synthesis to downstream tasks such as classification, augmentation, segmentation, and registration. The resources to reproduce the work of this talk are available at https://github.com/budai4medtech/xfetus.
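As a pointer to the quality-assessment part of the talk, the snippet below is a minimal sketch of how an FID score can be computed once Inception features have been extracted for the real and synthetic image sets. It is a generic illustration only; the function name `fid_score` and the feature arrays are our own placeholders, not part of the xfetus codebase.

```python
# Minimal FID sketch (illustrative only, not from the xfetus codebase).
# Assumes Inception-v3 features have already been extracted for the
# real and synthetic image sets as arrays of shape (n_samples, n_features).
import numpy as np
from scipy import linalg


def fid_score(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Frechet Inception Distance between two feature distributions."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Lower scores indicate that the synthetic feature distribution is closer to the real one.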
University College London
The Deep Learning and Computer Vision Journal Club
UCL Centre for Advanced Research Computing
1st of June 2023, 15:00 GMT
Medical Image Synthesis, Deep Learning
- Google Colab notebooks
- Quick guidelines and demos for Myriad
- How to run and re-train models on Myriad (a generic training-step sketch follows this list)
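As a generic illustration of the kind of re-training loop covered in the demos, the sketch below shows one adversarial update step in PyTorch. The toy generator, discriminator, and hyper-parameters are placeholders of our own, not the diffusion-super-resolution-GAN or transformer-based-GAN implementations from the repository.

```python
# Simplified GAN training step (hypothetical example; the actual
# xfetus models and training scripts differ).
import torch
import torch.nn as nn

latent_dim, img_pixels = 100, 64 * 64  # placeholder sizes

# Toy fully connected generator and discriminator for illustration.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_pixels), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()


def train_step(real_images: torch.Tensor) -> tuple[float, float]:
    """One adversarial update on a batch of flattened real images."""
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator update: real images -> 1, synthetic images -> 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator predict 1 for fakes.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

A call such as `train_step(torch.randn(8, img_pixels))` exercises the step with random data.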
Miguel Xochicale
Miguel is a Research Engineer at University College London within the Advanced Research Computing Centre and WEISS, where he is advancing AI-based surgical navigation tools. Previously, he was a Research Associate at King’s College London, where he advanced research in ultrasound-guided procedures and AI-enabled echocardiography pipelines. In 2019, he was awarded a Ph.D. in Computer Engineering from the University of Birmingham for research on “Nonlinear Analysis to Quantify Movement Variability in Human-Humanoid Interaction”. His primary research interests are in developing data-centric AI algorithms for medical imaging, MedTech, SurgTech, biomechanics, and clinical translation. His work also includes generative models for fetal imaging, fusion of time-series and medical imaging data, real-time AI for echocardiography, image-guided procedures, AI-based surgical navigation tools, and child-robot interaction in low-resource countries.
The talk aims to provide an understanding of the essentials for training reliable, repeatable, reproducible, and validated models for medical image synthesis.
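On the repeatability and reproducibility point, a common first step is to pin random seeds across libraries; the sketch below is one way to do this in PyTorch, with library choices and exact flags being our assumptions rather than settings taken from the talk materials.

```python
# Minimal reproducibility setup sketch (illustrative; exact settings
# depend on the libraries and hardware used in the actual pipeline).
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    """Seed Python, NumPy and PyTorch RNGs and prefer deterministic ops."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for repeatability on CUDA convolution kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
```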