
Generalised super resolution for quantitative MRI using self-supervised mixture of experts

Lin, Hongxiang, Zhou, Yukun, Slator, Paddy J. ORCID: https://orcid.org/0000-0001-6967-989X and Alexander, Daniel C. 2021. Generalised super resolution for quantitative MRI using self-supervised mixture of experts. Presented at: International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, 27 September–1 October 2021. Published in: de Bruijne, Marleen, Cattin, Philippe C., Cotin, Stéphane, Padoy, Nicolas, Speidel, Stefanie, Zheng, Yefeng and Essert, Caroline eds. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol. 12906. Cham, Switzerland: Springer, pp. 44-54. 10.1007/978-3-030-87231-1_5

Full text: MICCAI21-200.pdf — Accepted Post-Print Version (PDF, 2 MB)

Abstract

Multi-modal and multi-contrast imaging datasets have diverse voxel-wise intensities. For example, quantitative MRI acquisition protocols are designed specifically to yield multiple images with widely varying contrast that inform models relating MR signals to tissue characteristics. The large variance across images in such data prevents the use of standard normalisation techniques, making super resolution highly challenging. We propose a novel self-supervised mixture-of-experts (SS-MoE) paradigm for deep neural networks, and hence a method that enables improved super resolution of data whose image intensities are diverse and have large variance. Unlike a conventional MoE, which automatically aggregates expert outputs for each input, we explicitly assign each input to its corresponding expert based on predictive pseudo error labels, in a self-supervised fashion. A new gater module is trained to discriminate the error levels of inputs, which are estimated by Multiscale Quantile Segmentation. We show that our new paradigm reduces error and improves robustness when super resolving combined diffusion-relaxometry MRI data from the Super MUDI dataset. Our approach is suitable for a wide range of quantitative MRI techniques, and for multi-contrast or multi-modal imaging techniques in general. It could be applied to super resolve images of inadequate resolution, or to reduce the scanning time needed to acquire images of the required resolution. The source code and the trained models are available at https://github.com/hongxiangharry/SS-MoE.
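
The abstract's core idea is hard routing: a gater predicts a pseudo error level for each input and sends it to a single matching expert, instead of softly averaging all experts as in a conventional MoE. The sketch below illustrates that pattern in PyTorch under stated assumptions; the module names, patch shapes, and the simple quantile binning used in place of Multiscale Quantile Segmentation are illustrative only, not the authors' released implementation (see https://github.com/hongxiangharry/SS-MoE for that).

# Minimal sketch of self-supervised mixture-of-experts (SS-MoE) routing.
# All names, shapes, and the quantile-binning step are assumptions for
# illustration, not the authors' implementation.
import torch
import torch.nn as nn


class ExpertSR(nn.Module):
    """Toy super-resolution expert: upsamples a low-res 3D patch by 2x."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class Gater(nn.Module):
    """Classifies each input patch into one of n_levels error levels."""
    def __init__(self, channels: int = 1, n_levels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_levels)

    def forward(self, x):
        return self.head(self.features(x))  # logits over error levels


def pseudo_error_labels(errors: torch.Tensor, n_levels: int = 4) -> torch.Tensor:
    """Self-supervision: bin per-patch errors of a base SR model into quantile
    levels (a simple stand-in for the Multiscale Quantile Segmentation step);
    the resulting labels would serve as classification targets for the gater."""
    qs = torch.quantile(errors, torch.linspace(0, 1, n_levels + 1)[1:-1])
    return torch.bucketize(errors, qs)


class SSMoE(nn.Module):
    """Hard routing: each patch goes to the expert matching its predicted
    error level, rather than averaging all experts as in a conventional MoE."""
    def __init__(self, n_levels: int = 4):
        super().__init__()
        self.gater = Gater(n_levels=n_levels)
        self.experts = nn.ModuleList(ExpertSR() for _ in range(n_levels))

    def forward(self, x):
        levels = self.gater(x).argmax(dim=1)          # one label per patch
        out = [self.experts[int(l)](p.unsqueeze(0)) for p, l in zip(x, levels)]
        return torch.cat(out, dim=0)


if __name__ == "__main__":
    patches = torch.randn(8, 1, 8, 8, 8)              # low-res input patches
    print(SSMoE()(patches).shape)                     # -> (8, 1, 16, 16, 16)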

Item Type: Conference or Workshop Item (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Springer
ISBN: 978-3-030-87230-4
Date of First Compliant Deposit: 13 September 2023
Date of Acceptance: 2021
Last Modified: 17 Nov 2023 19:37
URI: https://orca.cardiff.ac.uk/id/eprint/162487
