Technical Article
A body examination study of brain MRI registration in adolescents using fusion generative adversarial network
ZHU Haiyan  LI Meng  JI Yuelong  ZHANG Fuchun  WANG Baiyang 

Cite this article as: ZHU H Y, LI M, JI Y L, et al. A body examination study of brain MRI registration in adolescents using fusion generative adversarial network[J]. Chin J Magn Reson Imaging, 2023, 14(2): 116-124. DOI:10.12015/issn.1674-8034.2023.02.020.


[Abstract] Objective  To address the information loss that occurs during the sampling process of the UNet framework, together with the weak learning ability of the network and the low registration accuracy in marginal brain regions, using brain MRI of adolescents. Materials and Methods  In this study, two publicly available brain MRI datasets, HBN and LPBA40, were used, and a multiscale attention mechanism generative adversarial network (MAM_GAN) was proposed to realize single-modality brain image registration. The method consists of a registration network and a discrimination network. First, multiscale attention mechanism (MAM) modules were added to the discrimination network to acquire contextual information at different scales, so that more effective brain structural features could be extracted during adversarial training. Second, a local cross-correlation loss function measuring image similarity was introduced into the registration network to constrain the similarity between the moving image and the fixed image, which further improved registration performance during the adversarial training of the two networks. The Dice coefficient (Dice), structural similarity (SSIM) and Pearson's correlation coefficient (PCC) were used to measure the registration accuracy between the registered image and the fixed image. Results  Compared with traditional methods, the Dice score of the MAM_GAN method in cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) increased by 0.013, 0.023 and 0.028, respectively, the PCC score increased by 0.004, and the SSIM score increased by 0.011. The experimental results therefore showed that the method achieved a good registration effect. Conclusions  The MAM_GAN method can better learn the structural features of the brain, improve registration performance, and provide a technical basis for the clinical diagnosis and physical examination of attention-deficit hyperactivity disorder (ADHD) in adolescents.
[Keywords] generative adversarial network; adolescent; attention-deficit hyperactivity disorder; image registration; multiscale; magnetic resonance imaging; attention mechanism; local cross-correlation
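For illustration only, the following is a minimal NumPy/SciPy sketch (not the authors' implementation) of the two quantities named in the abstract: a windowed local cross-correlation similarity of the kind commonly used as a registration loss, and the Dice overlap used to score CSF, GM and WM. The window size (win=9), the epsilon values and the helper names are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def local_cross_correlation(fixed, moving, win=9, eps=1e-5):
    """Mean local (windowed) normalized cross-correlation between two volumes."""
    f_mean = uniform_filter(fixed, size=win)
    m_mean = uniform_filter(moving, size=win)
    f_var = uniform_filter(fixed * fixed, size=win) - f_mean ** 2
    m_var = uniform_filter(moving * moving, size=win) - m_mean ** 2
    cov = uniform_filter(fixed * moving, size=win) - f_mean * m_mean
    cc = (cov ** 2) / (f_var * m_var + eps)
    return cc.mean()  # higher is better; 1 - mean can serve as a training loss

def dice(seg_a, seg_b, label):
    """Dice overlap of one anatomical label (e.g. CSF, GM or WM) in two segmentations."""
    a = (seg_a == label)
    b = (seg_b == label)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

In this sketch the cross-correlation is computed per voxel over a local window and averaged, so maximizing it encourages the warped moving image to match the fixed image structure by structure rather than only globally.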

ZHU Haiyan1   LI Meng2*   JI Yuelong2   ZHANG Fuchun2   WANG Baiyang2  

1 School of Physical Education and Health, Linyi University, Linyi 276005, China

2 School of Computer Science and Engineering, Linyi University, Linyi 276005, China

*Correspondence to: Li M, E-mail: 1426980891@qq.com

Conflicts of interest   None.

ACKNOWLEDGMENTS Shandong Social Science Planning and Research Project (No. 21CTYJ03).
Received  2022-08-09
Accepted  2022-12-12
DOI: 10.12015/issn.1674-8034.2023.02.020

