[1] Dey N, Schlemper J, Salehi S S M, et al. ContraReg: contrastive learning of multi-modality unsupervised deformable image registration[C]//2022 International Conference on Medical Image Computing and Computer Assisted Intervention. September 18-22, 2022, Singapore. Springer, 2022:66-77. DOI:10.1007/978-3-031-16446-0_7.
[2] Hu J, Luo Z W, Wang X, et al. End-to-end multimodal image registration via reinforcement learning[J]. Medical Image Analysis, 2021, 68:101878. DOI:10.1016/j.media.2020.101878.
[3] Song X R, Chao H Q, Xu X A, et al. Cross-modal attention for multi-modal image registration[J]. Medical Image Analysis, 2022, 82:102612. DOI:10.1016/j.media.2022.102612.
[4] Chen X, Diaz-Pinto A, Ravikumar N, et al. Deep learning in medical image registration[J]. Progress in Biomedical Engineering, 2021, 3(1):012003. DOI:10.1088/2516-1091/abd37c.
[5] Haskins G, Kruger U, Yan P K. Deep learning in medical image registration: a survey[J]. Machine Vision and Applications, 2020, 31(1):1-18. DOI:10.1007/s00138-020-01060-x.
[6] Haskins G, Kruecker J, Kruger U, et al. Learning deep similarity metric for 3D MR-TRUS image registration[J]. International Journal of Computer Assisted Radiology and Surgery, 2019, 14(3):417-425. DOI:10.1007/s11548-018-1875-7.
[7] Balakrishnan G, Zhao A, Sabuncu M R, et al. VoxelMorph: a learning framework for deformable medical image registration[J]. IEEE Transactions on Medical Imaging, 2019, 38(8):1788-1800. DOI:10.1109/TMI.2019.2897538.
[8] Mok T C W, Chung A C S. Large deformation diffeomorphic image registration with Laplacian pyramid networks[C]//2020 International Conference on Medical Image Computing and Computer Assisted Intervention. October 4-8, 2020, Lima, Peru. Springer, 2020:211-221. DOI:10.1007/978-3-030-59716-0_21.
[9] Guo H T, Kruger M, Xu S, et al. Deep adaptive registration of multi-modal prostate images[J]. Computerized Medical Imaging and Graphics, 2020, 84:101769. DOI:10.1016/j.compmedimag.2020.101769.
[10] Xie S N, Girshick R, Dollár P, et al. Aggregated residual transformations for deep neural networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). July 21-26, 2017, Honolulu, HI, USA. IEEE, 2017:5987-5995. DOI:10.1109/CVPR.2017.634.
[11] Sun Y Y, Moelker A, Niessen W J, et al. Towards robust CT-ultrasound registration using deep learning methods[C]//International Workshop on Machine Learning in Clinical Neuroimaging, International Workshop on Deep Learning Fails, International Workshop on Interpretability of Machine Intelligence in Medical Image Computing. Cham: Springer, 2018:43-51. DOI:10.1007/978-3-030-02628-8_5.
[12] Song X, Guo H, Xu X, et al. Cross-modal attention for MRI and ultrasound volume registration[C]//2021 International Conference on Medical Image Computing and Computer Assisted Intervention. September 27-October 1, 2021, Strasbourg, France. Springer, 2021:66-75. DOI:10.1007/978-3-030-87202-1_7.
[13] Chen X C, Zhou B, Xie H D, et al. Dual-branch squeeze-fusion-excitation module for cross-modality registration of cardiac SPECT and CT[C]//2022 International Conference on Medical Image Computing and Computer Assisted Intervention. September 18-22, 2022, Singapore. Springer, 2022:46-55. DOI:10.1007/978-3-031-16446-0_5.
[14] Oktay O, Schlemper J, Le Folgoc L, et al. Attention U-Net: learning where to look for the pancreas[EB/OL]. arXiv:1804.03999. (2018-04-11)[2023-06-21]. https://arxiv.org/abs/1804.03999.
[15] Zheng S X, Lu J C, Zhao H S, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 20-25, 2021, Nashville, TN, USA. IEEE, 2021:6877-6886. DOI:10.1109/CVPR46437.2021.00681.
[16] Lowe D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2):91-110. DOI:10.1023/B:VISI.0000029664.99615.94.
[17] Chen J, Tian J. Real-time multi-modal rigid registration based on a novel symmetric-SIFT descriptor[J]. Progress in Natural Science, 2009, 19(5):643-651. DOI:10.1016/j.pnsc.2008.06.029.
[18] Jaderberg M, Simonyan K, Zisserman A, et al. Spatial transformer networks[EB/OL]. arXiv:1506.02025. (2015-06-05)[2023-06-21]. https://arxiv.org/abs/1506.02025.
[19] Fonov V, Evans A C, Botteron K, et al. Unbiased average age-appropriate atlases for pediatric studies[J]. NeuroImage, 2011, 54(1):313-327. DOI:10.1016/j.neuroimage.2010.07.033.
[20] Shapey J, Kujawa A, Dorent R, et al. Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm[J]. Scientific Data, 2021, 8(1):286. DOI:10.1038/s41597-021-01064-w.
[21] Smith S, Bannister P R, Beckmann C, et al. FSL: new tools for functional and structural brain image analysis[J]. NeuroImage, 2001, 13(6):249. DOI:10.1016/S1053-8119(01)91592-7.
[22] Beare R, Lowekamp B, Yaniv Z. Image segmentation, registration and characterization in R with SimpleITK[J]. Journal of Statistical Software, 2018, 86(8). DOI:10.18637/jss.v086.i08.
[23] Avants B B, Epstein C L, Grossman M, et al. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain[J]. Medical Image Analysis, 2008, 12(1):26-41. DOI:10.1016/j.media.2007.06.004.
[24] Avants B B, Tustison N, Song G. Advanced normalization tools (ANTS)[J]. Insight J, 2009:1-35. DOI:10.54294/uvnhin.