[1] Wiley C A. Synthetic aperture radars[J]. IEEE Transactions on Aerospace and Electronic Systems, 1985, AES-21(3): 440-443. DOI: 10.1109/TAES.1985.310578.
[2] Pan Z G, Wang X L, Li Z Y. Adaptive bit allocation BAQ algorithm for SAR raw data compression[J]. Journal of University of Chinese Academy of Sciences, 2017, 34(1): 106-111. DOI: 10.7523/j.issn.2095-6134.2017.01.014. (in Chinese)
[3] Pan Z G, Gao X. Improved SPIHT algorithm for texture image compression[J]. Journal of the Graduate University of Chinese Academy of Sciences, 2010, 27(2): 222-227. DOI: 10.7523/j.issn.2095-6134.2010.2.012. (in Chinese)
[4] Li M, Liu C. Super-resolution reconstruction of SAR images based on a feature-reuse dilated-residual network[J]. Journal of Radars, 2020, 9(2): 363-372. DOI: 10.12000/JR19110. (in Chinese)
[5] Gu F, Zhang H, Wang C, et al. SAR image super-resolution based on noise-free generative adversarial network[C]//IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. July 28-August 2, 2019, Yokohama, Japan. IEEE, 2019: 2575-2578. DOI: 10.1109/IGARSS.2019.8899202.
[6] Cen X, Song X, Li Y C, et al. A deep learning-based super-resolution model for bistatic SAR image[C]//2021 International Conference on Electronics, Circuits and Information Engineering (ECIE). January 22-24, 2021, Zhengzhou, China. IEEE, 2021: 228-233. DOI: 10.1109/ECIE52353.2021.00056.
[7] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[EB/OL]. (2020-10-22)[2023-03-22]. https://arxiv.org/abs/2010.11929.
[8] Schönfeld E, Schiele B, Khoreva A. A U-net based discriminator for generative adversarial networks[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 13-19, 2020, Seattle, WA, USA. IEEE, 2020: 8204-8213. DOI: 10.1109/CVPR42600.2020.00823.
[9] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2014-09-04)[2023-03-22]. https://arxiv.org/abs/1409.1556.
[10] He K M, Chen X L, Xie S N, et al. Masked autoencoders are scalable vision learners[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 18-24, 2022, New Orleans, LA, USA. IEEE, 2022: 15979-15988. DOI: 10.1109/CVPR52688.2022.01553.
[11] Carion N, Massa F, Synnaeve G, et al. End-to-end object detection with transformers[C]//European Conference on Computer Vision. Cham: Springer, 2020: 213-229. DOI: 10.1007/978-3-030-58452-8_13.
[12] Ramachandran P, Parmar N, Vaswani A, et al. Stand-alone self-attention in vision models[EB/OL]. (2019-06-13)[2023-03-22]. https://arxiv.org/abs/1906.05909.
[13] Si C Y, Yu W H, Zhou P, et al. Inception transformer[EB/OL]. (2022-05-25)[2023-03-22]. https://arxiv.org/abs/2205.12956.
[14] Xu W J, Xu Y F, Chang T, et al. Co-scale conv-attentional image transformers[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). October 10-17, 2021, Montreal, QC, Canada. IEEE, 2021: 9961-9970. DOI: 10.1109/ICCV48922.2021.00983.
[15] Xu Y F, Zhang Q M, Zhang J, et al. ViTAE: Vision transformer advanced by exploring intrinsic inductive bias[EB/OL]. (2021-06-07)[2023-03-22]. https://arxiv.org/abs/2106.03348.
[16] Chen B Y, Li P X, Li C M, et al. GLiT: Neural architecture search for global and local image transformer[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). October 10-17, 2021, Montreal, QC, Canada. IEEE, 2021: 12-21. DOI: 10.1109/ICCV48922.2021.00008.
[17] Yang R, Ma H L, Wu J, et al. ScalableViT: Rethinking the context-oriented generalization of vision transformer[C]//European Conference on Computer Vision. Cham: Springer, 2022: 480-496. DOI: 10.1007/978-3-031-20053-3_28.
[18] Gu J J, Dong C. Interpreting super-resolution networks with local attribution maps[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 20-25, 2021, Nashville, TN, USA. IEEE, 2021: 9195-9204. DOI: 10.1109/CVPR46437.2021.00908.
[19] Park N, Kim S. How do vision transformers work?[EB/OL]. (2022-02-14)[2023-03-22]. https://arxiv.org/abs/2202.06709.
[20] Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015: 234-241. DOI: 10.1007/978-3-319-24574-4_28.
[21] Lata K, Dave M, Nishanth K N. Image-to-image translation using generative adversarial network[C]//2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). June 12-14, 2019, Coimbatore, India. IEEE, 2019: 186-189. DOI: 10.1109/ICECA.2019.8822195.
[22] Miyato T, Kataoka T, Koyama M, et al. Spectral normalization for generative adversarial networks[EB/OL]. (2018-02-16)[2023-03-22]. https://arxiv.org/abs/1802.05957.
[23] Ledig C, Theis L, Huszár F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). July 21-26, 2017, Honolulu, HI, USA. IEEE, 2017: 105-114. DOI: 10.1109/CVPR.2017.19.
[24] Wang X T, Yu K, Wu S X, et al. ESRGAN: Enhanced super-resolution generative adversarial networks[C]//European Conference on Computer Vision. Cham: Springer, 2019: 63-79. DOI: 10.1007/978-3-030-11021-5_5.
[25] Wang X T, Xie L B, Dong C, et al. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data[C]//2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). October 11-17, 2021, Montreal, QC, Canada. IEEE, 2021: 1905-1914. DOI: 10.1109/ICCVW54120.2021.00217.