[1] Ouali Y, Hudelot C, Tami M. An overview of deep semi-supervised learning[EB/OL]. arXiv:2006.05278. (2020-07-06)[2021-04-15]. https://arxiv.org/abs/2006.05278.
[2] Oliver A, Odena A, Raffel C, et al. Realistic evaluation of deep semi-supervised learning algorithms[EB/OL]. arXiv:1804.09170. (2019-06-17)[2021-04-15]. https://arxiv.org/abs/1804.09170.
[3] Chen T, Kornblith S, Swersky K, et al. Big self-supervised models are strong semi-supervised learners[EB/OL]. arXiv:2006.10029. (2020-10-26)[2021-04-15]. https://arxiv.org/abs/2006.10029.
[4] Xie Q Z, Luong M T, Hovy E, et al. Self-training with noisy student improves ImageNet classification[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 13-19, 2020, Seattle, WA, USA. IEEE, 2020:10684-10695.
[5] Ibrahim M S, Vahdat A, Ranjbar M, et al. Semi-supervised semantic image segmentation with self-correcting networks[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 13-19, 2020, Seattle, WA, USA. IEEE, 2020:12712-12722.
[6] Ouali Y, Hudelot C, Tami M. Semi-supervised semantic segmentation with cross-consistency training[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 13-19, 2020, Seattle, WA, USA. IEEE, 2020:12671-12681.
[7] He J X, Gu J T, Shen J J, et al. Revisiting self-training for neural sequence generation[EB/OL]. arXiv:1909.13788. (2020-10-18)[2021-04-15]. https://arxiv.org/abs/1909.13788.
[8] Chen L X, Ruan W T, Liu X Y, et al. SeqVAT:virtual adversarial training for semi-supervised sequence labeling[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online. Stroudsburg, PA, USA:Association for Computational Linguistics, 2020:8801-8811.
[9] Li Y T, Liu L, Tan R T. Decoupled certainty-driven consistency loss for semi-supervised learning[EB/OL]. arXiv:1901.05657. (2020-07-31)[2021-04-15]. https://arxiv.org/abs/1901.05657.
[10] Tarvainen A, Valpola H. Mean teachers are better role models:weight-averaged consistency targets improve semi-supervised deep learning results[EB/OL]. arXiv:1703.01780. (2018-04-16)[2021-04-15]. https://arxiv.org/abs/1703.01780.
[11] Arazo E, Ortego D, Albert P, et al. Pseudo-labeling and confirmation bias in deep semi-supervised learning[C]//2020 International Joint Conference on Neural Networks (IJCNN). July 19-24, 2020, Glasgow, UK. IEEE, 2020:1-8.
[12] Zhang H, Cisse M, Dauphin Y N, et al. Mixup:beyond empirical risk minimization[EB/OL]. arXiv:1710.09412. (2018-04-27)[2021-04-15]. https://arxiv.org/abs/1710.09412.
[13] Veličković P, Cucurull G, Casanova A, et al. Graph attention networks[EB/OL]. arXiv:1710.10903. (2018-02-04)[2021-04-15]. https://arxiv.org/abs/1710.10903.
[14] Riloff E, Wiebe J. Learning extraction patterns for subjective expressions[C]//Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing. July 11-12, 2003, Sapporo, Japan. Association for Computational Linguistics, 2003:105-112.
[15] Laine S, Aila T. Temporal ensembling for semi-supervised learning[EB/OL]. arXiv:1610.02242. (2017-03-15)[2021-04-15]. https://arxiv.org/abs/1610.02242.
[16] Ke Z H, Wang D Y, Yan Q, et al. Dual student:breaking the limits of the teacher in semi-supervised learning[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). October 27-November 2, 2019, Seoul, South Korea. IEEE, 2019:6728-6736.
[17] Miyato T, Maeda S I, Koyama M, et al. Virtual adversarial training:a regularization method for supervised and semi-supervised learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8):1979-1993.
[18] Verma V, Lamb A, Kannala J, et al. Interpolation consistency training for semi-supervised learning[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. August 10-16, 2019, Macao, China. California:International Joint Conferences on Artificial Intelligence Organization, 2019:3635-3641.
[19] Berthelot D, Carlini N, Goodfellow I J, et al. MixMatch:a holistic approach to semi-supervised learning[EB/OL]. arXiv:1905.02249. (2019-10-23)[2021-04-15]. https://arxiv.org/abs/1905.02249.
[20] Xie Q Z, Dai Z H, Hovy E, et al. Unsupervised data augmentation for consistency training[EB/OL]. arXiv:1904.12848. (2020-11-05)[2021-04-15]. https://arxiv.org/abs/1904.12848.
[21] Lee D H. Pseudo-label:the simple and efficient semi-supervised learning method for deep neural networks[EB/OL]. (2013-07)[2021-04-15]. https://www.researchgate.net/publication/280581078_Pseudo-Label_The_Simple_and_Efficient_Semi-Supervised_Learning_Method_for_Deep_Neural_Networks.
[22] Zhuang C X, Zhai A, Yamins D. Local aggregation for unsupervised learning of visual embeddings[C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). October 27-November 2, 2019, Seoul, South Korea. IEEE, 2019:6001-6011.
[23] Kuo C W, Ma C Y, Huang J B, et al. Manifold graph with learned prototypes for semi-supervised image classification[EB/OL]. arXiv:1906.05202. (2019-06-13)[2021-04-15]. https://arxiv.org/abs/1906.05202.
[24] Srivastava N, Hinton G E, Krizhevsky A, et al. Dropout:a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15:1929-1958.
[25] Salimans T, Kingma D P. Weight normalization:a simple reparameterization to accelerate training of deep neural networks[EB/OL]. arXiv:1602.07868. (2016-06-04)[2021-04-15]. https://arxiv.org/abs/1602.07868.
[26] Tanaka D, Ikami D, Yamasaki T, et al. Joint optimization framework for learning with noisy labels[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. June 18-23, 2018, Salt Lake City, UT, USA. IEEE, 2018:5552-5560.
[27] Grandvalet Y, Bengio Y. Semi-supervised learning by entropy minimization[C]//NIPS'04:Proceedings of the 17th International Conference on Neural Information Processing Systems. December 13-18, 2004, Vancouver, British Columbia, Canada. MIT Press, 2004:529-536.
[28] Iscen A, Tolias G, Avrithis Y, et al. Label propagation for deep semi-supervised learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). June 15-20, 2019, Long Beach, CA, USA. IEEE, 2019:5065-5074.
[29] Chen T, Kornblith S, Norouzi M, et al. A simple framework for contrastive learning of visual representations[EB/OL]. arXiv:2002.05709. (2020-07-01)[2021-04-15]. https://arxiv.org/abs/2002.05709.
[30] Zhang Y, Xiang T, Hospedales T M, et al. Deep mutual learning[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. June 18-23, 2018, Salt Lake City, UT, USA. IEEE, 2018:4320-4328.
[31] Athiwaratkun B, Finzi M, Izmailov P, et al. There are many consistent explanations of unlabeled data:why you should average[EB/OL]. arXiv:1806.05594. (2019-02-21)[2021-04-15]. https://arxiv.org/abs/1806.05594.
[32] Van der Maaten L, Hinton G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(86):2579-2605.