[1] Yang J B, Zhang W Q, Liu J. Normalization method based on identity vector in deep neural network adaptation [J]. Journal of University of Chinese Academy of Sciences, 2017, 34(5): 633-639. DOI:10.7523/j.issn.2095-6134.2017.05.014. (in Chinese)
[2] Huang S, Wang B B, Zhu J. Information extraction of financial announcements based on document structure and deep learning [J]. Computer Engineering and Design, 2020, 41(1): 115-121. DOI:10.16208/j.issn1000-7024.2020.01.019. (in Chinese)
[3] Yurtsever E, Lambert J, Carballo A, et al. A survey of autonomous driving: common practices and emerging technologies [J]. IEEE Access, 2020, 8: 58443-58469. DOI:10.1109/ACCESS.2020.2983149.
[4] Yuan X Y, He P, Zhu Q L, et al. Adversarial examples: attacks and defenses for deep learning [J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(9): 2805-2824. DOI:10.1109/TNNLS.2018.2886017.
[5] Carlini N, Wagner D. Audio adversarial examples: targeted attacks on speech-to-text [C]//2018 IEEE Security and Privacy Workshops (SPW). May 24, 2018, San Francisco, CA, USA. IEEE, 2018: 1-7. DOI:10.1109/SPW.2018.00009.
[6] Eykholt K, Evtimov I, Fernandes E, et al. Robust physical-world attacks on deep learning visual classification [C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. June 18-23, 2018, Salt Lake City, UT, USA. IEEE, 2018: 1625-1634. DOI:10.1109/CVPR.2018.00175.
[7] Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks [EB/OL]. arXiv:1312.6199 (2014-02-19) [2020-04-22]. https://arxiv.org/abs/1312.6199.
[8] Papernot N, McDaniel P, Wu X, et al. Distillation as a defense to adversarial perturbations against deep neural networks [C]//2016 IEEE Symposium on Security and Privacy (SP). May 22-26, 2016, San Jose, CA, USA. IEEE, 2016: 582-597. DOI:10.1109/SP.2016.41.
[9] Carlini N, Wagner D. Towards evaluating the robustness of neural networks [C]//2017 IEEE Symposium on Security and Privacy (SP). May 22-26, 2017, San Jose, CA, USA. IEEE, 2017: 39-57. DOI:10.1109/SP.2017.49.
[10] Athalye A, Carlini N, Wagner D. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples [EB/OL]. arXiv:1802.00420 (2018-07-31). https://arxiv.org/abs/1802.00420.
[11] Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks [EB/OL]. arXiv:1706.06083 (2019-09-04) [2020-04-22]. https://arxiv.org/abs/1706.06083.
[12] Tsipras D, Santurkar S, Engstrom L, et al. Robustness may be at odds with accuracy [C]//International Conference on Learning Representations (ICLR). 2019.
[13] Zhang H Y, Yu Y D, Jiao J, et al. Theoretically principled trade-off between robustness and accuracy [C]//International Conference on Machine Learning (ICML). 2019: 7472-7482.
[14] Wang J Y, Zhang H C. Bilateral adversarial training: towards fast training of more robust models against adversarial attacks [C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). October 27-November 2, 2019, Seoul, Korea (South). IEEE, 2019: 6628-6637. DOI:10.1109/ICCV.2019.00673.
[15] Zhang H C, Wang J Y. Defense against adversarial attacks using feature scattering-based adversarial training [C]//Advances in Neural Information Processing Systems (NIPS 2019). 2019: 1829-1839.
[16] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples [C]//3rd International Conference on Learning Representations (ICLR 2015), Conference Track Proceedings. 2015: 1000-2010.
[17] Müller R, Kornblith S, Hinton G E. When does label smoothing help? [C]//Advances in Neural Information Processing Systems (NIPS 2019). 2019: 4696-4705.
[18] Ilyas A, Santurkar S, Tsipras D, et al. Adversarial examples are not bugs, they are features [C]//Advances in Neural Information Processing Systems (NIPS 2019). 2019: 125-136.
[19] Geirhos R, Rubisch P, Michaelis C, et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness [EB/OL]. arXiv:1811.12231 (2019-01-14) [2020-04-22]. https://arxiv.org/abs/1811.12231.
[20] Rahaman N, Baratin A, Arpit D, et al. On the spectral bias of neural networks [EB/OL]. arXiv:1806.08734 (2019-05-31) [2020-04-22]. https://arxiv.org/abs/1806.08734.
[21] Parkhi O M, Vedaldi A, Zisserman A. Deep face recognition [C]//Proceedings of the British Machine Vision Conference (BMVC 2015), Swansea, UK. British Machine Vision Association, 2015: 41.1-41.12. DOI:10.5244/c.29.41.
[22] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images [R]. Toronto: University of Toronto, 2009.
[23] Netzer Y, Wang T, Coates A, et al. Reading digits in natural images with unsupervised feature learning [C]//NIPS Workshop on Deep Learning and Unsupervised Feature Learning. 2011: 5-15.
[24] Zagoruyko S, Komodakis N. Wide residual networks [C]//Proceedings of the British Machine Vision Conference (BMVC 2016), York, UK. British Machine Vision Association, 2016: 87.1-87.12. DOI:10.5244/c.30.87.
[25] Carlini N, Athalye A, Papernot N, et al. On evaluating adversarial robustness [EB/OL]. arXiv:1902.06705 (2019-02-20) [2020-04-22]. https://arxiv.org/abs/1902.06705.
[26] Sandler M, Howard A, Zhu M, et al. MobileNetV2: inverted residuals and linear bottlenecks [C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. June 18-23, 2018, Salt Lake City, UT, USA. IEEE, 2018: 4510-4520. DOI:10.1109/CVPR.2018.00474.
[27] He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition [C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. June 27-30, 2016, Las Vegas, NV, USA. IEEE, 2016: 770-778. DOI:10.1109/CVPR.2016.90.
[28] Huang G, Liu Z, van der Maaten L, et al. Densely connected convolutional networks [C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. July 21-26, 2017, Honolulu, HI, USA. IEEE, 2017: 2261-2269. DOI:10.1109/CVPR.2017.243.
[29] Chen H Y, Liang J H, Chang S C, et al. Improving adversarial robustness via guided complement entropy [C]//2019 IEEE/CVF International Conference on Computer Vision (ICCV). October 27-November 2, 2019, Seoul, Korea (South). IEEE, 2019: 4880-4888. DOI:10.1109/ICCV.2019.00498.