AUGMENTING THE TRAINING SET OF HISTOLOGICAL IMAGES WITH ADVERSARIAL EXAMPLES


Abstract

In this paper, we consider the problem of augmenting a set of histological images with adversarial examples to improve the robustness of neural network classifiers trained on the augmented set against adversarial attacks. In recent years, neural network methods have developed rapidly and achieved impressive results. However, they are susceptible to so-called adversarial attacks: they make incorrect predictions on input images with imperceptible added noise. The reliability of neural network methods therefore remains an important area of research. In this paper, we compare different training set augmentation methods for improving the robustness of neural histological image classifiers against adversarial attacks. To this end, we augment the training set with adversarial examples generated by several popular methods.
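As a rough, self-contained illustration of this kind of augmentation (a minimal sketch in PyTorch, not the exact pipeline of the paper), the code below generates adversarial examples with the fast gradient sign method (FGSM) and trains on a batch composed of clean images and their adversarial copies; the model, the epsilon value, and the training loop structure are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_examples(model, images, labels, epsilon=0.03):
    # Generate FGSM adversarial examples for one batch by perturbing
    # each pixel in the direction of the sign of the input gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def train_step(model, optimizer, images, labels, epsilon=0.03):
    # One optimization step on the batch augmented with adversarial copies.
    model.eval()                                  # gradients w.r.t. inputs only
    adv = fgsm_examples(model, images, labels, epsilon)
    model.train()
    batch = torch.cat([images, adv])              # clean + adversarial examples
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

In an actual experiment, the images would be normalized histological patches and epsilon would be chosen so that the perturbation stays visually imperceptible; stronger attacks (e.g., PGD) can be substituted for FGSM in the same loop.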

About the authors

N. LOKSHIN

Moscow State University

Email: lockshin1999@mail.ru
Moscow, Russia

A. KHVOSTIKOV

Moscow State University

Email: khvostikov@cs.msu.ru
Moscow, Russia

A. KRYLOV

Moscow State University

Primary contact for editorial correspondence.
Email: kryl@cs.msu.ru
Moscow, Russia



Copyright © Н.Д. Локшин, А.В. Хвостиков, А.С. Крылов, 2023
