Neural network interpretation techniques for analysis of histological images of breast abnormalities
- Authors: Fomina A.V.1, Borbat A.M.2, Karpulevich E.A.3, Naumov A.Y.3
Affiliations:
- 1 Moscow Institute of Physics and Technology (National Research University)
- 2 Russian State Research Center − Burnasyan Federal Medical Biophysical Center
- 3 Ivannikov Institute for System Programming
- Issue: Vol 24, No 6 (2022)
- Pages: 529-537
- Section: ORIGINAL ARTICLE
- URL: https://journals.rcsi.science/2079-5831/article/view/148278
- DOI: https://doi.org/10.26442/20795696.2022.6.201990
- ID: 148278
Abstract
Background. Neural networks are actively used in digital pathology to analyze histological images and support medical decision-making. A common approach is to frame the task as classification, where class labels are the model's only output. However, one should understand which areas of the image have the greatest influence on the model's response. Machine learning interpretation techniques help solve this problem.
Aim. To study the consistency of different methods of neural network interpretation when classifying histological images of the breast and to obtain an expert assessment of the results of the evaluated methods.
Materials and methods. We performed preliminary analysis and pre-processing of an existing data set used to train pre-selected neural network models. Existing methods for visualizing the attention areas of trained models were first applied to easy-to-interpret data to verify their correct use. The same neural network models were then trained on the histological data, the selected interpretation methods were applied to the histological images, and the consistency of the results was assessed together with an expert evaluation.
Results. In this paper, several machine learning interpretation methods are studied using two different neural network architectures and a set of histological images of breast abnormalities. On the test sample, the ResNet18 and ViT-B-16 models trained on the histological images achieved an accuracy of 0.89 and 0.89 and a ROC AUC of 0.99 and 0.96, respectively. The results were also evaluated by an expert using the Label Studio tool. For each pair of images, the expert was asked to answer "Yes" or "No" to the statement: "The highlighted areas generally correspond to the Malignant class." The "Yes" response rate for the ResNet_Malignant category was 0.56; for ViT_Malignant, it was 1.0.
Conclusion. Interpretability experiments were conducted with two different architectures: the ResNet18 convolutional network and the attention-based ViT-B-16 network. The outputs of the trained models were visualized using the Grad-CAM and Attention Rollout methods, respectively. Experiments were first conducted on an easy-to-interpret dataset to verify that the methods were applied correctly; the methods were then applied to the set of histological images. On the easy-to-interpret images (cat photographs), the convolutional network was more consistent with human perception; on the histological images of breast cancer, by contrast, ViT-B-16 produced results much closer to the expert's perception.
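The Attention Rollout procedure used here for ViT-B-16 can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (the article relies on the vit-explain library); it follows the published recipe of Abnar and Zuidema: add the identity to each layer's head-averaged attention matrix to account for the residual connection, renormalize the rows, and multiply the matrices across layers. All variable names are illustrative.

```python
import numpy as np

def attention_rollout(attentions):
    """Roll attention forward through the layers of a transformer.

    attentions: list of per-layer attention matrices, each of shape
    (tokens, tokens) and already averaged over heads.
    Returns a (tokens, tokens) matrix of accumulated attention; the row
    for the [CLS] token is what gets reshaped into a heatmap over patches.
    """
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for a in attentions:
        # Identity models the residual (skip) connection around attention.
        a = a + np.eye(n)
        # Renormalize so each row remains a probability distribution.
        a = a / a.sum(axis=-1, keepdims=True)
        rollout = a @ rollout
    return rollout

# Toy check: three layers of uniform attention over four tokens.
layers = [np.full((4, 4), 0.25) for _ in range(3)]
r = attention_rollout(layers)
print(r.shape)          # (4, 4)
print(r.sum(axis=-1))   # each row sums to 1
```

Because each renormalized matrix is row-stochastic, the rolled-out result is also row-stochastic, which keeps the visualized scores comparable across images.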
About the authors
Anna V. Fomina
Moscow Institute of Physics and Technology (National Research University)
Email: fomina@ispras.ru
ORCID iD: 0000-0002-2269-0271
Student, Moscow Institute of Physics and Technology (National Research University)
Russian Federation, Moscow

Artem M. Borbat
Russian State Research Center − Burnasyan Federal Medical Biophysical Center
Email: aborbat@yandex.ru
ORCID iD: 0000-0002-9699-8375
Cand. Sci. (Med.), Russian State Research Center − Burnasyan Federal Medical Biophysical Center
Russian Federation, Moscow

Evgeny A. Karpulevich
Ivannikov Institute for System Programming
Author for correspondence.
Email: karpulevich@ispras.ru
ORCID iD: 0000-0002-6771-2163
Res. Officer, Ivannikov Institute for System Programming
Russian Federation, Moscow

Anton Yu. Naumov
Ivannikov Institute for System Programming
Email: anton-naymov@yandex.ru
ORCID iD: 0000-0003-4851-7677
Res. Assist., Ivannikov Institute for System Programming
Russian Federation, Moscow

References
- Hou L, Samaras D, Kurc TM, et al. Patch-based convolutional neural network for whole slide tissue image classification. arXiv. 2016;1504.07947.
- O'Shea K, Nash R. An introduction to convolutional neural networks. arXiv. 2015;1511.08458.
- ROBINVC. Popular ML/NN/CNN/RNN Model code snippets. Available at: https://www.kaggle.com/code/nsff591/popular-ml-nn-cnn-rnn-model-code-snippets/notebook. Accessed: 9.11.2022.
- He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv. 2015;1512.03385.
- Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. arXiv. 2017;1706.03762.
- Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv. 2021;2010.11929.
- Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization. arXiv. 2019;1610.02391.
- Abnar S, Zuidema W. Quantifying attention flow in transformers. arXiv. 2020;2005.00928.
- Xie P, Zuo K, Zhang Y, et al. Interpretable classification from skin cancer histology slides using deep learning: A retrospective multicenter study. arXiv. 2019;1904.06156.
- Zhou B, Khosla A, Lapedriza A, et al. Learning deep features for discriminative localization. arXiv. 2015;1512.04150.
- Srivastava A, Kulkarni C, Huang K, et al. Imitating pathologist based assessment with interpretable and context based neural network modeling of histology images. Biomed Inform Insights. 2018;10:1178222618807481.
- Thennavan A, Beca F, Xia Y, et al. Molecular analysis of TCGA breast cancer histologic types. Cell Genom. 2021;1(3):100067.
- Borbat AM, Lishchuk SV. The first Russian breast pathology histologic images data set. Vrach i informatsionnie tekhnologii. 2020;3:25-30 (in Russian).
- Golle P. Machine learning attacks against the Asirra CAPTCHA. Proceedings of the 15th ACM conference on Computer and communications security. 2008:535-42.
- Bitton A, Esling P. ATIAM 2018-ML Project Regularized auto-encoders (VAE/WAEs) applied to latent audio synthesis. Available at: https://esling.github.io/documents/mlProj_bitton.pdf. Accessed: 9.11.2022.
- Wang L, Wu Z, Karanam S, et al. Reducing visual confusion with discriminative attention. arXiv. 2019;1811.07484.
- Abadi M, Agarwal A, Barham P, et al. TensorFlow: Large-scale machine learning on heterogeneous systems. arXiv. 2016;1603.04467.
- Gildenblat J. Class Activation Map methods implemented in Pytorch. Available at: https://github.com/jacobgil/pytorch-grad-cam. Accessed: 9.11.2022.
- Gildenblat J. Explainability for Vision Transformers (in PyTorch). Available at: https://github.com/jacobgil/vit-explain. Accessed: 9.11.2022.