<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article>
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">ARTIFICIAL INTELLIGENCE AND DECISION MAKING</journal-id><journal-title-group><journal-title xml:lang="en">ARTIFICIAL INTELLIGENCE AND DECISION MAKING</journal-title><trans-title-group xml:lang="ru"><trans-title>Искусственный интеллект и принятие решений</trans-title></trans-title-group></journal-title-group><issn publication-format="print">2071-8594</issn></journal-meta><article-meta><article-id pub-id-type="publisher-id">278310</article-id><article-id pub-id-type="doi">10.14357/20718594240410</article-id><article-categories><subj-group subj-group-type="toc-heading" xml:lang="en"><subject>Analysis of Signals, Audio and Video Information</subject></subj-group><subj-group subj-group-type="toc-heading" xml:lang="ru"><subject>Анализ сигналов, аудио и видео информации</subject></subj-group><subj-group subj-group-type="article-type"><subject>Research Article</subject></subj-group></article-categories><title-group><article-title xml:lang="en">A model for explainable malignancy assessment of pulmonary nodules on CT images</article-title><trans-title-group xml:lang="ru"><trans-title>Модель для объяснимой оценки злокачественности легочных узелков на КТ-изображениях</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><name-alternatives><name xml:lang="en"><surname>Dumaev</surname><given-names>Rinat I.</given-names></name><name xml:lang="ru"><surname>Думаев</surname><given-names>Ринат Ильгизович</given-names></name></name-alternatives><address><country country="RU">Russian Federation</country></address><bio xml:lang="en"><p>Graduate student</p></bio><bio 
xml:lang="ru"><p>Аспирант</p></bio><email>dumaevrinat@gmail.com</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="en"><surname>Molodyakov</surname><given-names>Sergey A.</given-names></name><name xml:lang="ru"><surname>Молодяков</surname><given-names>Сергей Александрович</given-names></name></name-alternatives><address><country country="RU">Russian Federation</country></address><bio xml:lang="en"><p>Doctor of technical sciences, docent, Professor</p></bio><bio xml:lang="ru"><p>Доктор технических наук, доцент, профессор</p></bio><email>samolodyakov@mail.ru</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><name-alternatives><name xml:lang="en"><surname>Utkin</surname><given-names>Lev V.</given-names></name><name xml:lang="ru"><surname>Уткин</surname><given-names>Лев Владимирович</given-names></name></name-alternatives><address><country country="RU">Russian Federation</country></address><bio xml:lang="en"><p>Doctor of technical sciences, professor, Head of the Research Laboratory of Neural Network Technologies and Artificial Intelligence</p></bio><bio xml:lang="ru"><p>Доктор технических наук, профессор, заведующий научно-исследовательской лабораторией нейросетевых технологий и искусственного интеллекта высшей школы технологии искусственного интеллекта</p></bio><email>lev.utkin@gmail.com</email><xref ref-type="aff" rid="aff1"/></contrib></contrib-group><aff-alternatives id="aff1"><aff><institution xml:lang="en">Peter the Great St. 
Petersburg Polytechnic University</institution></aff><aff><institution xml:lang="ru">Санкт-Петербургский политехнический университет Петра Великого</institution></aff></aff-alternatives><pub-date date-type="pub" iso-8601-date="2024-12-10" publication-format="electronic"><day>10</day><month>12</month><year>2024</year></pub-date><issue>4</issue><issue-title xml:lang="en"/><issue-title xml:lang="ru"/><fpage>123</fpage><lpage>134</lpage><history><date date-type="received" iso-8601-date="2025-01-28"><day>28</day><month>01</month><year>2025</year></date><date date-type="accepted" iso-8601-date="2025-01-28"><day>28</day><month>01</month><year>2025</year></date></history><permissions><copyright-statement xml:lang="en">Copyright ©</copyright-statement><copyright-statement xml:lang="ru">Copyright ©</copyright-statement></permissions><self-uri xlink:href="https://journals.rcsi.science/2071-8594/article/view/278310">https://journals.rcsi.science/2071-8594/article/view/278310</self-uri><abstract xml:lang="en"><p>To increase the transparency of modern computer-aided diagnosis (CAD) systems for assessing the malignancy of lung nodules, an interpretable model based on generalized additive models and concept-based learning is proposed. The model detects a set of clinically significant attributes in addition to the final malignancy regression score and learns the association between the lung nodule attributes and the final diagnosis decision, as well as their contributions to the decision. The proposed concept-based learning framework provides human-readable explanations in terms of different concepts (numerical and categorical), their values, and their contribution to the final prediction. Numerical experiments with the LIDC-IDRI dataset demonstrate that the diagnosis results obtained using the proposed model, which explicitly explores internal relationships, are in line with similar patterns observed in clinical practice. 
Additionally, the proposed model shows competitive classification and nodule attribute scoring performance, highlighting its potential for effective decision-making in lung nodule diagnosis.</p></abstract><trans-abstract xml:lang="ru"><p>Для решения проблемы непрозрачности современных систем оценки злокачественности образований легких предложена основанная на понятиях объяснимая модель с использованием обобщенных аддитивных моделей. Модель обнаруживает набор клинически значимых признаков в дополнение к окончательному показателю злокачественности новообразований и изучает связь и вклад между атрибутами узелков в легких и окончательным решением. Она предоставляет понятные человеку объяснения с точки зрения различных признаков, таких как плотность, внутренняя текстура, их значений и вклада в окончательный прогноз. Численные эксперименты показали, что результаты диагностики, полученные с использованием модели, соответствуют аналогичным закономерностям, наблюдаемым в клинической практике между атрибутами узелков в легких и показателем злокачественности новообразований. Приведены примеры прогнозов, сгенерированных с помощью разработанной модели.</p></trans-abstract><kwd-group xml:lang="en"><kwd>explainable artificial intelligence</kwd><kwd>medical image processing</kwd><kwd>pulmonary nodules</kwd><kwd>generalized additive model</kwd><kwd>neural network</kwd><kwd>concept-based learning</kwd></kwd-group><kwd-group xml:lang="ru"><kwd>объяснимый искусственный интеллект</kwd><kwd>обработка медицинских изображений</kwd><kwd>легочные узелки</kwd><kwd>обобщенная аддитивная модель</kwd><kwd>нейронная сеть</kwd><kwd>обучение на основе понятий</kwd></kwd-group><funding-group/></article-meta></front><body></body><back><ref-list>
<ref id="B1"><label>1.</label><citation-alternatives><mixed-citation xml:lang="en">Majkowska A., Mittal S., Steiner D. F., Reicher J. J., McKinney S. M., Duggan G. E., Eswaran K., Cameron Chen P.-H., Liu Y., Kalidindi S. R., et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation // Radiology. 2020. V. 294. No 2. P. 421–431.</mixed-citation><mixed-citation xml:lang="ru">Majkowska A., Mittal S., Steiner D. F., Reicher J. J., McKinney S. M., Duggan G. E., Eswaran K., Cameron Chen P.-H., Liu Y., Kalidindi S. R., et al. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation // Radiology. 2020. V. 294. No 2. P. 421–431.</mixed-citation></citation-alternatives></ref>
<ref id="B2"><label>2.</label><citation-alternatives><mixed-citation xml:lang="en">Xu Y., Kong M., Xie W., Duan R., Fang Z., Lin Y., Zhu Q., Tang S., Wu F., Yao Y.-F. Deep sequential feature learning in clinical image classification of infectious keratitis // Engineering. 2021. V. 7. No 7. P. 1002–1010.</mixed-citation><mixed-citation xml:lang="ru">Xu Y., Kong M., Xie W., Duan R., Fang Z., Lin Y., Zhu Q., Tang S., Wu F., Yao Y.-F. Deep sequential feature learning in clinical image classification of infectious keratitis // Engineering. 2021. V. 7. No 7. P. 1002–1010.</mixed-citation></citation-alternatives></ref>
<ref id="B3"><label>3.</label><citation-alternatives><mixed-citation xml:lang="en">Bonavita I., Rafael-Palou X., Ceresa M., Piella G., Ribas V., Ballester M. A. G. Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline // Computer methods and programs in biomedicine. 2020. V. 185. P. 105172.</mixed-citation><mixed-citation xml:lang="ru">Bonavita I., Rafael-Palou X., Ceresa M., Piella G., Ribas V., Ballester M. A. G. Integration of convolutional neural networks for pulmonary nodule malignancy assessment in a lung cancer classification pipeline // Computer methods and programs in biomedicine. 2020. V. 185. P. 105172.</mixed-citation></citation-alternatives></ref>
<ref id="B4"><label>4.</label><citation-alternatives><mixed-citation xml:lang="en">Wang J., Zhu H., Wang S.-H., Zhang Y.-D. A review of deep learning on medical image analysis // Mobile Networks and Applications. 2021. V. 26. P. 351–380.</mixed-citation><mixed-citation xml:lang="ru">Wang J., Zhu H., Wang S.-H., Zhang Y.-D. A review of deep learning on medical image analysis // Mobile Networks and Applications. 2021. V. 26. P. 351–380.</mixed-citation></citation-alternatives></ref>
<ref id="B5"><label>5.</label><citation-alternatives><mixed-citation xml:lang="en">Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead // Nature Machine Intelligence. 2019. V. 1. No 5. P. 206–215. Access mode: http://dx.doi.org/10.1038/s42256-019-0048-x.</mixed-citation><mixed-citation xml:lang="ru">Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead // Nature Machine Intelligence. 2019. V. 1. No 5. P. 206–215. Access mode: http://dx.doi.org/10.1038/s42256-019-0048-x.</mixed-citation></citation-alternatives></ref>
<ref id="B6"><label>6.</label><citation-alternatives><mixed-citation xml:lang="en">Adebayo J., Gilmer J., Muelly M., Goodfellow I., Hardt M., Kim B. Sanity checks for saliency maps // Advances in neural information processing systems. 2018. V. 31.</mixed-citation><mixed-citation xml:lang="ru">Adebayo J., Gilmer J., Muelly M., Goodfellow I., Hardt M., Kim B. Sanity checks for saliency maps // Advances in neural information processing systems. 2018. V. 31.</mixed-citation></citation-alternatives></ref>
<ref id="B7"><label>7.</label><citation-alternatives><mixed-citation xml:lang="en">Hendricks L. A., Hu R., Darrell T., Akata Z. Grounding visual explanations // Proceedings of the European conference on computer vision (ECCV). 2018. P. 264–279.</mixed-citation><mixed-citation xml:lang="ru">Hendricks L. A., Hu R., Darrell T., Akata Z. Grounding visual explanations // Proceedings of the European conference on computer vision (ECCV). 2018. P. 264–279.</mixed-citation></citation-alternatives></ref>
<ref id="B8"><label>8.</label><citation-alternatives><mixed-citation xml:lang="en">Zhang Z., Xie Y., Xing F., McGough M., Yang L. MDNet: A semantically and visually interpretable medical image diagnosis network // Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. P. 6428–6436.</mixed-citation><mixed-citation xml:lang="ru">Zhang Z., Xie Y., Xing F., McGough M., Yang L. MDNet: A semantically and visually interpretable medical image diagnosis network // Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. P. 6428–6436.</mixed-citation></citation-alternatives></ref>
<ref id="B9"><label>9.</label><citation-alternatives><mixed-citation xml:lang="en">Kim B., Wattenberg M., Gilmer J., Cai C., Wexler J., Viegas F., et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV) // International conference on machine learning / PMLR. 2018. P. 2668–2677.</mixed-citation><mixed-citation xml:lang="ru">Kim B., Wattenberg M., Gilmer J., Cai C., Wexler J., Viegas F., et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV) // International conference on machine learning / PMLR. 2018. P. 2668–2677.</mixed-citation></citation-alternatives></ref>
<ref id="B10"><label>10.</label><citation-alternatives><mixed-citation xml:lang="en">Chen Z., Bei Y., Rudin C. Concept whitening for interpretable image recognition // Nature Machine Intelligence. 2020. V. 2. No 12. P. 772–782.</mixed-citation><mixed-citation xml:lang="ru">Chen Z., Bei Y., Rudin C. Concept whitening for interpretable image recognition // Nature Machine Intelligence. 2020. V. 2. No 12. P. 772–782.</mixed-citation></citation-alternatives></ref>
<ref id="B11"><label>11.</label><citation-alternatives><mixed-citation xml:lang="en">Ribeiro M. T., Singh S., Guestrin C. “Why should I trust you?”: Explaining the predictions of any classifier // Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016. P. 1135–1144.</mixed-citation><mixed-citation xml:lang="ru">Ribeiro M. T., Singh S., Guestrin C. “Why should I trust you?”: Explaining the predictions of any classifier // Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016. P. 1135–1144.</mixed-citation></citation-alternatives></ref>
<ref id="B12"><label>12.</label><citation-alternatives><mixed-citation xml:lang="en">Fong R., Patrick M., Vedaldi A. Understanding deep networks via extremal perturbations and smooth masks // Proceedings of the IEEE/CVF international conference on computer vision. 2019. P. 2950–2958.</mixed-citation><mixed-citation xml:lang="ru">Fong R., Patrick M., Vedaldi A. Understanding deep networks via extremal perturbations and smooth masks // Proceedings of the IEEE/CVF international conference on computer vision. 2019. P. 2950–2958.</mixed-citation></citation-alternatives></ref>
<ref id="B13"><label>13.</label><citation-alternatives><mixed-citation xml:lang="en">Wang H., Wang Z., Du M., Yang F., Zhang Z., Ding S., Mardziel P., Hu X. Score-CAM: Score-weighted visual explanations for convolutional neural networks // Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. 2020. P. 24–25.</mixed-citation><mixed-citation xml:lang="ru">Wang H., Wang Z., Du M., Yang F., Zhang Z., Ding S., Mardziel P., Hu X. Score-CAM: Score-weighted visual explanations for convolutional neural networks // Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. 2020. P. 24–25.</mixed-citation></citation-alternatives></ref>
<ref id="B14"><label>14.</label><citation-alternatives><mixed-citation xml:lang="en">Chen C., Li O., Tao D., Barnett A., Rudin C., Su J. K. This looks like that: deep learning for interpretable image recognition // Advances in neural information processing systems. 2019. V. 32.</mixed-citation><mixed-citation xml:lang="ru">Chen C., Li O., Tao D., Barnett A., Rudin C., Su J. K. This looks like that: deep learning for interpretable image recognition // Advances in neural information processing systems. 2019. V. 32.</mixed-citation></citation-alternatives></ref>
<ref id="B15"><label>15.</label><citation-alternatives><mixed-citation xml:lang="en">Fang Z., Kuang K., Lin Y., Wu F., Yao Y.-F. Concept-based explanation for fine-grained images and its application in infectious keratitis classification // Proceedings of the 28th ACM international conference on Multimedia. 2020. P. 700–708.</mixed-citation><mixed-citation xml:lang="ru">Fang Z., Kuang K., Lin Y., Wu F., Yao Y.-F. Concept-based explanation for fine-grained images and its application in infectious keratitis classification // Proceedings of the 28th ACM international conference on Multimedia. 2020. P. 700–708.</mixed-citation></citation-alternatives></ref>
<ref id="B16"><label>16.</label><citation-alternatives><mixed-citation xml:lang="en">Graziani M., Andrearczyk V., Müller H. Regression concept vectors for bidirectional explanations in histopathology // Understanding and Interpreting Machine Learning in Medical Image Computing Applications: First International Workshops, MLCN 2018, DLF 2018, iMIMIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain. 2018. Proceedings 1 / Springer. 2018. P. 124–132.</mixed-citation><mixed-citation xml:lang="ru">Graziani M., Andrearczyk V., Müller H. Regression concept vectors for bidirectional explanations in histopathology // Understanding and Interpreting Machine Learning in Medical Image Computing Applications: First International Workshops, MLCN 2018, DLF 2018, iMIMIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain. 2018. Proceedings 1 / Springer. 2018. P. 124–132.</mixed-citation></citation-alternatives></ref>
<ref id="B17"><label>17.</label><citation-alternatives><mixed-citation xml:lang="en">Lucieri A., Bajwa M. N., Braun S. A., Malik M. I., Dengel A., Ahmed S. On interpretability of deep learning based skin lesion classifiers using concept activation vectors // 2020 international joint conference on neural networks (IJCNN) / IEEE. 2020. P. 1–10.</mixed-citation><mixed-citation xml:lang="ru">Lucieri A., Bajwa M. N., Braun S. A., Malik M. I., Dengel A., Ahmed S. On interpretability of deep learning based skin lesion classifiers using concept activation vectors // 2020 international joint conference on neural networks (IJCNN) / IEEE. 2020. P. 1–10.</mixed-citation></citation-alternatives></ref>
<ref id="B18"><label>18.</label><citation-alternatives><mixed-citation xml:lang="en">Truong M. T., Ko J. P., Rossi S. E., Rossi I., Viswanathan C., Bruzzi J. F., Marom E. M., Erasmus J. J. Update in the evaluation of the solitary pulmonary nodule // Radiographics. 2014. V. 34. No 6. P. 1658–1679.</mixed-citation><mixed-citation xml:lang="ru">Truong M. T., Ko J. P., Rossi S. E., Rossi I., Viswanathan C., Bruzzi J. F., Marom E. M., Erasmus J. J. Update in the evaluation of the solitary pulmonary nodule // Radiographics. 2014. V. 34. No 6. P. 1658–1679.</mixed-citation></citation-alternatives></ref>
<ref id="B19"><label>19.</label><citation-alternatives><mixed-citation xml:lang="en">Agarwal R., Melnick L., Frosst N., Zhang X., Lengerich B., Caruana R., Hinton G. E. Neural additive models: Interpretable machine learning with neural nets // Advances in neural information processing systems. 2021. V. 34. P. 4699–4711.</mixed-citation><mixed-citation xml:lang="ru">Agarwal R., Melnick L., Frosst N., Zhang X., Lengerich B., Caruana R., Hinton G. E. Neural additive models: Interpretable machine learning with neural nets // Advances in neural information processing systems. 2021. V. 34. P. 4699–4711.</mixed-citation></citation-alternatives></ref>
<ref id="B20"><label>20.</label><citation-alternatives><mixed-citation xml:lang="en">Yang Z., Zhang A., Sudjianto A. GAMI-Net: An explainable neural network based on generalized additive models with structured interactions // Pattern Recognition. 2021. V. 120. P. 108192.</mixed-citation><mixed-citation xml:lang="ru">Yang Z., Zhang A., Sudjianto A. GAMI-Net: An explainable neural network based on generalized additive models with structured interactions // Pattern Recognition. 2021. V. 120. P. 108192.</mixed-citation></citation-alternatives></ref>
<ref id="B21"><label>21.</label><citation-alternatives><mixed-citation xml:lang="en">Kumar N., Berg A. C., Belhumeur P. N., Nayar S. K. Attribute and simile classifiers for face verification // 2009 IEEE 12th international conference on computer vision / IEEE. 2009. P. 365–372.</mixed-citation><mixed-citation xml:lang="ru">Kumar N., Berg A. C., Belhumeur P. N., Nayar S. K. Attribute and simile classifiers for face verification // 2009 IEEE 12th international conference on computer vision / IEEE. 2009. P. 365–372.</mixed-citation></citation-alternatives></ref>
<ref id="B22"><label>22.</label><citation-alternatives><mixed-citation xml:lang="en">Lampert C. H., Nickisch H., Harmeling S. Learning to detect unseen object classes by between-class attribute transfer // 2009 IEEE conference on computer vision and pattern recognition / IEEE. 2009. P. 951–958.</mixed-citation><mixed-citation xml:lang="ru">Lampert C. H., Nickisch H., Harmeling S. Learning to detect unseen object classes by between-class attribute transfer // 2009 IEEE conference on computer vision and pattern recognition / IEEE. 2009. P. 951–958.</mixed-citation></citation-alternatives></ref>
<ref id="B23"><label>23.</label><citation-alternatives><mixed-citation xml:lang="en">Kazhdan D., Dimanov B., Jamnik M., Liò P., Weller A. Now you see me (CME): concept-based model extraction // arXiv preprint arXiv:2010.13233. 2020.</mixed-citation><mixed-citation xml:lang="ru">Kazhdan D., Dimanov B., Jamnik M., Liò P., Weller A. Now you see me (CME): concept-based model extraction // arXiv preprint arXiv:2010.13233. 2020.</mixed-citation></citation-alternatives></ref>
<ref id="B24"><label>24.</label><citation-alternatives><mixed-citation xml:lang="en">Koh P. W., Nguyen T., Tang Y. S., Mussmann S., Pierson E., Kim B., Liang P. Concept bottleneck models // International conference on machine learning / PMLR. 2020. P. 5338–5348.</mixed-citation><mixed-citation xml:lang="ru">Koh P. W., Nguyen T., Tang Y. S., Mussmann S., Pierson E., Kim B., Liang P. Concept bottleneck models // International conference on machine learning / PMLR. 2020. P. 5338–5348.</mixed-citation></citation-alternatives></ref>
<ref id="B25"><label>25.</label><citation-alternatives><mixed-citation xml:lang="en">Wickramanayake S., Hsu W., Lee M. L. Comprehensible convolutional neural networks via guided concept learning // 2021 International Joint Conference on Neural Networks (IJCNN) / IEEE. 2021. P. 1–8.</mixed-citation><mixed-citation xml:lang="ru">Wickramanayake S., Hsu W., Lee M. L. Comprehensible convolutional neural networks via guided concept learning // 2021 International Joint Conference on Neural Networks (IJCNN) / IEEE. 2021. P. 1–8.</mixed-citation></citation-alternatives></ref>
<ref id="B26"><label>26.</label><citation-alternatives><mixed-citation xml:lang="en">Chen S., Qin J., Ji X., Lei B., Wang T., Ni D., Cheng J.-Z. Automatic scoring of multiple semantic attributes with multi-task feature leverage: a study on pulmonary nodules in CT images // IEEE transactions on medical imaging. 2016. V. 36. No 3. P. 802–814.</mixed-citation><mixed-citation xml:lang="ru">Chen S., Qin J., Ji X., Lei B., Wang T., Ni D., Cheng J.-Z. Automatic scoring of multiple semantic attributes with multi-task feature leverage: a study on pulmonary nodules in CT images // IEEE transactions on medical imaging. 2016. V. 36. No 3. P. 802–814.</mixed-citation></citation-alternatives></ref>
<ref id="B27"><label>27.</label><citation-alternatives><mixed-citation xml:lang="en">Liu L., Dou Q., Chen H., Qin J., Heng P.-A. Multi-task deep model with margin ranking loss for lung nodule analysis // IEEE transactions on medical imaging. 2019. V. 39. No 3. P. 718–728.</mixed-citation><mixed-citation xml:lang="ru">Liu L., Dou Q., Chen H., Qin J., Heng P.-A. Multi-task deep model with margin ranking loss for lung nodule analysis // IEEE transactions on medical imaging. 2019. V. 39. No 3. P. 718–728.</mixed-citation></citation-alternatives></ref>
<ref id="B28"><label>28.</label><citation-alternatives><mixed-citation xml:lang="en">Dai Y., Yan S., Zheng B., Song C. Incorporating automatically learned pulmonary nodule attributes into a convolutional neural network to improve accuracy of benign-malignant nodule classification // Physics in Medicine &amp; Biology. 2018. V. 63. No 24. P. 245004.</mixed-citation><mixed-citation xml:lang="ru">Dai Y., Yan S., Zheng B., Song C. Incorporating automatically learned pulmonary nodule attributes into a convolutional neural network to improve accuracy of benign-malignant nodule classification // Physics in Medicine &amp; Biology. 2018. V. 63. No 24. P. 245004.</mixed-citation></citation-alternatives></ref>
<ref id="B29"><label>29.</label><citation-alternatives><mixed-citation xml:lang="en">Ost D. E., Gould M. K. Decision making in patients with pulmonary nodules // American journal of respiratory and critical care medicine. 2012. V. 185. No 4. P. 363–372.</mixed-citation><mixed-citation xml:lang="ru">Ost D. E., Gould M. K. Decision making in patients with pulmonary nodules // American journal of respiratory and critical care medicine. 2012. V. 185. No 4. P. 363–372.</mixed-citation></citation-alternatives></ref>
<ref id="B30"><label>30.</label><citation-alternatives><mixed-citation xml:lang="en">MacMahon H., Naidich D. P., Goo J. M., Lee K. S., Leung N., Mayo J. R., Mehta A. C., Ohno Y., Powell C. A., Prokop M., et al. Guidelines for management of incidental pulmonary nodules detected on CT images: from the Fleischner Society 2017 // Radiology. 2017. V. 284. No 1. P. 228–243.</mixed-citation><mixed-citation xml:lang="ru">MacMahon H., Naidich D. P., Goo J. M., Lee K. S., Leung N., Mayo J. R., Mehta A. C., Ohno Y., Powell C. A., Prokop M., et al. Guidelines for management of incidental pulmonary nodules detected on CT images: from the Fleischner Society 2017 // Radiology. 2017. V. 284. No 1. P. 228–243.</mixed-citation></citation-alternatives></ref>
<ref id="B31"><label>31.</label><citation-alternatives><mixed-citation xml:lang="en">Cruickshank A., Stieler G., Ameer F. Evaluation of the solitary pulmonary nodule // Internal Medicine Journal. 2019. V. 49. No 3. P. 306–315.</mixed-citation><mixed-citation xml:lang="ru">Cruickshank A., Stieler G., Ameer F. Evaluation of the solitary pulmonary nodule // Internal Medicine Journal. 2019. V. 49. No 3. P. 306–315.</mixed-citation></citation-alternatives></ref>
<ref id="B32"><label>32.</label><mixed-citation>Shen S., Han S. X., Aberle D. R., Bui A. A., Hsu W. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification // Expert systems with applications. 2019. V. 128. P. 84–95.</mixed-citation></ref><ref id="B33"><label>33.</label><mixed-citation>Wu B., Zhou Z., Wang J., Wang Y. 
Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction // 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) / IEEE. 2018. P. 1109–1113.</mixed-citation></ref><ref id="B34"><label>34.</label><mixed-citation>Mehta K., Jain A., Mangalagiri J., Menon S., Nguyen P., Chapman D. R. Lung nodule classification using biomarkers, volumetric radiomics, 3D CNNs // Journal of Digital Imaging. 2021. P. 1–20.</mixed-citation></ref><ref id="B35"><label>35.</label><citation-alternatives><mixed-citation xml:lang="en">Armato III S. G., McLennan G., Bidaut L., McNitt-Gray M. F., Meyer C. R., Reeves A. P., Zhao B., Aberle D. R., Henschke C. I., Hoffman E. A., et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans // Medical physics. 2011. V. 38. No 2. P. 915–931.</mixed-citation><mixed-citation xml:lang="ru">Armato III S. G., McLennan G., Bidaut L., McNitt-Gray M. F., Meyer C. R., Reeves A. P., Zhao B., Aberle D. R., Henschke C. I., Hoffman E. A., et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans // Medical physics. 2011. V. 38. No 2. P. 915–931.</mixed-citation></citation-alternatives></ref><ref id="B36"><label>36.</label><citation-alternatives><mixed-citation xml:lang="en">Hancock M. C., Magnan J. F. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods // Journal of Medical Imaging. 2016. V. 3. No 4. P. 044504–044504.</mixed-citation><mixed-citation xml:lang="ru">Hancock M. C., Magnan J. F. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods // Journal of Medical Imaging. 2016. V. 3. No 4. P. 044504–044504.</mixed-citation></citation-alternatives></ref><ref id="B37"><label>37.</label><mixed-citation>He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition // Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. P. 770–778.</mixed-citation></ref><ref id="B38"><label>38.</label><mixed-citation>Huang G., Liu Z., Van Der Maaten L., Weinberger K. Q. Densely connected convolutional networks // Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. P. 4700–4708.</mixed-citation></ref><ref id="B39"><label>39.</label><mixed-citation>Hu J., Shen L., Sun G. Squeeze-and-excitation networks // Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. P. 7132–7141.</mixed-citation></ref><ref id="B40"><label>40.</label><mixed-citation>Snoeckx A., Reyntiens P., Desbuquoit D., Spinhoven M. J., Van Schil P. E., van Meerbeeck J. P., Parizel P. M. Evaluation of the solitary pulmonary nodule: size matters, but do not ignore the power of morphology // Insights into imaging. 2018. V. 9. P. 73–86.</mixed-citation></ref><ref id="B41"><label>41.</label><mixed-citation>Yip R., Yankelevitz D. F., Hu M., Li K., Xu D. M., Jirapatnakul A., Henschke C. I. Lung cancer deaths in the National Lung Screening Trial attributed to nonsolid nodules // Radiology. 2016. V. 281. No 2. P. 589–596.</mixed-citation></ref><ref id="B42"><label>42.</label><mixed-citation>Seemann M., Staebler A., Beinert T., Dienemann H., Obst B., Matzko M., Pistitsch C., Reiser M. Usefulness of morphological characteristics for the differentiation of benign from malignant solitary pulmonary lesions using HRCT // European radiology. 1999. V. 9. 
No 3. P. 409–417.</mixed-citation></ref><ref id="B43"><label>43.</label><mixed-citation>Gurney J. Determining the likelihood of malignancy in solitary pulmonary nodules with Bayesian analysis. Part I. Theory // Radiology. 1993. V. 186. No 2. P. 405–413.</mixed-citation></ref><ref id="B44"><label>44.</label><mixed-citation>Meldo A., Utkin L., Kovalev M., Kasimov E. The natural language explanation algorithms for the lung cancer computer-aided diagnosis system // Artificial intelligence in medicine. 2020. V. 108. P. 101952.</mixed-citation></ref><ref id="B45"><label>45.</label><mixed-citation>Dumaev R. I., Molodyakov S. A. Classification and Prediction of Lung Diseases According to Chest Radiography // 2023 IV International Conference on Neural Networks and Neurotechnologies (NeuroNT) / IEEE. 2023. P. 48–51.</mixed-citation></ref></ref-list></back></article>
