GENESIS OF ARTIFICIAL INTELLIGENCE RISKS AND THEIR MANIFESTATION IN AUTONOMOUS SYSTEMS FOR RESPONSIBLE PURPOSES
- Authors: Prokofiev O.V.
Affiliations:
- Penza State Technological University
- Issue: No 1 (2025)
- Pages: 20-27
- Section: FUNDAMENTALS OF RELIABILITY AND QUALITY ISSUES
- URL: https://journals.rcsi.science/2307-4205/article/view/289651
- DOI: https://doi.org/10.21685/2307-4205-2025-1-3
- ID: 289651
Abstract
Background. Since its inception, much artificial intelligence (AI) research has explored problems of, and approaches to, autonomous operation in application areas that affect human health and life. This article describes the origin of AI risks and systematizes the ways they manifest themselves, with the aim of building systems for responsible purposes. Materials and methods. Because research in this area is not yet grounded in extensive implementation experience, the sources of risk discussed here generalize the expert assessments of authoritative developers and are illustrated by the example of autonomous weapons systems. Results. The main sources of risk and the trajectories of their development are identified for various variants of cause-and-effect transformation. Conclusions. Directions for improving the testing and debugging of autonomous systems for critical purposes are formulated from the standpoint of how application risks emerge and develop.
About the authors
Oleg V. Prokofiev
Penza State Technological University
Author for correspondence.
Email: prokof_ow@mail.ru
Candidate of technical sciences, associate professor, associate professor of the sub-department of information technologies and systems
(1a/11 Baydukov passage/Gagarin street, Penza, Russia)

References
- Boulanin V., Saalman L., Topychkanov P. et al. Artificial Intelligence, Strategic Stability and Nuclear Risk. 2020. Available at: https://www.sipri.org/sites/default/files/2020-06/artificial_intelligence_strategic_stability_and_nuclear_risk.pdf
- Saalman L. (ed.). Integrating Cybersecurity and Critical Infrastructure. National, Regional and International Approaches. 2018. Available at: https://www.sipri.org/sites/default/files/2018-04/integrating_cybersecurity_0.pdf
- Boulanin V. (ed.). The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk. Vol. I. Euro-Atlantic Perspectives. 2020. Available at: https://www.sipri.org/publications/2020/other-publications/artificial-intelligence-strategic-stability-and-nuclear-risk
- Boulanin V., Verbruggen M. Mapping the Development of Autonomy in Weapon Systems. 2020. Available at: https://www.sipri.org/sites/default/files/2018-04/integrating_cybersecurity_0.pdf
- Boulanin V., Bruun L., Goussac N. Autonomous Weapon Systems and International Humanitarian Law. Identifying Limits and the Required Type and Degree of Human–Machine Interaction. 2021. Available at: https://www.sipri.org/sites/default/files/2021-06/2106_aws_and_ihl_0.pdf
- Saalman L., Su F., Saveleva Dovgal L. Cyber Posture Trends in China, Russia, the United States and the European Union. 2022. Available at: https://www.sipri.org/sites/default/files/2022-12/2212_cyber_postures_0.pdf
- Boulanin V. Mapping the development of autonomy in weapon systems. A primer on autonomy. 2017. Available at: https://www.sipri.org/sites/default/files/Mapping-development-autonomy-in-weapon-systems.pdf
- Boulanin V., Goussac N., Bruun L., Richards L. Responsible Military Use of Artificial Intelligence. Can the European Union Lead the Way in Developing Best Practice? 2020. Available at: https://www.sipri.org/publications/2020/other-publications/responsible-military-use-artificial-intelligence-can-european-union-lead-way-developing-best
- Boulanin V., Brockmann K., Richards L. Responsible Artificial Intelligence Research and Innovation for International Peace and Security. 2020. Available at: https://www.sipri.org/sites/default/files/2020-11/sipri_report_responsible_artificial_intelligence_research_and_innovation_for_international_peace_and_security_2011.pdf
- Bromley M., Maletta G. The Challenge of Software and Technology Transfers to Non-Proliferation Efforts. Implementing and Complying with Export Controls. 2018. Available at: https://www.sipri.org/publications/2018/other-publications/challenge-software-and-technology-transfers-non-proliferation-efforts-implementing-and-complying
- Su F., Boulanin V., Turell J. Cyber-incident Management: Identifying and Dealing with the Risk of Escalation. 2020. SIPRI Policy Paper No. 55. Available at: https://www.sipri.org/publications/2020/sipri-policy-papers/cyberincident-management-identifying-and-dealing-risk-escalation
- Morgan F.E., Boudreaux B., Lohn A.J. et al. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation, 2020:202. Available at: https://www.rand.org/pubs/research_reports/RR3139-1.html
- Ruhl Ch. Autonomous weapon systems and military artificial intelligence (AI) applications report. 2022. Available at: https://www.founderspledge.com/research/autonomous-weapon-systems-and-military-artificial-intelligence-ai
- Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter. Future of Life Institute. Available at: https://futureoflife.org/data/documents/research_priorities.pdf (accessed 09.03.2024).
- Marr B. The 15 Biggest Risks of Artificial Intelligence. Available at: https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/?sh=20c095a92706
- Pause Giant AI Experiments: An Open Letter. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
- AI Ethics Code. Available at: https://ethics.a-ai.ru/ (accessed 09.03.2024).
- Delaborde A. Risk assessment of artificial intelligence in autonomous machines. 1st International Workshop on Evaluating Progress in Artificial Intelligence (EPAI 2020), in conjunction with ECAI. 2020. Available at: https://hal.science/hal-03009978
- ICRC position on autonomous weapon systems. International Committee of the Red Cross. Available at: https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
- Macrae C. Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety and Sociotechnical Sources of Risk. SSRN Electronic Journal. 2021. doi: 10.2139/ssrn.3832621
- Hindriks F., Veluwenkamp H. The risks of autonomous machines: from responsibility gaps to control gaps. Synthese. 2023;201. doi: 10.1007/s11229-022-04001-5
- Radanliev P., De Roure D., Maple C. et al. Super forecasting the technological singularity risks from artificial intelligence. Evolving Systems. 2022;13:747–757. doi: 10.1007/s12530-022-09431-7
- The Risks of Lethal Autonomous Weapons. The Future of Life Institute. Available at: https://autonomousweapons.org/the-risks
- Asilomar AI Principles. The Future of Life Institute. Available at: https://futureoflife.org/open-letter/ai-principles/
- Artificial Intelligence Risk & Governance. Artificial Intelligence/Machine Learning Risk & Security Working Group (AIRS). The Wharton School, The University of Pennsylvania, 2024. Available at: https://ai.wharton.upenn.edu/white-paper/artificial-intelligence-risk-governance/
- Skelton S.K. AI experts question tech industry’s ethical commitments. TechTarget, 2024. Available at: https://www.computerweekly.com/feature/AI-experts-question-tech-industrys-ethical-commitments
- Ivanov A.I., Ivanov A.P., Savinov K.N., Eremenko R.V. Virtual enhancement of the effect of parallelization of computing during the transition from binary neurons to the use of q-ary artificial neurons. Nadezhnost' i kachestvo slozhnykh sistem = Reliability and quality of complex systems. 2022;(4):89–97. (In Russ.)
- Shirinkina E.V. The mechanism of application of artificial intelligence in teaching. Nadezhnost' i kachestvo slozhnykh sistem = Reliability and quality of complex systems. 2022;(4):24–30. (In Russ.)