Automated Intelligent Systems: Technological Determinism and Substantivism

Abstract

Artificial intelligence has become so firmly embedded in our lives that its direct influence on the shape of the future world is inevitable. It has taken time, however, for a constructive approach to risk prevention and to the regulation of technologies at all stages of their life cycle to emerge alongside theoretical speculation about a "machine uprising" and other threats to humanity. Of special concern are the so-called automated intelligent systems, whose regulation is still limited to normative and technical requirements. This approach is distinguished by its proponents' conviction that technological determinism is true and that "technology" is therefore value-neutral. From this perspective the prevention of ethical risks is practically impossible, because regulation addresses only the functional characteristics and operational failures of a particular system. This article contrasts technological determinism with technological substantivism, for which "technology" has an independent ethical value regardless of its instrumental use. The ethical evaluation built on substantivism consists in a procedure of regularly correlating the social "good" of the system with its "reliability". Developing a methodology for such a correlation procedure requires special competences that mark out a new professional field: ethics in the field of AI.

About the authors

Sergey V. Garbuk

Higher School of Economics

Author for correspondence.
Email: sgarbuk@hse.ru
ORCID iD: 0000-0001-5385-3961

Cand. Sc. in Technical Sciences, Director for Scientific Projects

Russian Federation, Moscow

Anastasia V. Ugleva

Higher School of Economics

Email: augleva@hse.ru
ORCID iD: 0000-0002-9146-1026

Cand. Sc. in Philosophy, Professor, School of Philosophy and Cultural Studies, Deputy Director of the Center for Transfer and Management of Socioeconomic Information

Russian Federation, Moscow

References

  1. Garbuk S.V. Metod ocenki vliyaniya parametrov standartizacii na effektivnost’ sozdaniya i primeneniya sistem iskusstvennogo intellekta [Method for assessing the influence of standardization parameters on the effectiveness of the creation and application of artificial intelligence systems]. Information and Economic Aspects of Standardization and Technical Regulation. 2022. N 3. P. 4–14.
  2. Garbuk S.V. Osobennosti obosnovaniya predstavitel’nogo nabora trebovaniya k intellektual’nym avtotransportnym sredstvam [Features of justification of a representative set of requirements for intelligent vehicles]. Proceedings of NAMI. 2023. N 4. P. 69–86.
  3. Grunwald A., Zheleznyak V.N., Seredkina E.V. Bespilotnyj avtomobil’ v svete social’noj ocenki tekhniki [The unmanned car in the light of social valuation of technology]. Technologos. 2019. N 2. P. 41‒51.
  4. Shevchenko S.Y., Shkomova E.M., Lavrentieva S.V. Gumanitarnaya ekspertiza polnogo cikla [Full-cycle humanitarian expertise]. Horizons of Humanitarian Knowledge. 2021. N 2. P. 3–17.
  5. Shtal B.K. Etika iskusstvennogo intellekta: Kejsy i varianty resheniya eticheskih problem [Ethics of Artificial Intelligence: Cases and Options for Addressing Ethical Challenges]. Moscow: HSE Publ., 2024.
  6. Ashok M., Madan R., Joha A., Sivarajah U. Ethical framework for Artificial Intelligence and Digital technologies. International Journal of Information Management. 2022. Vol. 62. Art. 102433.
  7. Boddington P. AI and moral thinking: how can we live well with machines to enhance our moral agency? AI and Ethics. 2021. N 1. P. 109–111.
  8. Borenstein J., Howard A. Emerging challenges in AI and the need for AI ethics education. AI and Ethics. 2020. N 1. P. 61–65.
  9. Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2016.
  10. Brendel A.B., Mirbabaie M., Lembcke T.-B., Hofeditz L. Ethical Management of Artificial Intelligence. Sustainability. 2021. N 13(4).
  11. Bruneault Fr., Sabourin Laflamme A. AI Ethics: how can information ethics provide a framework to avoid usual conceptual pitfalls? An Overview. AI & SOCIETY. 2021. N 36(3). P. 757–766.
  12. Daley K. Two arguments against human-friendly AI. AI and Ethics. 2021. N 1. P. 435–444.
  13. Douglas D.M., Howard D., Lacey J. Moral responsibility for computationally designed products. AI and Ethics. 2021. N 1. P. 273–281.
  14. Dreyfus H. Being-in-the-World: A Commentary on Heidegger’s Being and Time, Division I. The MIT Press, 1990.
  15. Floridi L., Cowls J., King T.C. et al. How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics. 2020. N 26. P. 1771–1796.
  16. Forbes K. Opening the path to ethics in artificial intelligence. AI Ethics. 2021. N 1. P. 297–300.
  17. Gabriel I. Artificial Intelligence, Values, and Alignment. Minds and Machines. 2020. N 30. P. 411–437.
  18. Gambelin O. Brave: what it means to be an AI Ethicist. AI Ethics. 2021. N 1. P. 87–91.
  19. Garbuk S.V. Intellimetry as a way to ensure AI trustworthiness. The Proceedings of the 2018 International Conference on Artificial Intelligence Applications and Innovations (IC-AIAI). Limassol, Cyprus, 6‒10.10.2018. P. 27–30.
  20. Gendron C. Penser l’acceptabilité sociale: au-delà de l’intérêt, les valeurs [Thinking social acceptability: beyond interests, values]. Éthique et relations publiques: pratiques, tensions et perspectives. 2014. N 11. P. 117–129.
  21. Hagendorff T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds & Machines. 2020. N 30. P. 99–120.
  22. Hager G.D., Drobnis A., Fang F., Ghani R., Greenwald A., Lyons T., Parkes D. C. et al. Artificial intelligence for social good. 2017.
  23. Heidegger M. Die Frage nach der Technik [The Question Concerning Technology]. Die Künste im technischen Zeitalter. München, 1954. S. 70–108.
  24. Iphofen R., Kritikos M. Regulating artificial intelligence and robotics: ethics by design in a digital society. Contemporary Social Science. 2019. N 16(2). P. 1–15.
  25. Kazim E., Soares Koshiyama A. A high-level overview of AI ethics. Patterns. 2021. N 2(9). P. 100314.
  26. Ala-Pietilä P., Bauer W., Bergmann U., Bieliková M., Bonefeld-Dahl C., Bonnet Y., Bouarfa L. et al. The European Commission’s high-level expert group on artificial intelligence: Ethics guidelines for trustworthy AI. Working Document for stakeholders’ consultation. Brussels, 2018. P. 1–37.
  27. Taddeo M., Floridi L. How AI Can be a Force for Good. Science. 2018. Vol. 361(6404). P. 751–752.
  28. GOST R 70889-2023 Information technology. Artificial intelligence. Data life cycle framework (ISO/IEC 8183:2023, MOD).
  29. PNST 840 Artificial intelligence. Overview of ethical and societal concerns (ISO/IEC TR 24368:2022, MOD).
  30. PNST 841-2023 Systems and software engineering. Systems and software Quality Requirements and Evaluation (SQuaRE). Guidance for quality evaluation of artificial intelligence systems (ISO/IEC DTS 25058, MOD).
  31. PNST 776-2022 Information technology. Artificial intelligence. Guidance on risk management (ISO/IEC FDIS 23894, NEQ).
  32. Federal Law No. 152-FZ of 27 July 2006 “On Personal Data” (Russian Federation).
  33. SAE J3016 Surface vehicle recommended practice. Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. June, 2018.

Copyright (c) 2024 Russian Academy of Sciences
