Things to Keep in Mind When Thinking about Artificial Intelligence
- Authors: Tambovtsev V. L.
- Affiliations: Lomonosov Moscow State University
- Issue: Vol 6, No 2 (2024)
- Pages: 26-34
- Section: Discussion. Artificial intelligence: potentials and consequences of its application
- URL: https://journals.rcsi.science/2686-827X/article/view/268019
- DOI: https://doi.org/10.19181/smtp.2024.6.2.2
- EDN: https://elibrary.ru/FFDRFQ
- ID: 268019
About the authors
Vitaly L. Tambovtsev
Lomonosov Moscow State University
Email: vitalytambovtsev@gmail.com
ORCID iD: 0000-0002-0667-3391
SPIN-code: 5938-6806
Doctor of Economics, Professor. Moscow, Russia
References
- Cordeschi R. AI turns fifty: Revisiting its origins. Applied Artificial Intelligence. 2007;21(4–5):259–279. doi: 10.1080/08839510701252304.
- Müller V. C., Bostrom N. Future progress in artificial intelligence: A poll among experts. AI Matters. 2014;1(1):9–11. doi: 10.1145/2639475.2639478.
- Morikawa M. Who are afraid of losing their jobs to artificial intelligence and robots? Evidence from a survey. RIETI Discussion Paper Series. 17-E-069. 2017. May. Available at: https://rieti.go.jp/jp/publications/dp/17e069.pdf (accessed: 26.04.2024).
- Merenkov A. V., Campa R., Dronishinets N. P. Public opinion on artificial intelligence development. KnE Social Sciences. 2020;5(2):565–574. doi: 10.18502/kss.v5i2.8401.
- Kelley P. G., Yang Y., Heldreth C., Moessner C., Sedley A., Kramm A., Newman D. T., Woodruff A. Exciting, useful, worrying, futuristic: Public perception of artificial intelligence in 8 countries. In: AIES’21 : Proceedings of the 2021 AAAI/ACM conference on AI, ethics, and society. May 19–21, 2021, Virtual Event USA. New York : Association for Computing Machinery; 2021. P. 627–637. doi: 10.1145/3461702.3462605.
- European Commission, European Research Council Executive Agency. Foresight: Use and impact of artificial intelligence in the scientific process. Luxembourg : Publications Office of the European Union; 2023. 17 p. doi: 10.2828/10694.
- Gillespie N., Lockey S., Curtis C., Pool J., Akbari A. Trust in artificial intelligence: A global study. Brisbane ; New York : The University of Queensland ; KPMG Australia; 2023. 82 p. doi: 10.14264/00d3c94.
- Sun M., Hu W., Wu Y. Public perceptions and attitudes towards the application of artificial intelligence in journalism: From a China-based survey. Journalism Practice. 2024;18(3):548–570. doi: 10.1080/17512786.2022.2055621.
- Haesevoets T., Verschuere B., Van Severen R., Roets A. How do citizens perceive the use of Artificial Intelligence in public sector decisions? Government Information Quarterly. 2024;41(1):101906. doi: 10.1016/j.giq.2023.101906.
- Brauner P., Hick A., Philipsen R., Ziefle M. What does the public think about artificial intelligence? – A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science. 2023;5:1113903. doi: 10.3389/fcomp.2023.1113903.
- Müller V. C. Risks of general artificial intelligence. Journal of Experimental &amp; Theoretical Artificial Intelligence. 2014;26(3):297–301. doi: 10.1080/0952813X.2014.895110.
- McLean S., Read G. J. M., Thompson J., Baber C., Stanton N. A., Salmon P. M. The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental & Theoretical Artificial Intelligence. 2023;35(5):649–663. doi: 10.1080/0952813X.2021.1964003.
- Madan R., Ashok M. A public values perspective on the application of Artificial Intelligence in government practices: A synthesis of case studies. In: Saura J. R., Debasa F., eds. Handbook of research on artificial intelligence in government practices and processes. Hershey, PA : IGI Global; 2022. P. 162–189. doi: 10.4018/978-1-7998-9609-8.ch010.
- Alon-Barkat S., Busuioc M. Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory. 2023;33(1):153–169. doi: 10.1093/jopart/muac007.
- Zhao Y., Yin D., Wang L., Yu Y. The rise of artificial intelligence, the fall of human wellbeing? International Journal of Social Welfare. 2024;33(1):75–105. doi: 10.1111/ijsw.12586.
- Keil F. C. Folkscience: Coarse interpretations of a complex reality. Trends in Cognitive Sciences. 2003;7(8):368–373. doi: 10.1016/s1364-6613(03)00158-x.
- Schapiro A., Turk-Browne N. Statistical learning. In: Toga A. W., ed. Brain mapping: An encyclopedic reference. Vol. 3. London : Elsevier/Academic Press; 2015. P. 501–506. doi: 10.1016/B978-0-12-397025-1.00276-1.
- Nickerson R. S. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology. 1998;2(2):175–220. doi: 10.1037/1089-2680.2.2.175.
- Vitriol J. A., Marsh J. K. The illusion of explanatory depth and endorsement of conspiracy beliefs. European Journal of Social Psychology. 2018;48(7):955–969. doi: 10.1002/ejsp.2504.
- Rozenblit L., Keil F. The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science. 2002;26(5):521–562. doi: 10.1207/s15516709cog2605_1.
- Miller T. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence. 2019;267:1–38. doi: 10.1016/j.artint.2018.07.007.
- De Graaf M. M. A., Malle B. F. How people explain action (and autonomous intelligent systems should too). In: Artificial intelligence for human–robot interaction : Papers from the AAAI Fall Symposium, 2017. Palo Alto, CA : The AAAI Press; 2017. P. 19–26.
- Doshi-Velez F., Kim B. Towards a rigorous science of interpretable machine learning. arXiv. 2017. March 2. Available at: https://arxiv.org/abs/1702.08608 (accessed: 26.04.2024). doi: 10.48550/arXiv.1702.08608.
- Vapnik V. The nature of statistical learning theory. New York : Springer; 1995. xv, 193 p. ISBN 978-0-387-94559-0.
- Ordin M., Polyanskaya L., Soto D. Neural bases of learning and recognition of statistical regularities. Annals of the New York Academy of Sciences. 2020;1467(1):60–76. doi: 10.1111/nyas.14299.
- Alnuaimi A. F. A. H., Albaldawi T. H. K. Concepts of statistical learning and classification in machine learning: An overview. BIO Web of Conferences. 2024;97:00129. doi: 10.1051/bioconf/20249700129.
- Roli A., Jaeger J., Kauffman S. A. How organisms come to know the world: Fundamental limits on artificial general intelligence. Frontiers in Ecology and Evolution. 2022;9:806283. doi: 10.3389/fevo.2021.806283.
- Curtis V., Aunger R., Rabie T. Evidence that disgust evolved to protect from risk of disease. Proceedings of the Royal Society B: Biological Sciences. 2004;271(Suppl. 4):S131–S133. doi: 10.1098/rsbl.2003.0144.
- Rozin P., Haidt J. The domains of disgust and their origins: Contrasting biological and cultural evolutionary accounts. Trends in Cognitive Sciences. 2013;17(8):367–368. doi: 10.1016/j.tics.2013.06.001.
- Libet B., Gleason C. A., Wright E. W., Pearl D. K. Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain. 1983;106(3):623–642. doi: 10.1093/brain/106.3.623.
- Braun M. N., Wessler J., Friese M. A meta-analysis of Libet-style experiments. Neuroscience & Biobehavioral Reviews. 2021;128:182–198. doi: 10.1016/j.neubiorev.2021.06.018.
