Risks of applying artificial intelligence in psychological support: A psychologist and developer’s perspective
- Authors: Freimanis I.F., Shlyapov I.V.
- Affiliation: Perm State National Research University
- Issue: Vol 16, No 5 (2025)
- Pages: 659-687
- Section: Psychological Studies
- Published: 31.10.2025
- URL: https://journals.rcsi.science/2658-4034/article/view/363635
- DOI: https://doi.org/10.12731/2658-4034-2025-16-5-829
- EDN: https://elibrary.ru/WZCUOA
- ID: 363635
Abstract
Background. Recent years have seen the rapid evolution of artificial intelligence (AI) technologies and their increasing integration into various domains, including psychological support. Despite the growing popularity of AI-based services, their application in psychotherapeutic practice entails numerous ethical, technical, and socio-legal risks. This article examines the key challenges associated with the use of AI in psychological support, including the simulation of empathy, anthropomorphism, AI "hallucinations", data privacy, and issues of accountability.
Purpose. To analyze the risks of using AI tools in psychological support.
Materials and methods. The main research method is systems analysis, combined with a review of scientific literature and regulatory documents. The study draws on academic publications (2020–2025) in psychology, AI, and digital ethics; empirical data on user interactions with AI services; legal regulations and recommendations on AI governance; and real-world cases of AI application in psychological support.
Results. The study identifies the key risks associated with the use of AI for psychological support:
- Empathy substitution: AI imitates emotional support without genuine understanding.
- Anthropomorphism: users attribute human traits to AI, which can lead to psychological dependence on it.
- AI "hallucinations": generation of false or harmful recommendations.
- Threats to confidentiality: data leaks and the absence of legal safeguards.
- Legal uncertainty: lack of clear norms on liability for AI-driven actions.
The findings highlight the need for clinical validation of AI-based services and the development of ethical standards for their implementation in mental health practice.
About the authors
Inga F. Freimanis
Perm State National Research University
Author for correspondence.
Email: inga73-08@mail.ru
ORCID iD: 0000-0001-7996-810X
SPIN-code: 2658-0659
ResearcherId: KSL-6854-2024
Senior Lecturer, Department of General and Clinical Psychology
15 Bukireva Str., Perm, 614990, Russian Federation
Ivan V. Shlyapov
Perm State National Research University
Email: 7505427@mail.ru
Master's Student, Department of Developmental Psychology
15 Bukireva Str., Perm, 614990, Russian Federation