Toward a New Level of Human-Chatbot Communication: Goal Management and Mutual Verbal Adaptation
- Authors: Palenova V.V.¹, Voronin A.N.²
- Affiliations:
  1. State Academic University for the Humanities
  2. Institute of Psychology, Russian Academy of Sciences
- Issue: Vol 22, No 1 (2025)
- Pages: 96-122
- Section: PERSONALITY AND DIGITAL TECHNOLOGIES: OPPORTUNITIES AND CHALLENGES
- URL: https://journals.rcsi.science/2313-1683/article/view/326279
- DOI: https://doi.org/10.22363/2313-1683-2025-22-1-96-122
- EDN: https://elibrary.ru/UBVZFJ
- ID: 326279
Abstract
As artificial intelligence becomes increasingly integrated into everyday communication, understanding the dynamics of human-chatbot interaction has become a matter of both theoretical importance and practical urgency. This study explores the goals, communicative tactics, and adaptive strategies employed by users and AI chatbots in dialogue, using grounded theory methodology. Based on a corpus of 316 dialogues with ChatGPT, we conducted multi-level coding (substantive, selective, and theoretical) to identify recurring patterns in the organization of digital communication. The analysis revealed a wide range of user goals, including informational, task-oriented, generative, emotional, and exploratory intentions. Chatbots, in turn, pursued structurally narrower but functionally adaptive goals aimed at supporting dialogue coherence and user engagement. Both sides employed diverse communicative tactics, including primary, combined, and compensatory strategies. While users initiated goal setting and frequently adjusted their tactics, chatbots demonstrated reactive behavior through clarification, tone adaptation, and metacommunicative responses. A key result is the identification of six basic communicative scenarios in user-chatbot interaction: informational-analytical, practical, creative, emotional-reflective, entertaining-playful, and exploratory-provocative. Each scenario reflects a stable alignment of goals and tactics between the participants, revealing the functional architecture of digital dialogue. The study demonstrates that interaction with generative chatbots is not random but unfolds within structured communicative configurations. These findings contribute to the theoretical understanding of digital interaction and provide a typological framework for analyzing, designing, and optimizing AI-based communication systems across various domains.
About the authors
Violetta V. Palenova
State Academic University for the Humanities
Author for correspondence.
Email: violetta.palenova@yandex.ru
ORCID iD: 0000-0001-8552-5639
PhD Student
26 Maronovskiy Lane, Moscow, 119049, Russian Federation
Anatoly N. Voronin
Institute of Psychology, Russian Academy of Sciences
Email: voroninan@bk.ru
ORCID iD: 0000-0002-6612-9726
SPIN-code: 2852-2031
Scopus Author ID: 7103245935
Doctor of Psychology, Professor, Head of the Laboratory of Speech Psychology and Psycholinguistics
13-1 Yaroslavskaya St, Moscow, 129366, Russian Federation