No 4(123) (2025)
THEMED ISSUE EDITOR’S COLUMN
ALGEBRA AND HARMONY: ARTIFICIAL INTELLIGENCE TECHNOLOGIES IN THE SOCIAL SCIENCES AND HUMANITIES
10-14
SOCIETY OF COEXISTENCE OF NATURAL AND ARTIFICIAL INTELLIGENCE
Legal Socialization of Artificial Intelligence and the Creation of a New Global Governance System
Abstract
This article analyzes the process of socializing artificial intelligence technology and the formation of a legal framework governing its implementation. It examines regulatory efforts at the national, supranational, and international levels, identifying ideological, doctrinal, and political challenges that accompany the expanding use of this technology. The article provides a critical review of the provisions within the Global Digital Compact and outlines the associated legal and political risks of its implementation. Drawing on philosophical research, it identifies prospects for building the institutional and legal mechanisms required for the full-fledged functioning of AI on a global scale. The importance of the principles of equality and justice is emphasized for maintaining a sustainable legal order, upholding human rights and freedoms, reducing the technological divide, and developing a constructive strategy for the advancement of technological civilization. Using comparative legal methodology, the article formulates the main regulatory approaches emerging within states and interstate associations concerning the development and application of artificial intelligence.
15-24
Economic and Political Challenges of Artificial Intelligence Development
Abstract
The article analyzes the current state and prospects of artificial intelligence (AI) technologies, as well as the conditions for maximizing their effects on economic growth. Drawing on the examples of AI adoption in the United States, the European Union, and China, it highlights the limitations of technology diffusion and its impact on socio-economic indicators. The formation of human capital and dynamic markets are identified as key factors in AI development. However, the article notes that Schumpeterian “creative destruction” is currently unfolding suboptimally for several reasons. These include the specific nature of AI development, notably its high capital intensity and its requirements for large amounts of data. The structure of digital markets also plays a significant role, with large digital corporations monopolizing AI markets. The conclusion is that the current market situation requires regulatory intervention. Nevertheless, the evolution of AI regulation itself faces numerous challenges. In addition to the novelty of the technology and the lack of clear solutions to its core developmental issues, there is a growing trend of AI securitization and the influence of other political and geopolitical considerations. It is concluded that this problem is significant in the long term, as it negatively affects the development of approaches to AI regulation and creates new developmental challenges, including at the global level.
25-33
On the Threshold of Digital Civilization: the Dialectics of Artificial Intelligence and Humanism
Abstract
The article discusses the challenges and findings of researching the dialectics of the relationship between digital technologies and humanistic principles and values in the context of an emerging global information civilization. The authors base their analysis on contemporary theories and practices of digital society and on V.I. Vernadsky’s concept of the noosphere. This framework allows for understanding the transition to a digital civilization as a stage in the evolution of the biosphere, where human thought plays a key role. Empirical data is drawn from the All-Russian Sociological Monitoring survey “How Are You Living, Russia?” (conducted by the Institute of Social and Political Research of the Federal Research Sociological Center of the Russian Academy of Sciences), which reflects specific public concerns regarding the widespread implementation of artificial intelligence, as well as the paradox of trust in information disseminated online. The article outlines key challenges associated with growing dependence on AI, the declining quality of interpersonal interaction, the concentration of power in the hands of tech corporations, and a crisis of trust in information. Special attention is paid to the ambivalent consequences of digitalization. While information and communication technologies (ICT), neural networks, and artificial intelligence offer tools for addressing global ecological and resource management problems, they simultaneously exacerbate social, material, and spiritual disparities and contribute to information fragmentation. The authors conclude that it is necessary to uphold ethical norms and moral humanistic imperatives, considering technological progress, social responsibility, and sociocultural as well as ethnopolitical specificities.
34-45
The Ethical Aspect of Using Artificial Intelligence Technologies in the Field of Indigenous Languages Preservation
Abstract
The paper discusses the ethical aspects of applying artificial intelligence technologies to indigenous languages. The analysis focuses on several prominent projects in this field. Some initiatives are not primarily aimed at language revitalization or cultural preservation, which may attract criticism from the language communities themselves. The paper also explores examples of successful projects created and/or actively supported by indigenous individuals. The proposed initiatives cover a wide range of regions across the globe, including Africa, North and South America, Southeast Asia, and Oceania. The article also provides recommendations for further development and support of these initiatives, emphasizing the importance of ethical considerations and respecting the rights and interests of indigenous peoples. This work aims to raise awareness about the significance of preserving indigenous languages through modern technology, which is particularly relevant for the Russian Federation with its unique linguistic diversity.
46-55
Application of Large Language Models for the Analysis of Value-Patriotic Discourse of Russian-Speaking Users
Abstract
The article explores the potential of using large language models (LLMs) for the automated analysis of value-laden patriotic discourse among Russian-speaking social media users. Drawing on a corpus of messages from VK, Odnoklassniki, and Telegram (2023–2025), it investigates the degree of alignment between automated coding results and expert annotations based on a specially developed categorical scheme. The codebook includes eight dimensions: Sh. Schwartz's basic values; R. Inglehart’s two axes (traditionalism/secularism and survival/self-expression); A. Maslow’s hierarchy of needs; types of patriotism (constructive/aggressive), drawing on the concepts of K.D. Ushinsky and V.S. Solovyov; dominant speech act types per J. Austin; and binary indicators for explicit patriotism and civic identity. The experiment was conducted on the Pride and Patriotism message cluster (N = 456), where the density of value markers is highest; the comparison was carried out using confusion matrices, accuracy, macro- and weighted-average F1 scores, and Cohen's κ coefficient. It was shown that while the LLM reliably identifies explicit patriotic themes, its agreement with experts is significantly lower in multi-class and fine-grained value classification (the Schwartz, Maslow, and Inglehart scales, types of patriotism, Austin's speech acts). The model demonstrated systematic biases and a tendency to over-diagnose certain categories. It is concluded that LLMs in their current configuration can serve as auxiliary tools for preliminary markup and hypothesis generation but cannot function as an autonomous substitute for expert-led content analysis of value discourse.
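The agreement measure central to this comparison, Cohen's κ, can be sketched as follows; the two label lists below are invented illustrations, not the article's data.

```python
# Cohen's κ: chance-corrected agreement between two annotators over the same items,
# the statistic used here to compare LLM coding against expert annotation.
# The expert/llm label lists are hypothetical examples, not the article's corpus.
from collections import Counter

def cohen_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e): observed vs. chance-level agreement."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                        # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / n ** 2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

expert = ["patriotic", "neutral", "patriotic", "civic", "neutral", "patriotic"]
llm    = ["patriotic", "patriotic", "patriotic", "civic", "neutral", "neutral"]
print(round(cohen_kappa(expert, llm), 3))  # → 0.455
```

Note that although raw agreement here is 4/6 ≈ 0.67, κ is only about 0.45 once chance agreement is discounted, which is why the article reports κ alongside plain accuracy.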
56-69
ARTIFICIAL INTELLIGENCE AS A MEANS OF COGNITION AND SUPPORT FOR HUMANS AND SOCIETY
Applications of Generative Artificial Intelligence in Socioeconomic Forecasting
Abstract
This article examines the application of artificial intelligence (AI) and large language models (LLMs) to forecasting key macroeconomic indicators, including gross domestic product (GDP), inflation, unemployment rates, interest rates, and the Gini coefficient. It analyzes the capabilities of these novel approaches compared to traditional forecasting methods – such as econometric, equilibrium, and agent-based models – across different time horizons. The paper summarizes both academic research and practical implementations, including central bank experiments on using fundamental models and LLM-like architectures (such as GPT) for macroeconomic forecasting. Special attention is given to the ability of LLMs to analyze textual information and generate predictions that are comparable in accuracy to, and in some instances superior to, those produced by professional experts. The review also covers the latest fundamental time-series models, such as TimeGPT, TimesFM, and Moirai, which employ transformer architectures tailored to economic data. The main findings indicate that AI and LLMs provide a significant advantage in terms of flexibility, adaptability, and the capacity to process diverse information sources, especially in environments characterized by high volatility or information saturation. However, challenges remain regarding the interpretability, stability, and long-term consistency of predictions. The article concludes that the best prospects for advancing macroeconomic forecasting lie in hybrid approaches that combine the computational power and adaptability of AI with the theoretical rigor and explainability of traditional economic and mathematical models.
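Accuracy comparisons of the kind surveyed here rest on held-out evaluation against simple baselines. A minimal sketch, using an invented series and two textbook baselines (none of the models named above):

```python
# Sketch of forecast-accuracy scoring: hold out the last observations, forecast
# them with competing methods, and compare errors. The "inflation" series and
# both baselines are illustrative inventions; TimeGPT/TimesFM/Moirai are not used.

def mae(actual, predicted):
    """Mean absolute error over a forecast horizon."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

series = [2.0, 2.3, 2.1, 2.6, 2.4, 2.8, 2.7, 3.0]   # synthetic quarterly figures
train, actual = series[:-3], series[-3:]             # hold out a 3-step horizon

# Naive baseline: repeat the last observed value across the horizon
naive = [train[-1]] * len(actual)

# Drift baseline: extrapolate the average step of the training window
step = (train[-1] - train[0]) / (len(train) - 1)
drift = [train[-1] + step * h for h in range(1, len(actual) + 1)]

print(f"naive MAE: {mae(actual, naive):.3f}, drift MAE: {mae(actual, drift):.3f}")
```

Whatever the forecaster – econometric model, agent-based simulation, or LLM – the comparison reduces to this held-out scoring scheme, which is what makes claims of parity with professional experts testable.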
70-88
Artificial Intelligence in Historical Research: a Virtual Assistant or a Generator of Quasi-Knowledge?
Abstract
The article examines the current issues of the application of artificial intelligence (AI) methods and technologies in historical research. It outlines two waves of AI development, with the second wave focusing on artificial neural networks, machine learning (including deep learning), generative AI, and large language models (LLMs). Two areas of AI use by historians are explored: the recognition and transcription of handwritten and early-printed historical texts, and the integration of large language models, chatbots, and generative neural networks into research practices. The article highlights the methodological and ethical challenges that arise when testing generative AI in historical research. A brief overview of relevant research is provided, covering areas such as the virtual reconstruction of fully or partially lost cultural heritage sites and the attribution of historical texts.
87-98
Artificial Intelligence and the Study of the Written Legacy of Peter the Great and his Associates
Abstract
The article presents the results of a collaborative project between historians and paleographers from the St. Petersburg Institute of History of the Russian Academy of Sciences and data analysis specialists from Sberbank. The partnership between the two institutions began in 2020, initiated by the Russian Historical Society. The team was tasked with training an artificial intelligence model to recognize the autographs of Peter the Great. The successful completion of this task and the creation of the “Digital Peter” model enabled them, in 2024, to begin developing a new algorithm for recognizing the handwriting of individuals from the Petrine era. The article details the team’s work on creating data sets for both models, which involved establishing principles for document selection, transcribing the texts, and annotating digital copies of the autographs. Furthermore, it outlines the key features of the electronic resources developed over the years of working with Peter's autographs, namely the “Digital Peter” recognition model and the “Autographs of Peter the Great” website.
99-109
Reflection and Artificial Intelligence: From the Psychology of Reflection to Reflexive Digital Practices of Human Development
Abstract
This article traces the evolution of the concept of reflection from its origins with J. Locke and introspective psychology to its establishment as a distinct field within psychological science by the end of the 20th century. The author emphasizes the importance of the institutionalization of the psychology of reflection in the 1990s for developing reflective practices and psychotechniques used in counseling, training, psychotherapy, and education. The main semantic focus of the article is on the revolutionary impact of digital technologies and artificial intelligence (AI) on uncovering new possibilities for reflection in the 21st century. It demonstrates that the integration of AI into human activity transforms reflective processes and related practices. This gives rise to the phenomenon of “bi-reflection” during the interaction between human consciousness and machine intelligence, manifesting in the emergence of “digital centaurs” and new reflective-digital practices. These practices expand cognitive and personal resources while simultaneously raising questions about changing subjectivity, identity, and ethics. They also stimulate the "humanization" of AI by endowing it with functions of creativity and reflectivity. The author presents two promising conceptual developments. The first is the Digital Angel (DA): a personal AI agent that acts as a psychological buffer and protection against the risks of the digital world (such as cyber threats and competition with AGI). The DA provides a secure digital space and analyzes user data to enhance their reflection and self-development. The second is the Digital Alter-Ego (DAE): a technology for “constructing” and “experiencing” new personal roles in virtual and augmented realities. The DAE allows individuals to activate latent abilities and resources, creating not a “zone of proximal development” (per L.S. Vygotsky) but a “zone of distal development” – offering prospects for self-realization through interaction with AI. 
The article argues that DA and DAE technologies, grounded in Russian advancements in the psychology of reflection, define a new vector for the digital humanities and hybrid reflective practices.
110-121
PARALLELS BETWEEN NATURAL AND ARTIFICIAL INTELLIGENCE
Multilinguality in Language Modeling: Tasks, Data, and Opportunities for Typological Resources
Abstract
This paper addresses the significant challenge of building language technologies for the majority of the world's under-resourced languages, which lack the large text corpora and annotated datasets necessary for modern machine learning. While advances in Large Language Models (LLMs) have revolutionized machine translation and reading comprehension, these models often underperform or fail entirely for languages with limited written resources. We present an overview of current multilingual support in LLMs and evaluate their ability to understand the primary available knowledge source for such languages: descriptive grammars. To effectively utilize this structured but complex information, we propose a Retrieval-Augmented Generation (RAG) framework. This approach enables models to accurately extract and interpret linguistic features from grammatical texts, facilitating downstream tasks like machine translation. Our evaluation provides the first comprehensive assessment of model performance on this critical task, covering grammatical descriptions of 248 languages from 142 language families. The analysis focuses on the typological characteristics of the WALS [1] and Grambank [2] databases. By assessing the ability of language models to accurately interpret and extract linguistic features in context, the proposed approach creates a critical resource for scaling technologies to under-resourced languages. Code and data from this study are made publicly available: https://github.com/al-the-eigenvalue/RAG-on-grammars.
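The retrieval step of such a RAG pipeline can be sketched minimally as follows; the grammar passages, query, and term-overlap scoring below are hypothetical simplifications (real systems typically rank with dense embeddings), not the paper's implementation.

```python
# Minimal sketch of RAG retrieval over a descriptive grammar: rank passages by
# relevance to a typological question, then assemble the top hits into a prompt.
# Passages and scoring are illustrative assumptions, not the paper's pipeline.

def retrieve(query, passages, k=2):
    """Rank passages by the number of query terms they share; keep the top k."""
    terms = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(terms & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, passages):
    """Assemble the retrieved context and the question into an LLM prompt."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return f"Context from the grammar:\n{context}\n\nQuestion: {query}\nAnswer:"

grammar = [
    "The basic word order of the language is subject-object-verb.",
    "Nouns are inflected for seven cases, including ergative and dative.",
    "Verbs agree with the subject in person and number.",
]
prompt = build_prompt("What is the basic word order of the language?", grammar)
print(prompt)
```

The point of the retrieval stage is precisely what the abstract describes: grounding the model's answer in the relevant fragment of a long, structurally complex grammatical description instead of relying on the model's (often absent) prior knowledge of the language.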
122-135

