Vol 15, No 1 (2025)
EDITORIAL
The path to science



GENERAL ISSUES OF FORMALIZATION IN DESIGNING: ONTOLOGICAL ASPECTS
On misconceptions in modern logic
Abstract
The article challenges three "indisputable" tenets of modern logic: the alleged inconsistency of the concept of a "set," the absolute necessity of axioms in logic, and the infallibility of syllogistic reasoning. To address the first misconception, the authors propose incorporating the algebra of sets into the foundations of logic, following the approach outlined in R. Courant and H. Robbins' book What is Mathematics? The second misconception is addressed by deriving the known laws of the algebra of sets, which correspond to classical logic, through the enumeration method. The third misconception is resolved by developing a mathematical model of polysyllogistic reasoning based on the laws of the algebra of sets. The novelty of the proposed reasoning model lies in introducing restrictions alongside premises, with any violation of these restrictions signaling an error in reasoning. The model extends the capabilities of logical analysis, enabling the detection of errors in traditional syllogistics, including cases where correct inferences have been classified as "incorrect" modes. Furthermore, new laws of the algebra of sets are formulated and justified: the law of paradox, the condition of non-empty intersection, and the law of existence.
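
The enumeration method mentioned above can be illustrated with a short sketch: a set-algebra identity holds exactly when it holds for every combination of membership values of an arbitrary element. The sketch below (Python, using the textbook De Morgan law rather than the article's newly proposed laws, which the abstract does not state in full) enumerates those combinations.

```python
from itertools import product

# Verify a set-algebra identity by enumeration: for every combination of
# membership values of an element x in sets A and B, both sides must agree.
# Illustrated with De Morgan's law:
#   complement(A | B) == complement(A) & complement(B)

def check_identity(lhs, rhs, n_sets=2):
    """Enumerate all membership combinations of one element in n_sets sets."""
    for membership in product([False, True], repeat=n_sets):
        if lhs(*membership) != rhs(*membership):
            return False
    return True

# x in complement(A | B)  <=>  not (x in A or x in B)
lhs = lambda a, b: not (a or b)
# x in complement(A) & complement(B)  <=>  (x not in A) and (x not in B)
rhs = lambda a, b: (not a) and (not b)

print(check_identity(lhs, rhs))  # True: the identity holds in all cases
```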



An approach to solid modeling of geometric objects in point calculus
Abstract
The development of computer-aided design systems involves a combination of fundamental and applied research. The conceptual foundation of the mathematical framework for such systems is the notion of a complete geometric body: a geometric set of points in which the number of active parameters matches the dimensionality of the space, so that the geometric body is represented as a distinct part of that space. The analytical representation of these point sets is achieved through the mathematical apparatus of point calculus, which can be generalized to multidimensional spaces. The article compares the proposed approach to solid modeling of geometric objects with existing methods. Examples demonstrating the modeling of geometric bodies using the new approach are provided. The advantages of the approach are emphasized, among them the compactness of the analytical description, the elimination of transformation matrices, and the support for parallel computations within the mathematical framework. Additionally, the article explores the capabilities of modeling geometric bodies in point calculus, such as the representation of isotropic and anisotropic bodies as solid geometric objects with a functionally controlled linear or nonlinear spatial structure.
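
As a rough illustration of the "complete geometric body" idea (a sketch in ordinary vector notation, not the article's point-calculus formalism), the snippet below parameterizes a solid tetrahedron in 3D with three active parameters, so that the parameter count matches the space dimension.

```python
import numpy as np

# A solid body written as a point equation with three active parameters
# (u, v, w) in [0,1]^3, built from nested linear point combinations on
# four base points. This is an illustrative analogue, not the article's
# point-calculus notation.

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])
D = np.array([0.0, 0.0, 1.0])

def tetra_point(u, v, w):
    """Point equation M(u, v, w): three active parameters -> a 3D solid."""
    P = A + (B - A) * u      # segment AB
    Q = P + (C - P) * v      # triangle ABC
    return Q + (D - Q) * w   # solid tetrahedron ABCD

# Sampling the parameter cube fills the body with points.
pts = np.array([tetra_point(u, v, w)
                for u in np.linspace(0, 1, 5)
                for v in np.linspace(0, 1, 5)
                for w in np.linspace(0, 1, 5)])
print(pts.shape)  # (125, 3)
```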



Conceptual and notional analysis: a general approach
Abstract
The paper suggests distinguishing between the methodologies of conceptual analysis and notional analysis. The difference lies in how the notion and concept are interpreted. In the paper, a notion is viewed as an individual opinion or understanding of something, whereas a concept represents a general notion or abstract idea. It is assumed that many similar concepts exist with varying scopes and contents. A concept is treated as an objectified notion developed and solidified through social experience. The primary representation of the results of established conceptual analysis methods is shown to be a semantic network, where the vertices are concepts and the arcs represent relationships between them. However, it is noted that classical set theory and semantic networks are insufficient to formalize notions and concepts; instead, the theory of structures must be employed. Conceptual analysis is described as a technique for synthesizing formal concept descriptions using Boolean and Cartesian construction operations. The challenges of applying conceptual analysis within the framework of the theory of structures are outlined. The development of notional analysis is linked to the demand for precise, unambiguous, and consistent notion descriptions, driven by the variability of their properties. Notional analysis is defined as a method for constructing formal notion descriptions using generalization and association operations. Generalization entails combining notions such that the entities of the generalized notions become entities of the resulting notion. Association involves merging notions, with each entity of the resulting notion including one entity from each of the associated notions. Notional analysis addresses issues encountered when describing concepts using a corpus-based approach by generating diverse descriptions of notions. It is demonstrated that both conceptual and notional analyses are rooted in the theory of structures. The paper identifies areas where these analyses are effectively applied and provides an example of using notional analysis in the design of a technical device.
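
The two operations defined above have a direct set-theoretic reading. The following minimal sketch, assuming notions are modeled simply as sets of entities (the paper itself works in the theory of structures), implements generalization as a union of entity sets and association as a Cartesian product.

```python
from itertools import product

def generalize(*notions):
    """Generalization: entities of the generalized notions become
    entities of the resulting notion (set union)."""
    result = set()
    for notion in notions:
        result |= notion
    return result

def associate(*notions):
    """Association: each entity of the resulting notion includes one
    entity from each associated notion (Cartesian product)."""
    return set(product(*notions))

# Illustrative entity sets for two notions in a technical-device design.
resistor  = {"R1", "R2"}
capacitor = {"C1"}

print(generalize(resistor, capacitor))  # {'R1', 'R2', 'C1'}  e.g. "component"
print(associate(resistor, capacitor))   # {('R1','C1'), ('R2','C1')}  e.g. "RC pair"
```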



APPLIED ONTOLOGY OF DESIGNING
Building a knowledge graph from telecommunication data
Abstract
The paper presents a method for building a knowledge graph from telecommunication data using proprietary and reference models commonly employed in the telecommunications domain. Reference models are based on those included in the framework developed by the TM Forum consortium. The proposed approach involves building a knowledge graph for proprietary models through automated processing of autotest log files and billing system database tables. The relevance of knowledge graphs stems from their structured and semantic nature, as well as their potential for applying machine learning algorithms to generate recommendations for optimizing telecommunication processes and systems. A method based on a multi-step reasoning approach is proposed for creating interpretable recommendations by predicting and restoring missing links in a proprietary knowledge graph. This approach treats multi-step reasoning as a question-answering task using natural language processing techniques. The implementation of the proposed solution, leveraging a transformer-based neural network architecture, yielded interpretable results while maintaining metric values comparable to existing methods.
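
As a sketch of the automated-processing step, the snippet below turns hypothetical autotest log lines into subject-predicate-object triples, the raw material of a proprietary knowledge graph. The log format and predicate names are illustrative, not the paper's.

```python
import re

# Hypothetical log format: "<test> ran on <node> with status <status>".
LOG_LINE = re.compile(r"(?P<test>\w+) ran on (?P<node>\w+) with status (?P<status>\w+)")

def triples_from_log(lines):
    """Parse log lines into a set of (subject, predicate, object) triples."""
    graph = set()
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        test, node, status = m.group("test", "node", "status")
        graph.add((test, "ranOn", node))
        graph.add((test, "hasStatus", status))
    return graph

logs = ["smoke_042 ran on bss_node_7 with status FAILED",
        "smoke_043 ran on bss_node_7 with status PASSED"]
for t in sorted(triples_from_log(logs)):
    print(t)
```

Link prediction and multi-step reasoning, as described in the abstract, would then operate over such a triple store to restore edges the logs do not state explicitly.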



Analysis of patient reviews using machine learning and linguistic methods
Abstract
With the advancement of digitalization, traditional methods of surveying consumers to assess their satisfaction with service quality are being replaced by approaches based on the automatic processing of text data from social media. This study aims to determine the degree of patient satisfaction with the quality of medical services by developing and testing an algorithm for classifying Russian-language text reviews collected from social media platforms. The focus is on analyzing the sentiment (positive/negative) of patient reviews about medical institutions and doctors, as well as identifying the review's subject—either the quality of medical services provided or the organization of patient care by the institution. A method was developed for classifying text reviews about the work of medical institutions posted by patients on two Russian doctor review platforms. Approximately 60,000 reviews were analyzed. Machine learning techniques, including various artificial neural network architectures, were tested. The classification algorithm demonstrated high efficiency, with the best performance achieved using a recurrent neural network architecture (accuracy = 0.9271). Incorporating named entity recognition into text analysis further enhanced the classification efficiency across all neural network-based classifiers. To improve classification quality, the study highlights the need for semantic segmentation of reviews by their subject and sentiment, followed by the separate analysis of these fragments.
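
A recurrent architecture of the kind reported as best-performing might look like the following sketch (TensorFlow/Keras; the vocabulary size, sequence length, and layer widths are placeholders, not the study's settings).

```python
import tensorflow as tf

VOCAB_SIZE, MAX_LEN = 20000, 200  # placeholder tokenizer parameters

# Tokenized review -> embedding -> bidirectional LSTM -> sigmoid sentiment.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```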



A method for developing intelligent simulators based on ontology of the subject domain
Abstract
In academic disciplines, students are required to learn many new concepts, which necessitates extensive training with feedback. An intelligent simulator enables students to practice solving simple problems while receiving explanations for their mistakes, allowing teachers to focus on addressing more complex issues during lessons. This paper presents a method for developing intelligent simulators based on the ontology of the subject domain, implemented as web applications suitable for both classroom and extracurricular use. Representing the problem and the subject area model in RDF format enables logical inference using the Apache Jena Reasoner inference engine. An example is provided of an intelligent simulator designed for learning the order of operations in expressions, supporting the C++, C#, and Python programming languages. The simulator can explain errors, generate explanatory hints, and engage in educational dialogue through guiding questions. The simulator was tested with undergraduate and graduate students of the Faculty of Electronics and Computer Engineering at Volgograd State Technical University. Most students found the simulator to be more useful than traditional training tests. It can be employed both for independent study and as part of the educational process in classroom settings.
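
The paper performs inference with Apache Jena, a Java framework; the analogous Python sketch below uses rdflib to show the idea of encoding the domain model as RDF triples and answering a tutoring question with a query. All names in the EX namespace are hypothetical, not the paper's ontology.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

EX = Namespace("http://example.org/sim#")
g = Graph()
g.bind("ex", EX)

# Domain model: operator precedence facts for an expression like a + b * c.
g.add((EX.mul, RDF.type, EX.Operator))
g.add((EX.add, RDF.type, EX.Operator))
g.add((EX.mul, EX.precedence, Literal(2)))
g.add((EX.add, EX.precedence, Literal(1)))

# "Which operator is evaluated first?" as a SPARQL query over the model.
q = """
SELECT ?op WHERE {
  ?op ex:precedence ?p .
} ORDER BY DESC(?p) LIMIT 1
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.op)  # http://example.org/sim#mul
```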



Comparative assessment of approaches to training bachelor's students in aircraft engine design
Abstract
The paper presents preliminary results of a comparative assessment of two modern approaches to bachelor's degree training: traditional classroom methods and training with immersive technologies, evaluated by the criterion of knowledge acquisition efficiency in the degree program 24.03.05 Aircraft Engines. The platform "Virtual pavilion for studying of aircraft engines design" was used as the immersive educational environment. It is a set of interconnected educational spaces: a virtual hangar, a training laboratory, an engine design class, and a workshop-simulator. The platform is equipped with tools for interacting with objects during practical tasks, IT databases integrated into the environment that contain information about the studied engines, and interactive models of those engines as digital replicas of the exhibits displayed in the Center for the History of Aircraft Engines at Samara University. An attempt was made to assess the quality of students' learning in the discipline "Introduction to the Specialty" using testing methods. The analysis results suggest that employing immersive technologies in studying complex technical objects can reduce reliance on traditional aids such as drawings, diagrams, cutaway models, and full-scale samples. Instead, virtual simulators, laboratories, libraries, and other tools, including those supporting distance learning, can be increasingly utilized.



ONTOLOGY ENGINEERING
Detecting and explaining anomalies in industrial Internet of Things systems using an autoencoder
Abstract
In industrial Internet of Things (IoT) systems, explaining anomalies plays a crucial role in identifying bottlenecks and optimizing processes. This paper proposes an approach to anomaly detection using an autoencoder and its explanation based on the SHAP method. The purpose of the anomaly explanation is to provide a set of data features in industrial IoT systems that most significantly influence anomaly detection. The novelty of this approach lies in its ability to quantify the contribution of individual features for specific data samples and to calculate an average contribution across the dataset, providing a feature importance ranking. The proposed approach is tested on Industrial IoT datasets with varying feature counts and data volumes. The anomaly detection achieves an F-measure of 88-93%, outperforming the comparable methods discussed. The study demonstrates how explainable artificial intelligence can identify the causes of anomalies in both individual samples and datasets as a whole. The theoretical importance of the proposed approach lies in its ability to shed light on the workings of intelligent detection models, enabling the identification of factors influencing their outcomes and uncovering previously unnoticed patterns. In practice, this method enhances security system operators' understanding of ongoing processes, aiding in threat identification and error detection within data.
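
A minimal sketch of the described pipeline, assuming tabular IoT features (the architecture, data, and threshold below are placeholders, not the paper's settings): an autoencoder scores each sample by reconstruction error, and KernelSHAP attributes that score to individual features.

```python
import numpy as np
import tensorflow as tf
import shap

n_features = 8
x_train = np.random.rand(500, n_features).astype("float32")  # stand-in data

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(4, activation="relu"),  # encoder (bottleneck)
    tf.keras.layers.Dense(n_features),            # decoder
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=5, verbose=0)

def reconstruction_error(x):
    """Per-sample anomaly score: mean squared reconstruction error."""
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(x_train), 99)

# Explain the anomaly score with KernelSHAP: per-feature contributions.
explainer = shap.KernelExplainer(reconstruction_error, x_train[:50])
sample = x_train[:1]
contributions = explainer.shap_values(sample)
print("anomalous:", reconstruction_error(sample)[0] > threshold)
print("feature contributions:", contributions)
```

Averaging such contributions over a dataset yields the feature-importance ranking the abstract describes.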



Information extraction from texts based on ontology and large language models
Abstract
The article examines the extraction of information from texts using a subject-area ontology combined with neural network-based text analysis methods, including large language models. It discusses the expert's role in developing and maintaining such systems, illustrated through the task of extracting information from analytical articles and constructing ontologies in computational linguistics to represent key concepts relevant to the system's user or customer. The process of ontology creation is accompanied by the development of a dictionary that forms the ontology's terminological core, followed by methods for extracting new terms within the specified subject area. This task is treated as a named entity recognition problem, traditionally addressed by training a neural network model on a representative dataset. The study compares this approach with a methodology leveraging large language models. To this end, lexico-syntactic patterns were developed, along with instruction templates for testing hypotheses about new term-phrases and for verifying the results. The instructions developed for relation extraction also include the automated generation of natural-language competency questions for each ontology relation. The novelty of the proposed approach lies in the integration of ontological, linguistic, and neural network techniques for extracting information from texts. The study demonstrates that text analysis and information extraction problems can be solved through a chain of large language models, with instructions generated dynamically from the outcomes of prior analysis stages. The following F1 scores were achieved in the experiments: F1 = 0.8 for term extraction and classification and F1 = 0.87 for relation extraction.
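
The chained-instruction idea can be sketched as follows; `ask_llm` is a hypothetical stand-in for any LLM completion client, and the prompt texts are illustrative, not the article's instruction patterns.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call; plug in a real client."""
    raise NotImplementedError

TERM_PROMPT = (
    "Domain: computational linguistics.\n"
    "Is the phrase '{phrase}' a domain term? Answer yes or no."
)
RELATION_PROMPT = (
    "Terms: {a} and {b}.\n"
    "Name the ontology relation between them, or answer 'none'."
)

def extract(candidates):
    """Stage 1 filters term candidates; stage 2 prompts are generated
    dynamically from stage 1 results, mirroring the chained approach."""
    terms = [p for p in candidates
             if ask_llm(TERM_PROMPT.format(phrase=p)).strip().lower().startswith("yes")]
    relations = []
    for a in terms:
        for b in terms:
            if a != b:
                rel = ask_llm(RELATION_PROMPT.format(a=a, b=b)).strip()
                if rel.lower() != "none":
                    relations.append((a, rel, b))
    return terms, relations
```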



METHODS AND TECHNOLOGIES OF DECISION MAKING
Assessment of opinion consistency of heterogeneous actors at the pre-project stage
Abstract
The findings presented in this paper rely on system archetypes, which serve as unified conceptual models for addressing organizational challenges in managing complex systems. These archetypes facilitate the creation of comparable descriptions of problem situations by accommodating differing perspectives of stakeholders (actors) on the factors hindering the integration of local systems. The research framework integrates system archetypes with the House of Quality model and employs correlation coefficients—rank, partial, and multiple. An example of multi-faceted modeling is provided, wherein the calculated correlation coefficients act as indicators of the degree of alignment in actors' views regarding the factors of the problem situation. Multiple correlation coefficients are interpreted as measures of the consistency of an individual actor’s opinions with those of the collective group, while partial correlation coefficients gauge the alignment of opinions between specific pairs of actors. The proposed methodology for quantitatively assessing the alignment of heterogeneous actors' perspectives enables the evaluation of the effectiveness of initiatives aimed at building a communication framework that supports the development of a shared understanding of the significance of various factors in a problem situation.
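
Assuming each actor ranks the same set of factors, the three coefficient types can be computed as in the sketch below (illustrative data): Spearman correlations for pairs of actors, partial correlations with a third actor's influence removed, and a multiple correlation of one actor against the rest of the group.

```python
import numpy as np
from scipy.stats import spearmanr

ranks = np.array([  # rows: actors, columns: rank each actor assigns to a factor
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])

# Pairwise rank (Spearman) correlations between actors (variables in rows).
r, _ = spearmanr(ranks, axis=1)

def partial_corr(r, i, j, k):
    """Alignment of actors i and j with actor k's influence removed."""
    return (r[i, j] - r[i, k] * r[j, k]) / np.sqrt((1 - r[i, k]**2) * (1 - r[j, k]**2))

def multiple_corr(r, i):
    """Consistency of actor i's opinions with the rest of the group jointly."""
    others = [j for j in range(r.shape[0]) if j != i]
    r_io = r[np.ix_([i], others)].ravel()
    R_oo = r[np.ix_(others, others)]
    return np.sqrt(r_io @ np.linalg.solve(R_oo, r_io))

print(partial_corr(r, 0, 1, 2))  # actors 0 and 1, controlling for actor 2
print(multiple_corr(r, 0))       # actor 0 vs. the group
```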



Mathematical modeling of non-stationary heat transfer in selective laser melting based on machine learning
Abstract
The article considers machine learning-based numerical modeling of thermal processes in 3D printing with selective laser melting technology. A mathematical model of non-stationary heat transfer in a rod with a variable cross-section is developed as a partial differential equation describing the rod's temperature. An algorithm for numerically solving this equation is proposed and implemented in MATLAB. It is shown that, for certain initial conditions, the temperature distribution becomes quasi-stationary, and for this case a simple analytical expression for the temperature field is obtained. A neural network is constructed and trained using the TensorFlow library, with training data obtained from the analytical solution of the thermal problem. The neural network's results align with the solutions of the original mathematical model. The article highlights that three-dimensional modeling of the printing process for real-world products demands substantial computational resources. It is shown that machine learning-based models can effectively approximate the temperature field in 3D printing with selective laser melting for components of similar geometry.
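
The abstract does not reproduce the equation, but a non-stationary heat-transfer model for a rod with variable cross-section A(x) typically takes the form

```latex
c\,\rho\,A(x)\,\frac{\partial T}{\partial t}
  = \frac{\partial}{\partial x}\!\left(\lambda\,A(x)\,\frac{\partial T}{\partial x}\right)
```

and the surrogate-model step can be sketched as training a small network on (x, t) -> T samples drawn from an analytical solution. The exponential profile below is a placeholder, not the article's quasi-stationary formula.

```python
import numpy as np
import tensorflow as tf

def analytic_T(x, t):
    """Placeholder analytic temperature field used to generate training data."""
    return 300.0 + 700.0 * np.exp(-5.0 * x) * np.exp(-0.1 * t)

x = np.random.rand(2000, 1)
t = np.random.rand(2000, 1)
X = np.hstack([x, t]).astype("float32")
y = analytic_T(x, t).astype("float32")

# Small fully connected surrogate: (x, t) -> T.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)
print(model.predict(np.array([[0.5, 0.2]], dtype="float32"), verbose=0))
```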


