Vol 23, No 6 (2024)
Mathematical modeling and applied mathematics
A Compositional Approach to the Simulation of Queuing Systems with Random Parameters



A Study of Options for Constructing Information Management Systems Based on Network Models of Queuing Systems



Solving Path Search Problems in Complex Graphs



Computational Technology for Constructing Shell Models of Magnetohydrodynamic Turbulence



Increasing the Reliability of Anomaly Detection in Images by Forming Their Feature Vectors in Wavelet Bases



Artificial intelligence, knowledge and data engineering
Approaches for Behavior Intensity Estimation in Groups of Heterogeneous Individuals: Precision and Applicability for Data with Uncertainty



Enhanced Machine Learning Framework for Autonomous Depression Detection Using ModWave Cepstral Fusion and Stochastic Embedding
Abstract
Depression is a prevalent mental illness whose complexity calls for autonomous detection systems. Existing machine learning techniques face challenges such as sensitivity to background noise, slow adaptation, and imbalanced data. To address these limitations, this study proposes a novel ModWave Cepstral Fusion and Stochastic Embedding Framework for depression prediction. First, the Gain Modulated Wavelet Technique removes background noise and normalises the audio signals. Difficulties with generalisation, which result in a lack of interpretability, hinder the extraction of relevant characteristics from speech. To address this, Auto Cepstral Fusion extracts relevant features from speech, capturing the temporal and spectral characteristics affected by background voices. Feature selection then becomes imperative, since choosing irrelevant features can lead to overfitting, the curse of dimensionality, and reduced robustness to noise. Hence, the Principal Stochastic Embedding technique handles high-dimensional data, minimising the influence of noise and dimensionality. Finally, an XGBoost classifier differentiates between depressed and non-depressed individuals. Evaluated on the DAIC-WOZ dataset from USC, the proposed method achieves an accuracy of 97.02%, precision of 97.02%, recall of 97.02%, F1-score of 97.02%, RMSE of 2.00, and MAE of 0.9, making it a promising tool for autonomous depression detection.
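The abstract's named components (Gain Modulated Wavelet Technique, Auto Cepstral Fusion, Principal Stochastic Embedding) are not publicly specified, so the sketch below only mirrors the overall pipeline shape with common stand-ins: wavelet soft-thresholding for denoising, MFCC statistics for the cepstral features, PCA for the embedding, and XGBoost for classification. All hyperparameters are illustrative, not values from the paper.

```python
# Hedged sketch of a denoise -> cepstral features -> embedding -> XGBoost pipeline.
import numpy as np
import pywt
import librosa
from sklearn.decomposition import PCA
from xgboost import XGBClassifier

def denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising (stand-in for gain-modulated wavelets)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise-level estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def cepstral_features(signal, sr=16000, n_mfcc=20):
    """Mean and std of MFCCs as a simple temporal-spectral summary of the speech."""
    mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train(signals, labels, sr=16000):
    """signals: list of 1-D audio arrays; labels: 1 = depressed, 0 = non-depressed."""
    X = np.stack([cepstral_features(denoise(s), sr) for s in signals])
    emb = PCA(n_components=min(16, X.shape[1]))                # stand-in embedding
    X_low = emb.fit_transform(X)
    clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    clf.fit(X_low, labels)
    return emb, clf
```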



Phoneme-by-Phoneme Speech Recognition as a Classification of Series on a Set of Sequences of Elements of Complex Objects Using an Improved Trie-Tree



Ruzicka Indexive Throttled Deep Neural Learning for Resource-Efficient Load Balancing in a Cloud Environment
Abstract
Cloud Computing (CC) is a prominent technology that permits users and organizations to access services based on their requirements. It provides storage, deployment platforms, and convenient access to web services over the Internet. Load balancing is a crucial factor in optimizing computing and storage: it aims to distribute the workload across all virtual machines in a reasonable manner. Several load balancing techniques have been developed and are available in the literature; however, achieving efficient load balancing with minimal makespan and improved throughput remains challenging. To enhance load balancing efficiency, a novel technique called Ruzicka Indexive Throttle Load Balanced Deep Neural Learning (RITLBDNL) is designed. Its primary objective is to enhance throughput and minimize makespan in the cloud. In the RITLBDNL technique, a deep neural learning model with one input layer, two hidden layers, and one output layer improves load balancing performance. In the input layer, incoming cloud user tasks are collected and sent to hidden layer 1, where the load balancer in the cloud server analyzes the resource status of each virtual machine (energy, bandwidth, memory, and CPU) using the Ruzicka Similarity Index and classifies VMs as overloaded, less loaded, or balanced. The analysis results are then transmitted to hidden layer 2, where Throttled Load Balancing redistributes the workload of heavily loaded virtual machines to less loaded ones. The cloud server thus balances the workload between virtual machines with higher throughput and lower response time and makespan when handling a large number of incoming tasks. In the experiments, the proposed technique is compared with existing load balancing methods. The results show that RITLBDNL achieves 7% higher load balancing efficiency, 46% higher throughput, 41% lower makespan, and 28% lower response time than conventional methods.
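The Ruzicka similarity index itself is a standard measure, the sum of element-wise minima divided by the sum of element-wise maxima. The sketch below shows how such a score could drive the VM classification and throttled dispatch described in the abstract; the thresholds, the resource-vector layout, and the dispatch loop are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch of Ruzicka-index VM classification and a throttled dispatch step.
import numpy as np

def ruzicka(x, y):
    """Ruzicka similarity: sum(min(x_i, y_i)) / sum(max(x_i, y_i))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

def classify_vm(usage, capacity, hi=0.85, lo=0.5):
    """Compare current usage (energy, bandwidth, memory, CPU) to full capacity.

    Thresholds hi/lo are illustrative; a score near 1.0 means the VM is saturated.
    """
    s = ruzicka(usage, capacity)
    if s >= hi:
        return "overloaded"
    if s <= lo:
        return "less loaded"
    return "balanced"

def throttled_dispatch(task_queue, vms):
    """Assign each task to the least-loaded non-overloaded VM (static snapshot).

    vms: list of dicts with keys "id", "usage", "capacity".
    """
    assignments = []
    for task in task_queue:
        candidates = [v for v in vms
                      if classify_vm(v["usage"], v["capacity"]) != "overloaded"] or vms
        target = min(candidates, key=lambda v: ruzicka(v["usage"], v["capacity"]))
        assignments.append((task, target["id"]))
    return assignments
```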



Information security
Synergistic Approaches to Enhance IoT Intrusion Detection: Balancing Features through Combined Learning
Abstract
Security plays a crucial role in the Internet of Things (IoT): unauthorized access, malware infections, and other malicious activities must be prevented, and network traffic and device behaviour must be monitored to identify potential threats and take appropriate mitigation measures. However, there is a need for an IoT Intrusion Detection System (IDS) with enhanced generalization capabilities, leveraging deep learning and advanced anomaly detection techniques. This study presents an innovative approach to IoT IDS that combines SMOTE-Tomek links and BTLBO with a CNN and an XGB classifier, aiming to address data imbalance, improve model performance, reduce misclassifications, and improve overall dataset quality. The proposed IoT IDS, evaluated on the IoT-23 dataset, achieves 99.90% accuracy and a low error rate while requiring significantly less execution time. This work represents a significant step forward in IoT security, offering a robust and efficient IDS solution tailored to the evolving challenges of an interconnected world.
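A minimal sketch of the resampling-plus-boosting portion of such a pipeline, assuming tabular IoT-23 features: SMOTE-Tomek from imbalanced-learn rebalances the training split and XGBoost performs the final classification. The BTLBO feature-selection and CNN stages of the paper are omitted here, and the hyperparameters are illustrative.

```python
# Hedged sketch: class rebalancing with SMOTE-Tomek followed by an XGBoost IDS model.
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

def train_ids(X, y, seed=42):
    """X: numeric flow features; y: 0 = benign, 1 = attack (binary for simplicity)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    # SMOTE oversamples the minority class, Tomek links prune borderline majority
    # samples; only the training split is resampled to avoid leakage.
    X_bal, y_bal = SMOTETomek(random_state=seed).fit_resample(X_tr, y_tr)
    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
    clf.fit(X_bal, y_bal)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```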



Convolutional-free Malware Image Classification using Self-attention Mechanisms
Abstract
Malware analysis is a critical aspect of cybersecurity, aiming to identify and differentiate malicious software from benign programmes to protect computer systems from security threats. Despite advancements in cybersecurity measures, malware continues to pose significant risks in cyberspace, necessitating accurate and rapid analysis methods. This paper introduces an innovative approach to malware classification using image analysis, involving three key phases: converting operation codes into RGB image data, employing a Generative Adversarial Network (GAN) for synthetic oversampling, and utilising a simplified Vision Transformer (ViT)-based classifier for image analysis. The method enhances feature richness and explainability through visual imagery data and addresses imbalanced classification using GAN-based oversampling techniques. The proposed framework combines the strengths of convolutional autoencoders, hybrid classifiers, and adapted ViT models to achieve a balance between accuracy and computational efficiency. As shown in the experiments, our convolutional-free approach achieves excellent accuracy and precision compared with convolutional models and outperforms CNN models on two datasets, thanks to the multi-head attention mechanism. On the Big2015 dataset, our model outperforms other CNN models with an accuracy of 0.8369 and an AUC of 0.9791. On MALIMG, it reaches an accuracy of 0.9697 and an F1 score of 0.9702, which is extraordinary.
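The paper's exact opcode-to-RGB encoding is not given; the sketch below shows one plausible version of that first phase, hashing each opcode mnemonic to a fixed colour and packing the sequence into a square image. The image size, the hashing scheme, and the example opcode list are assumptions for illustration only.

```python
# Hedged sketch of an "operation codes -> RGB image" conversion step.
import hashlib
import numpy as np
from PIL import Image

def opcode_to_rgb(op):
    """Map one opcode mnemonic to a deterministic RGB triple via hashing."""
    digest = hashlib.md5(op.encode("utf-8")).digest()
    return digest[0], digest[1], digest[2]

def opcodes_to_image(opcodes, side=64):
    """Pack an opcode sequence into a side x side RGB image (truncated or zero-padded)."""
    pixels = np.zeros((side * side, 3), dtype=np.uint8)
    for i, op in enumerate(opcodes[: side * side]):
        pixels[i] = opcode_to_rgb(op)
    return Image.fromarray(pixels.reshape(side, side, 3))

# Example: render a short (hypothetical) disassembly fragment as an image
img = opcodes_to_image(["push", "mov", "call", "xor", "jmp", "ret"] * 200)
img.save("sample_malware.png")
```

The resulting images could then be fed to a GAN for oversampling and to an attention-based classifier, as the abstract describes.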



Enhancing Video Anomaly Detection with Improved UNET and Cascade Sliding Window Technique
Abstract
Video anomaly detection in computer vision still needs improvement, especially when identifying frames with unusual motions or objects. Current approaches concentrate mainly on reconstruction and prediction methods, and unsupervised video anomaly detection suffers from a shortage of labelled abnormalities, which reduces accuracy. This paper presents a novel framework called Improved UNET (I-UNET), designed to counteract overfitting by addressing the need for complex models that can extract subtle information from video anomalies. Video frame noise is removed by preprocessing the frames with a Wiener filter. Moreover, the system uses Convolutional Long Short-Term Memory (ConvLSTM) layers to integrate temporal and spatial data smoothly into its encoder and decoder portions, improving the accuracy of anomaly identification. The Cascade Sliding Window Technique (CSWT) is used in post-processing to identify anomalous frames and generate anomaly scores. Compared to baseline approaches, experimental results on the UCF, UCSDped1, and UCSDped2 datasets demonstrate notable performance gains, with 99% accuracy, 90.8% Area Under the Curve (AUC), and a 10.9% Equal Error Rate (EER). This study provides a robust and accurate framework for video anomaly detection with the highest accuracy rate among the compared methods.
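A minimal sketch of the steps around such a model: Wiener-filter denoising of frames and a plain sliding-window thresholding of reconstruction errors as a simplified stand-in for the paper's Cascade Sliding Window Technique. The `reconstruct` callable stands in for the I-UNET/ConvLSTM encoder-decoder, and the window size and threshold rule are assumptions.

```python
# Hedged sketch: Wiener preprocessing, per-frame reconstruction error, window flags.
import numpy as np
from scipy.signal import wiener

def preprocess(frame):
    """Denoise a grayscale frame (2-D array) with a Wiener filter."""
    return wiener(frame.astype(float), mysize=5)

def frame_scores(frames, reconstruct):
    """Per-frame anomaly score = mean squared reconstruction error.

    `reconstruct` is a placeholder for the trained encoder-decoder model.
    """
    return np.array([np.mean((f - reconstruct(f)) ** 2)
                     for f in (preprocess(x) for x in frames)])

def sliding_window_flags(scores, win=8, k=2.5):
    """Flag frames whose windowed mean score exceeds mean + k*std over all windows.

    Assumes len(scores) >= win; a single fixed window replaces the paper's cascade.
    """
    windows = np.array([scores[i:i + win].mean()
                        for i in range(len(scores) - win + 1)])
    thr = windows.mean() + k * windows.std()
    flags = np.zeros(len(scores), dtype=bool)
    for i, w in enumerate(windows):
        if w > thr:
            flags[i:i + win] = True
    return flags
```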


