Programming and Computer Software

Programming and Computer Software is a peer-reviewed journal devoted to various problems in all areas of computer science: operating systems, compiler technology, software engineering, artificial intelligence, etc. The journal publishes original manuscripts submitted in English, as well as works translated from several other journals. The sources of content are indicated at the article level. The peer review policy of the journal is independent of the manuscript source, ensuring a fair and unbiased evaluation process for all submissions. As part of its aim to become an international publication, the journal welcomes submissions in English from all countries.
 

Peer review and editorial policy

The journal follows the Springer Nature Peer Review Policy, Process and Guidance, Springer Nature Journal Editors' Code of Conduct, and COPE's Ethical Guidelines for Peer-reviewers.

Each manuscript is assigned to at least one peer reviewer. The journal follows a double-blind reviewing procedure. The period from submission to the first decision is up to 52 days. The approximate rejection rate is 80%. The final decision on the acceptance of a manuscript for publication is made by the Editor-in-Chief.

If Editors, including the Editor-in-Chief, publish in the journal, they do not participate in the decision-making process for manuscripts where they are listed as co-authors.

Special issues published in the journal follow the same procedures as all other issues. If not stated otherwise, special issues are prepared by the members of the editorial board without guest editors.

Current Issue


Vol 45, No 8 (2019)

Article

Entity-Level Classification of Adverse Drug Reaction: A Comparative Analysis of Neural Network Models
Alimova I.S., Tutubalina E.V.
Abstract

An experimental study of the effectiveness of neural network models applied to the classification of adverse drug reactions at the entity level is described. Aspect-level sentiment analysis, which aims to determine the sentiment class of a specific aspect conveyed in user opinions, has been actively studied for more than 10 years, and a number of neural network architectures have been proposed for it. Even though the models based on these architectures have much in common, they differ in certain components. In this paper, the applicability of the neural network models developed for aspect-level sentiment analysis to the classification of adverse drug reactions is studied. Extensive experiments on English-language biomedical texts, including health records, scientific literature, and social media, have been conducted. The models are compared with one of the best models based on the support vector machine method and a large set of features.
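
As an informal illustration of the kind of SVM-based baseline mentioned above, the sketch below trains a bag-of-words classifier that labels an entity mention in its sentence context as an adverse drug reaction or not. The toy sentences, the [ENT] marker, and the feature choice are illustrative assumptions; the paper's actual corpora, feature set, and neural architectures are not reproduced here.

# Minimal sketch of an SVM-style baseline for entity-level ADR classification.
# Sentences, labels, and the [ENT] marker are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Each example pairs a sentence with the target entity (joined with a marker)
# so the classifier sees both the mention and its context.
train_texts = [
    "took ibuprofen and got a terrible headache [ENT] headache",
    "the headache went away after taking ibuprofen [ENT] headache",
    "this drug caused severe nausea [ENT] nausea",
    "no side effects, the nausea was from the flu [ENT] nausea",
]
train_labels = [1, 0, 1, 0]  # 1 = adverse drug reaction, 0 = not an ADR

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)
print(model.predict(["started the pills and now constant dizziness [ENT] dizziness"]))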

Programming and Computer Software. 2019;45(8):439-447
On Online Algorithms for Bin, Strip, and Box Packing, and Their Worst-Case and Average-Case Analysis
Lazarev D.O., Kuzyurin N.N.
Abstract

In this survey, we consider online algorithms for bin packing and strip packing problems, as well as their generalizations (multidimensional bin packing, multiple strip packing, and packing into strips of different widths). For the latter problem, only the worst-case analysis is described; for the other problems, both the worst-case and average-case (probabilistic) asymptotic ratios are presented. The best lower and upper bounds are considered. Basic algorithms and methods for their analysis are discussed.
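
For concreteness, the sketch below implements First Fit, one of the classic online bin-packing algorithms covered by such surveys: each arriving item goes into the first open bin where it fits, or a new bin is opened. Its worst-case asymptotic ratio is known to be 1.7. The item sizes are arbitrary example data.

# First Fit online bin packing: items arrive one by one and must be placed
# immediately, without knowledge of future items.
def first_fit(items, capacity=1.0):
    bins = []  # remaining free space in each open bin
    for size in items:
        for i, free in enumerate(bins):
            if size <= free + 1e-12:   # place into the first bin that fits
                bins[i] = free - size
                break
        else:
            bins.append(capacity - size)  # no bin fits: open a new one
    return len(bins)

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))  # number of bins used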

Programming and Computer Software. 2019;45(8):448-457
Building the Software-Defined Data Center
Shabanov B.M., Samovarov O.I.
Abstract

A data center is the most effective way of providing computational resources to a large number of users. The software-defined model is a modern approach to creating the computing infrastructure of a data center that allows user tasks to be processed in acceptable time and at acceptable cost. This paper formulates the general design requirements for an interagency data center and describes some problems and methods of planning and building software-defined data centers (deployment of computing systems optimized for maximum hardware utilization, software support for different classes of tasks, etc.).

Programming and Computer Software. 2019;45(8):458-466
Minimal Basis of the Syzygy Module of Leading Terms
Shokurov A.V.
Abstract

Systems of polynomial equations are among the most universal mathematical objects. Almost all problems of cryptographic analysis can be reduced to solving systems of polynomial equations; the corresponding direction of research is called algebraic cryptanalysis. In terms of computational complexity, systems of polynomial equations cover the entire range of possible variants, from the algorithmic unsolvability of Diophantine equations to well-known efficient methods for solving linear systems. Buchberger's method [5] reduces a system of algebraic equations to a system of a special type defined by a Gröbner basis of the original system, which enables the elimination of dependent variables. The Gröbner basis is determined with respect to an admissible ordering on the set of terms; the set of admissible orderings is infinite and even has the cardinality of the continuum. The most time-consuming step in finding a Gröbner basis by Buchberger's algorithm involves the S-polynomials, which form a system of generators of the K[X]-module of S-polynomials. Thus, the natural problem of finding a minimal system of generators of this module arises; the existence of such a system follows from Nakayama's lemma. In this paper, we propose an algorithm for constructing this basis for an arbitrary ordering.
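
The sketch below illustrates, with SymPy, the basic objects discussed here: a Gröbner basis of a small polynomial system and the S-polynomial of two of its elements. The example system is arbitrary, and the construction of a minimal generating system of the syzygy module proposed in the paper is not reproduced.

# Gröbner basis and S-polynomial of a toy system, using SymPy.
from sympy import symbols, groebner, LT, lcm, expand

x, y = symbols('x y')
F = [x**2 + y**2 - 1, x*y - 1]

G = groebner(F, x, y, order='lex')   # Gröbner basis under the lex ordering
print(list(G))

def s_polynomial(f, g, gens, order='lex'):
    # S(f, g) = (L / LT(f)) * f - (L / LT(g)) * g, where L = lcm(LT(f), LT(g)).
    ltf, ltg = LT(f, *gens, order=order), LT(g, *gens, order=order)
    L = lcm(ltf, ltg)
    return expand(L / ltf * f - L / ltg * g)

print(s_polynomial(F[0], F[1], (x, y)))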

Programming and Computer Software. 2019;45(8):467-472
A Method for Analyzing Code-Reuse Attacks
Vishnyakov A.V., Nurmukhametov A.R., Kurmangaleev S.F., Gaisaryan S.S.
Abstract

Nowadays, ensuring software security is of paramount importance. Software failures can have significant consequences, and malicious vulnerability exploitation can inflict immense losses. Large corporations pay particular attention to the investigation of computer security incidents. Code-reuse attacks based on return-oriented programming (ROP) are gaining popularity each year and can bypass even modern operating system protection mechanisms. Unlike ordinary shellcode, where instructions are placed sequentially in memory, a ROP chain consists of multiple small instruction blocks (called gadgets) and uses the stack to chain them together. This makes the analysis of ROP exploits more difficult. The main goal of this work is to simplify reverse engineering of ROP exploits. A method for analyzing code-reuse attacks that allows one to split the chain into gadgets, restore the semantics of each particular gadget, and restore the prototypes and parameter values of the system calls and functions invoked during the execution of the ROP chain is proposed. The semantics of each gadget is determined by its parameterized type. Each gadget type is defined by a postcondition (Boolean predicate) that must always be true after the gadget execution. The proposed method was implemented as a software tool and tested on real-world ROP exploits found on the Internet.
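
The toy sketch below conveys the idea of typing a gadget by a postcondition: a candidate parameterized type is accepted only if its Boolean predicate holds on the machine state after every trial execution of the gadget. The gadget here is a simplified Python model of "pop rdi; ret", not the binary-level analysis implemented in the paper's tool.

import random

def gadget_pop_rdi(state):
    # Model of 'pop rdi; ret': loads rdi from the top of the stack.
    new = dict(state)
    new["rdi"] = new["stack"].pop()
    return new

def post_load_reg(reg):
    # Postcondition of a "LoadReg(reg)" gadget type: after execution,
    # reg holds the value that was on top of the stack before execution.
    return lambda before, after: after[reg] == before["stack"][-1]

def matches(gadget, postcondition, trials=100):
    # Accept the type only if the predicate holds for every sampled state.
    for _ in range(trials):
        before = {"rdi": random.getrandbits(32),
                  "stack": [random.getrandbits(32) for _ in range(4)]}
        after = gadget({"rdi": before["rdi"], "stack": list(before["stack"])})
        if not postcondition(before, after):
            return False
    return True

print(matches(gadget_pop_rdi, post_load_reg("rdi")))  # True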

Programming and Computer Software. 2019;45(8):473-484
On Representation of Simulation Time in Functional Programming Style
Buzdalov D.V., Petrenko A.K., Khoroshilov A.V.
Abstract

Functional programming is becoming increasingly useful in the modern computerized world. This approach helps create code that is more reliable, easier to reason about, and automatically verifiable. However, these techniques are rarely employed for developing design tools and modeling critical systems. In this work, we try to apply some suitable techniques of functional programming to create a modeling system, namely, a simulation system for analyzing temporal behavioral properties of critical systems. As the first step, we design a representation of simulation time in terms of abstractions used in functional programming and investigate its composability.
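
As a rough illustration (in Python rather than a typed functional language) of representing simulation time as an immutable, composable value, the sketch below defines a time type with a neutral element and an associative composition; the nanosecond unit and the names are assumptions made only for this example.

from dataclasses import dataclass
from functools import reduce

@dataclass(frozen=True, order=True)
class SimTime:
    # Simulation time in nanoseconds; immutable and totally ordered.
    ns: int

    def __add__(self, other: "SimTime") -> "SimTime":
        return SimTime(self.ns + other.ns)

ZERO = SimTime(0)  # neutral element, so (SimTime, +, ZERO) behaves as a monoid

def total(durations):
    # Compose a sequence of durations into a single simulation-time value.
    return reduce(SimTime.__add__, durations, ZERO)

print(total([SimTime(5), SimTime(10), SimTime(3)]))  # SimTime(ns=18)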

Programming and Computer Software. 2019;45(8):485-496
Optimizing Access to Memory Pages in Software-Implemented Global Page Cache Systems
Gusev E.I.
Abstract

This paper is based on the dissertation “Techniques for organizing shared access to distributed memory pages in cloud computing systems” defended at the Igor Sikorsky Kyiv Polytechnic Institute in 2017. The paper describes distributed page processing in Oracle Real Application Clusters (Oracle RAC) and compares it with other well-known processing methods. The comparison includes an analysis of different architectures (including shared-nothing, shared-disk, and replication-based architectures) in terms of SQL query processing and asserts the soundness of the distributed page approach (also known as global cache fusion) for cloud database management systems (DBMSs). The analysis of the global cache fusion approach reveals the main drawback of Oracle RAC systems, the increasing queue problem: queries can no longer be processed once their rate exceeds a certain threshold inversely proportional to the packet delivery time between nodes. To eliminate the increasing queue problem when accessing distributed pages, a new access method is proposed that introduces an additional page state (the unloading state), which improves the efficiency of distributed page processing by reducing the number of transfers between nodes during hot page processing. In addition to cloud DBMSs, the proposed method can also be used in other cloud systems with a page-organized distributed memory architecture.
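
The toy sketch below only illustrates the idea of an extra page state that defers requests while a hot page is being handed off between nodes; the states and transition rules are invented for illustration and do not reproduce the Oracle RAC protocol or the method proposed in the paper.

from enum import Enum, auto

class PageState(Enum):
    LOCAL = auto()      # page is owned and served by this node
    REMOTE = auto()     # page is owned by another node
    UNLOADING = auto()  # page is being handed off; new requests are deferred

def handle_request(state, deferred, request):
    # While a hot page is unloading, queue the request instead of
    # transferring the page back and forth between nodes.
    if state is PageState.UNLOADING:
        deferred.append(request)
        return state
    return PageState.LOCAL  # otherwise serve (and claim) the page locally

queue = []
print(handle_request(PageState.UNLOADING, queue, "read(42)"), queue)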

Programming and Computer Software. 2019;45(8):497-505
Data-Oriented Scheduling with Dynamic-Clustering Fault-Tolerant Technique for Scientific Workflows in Clouds
Ahmad Z., Jehangiri A.I., Iftikhar M., Umer A.I., Afzal I.
Abstract

Cloud computing is one of the most prominent parallel and distributed computing paradigms. It is used to provide solutions for a huge number of scientific and business applications. Large-scale scientific applications, structured as scientific workflows, are evaluated through cloud computing. Scientific workflows are data-intensive applications, as a single scientific workflow may consist of hundreds of thousands of tasks. Task failures, deadline constraints, budget constraints, and improper management of tasks can also cause inconvenience. Therefore, the provision of fault-tolerant techniques with data-oriented scheduling is an important approach for the execution of scientific workflows in cloud computing. Accordingly, we present enhanced data-oriented scheduling with a dynamic-clustering fault-tolerant technique (EDS-DC) for the execution of scientific workflows in cloud computing. Data-oriented scheduling is presented as the proposed scheduling technique, and EDS-DC is additionally equipped with a dynamic-clustering fault-tolerant technique. To assess the effectiveness of EDS-DC, we compare its results with three well-known enhanced heuristic scheduling policies: (a) MCT-DC, (b) Max-min-DC, and (c) Min-min-DC. We consider the CyberShake scientific workflow as a case study, because it contains most of the characteristics of scientific workflows, such as integration, disintegration, parallelism, and pipelining. The results show that EDS-DC reduces the makespan by 10.9% compared to MCT-DC, by 13.7% compared to Max-min-DC, and by 6.4% compared to Min-min-DC. Similarly, EDS-DC reduces the cost by 4% compared to MCT-DC, by 5.6% compared to Max-min-DC, and by 1.5% compared to Min-min-DC. These results with respect to makespan and cost are highly significant for EDS-DC compared with the three scheduling policies referred to above. The SLA is not violated by EDS-DC with respect to time and cost constraints, while it is violated a number of times by the MCT-DC, Max-min-DC, and Min-min-DC scheduling techniques.
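
For reference, the sketch below implements the plain Min-min heuristic, one of the baseline policies that EDS-DC is compared against: at each step, the task with the smallest minimum completion time is scheduled on its best machine. The execution times are made-up example data; fault tolerance and data placement from the paper are not modeled.

def min_min(exec_time):
    # exec_time[t][m] = run time of task t on machine m.
    n_tasks, n_machines = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_machines            # when each machine becomes free
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # Pick the (task, machine) pair with the smallest completion time.
        t, m, finish = min(
            ((t, m, ready[m] + exec_time[t][m])
             for t in unscheduled for m in range(n_machines)),
            key=lambda x: x[2],
        )
        schedule[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return schedule, max(ready)           # assignment and makespan

print(min_min([[4, 6], [3, 5], [8, 2]]))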

Programming and Computer Software. 2019;45(8):506-516
A semi-Automatic Approach for Parallel Problem Solving using the Multi-BSP Model
Alaniz M., Nesmachnow S.
Abstract

The Multi-Bulk Synchronous Parallel (Multi-BSP) model is a recently proposed parallel programming model for multicore machines that extends the classic Bulk Synchronous Parallel model. Multi-BSP aims to be a useful model for designing algorithms and estimating their running time. The model heavily relies on the correct computation of parameters that characterize the hardware; of course, hardware utilization also depends on the specific features of the problems and the algorithms applied to solve them. This article introduces a semi-automatic approach for solving problems with parallel algorithms using the Multi-BSP model. First, the specific multicore computer to be used is characterized by an automatic procedure. After that, the hardware architecture discovered in the previous step is taken into account to design a portable parallel algorithm. Finally, fine tuning of parameters is performed to improve the overall efficiency. We propose a specific benchmark for measuring the parameters that characterize the communication and synchronization costs of particular hardware. Our approach discovers the hierarchical structure of the multicore architecture and computes both parameters for each level that can share data and synchronize between computing units. A second contribution of our research is a proposal for a Multi-BSP engine. It allows designing algorithms by applying a recursive methodology over the hierarchical tree already built by the benchmark, focusing on three atomic functions based on a divide-and-conquer strategy. The proposed method is validated by studying an algorithm implemented in a prototype of the Multi-BSP engine, testing different parameter configurations that best fit each problem, and using three different high-performance multicore computers.
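
The sketch below shows one simple way to describe a Multi-BSP machine as a hierarchy of levels with (p, g, L, m) parameters and to estimate the cost of one superstep per level recursively. The parameter values and the cost formula are illustrative assumptions, not the benchmark or engine proposed in the article.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Level:
    p: int          # number of components at this level
    g: float        # communication cost per word to the level above
    L: float        # synchronization cost of this level
    m: int          # memory/cache size available at this level (words)
    child: Optional["Level"] = None

def superstep_cost(level, words, work):
    # Work + communication + synchronization, recursing into sub-levels.
    if level is None:
        return work
    inner = superstep_cost(level.child, words, work / level.p)
    return inner + words * level.g + level.L

# A core level nested inside a socket level (illustrative numbers).
machine = Level(p=2, g=4.0, L=200.0, m=2**24,
                child=Level(p=8, g=1.0, L=20.0, m=2**18))
print(superstep_cost(machine, words=1000, work=1e6))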

Programming and Computer Software. 2019;45(8):517-531
Positional Characteristics for Efficient Number Comparison over the Homomorphic Encryption
Babenko M., Tchernykh A., Chervyakov N., Kuchukov V., Miranda-López V., Rivera-Rodriguez R., Du Z., Talbi E.
Abstract

Modern symmetric and asymmetric encryption algorithms are not suitable for securing data that must be processed: they cannot perform calculations over encrypted data without first decrypting it, which entails high risks. The Residue Number System (RNS), used as a homomorphic encryption scheme, makes it possible to ensure the confidentiality of the stored information and to perform calculations over encrypted data without preliminary decoding, but at an unacceptable cost in time and resources. An important operation for encrypted data processing is number comparison. In RNS, it consists of two steps: computing a positional characteristic of the number in RNS representation and comparing the positional characteristics in the positional number system. In this paper, we propose a new efficient method to compute the positional characteristic based on the approximate method. The approximate method, as a tool to compare numbers, does not require resource-consuming non-modular operations; they are replaced by fast right bit-shift operations and taking the least significant bits. We prove that when the dynamic range of the RNS is an odd number, the size of the operands is reduced by the size of the module. If one of the RNS moduli is a power of two, the size of the operands is less than the dynamic range. We simulate the proposed method in the ISE Design Suite environment on the Xilinx Spartan-6 SP605 FPGA and show that it gains 31% in time and 37% in area on average with respect to the known approximate method, which makes our method efficient for hardware implementation of cryptographic primitives constructed over a prime finite field.
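
The sketch below demonstrates the general idea of comparing RNS numbers through a positional characteristic: the fractional part of the weighted sum of residues equals X/M and therefore preserves order. It uses exact rational arithmetic for clarity; the moduli are arbitrary, and the fixed-point bit-shift optimizations proposed in the paper are not reproduced.

from math import prod
from fractions import Fraction

moduli = [3, 5, 7]                 # pairwise coprime RNS moduli, M = 105
M = prod(moduli)

def to_rns(x):
    return [x % m for m in moduli]

def positional_characteristic(digits):
    # Exact rational version: F(X) = frac( sum_i x_i * |M_i^{-1}|_{m_i} / m_i ).
    total = Fraction(0)
    for x_i, m_i in zip(digits, moduli):
        M_i = M // m_i
        inv = pow(M_i, -1, m_i)    # modular inverse of M_i modulo m_i
        total += Fraction(x_i * inv, m_i)
    return total % 1               # fractional part equals X / M

a, b = 29, 64
Fa, Fb = positional_characteristic(to_rns(a)), positional_characteristic(to_rns(b))
print(Fa < Fb, Fa == Fraction(a, M), Fb == Fraction(b, M))  # True True True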

Programming and Computer Software. 2019;45(8):532-543
Evolutionary Algorithms for Optimizing Cost and QoS on Cloud-based Content Distribution Networks
Iturriaga S., Nesmachnow S., Goñi G., Dorronsoro B., Tchernykh A.
Abstract

Content Distribution Networks (CDNs) are key for providing worldwide services and content to end users. In this work, we propose three multiobjective evolutionary algorithms for solving the problem of designing and optimizing cloud-based CDNs. We consider the objectives of minimizing the total cost of the infrastructure (including virtual machines, network, and storage) and maximizing the quality of service provided to end users. The proposed model considers a multi-tenant approach in which a single cloud-based CDN is able to host multiple content providers using a resource-sharing strategy. The proposed evolutionary algorithms address the offline problem of provisioning infrastructure resources, while a greedy heuristic method is proposed for addressing the online problem of routing content. The experimental evaluation of the proposed methods is performed over a set of realistic problem instances. Results indicate that the proposed approach is effective for designing and optimizing cloud-based CDNs, reducing total costs by up to 10.3% while maintaining an adequate quality of service.
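
As a minimal illustration of the bi-objective comparison underlying such evolutionary algorithms, the sketch below keeps only the Pareto-optimal CDN configurations with respect to cost and quality of service; the candidate tuples are made-up example data, not output of the proposed algorithms.

def dominates(a, b):
    # a, b = (cost, qos); a dominates b if it is no worse in both objectives
    # and strictly better in at least one.
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

configs = [(120.0, 0.95), (90.0, 0.90), (150.0, 0.99), (100.0, 0.88)]
print(pareto_front(configs))   # (100.0, 0.88) is dominated by (90.0, 0.90)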

Programming and Computer Software. 2019;45(8):544-556
Software Advances using n-agents Wireless Communication Integration for Optimization of Surrounding Recognition and Robotic Group Dead Reckoning
Ivanov M., Sergiyenko O., Tyrsa V., Lindner L., Rodriguez-Quiñonez J.C., Flores-Fuentes W., Rivas-Lopez M., Hernández-Balbuena D., Nieto Hipólito J.I.
Abstract

Nowadays, artificial intelligence and swarm robotics are becoming widespread and are finding their way into civil tasks. The main purpose of the article is to show how sharing common knowledge about the surroundings influences the robotic group navigation problem by implementing data transfer within the group. The methodology provided in the article reviews a set of tasks whose implementation improves the results of robotic group navigation. The main research questions are the problems of robotic vision, path planning, data storage, and data exchange. The article describes the structure of a real-time laser technical vision system, based on the dynamic triangulation principle, as the main environment-sensing tool for the robots, and provides examples of the obtained data as well as distance-based methods for resolution and speed control. Based on the data obtained by the vision system, a matrix-based approach to robot path planning was chosen, which involves the tasks of surroundings discretization and trajectory approximation. Two network structure types for data transfer are compared, and a methodology for dynamic network formation based on a leader-changing scheme is proposed. To confirm the theory, a robotic group modeling application was developed. The obtained results show that sharing common knowledge between robots in a group can significantly decrease the length of individual trajectories.
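
The sketch below illustrates the matrix-based planning idea in a very reduced form: observations from two robots are merged into a shared occupancy grid and a path is planned on the merged map (here with plain breadth-first search). The grid, obstacle layout, and search method are assumptions made for the example; the article's vision system and trajectory approximation are not modeled.

from collections import deque

def bfs_path(grid, start, goal):
    # Shortest path on a 0/1 occupancy grid (1 = obstacle).
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# Map merged from two robots' observations (element-wise OR of their grids).
robot_a = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
robot_b = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
shared = [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(robot_a, robot_b)]
print(bfs_path(shared, (0, 0), (2, 2)))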

Programming and Computer Software. 2019;45(8):557-569
RETRACTED ARTICLE: DOOR: Distributed Object Oriented Software Restructuring Approach Using Neural Network
Khan A.
Abstract

For the development of distributed software systems, the Object Oriented (OO) approach has been used by architects and designers in the recent period, resulting in Distributed Object Oriented (DOO) systems. The main aspect of DOO systems is the efficient distribution of software classes among different nodes. The initial design of DOO applications does not provide an optimal class distribution; hence, restructuring must be performed. The DOO software restructuring is carried out by means of a proposed adaptive technique based on a Neural Network (NN) to further strengthen performance. First, a Class Dependency Graph (CDG) is built, in which the nodes represent the classes and the links between the nodes represent the dependencies between the classes. Then, the features of objects, methods, variables, lines, and imports associated with the classes in the CDG are extracted and given as inputs to the NN for the training procedure. Next, the trained features are clustered, by which the OO system is segmented into subsystems with low coupling using the Class Dependency Based Clustering (CDBC) technique. The clustered classes are then assembled into cluster graphs using the K-Medoid clustering method, and finally the resulting partitions are mapped to the available machines using Recursive K-Means clustering in the target distributed design. Simulation results show that the proposed work yields improved results compared to existing systems.

Programming and Computer Software. 2019;45(8):570-580
Graphs Resemblance based Software Birthmarks through Data Mining for Piracy Control
Sarwar S., Qayyum Z.U., Safyan M., Iqbal M., Mahmood Y.
Abstract

The emergence of software artifacts greatly emphasizes the need for protecting intellectual property rights (IPR), which are undermined by software piracy and thus require effective measures for piracy control. Software birthmarking aims to counter software ownership theft by identifying the similarity of program origins. A novel birthmarking approach based on a hybrid of text-mining and graph-mining techniques is proposed in this paper. The code elements of a program and their relations with other elements are identified through their properties (i.e., code constructs) and transformed into Graph Manipulation Language (GML). The software birthmarks generated by exploiting graph-theoretic properties (through the clustering coefficient) are used to classify two programs as similar or dissimilar. The proposed technique has been evaluated on the metrics of credibility, resilience, method theft, modified-code detection, and self-copy detection, asserting the effectiveness of the proposed approach against software ownership theft. A comparative analysis with contemporary approaches shows better results, owing to the use of properties and relations of program nodes and to dynamic graph-mining techniques that add no overhead (such as increased program size or processing cost).
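
As a rough illustration of a clustering-coefficient-based birthmark, the sketch below (using the networkx library) computes the average clustering coefficient of small dependency graphs; structurally identical programs with renamed identifiers yield the same value, while a structurally different program does not. The edge lists are invented, and the paper's GML pipeline and similarity thresholds are not reproduced.

import networkx as nx

def birthmark(edges):
    # Average clustering coefficient of the code-element dependency graph.
    g = nx.Graph(edges)
    return nx.average_clustering(g)

program_a = [("main", "parse"), ("main", "eval"), ("parse", "eval"),
             ("eval", "print"), ("main", "print")]
program_b = [("entry", "scan"), ("entry", "run"), ("scan", "run"),
             ("run", "out"), ("entry", "out")]      # same structure, renamed
program_c = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]  # a plain cycle

print(birthmark(program_a), birthmark(program_b), birthmark(program_c))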

Programming and Computer Software. 2019;45(8):581-589
Debugging Smart Contract’s Business Logic Using Symbolic Model Checking
Shishkin E.
Abstract

Smart contracts are a special type of program running inside a blockchain. Immutable and transparent, they provide means to implement fault-tolerant and censorship-resistant services. Unfortunately, this immutability poses a serious challenge: the business logic and its implementation must be correct upfront, before publication in a blockchain. Several big accidents have indeed shown that users of this technology need special tools to verify smart contract correctness. Existing automated checkers are able to detect only well-known implementation bugs, leaving the question of business logic correctness far aside. In this work, we present a symbolic model-checking technique, along with a formal specification method, for a subset of the Solidity programming language that is able to express both state properties and trace properties; the latter constitute a weak analogue of temporal properties. We evaluate the proposed technique on the MiniDAO smart contract, a younger brother of the notorious TheDAO. Our proof of concept was able to detect a non-trivial error in the business logic of this smart contract in a few seconds.
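
The sketch below conveys the flavor of checking a state property symbolically over a bounded number of steps, using the Z3 solver on an invented deposit/withdraw balance model; it is not the paper's Solidity semantics or specification language.

from z3 import Int, Solver, Or, And, sat

def check_no_negative_balance(steps=3):
    s = Solver()
    bal = [Int(f"bal_{i}") for i in range(steps + 1)]
    s.add(bal[0] == 0)
    for i in range(steps):
        amt = Int(f"amt_{i}")
        # Each step is either a deposit or a guarded withdrawal.
        deposit = And(amt > 0, bal[i + 1] == bal[i] + amt)
        withdraw = And(amt > 0, amt <= bal[i], bal[i + 1] == bal[i] - amt)
        s.add(Or(deposit, withdraw))
    # Ask the solver for a trace violating the state property "balance >= 0".
    s.add(Or([b < 0 for b in bal]))
    return "violated" if s.check() == sat else "holds up to bound"

print(check_no_negative_balance())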

Programming and Computer Software. 2019;45(8):590-599
Hybrid Model for Efficient Anomaly Detection in Short-timescale GWAC Light Curves and Similar Datasets
Sun Y., Zhao Z., Ma X., Du Z.
Abstract

Early warning during sky surveys provides a crucial opportunity to detect low-mass, free-floating planets. In particular, to search for short-timescale microlensing (ML) events from high-cadence and wide-field surveys in real time, a hybrid method that combines ARIMA (Autoregressive Integrated Moving Average) with LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit) recurrent neural networks (RNN) is presented to monitor all observed light curves and identify ML events at their early stages. Experimental results show that the hybrid models achieve better accuracy and require less time for parameter adjustment. ARIMA+LSTM and ARIMA+GRU improve accuracy by 14.5% and 13.2%, respectively. In the case of anomaly detection in light curves, GRU achieves almost the same result as LSTM in 8% less time. The hybrid models are also applied to the MIT-BIH Arrhythmia Database ECG dataset, which exhibits an abnormal pattern similar to ML events. The experimental results from both datasets show that the hybrid model can save up to 40% of researchers' time in model adjustment and optimization while achieving 90% accuracy.
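
The sketch below shows only the first, ARIMA stage of such a hybrid detector on a synthetic light curve: unusually large model residuals flag a candidate event. The injected brightening, the ARIMA order, and the threshold are assumptions, and the LSTM/GRU stage applied to the residuals in the paper is omitted.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
flux = 1.0 + 0.01 * rng.standard_normal(200)   # quiet baseline light curve
flux[150:155] += 0.2                           # injected microlensing-like brightening

model = ARIMA(flux, order=(2, 0, 1)).fit()
residuals = model.resid
threshold = 4 * np.std(residuals[:140])        # noise level of the quiet part
print(np.where(np.abs(residuals) > threshold)[0])  # indices within the injected event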

Programming and Computer Software. 2019;45(8):600-610
