Published: 06.09.2024

Use of REST API and WebSocket Interfaces Algorithms for Structuring the Three-Link Level of Emergent Systems and Displaying Media Systems

Mikhail Mikhailovich Blagirev, Alexey Olegovich Kostyrenkov
415-428
Abstract:

An analysis of the speed and efficiency of data transfer using the WebSocket and REST API protocols was carried out. To compare the speed of processing stream objects and to identify the more reliable technology for developing APIs, expansions of basic functions into Taylor and Fourier series were used. As a result, it was found that the REST API is the faster and more accessible resource for transmitting data in a bitwise transformation, and that the scalability of this protocol prevails in the number of processed units, which allows the number of tests performed to be expanded.
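The abstract does not show how the series expansions were applied; one hedged reading is that Taylor partial sums served as reproducible numeric payloads whose size and precision grow with the number of terms. A minimal Python sketch under that assumption (the function e^x and the term counts are illustrative, not taken from the paper):

```python
import math

def taylor_exp(x, n_terms):
    """Partial sum of the Taylor series of e^x around 0."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# Payloads of increasing precision for a hypothetical transfer benchmark.
payloads = [taylor_exp(1.0, n) for n in range(1, 12)]
assert abs(payloads[-1] - math.e) < 1e-7  # the partial sums converge to e
```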

On the Approach to Detecting Pedestrian Movement using the Method of Histograms of Oriented Gradients

Maxim Vladimirovich Bobyr, Natalya Anatol'evna Milostnaya, Natalia Igorevna Khrapova
429-447
Abstract:

An approach to automatically recognizing the movement of people at a pedestrian crossing is presented in the article. This approach includes two main procedures, for each of which program code is given in the C# programming language using the EMGU computer vision library. In the first procedure, pedestrians are detected using a combination of the histogram of oriented gradients and support vector machine methods. The second procedure reads frames from a video sequence and processes them. This approach makes it possible to detect the movements of people at a pedestrian crossing without using specialized neural networks. At the same time, the method proposed in the article demonstrated sufficient reliability in recognizing human movement, which indicates its applicability in real conditions.
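The article's code is in C#; as a language-neutral illustration of the core of the first procedure, the following Python sketch computes a single-cell histogram of oriented gradients using central differences and nine unsigned orientation bins (common defaults in HOG implementations, not necessarily the authors' settings):

```python
import math

def hog_cell(cell):
    """Histogram of oriented gradients for one cell (2D list of intensities):
    central-difference gradients, 9 unsigned orientation bins over 0-180 deg,
    each interior pixel voting with its gradient magnitude."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * 9
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // 20) % 9] += mag
    return hist

# A vertical edge yields purely horizontal gradients, so all mass lands in bin 0.
cell = [[0, 0, 9, 9]] * 4
print(hog_cell(cell))
```

In the full detector, such cell histograms are concatenated over overlapping blocks, normalized, and fed to a linear SVM.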

Application of Computer Vision Methods to Old Tatar Text Recognition

Iskander Airatovich Valishin
448-477
Abstract:

A tool has been developed that recognizes lines, words, and Arabic characters in scanned images. The possibilities and prospects for using the tool in research activities are considered. The results of experiments on the tool's performance are presented using the example of digitized Old Tatar images.
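The abstract does not describe the recognition pipeline; a common first step in such tools is segmenting a binarized page into text lines with a horizontal projection profile. A hedged Python sketch of that step (an assumption about the approach, not the author's code):

```python
def segment_lines(binary_image):
    """Split a binarized page (rows of 0/1 ink pixels) into text-line row
    ranges: sum ink per row, then take maximal runs of non-empty rows."""
    profile = [sum(row) for row in binary_image]
    lines, start = [], None
    for i, ink in enumerate(profile):
        if ink and start is None:
            start = i                      # a text band begins
        elif not ink and start is not None:
            lines.append((start, i - 1))   # a text band ends
            start = None
    if start is not None:
        lines.append((start, len(profile) - 1))
    return lines

page = [[0, 0, 0], [1, 1, 0], [1, 0, 1], [0, 0, 0], [0, 1, 0]]
print(segment_lines(page))  # two bands: rows 1-2 and row 4
```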

Software Tool for Videoproduction Optimisation

Rustem Faridovich Davletshin, Irina Sergeevna Shakhova
478-502
Abstract:

The paper proposes software mechanisms aimed at enhancing video production processes for the authors of artistic video materials. We propose a mechanism for creating animated three-dimensional shooting plans (storyboards) that uses augmented reality to position actors and animate their movement. To overcome the limitations of the iOS operating system related to sensor access, we developed a mechanism that captures the audio and video streams from device sensors separately and then synchronizes them by timestamps for saving to device memory. Computer vision technologies are used to ensure compliance with the rules of compositional construction and to analyze image quality. The paper also presents mechanisms for working with the script, including text processing algorithms for displaying subtitles on the screen and speech recognition algorithms for comparing the actors' recognized speech with the script text.
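The timestamp-based synchronization of separately captured streams can be illustrated with a small sketch: pair each video frame with the latest audio chunk recorded no later than the frame. This is a simplified model of the mechanism with illustrative timestamps, not the authors' iOS implementation:

```python
def align_streams(video_frames, audio_chunks):
    """Pair each timestamped video frame with the most recent audio chunk
    whose timestamp does not exceed it (both lists sorted by timestamp)."""
    pairs, j = [], 0
    for vt, frame in video_frames:
        # Advance the audio cursor while the next chunk is still not in the future.
        while j + 1 < len(audio_chunks) and audio_chunks[j + 1][0] <= vt:
            j += 1
        pairs.append((frame, audio_chunks[j][1]))
    return pairs

video = [(0.00, "v0"), (0.04, "v1"), (0.08, "v2")]   # 25 fps frames
audio = [(0.00, "a0"), (0.05, "a1")]                 # 50 ms audio chunks
print(align_streams(video, audio))
```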

Development of an Expert System Based on Fuzzy Logic for Pneumonia Diagnostics

Adelya Iskanderovna Enikeeva, Rustam Arifovich Burnashev, Rustam Rinatovich Farahov
503-532
Abstract:

The paper is devoted to the development of an expert system for diagnosing pneumonia based on fuzzy logic and implemented using the Mamdani algorithm. The paper discusses the main stages of the system development, including fuzzification of input data, definition of fuzzy rules based on medical expert knowledge, aggregation of fuzzy inferences and their defuzzification to obtain the final diagnostic result. The web interface of the system is implemented using the Django framework, which ensures ease of interaction for users. The use of a medical expert system for diagnosing pneumonia can reduce the time required to establish a diagnosis and improve the quality of diagnosis by integrating the experience of medical experts and modern information technologies.
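As a hedged illustration of the Mamdani stages listed above (fuzzification, fuzzy rules, aggregation, centroid defuzzification), here is a toy single-input model mapping body temperature to a risk score. The membership functions, rules, and ranges are invented for illustration and are not the system's medical knowledge base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to the peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_risk(temp):
    """Toy Mamdani inference: fuzzify the temperature, apply two rules,
    aggregate the clipped outputs with max, defuzzify by centroid."""
    # Fuzzification of the input.
    w_normal = tri(temp, 35.0, 36.6, 38.0)
    w_high = tri(temp, 37.0, 39.5, 42.0)
    # Rules: IF temp is normal THEN risk is low; IF temp is high THEN risk is high.
    num = den = 0.0
    for r in range(101):  # discretized output universe 0..100
        mu = max(min(w_normal, tri(r, -1, 0, 50)),     # clipped "low risk"
                 min(w_high, tri(r, 50, 100, 101)))    # clipped "high risk"
        num += r * mu
        den += mu
    return num / den if den else 0.0  # centroid of the aggregated shape

print(round(mamdani_risk(40.0), 1), round(mamdani_risk(36.6), 1))
```

A real system, as described, would combine many inputs and expert-defined rules, with the same four-stage structure.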

Automation of Footages Sorting by Screenplay Text for Video Editing

Andrey Dmitrievich Nemanov, Irina Sergeevna Shakhova
533-557
Abstract:

The video editing process involves numerous labor-intensive operations for sorting and preparing footage, requiring significant time investment. This article describes the development of a software solution that uses machine learning technology to automate these processes.


The primary focus is on creating a system capable of classifying and sorting media files according to the screenplay text, thereby increasing the efficiency of material preparation for editing. The system includes modules for speech recognition, audio and video classification, and algorithms for determining screenplay compliance.


Testing showed that the proposed system correctly classifies media files in most cases, significantly reducing rough-cut editing time.
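One simple way to determine screenplay compliance, sketched here as an assumption rather than the authors' algorithm, is to match a clip's recognized speech against screenplay fragments by string similarity:

```python
from difflib import SequenceMatcher

def best_scene(recognized_speech, screenplay_scenes):
    """Index of the screenplay scene most similar to a clip's recognized speech."""
    def score(scene):
        return SequenceMatcher(None, recognized_speech.lower(),
                               scene.lower()).ratio()
    return max(range(len(screenplay_scenes)),
               key=lambda i: score(screenplay_scenes[i]))

scenes = ["INT. KITCHEN - the kettle boils",
          "EXT. STREET - cars pass by"]
print(best_scene("the kettle is boiling", scenes))  # 0
```

A production system would additionally weigh the audio and video classifier outputs the abstract mentions, not text similarity alone.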

Taking into Account the Structure of the Document in the Method of Automatic Annotation of Mathematical Concepts in Educational Texts

Konstantin Sergeevich Nikolaev
558-577
Abstract:

Enriching educational texts with semantic content (in particular, adding hyperlinks to the pages of a service that displays detailed information about concepts in the text) helps students assimilate the material more effectively. Existing methods of semantic markup of educational texts do not take into account the structural features of such documents, which leads to over-recognition of concepts. This article describes the development of a method for the automatic annotation of mathematical concepts in educational mathematical texts, extended with functionality that accounts for the structure of an educational document. The main purpose of the method is to process educational materials of the distance education course "Technology for solving planimetric problems". Because the course pages follow a single template, the method can analyze the web page markup and the keywords used by the course creators. The main task in this process is to determine the type of table cell containing text fragments of educational materials. In accordance with the recommendations of the course creators, definitions should be highlighted in the cells containing the task statement, as well as in the blocks where the input data of the task are indicated. The type of a table cell is determined by analyzing its attributes and searching for keywords in its contents. Restricting the set of recognized text fragments in this way improves the student's perception of the course pages and the quality of learning.
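The cell-type determination by attributes and keywords can be sketched as follows; the class attribute value and the keyword list are illustrative assumptions, not the course's actual markup:

```python
from html.parser import HTMLParser

TASK_KEYWORDS = ("given:", "task:", "prove", "find")  # illustrative markers

class CellClassifier(HTMLParser):
    """Collect <td> cells and flag those that look like task statements,
    either by a class attribute or by keywords in their text."""
    def __init__(self):
        super().__init__()
        self.cells, self._cls, self._buf, self._in_td = [], "", [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td, self._buf = True, []
            self._cls = dict(attrs).get("class", "")

    def handle_data(self, data):
        if self._in_td:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "td" and self._in_td:
            text = "".join(self._buf).strip()
            is_task = (self._cls == "task"
                       or any(k in text.lower() for k in TASK_KEYWORDS))
            self.cells.append((text, is_task))
            self._in_td = False

p = CellClassifier()
p.feed('<table><tr><td class="task">Prove that AB = CD.</td>'
       '<td>Solution discussion</td></tr></table>')
print(p.cells)
```

Concept annotation would then run only over cells flagged as task statements, which is the restriction the method relies on.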

A New Approach to Creating a Corpus of Video Game Texts

Nikita Ramilevich Nurlygaianov, Vlada Vladimirovna Kugurakova
578-597
Abstract:

The problem of the high and increasing cost of video game development is considered; to address it, the application of procedural content generation is proposed, which will reduce development costs.


The work is part of large-scale research on the automatic prototyping of video games and is devoted to the processing of game scenarios, i.e., natural-language texts. It is proposed to extract the necessary entities from the scripts and pass them to subsequent steps of the algorithm, which generate game resources based on the textual descriptions.


There are several publications devoted to game text processing, in which several different structures for storing the extracted information are proposed. In this paper we propose a universal format that is suitable for processing the text of any video game and makes it possible to create a corpus of texts for use in further research and in the automatic generation of game prototypes.
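As a hedged sketch of what one record of such a corpus might look like (the field names are illustrative and are not the format proposed in the paper):

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class GameTextEntry:
    """One corpus record: a script fragment plus the entities extracted
    from it for downstream generation steps (field names are illustrative)."""
    game: str
    fragment: str
    characters: list = field(default_factory=list)
    locations: list = field(default_factory=list)
    items: list = field(default_factory=list)

entry = GameTextEntry(
    game="Example Quest",
    fragment="Aria picks up the rusted key in the cellar.",
    characters=["Aria"],
    locations=["cellar"],
    items=["rusted key"],
)
print(json.dumps(asdict(entry), ensure_ascii=False))
```

Serializing to a plain JSON record keeps the corpus independent of any one game engine or NLP toolkit, which is the point of a universal format.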

Neural Network Architecture of Embodied Intelligence

Ayrat Rafkatovich Nurutdinov
598-655
Abstract:

In recent years, advances in artificial intelligence (AI) and machine learning have been driven by progress in the development of large language models (LLMs) based on deep neural networks. At the same time, despite their substantial capabilities, LLMs have fundamental limitations, such as spontaneous unreliability in facts and judgments; simple errors that are dissonant with their generally high competence; credulity, manifested as a willingness to accept a user's knowingly false claims as true; and a lack of knowledge about events that occurred after training was completed.


A key reason is probably that bioinspired intelligence learns through the assimilation of implicit knowledge by an embodied form of intelligence solving interactive real-world physical problems. Bioinspired studies of the nervous systems of organisms suggest that the cerebellum, which coordinates movement and maintains balance, is a prime candidate for uncovering methods of realizing embodied physical intelligence. Its simple repetitive structure and its ability to control complex movements offer hope that an analogous adaptive neural network can be created.


This paper explores the bioinspired architecture of the cerebellum as a form of analog computational networks capable of modeling complex real-world physical systems. As a simple example, a realization of embodied AI in the form of a multi-component model of an octopus tentacle is presented, demonstrating the potential in creating adaptive physical systems that learn and interact with the environment.
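The cerebellum is often modeled as an adaptive filter (in the spirit of the Marr-Albus line of work); as a toy illustration of that idea, and not the architecture from the paper, a linear unit trained with the LMS rule can learn a corrective motor mapping from examples:

```python
def lms_train(inputs, targets, lr=0.1, epochs=500):
    """Adaptive linear unit trained with the LMS (delta) rule, the learning
    scheme used in classical adaptive-filter models of the cerebellum."""
    w = [0.0] * len(inputs[0])
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = sum(wi * xi for wi, xi in zip(w, x))
            err = t - y  # the "climbing-fiber" error signal in the analogy
            for i, xi in enumerate(x):
                w[i] += lr * err * xi
    return w

# Learn a corrective motor mapping t = 2*x0 - x1 from example states.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
T = [2.0, -1.0, 1.0, 0.5]
w = lms_train(X, T)
print([round(v, 2) for v in w])  # converges toward [2.0, -1.0]
```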

Automation of Reading Related Data from Relational and Non-Relational Databases in the Context of using the JPA Standard

Angelina Sergeevna Savincheva, Alexander Andreevich Ferenets
656-678
Abstract:

The automation of the operation of reading related data from relational and non-relational databases is described.


The developed software tool is based on the use of the JPA (Java Persistence API) standard, which defines the capabilities of managing the lifecycle of entities in Java applications. An architecture for embedding in event processes has been designed, allowing the solution to be integrated into projects regardless of which JPA implementation is used. Support for various data loading strategies, types, and relationship parameters has been implemented. The performance of the tool has been evaluated.
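JPA itself is a Java standard; to keep the examples in one language, the lazy-loading idea that such a tool automates can be sketched in Python as a proxy that runs its loader only on first access (an illustration of the pattern, not the authors' tool):

```python
class LazyRelation:
    """Stand-in for a JPA-style lazy association: the loader callback runs
    only on first access, after which the result is cached."""
    def __init__(self, loader):
        self._loader, self._value, self._loaded = loader, None, False

    def get(self):
        if not self._loaded:
            self._value = self._loader()  # the single "database hit"
            self._loaded = True
        return self._value

calls = []
def load_orders():
    calls.append("db hit")
    return ["order-1", "order-2"]

orders = LazyRelation(load_orders)
assert calls == []                            # nothing fetched yet (lazy)
assert orders.get() == ["order-1", "order-2"]
orders.get()
assert calls == ["db hit"]                    # second read served from cache
```

Eager loading would simply invoke the loader at construction time; the trade-off between the two strategies is what the described tool manages automatically.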

Application of the Douglas-Peucker Algorithm in Online Authentication of Remote Work Tools for Specialist Training in Higher Education Group of Scientific Specialties (UGSN) 10.00.00

Anton Grigorievich Uymin, Vladimir Sergeyevich Grekov
679-694
Abstract:

In today's world, digital technologies are penetrating all aspects of human activity, including education and labor. Since 2019, when, in response to global challenges, the world's educational systems actively began shifting to distance learning, there has been an urgent need to develop and implement reliable identification and authentication technologies. These technologies are necessary to ensure the authenticity of work and protection against the falsification of academic achievements, especially in the context of higher education in the group of scientific specialties (UGSN) 10.00.00 - Information Security, where laboratory and practical work play a key role in the educational process.


The problem lies in the need to optimize the flow of incoming data, which, first, can lead to overfitting of the recognition system's neural network core and, second, can impose excessive requirements on the network's bandwidth. To solve this problem, efficient preprocessing of gesture data is required that simplifies their trajectories while preserving the key features of the gestures.


This article proposes the use of the Douglas–Peucker algorithm for preliminary processing of mouse gesture trajectory data. This algorithm significantly reduces the number of points in the trajectories, simplifying them while preserving the main shape of the gestures. The data with simplified trajectories are then used to train neural networks.
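The Douglas-Peucker algorithm itself is standard and can be sketched compactly; the sample trajectory and tolerance below are illustrative, not data from the study:

```python
import math

def douglas_peucker(points, epsilon):
    """Recursively drop points closer than epsilon to the chord between
    the first and last point of the current segment."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm  # point-to-chord distance
        if d > dmax:
            idx, dmax = i, d
    if dmax <= epsilon:
        return [points[0], points[-1]]          # segment is straight enough
    left = douglas_peucker(points[:idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right                    # drop the duplicated split point

# An 8-point mouse trajectory simplified to its corner points.
track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(track, 1.0))
```

The simplified trajectories, as the article describes, then serve as the training input for the gesture-recognition networks.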


The experimental part of the work showed that the application of the Douglas–Peucker algorithm allows for a 60% reduction in the number of points in the trajectories, leading to an increase in gesture recognition accuracy from 70% to 82%. Such data simplification contributes to speeding up the neural networks' training process and improving their operational efficiency.


The study confirmed the effectiveness of using the Douglas–Peucker algorithm for preliminary data processing in mouse gesture recognition tasks. The article suggests directions for further research, including the optimization of the algorithm's parameters for different types of gestures and exploring the possibility of combining it with other machine learning methods. The obtained results can be applied to developing more intuitive and adaptive user interfaces.