
Russian Digital Libraries Journal

Published since 1998
ISSN 1562-5419


Search Results

Interaction with the User in the SAPFOR System

Nikita Andreevich Kataev
157-183
Abstract: Automation of parallel programming is important at every stage of parallel program development. These stages include profiling of the original program, program transformation that allows us to achieve higher performance after parallelization, and, finally, construction and optimization of the parallel program. It is also important to choose a suitable parallel programming model to express the parallelism available in a program. On the one hand, the parallel programming model should be capable of mapping the parallel program onto a variety of existing hardware resources. On the other hand, it should simplify the development of assistant tools and allow the user to explore, in a semi-automatic way, the parallel program those tools generate. The SAPFOR (System FOR Automated Parallelization) system combines various approaches to the automation of parallel programming and allows the user to guide the parallelization when necessary. SAPFOR produces parallel programs according to the high-level DVMH parallel programming model, which simplifies the development of efficient parallel programs for heterogeneous computing clusters. This paper focuses on the approach to semi-automatic parallel programming that SAPFOR implements. We discuss the architecture of the system and present the interactive subsystem used to guide SAPFOR through program parallelization. We used this subsystem to parallelize programs from the NAS Parallel Benchmarks in a semi-automatic way. Finally, we compare the performance of manually written parallel programs with that of the programs the SAPFOR system builds.

Keywords: program analysis, program transformation, automated parallelization, graphical user interface, SAPFOR, DVM, LLVM.

Web Based System for Program Analysis and Transformation in Optimizing Parallelizing System

Anton Pavlovich Bagly
576-593
Abstract: We describe our experience designing several variants of a web-based development environment (IDE) for the Optimizing Parallelizing System and for a compiler targeting a reconfigurable architecture. The designed system is based on existing tools and frameworks such as Jupyter Notebook and Eclipse Che. A set of requirements for the components of the Optimizing Parallelizing System has been developed so that they can be integrated into a web-based development environment accessible through the Internet. We also describe the design of a portable environment for compiler development, for demonstrating compiler technology, and for teaching parallel program development. Examples show newly developed program transformations applied to program optimization for FPGAs inside the designed web environment. Means of visualizing program transformations in Jupyter Notebook are described. This work demonstrates the possibility of organizing remote access, convenient for application developers, to a library of program-optimization instruments and tools currently under development.
Keywords: integrated environment, parallelizing compiler, program transformations, FPGA, containerization, interactive notebook, cloud computing.

Reconstruction of Multi-Dimensional Form of Linearized Accesses to Arrays in SAPFOR

Nikita Andreevich Kataev, Vladislav Nikolaevich Vasilkin
770-787
Abstract: The system for automated parallelization SAPFOR (System FOR Automated Parallelization) includes tools for program analysis and transformation. The main goal of the system is to reduce the complexity of program parallelization. SAPFOR focuses on multilingual applications written in the Fortran and C programming languages. The low-level LLVM IR representation is used in SAPFOR for program analysis. This representation allows us to perform various IR-level optimizations that improve the quality of the analysis. At the same time, it loses some features of the program that are available in its higher-level representation. One of these features is the multi-dimensional structure of arrays. Data dependence analysis is one of the main problems that must be solved to automate program parallelization; moreover, such analysis belongs to the class of NP-hard problems. Knowledge of the multi-dimensional structure of arrays in many cases lets us exploit the structure of index expressions in array accesses and thus reduce the complexity of the analysis. In addition, multi-dimensional arrays allow us to use a multi-dimensional processor matrix and to parallelize whole loop nests rather than a single loop in a nest, so the available parallelism of a program increases. These opportunities are natively supported in the DVM system. This paper discusses the approach used in SAPFOR to recover the multi-dimensional form of arrays from their linearized representation in LLVM IR. The proposed approach has been successfully evaluated on various applications, including performance tests from the NAS Parallel Benchmarks suite.
Keywords: program analysis, semi-automatic parallelization, SAPFOR, DVM, LLVM.
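To illustrate the delinearization problem the abstract describes, here is a minimal Python sketch (not SAPFOR's actual algorithm, which works on LLVM IR without knowing the shape in advance) that recovers multi-dimensional indices from a row-major linear offset when the array shape is known:

```python
def delinearize(offset, dims):
    """Recover multi-dimensional indices from a linear (row-major) offset,
    given the full array shape `dims`."""
    idx = []
    for size in reversed(dims[1:]):  # peel off the innermost dimensions first
        idx.append(offset % size)
        offset //= size
    idx.append(offset)               # what remains is the outermost index
    return tuple(reversed(idx))

# A[i][j][k] with shape (4, 5, 6) stored row-major: offset = (i*5 + j)*6 + k
assert delinearize((2 * 5 + 3) * 6 + 4, (4, 5, 6)) == (2, 3, 4)
```

Recovering the structured indices `(i, j, k)` from the flat expression is what lets a dependence test reason about each subscript independently instead of one large affine expression.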

On the Way to Creating Parallelizing Compilers for Computing Systems with Distributed Memory

Boris Yakovlevich Steinberg
127-149
Abstract: The conditions for creating optimizing parallelizing compilers for computing systems with distributed memory are described. The target computing systems are microcircuits of the "supercomputer on a chip" type. We present both optimizing program transformations specific to systems with distributed memory and transformations needed for systems with distributed as well as shared memory. The issues of minimizing interprocessor transfers when parallelizing a recursive function are discussed. The main approach to creating such compilers is block-affine placement of data in distributed memory with minimization of interprocessor transfers. It is shown that parallelizing compilers for computing systems with distributed memory should be built on a high-level internal representation and a high-level output language.

Keywords: automatic parallelization, distributed memory, program transformation, data distribution, data interchange.
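As a rough illustration of why block placement minimizes interprocessor transfers (a toy Python model of a 1-D case, not the block-affine algorithm from the paper), consider an array accessed with a nearest-neighbor pattern:

```python
def block_owner(i, n, p):
    """Owner of element i under a block distribution of n elements over p processors."""
    block = -(-n // p)  # ceil(n / p): size of each contiguous block
    return i // block

def transfers_for_stencil(n, p):
    """Count accesses A[i-1] that reside on a different processor than A[i],
    i.e. the communication a 1-D stencil induces under block placement."""
    return sum(1 for i in range(1, n)
               if block_owner(i, n, p) != block_owner(i - 1, n, p))

# 100 elements over 4 processors: only the 3 block boundaries need a transfer,
# versus 99 remote accesses under a cyclic (round-robin) placement.
assert transfers_for_stencil(100, 4) == 3
```

The general block-affine problem adds affine index mappings and multiple arrays, but the objective is the same: choose placements so that most accesses stay local.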

Automation of Program Parallelization for Multicore Processors with Distributed Local Memory

Anton Pavlovich Bagliy, Nikita Maksimovich Krivosheyev, Boris Yakovlevich Steinberg
135-153
Abstract: This paper is concerned with the development of a parallelizing compiler for computer systems with distributed memory. Industrial parallelizing compilers create programs for shared-memory systems; transforming sequential programs for systems with distributed memory requires the development of new capabilities. This is becoming topical for future computer systems with hundreds of cores and more. Conditions for parallelizing program loops for a computer system with distributed memory are formulated in terms of the information dependence graph.

Keywords: automatic parallelization, distributed memory, program transformation, data distribution, data interchange.
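A dependence-graph condition for loop parallelization can be sketched as follows (a simplified Python model using dependence distance vectors; the paper's formulation for distributed memory adds data-placement constraints on top of this classic test):

```python
def parallelizable(distance_vectors, level):
    """A loop at nest depth `level` can run in parallel if no dependence is
    carried at that level: every distance vector must either be zero at this
    level or already be carried by an outer loop (nonzero earlier entry)."""
    for d in distance_vectors:
        carried_outer = any(d[k] != 0 for k in range(level))
        if not carried_outer and d[level] != 0:
            return False
    return True

# do i: do j: A[i][j] = A[i][j-1] + 1   ->   distance vector (0, 1)
deps = [(0, 1)]
assert parallelizable(deps, 0)      # the outer i-loop carries no dependence
assert not parallelizable(deps, 1)  # the inner j-loop carries the dependence
```

For distributed memory the condition is necessary but not sufficient: even a dependence-free loop may be unprofitable to parallelize if its data placement forces heavy interprocessor communication.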


© 2015-2025 Kazan Federal University; Institute of the Information Society