Automation of Program Parallelization for Multicore Processors with Distributed Local Memory
Abstract
This paper is concerned with the development of a parallelizing compiler targeting computer systems with distributed memory. Industrial parallelizing compilers generate programs for shared-memory systems; transforming sequential programs for distributed-memory systems requires new compiler functions. This is becoming topical for future computer systems with hundreds of cores or more. Conditions for parallelizing program loops on a computer system with distributed memory are formulated in terms of the information dependence graph.
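To illustrate the kind of condition the abstract refers to, the following minimal sketch assumes an MPI target and a uniform block distribution of arrays; it is not output of the compiler discussed in the paper. A loop whose information dependence graph has no arcs between different iterations can be split block-wise among nodes of a distributed-memory system, so that each node computes only on data held in its local memory and no inter-node transfers are needed.

/* Minimal sketch (illustrative, not the paper's compiler output):
 * the sequential loop  for (i = 0; i < N; i++) a[i] = 2.0 * b[i];
 * has no loop-carried dependences, so its iterations are distributed
 * block-wise over MPI processes, each holding only its own block. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1024  /* global iteration count (assumed divisible by the process count) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int block = N / size;                          /* iterations per process */
    double *a = malloc(block * sizeof(double));    /* local block of A */
    double *b = malloc(block * sizeof(double));    /* local block of B */

    /* Each iteration reads and writes only element i, so every process
     * executes its own block locally with no communication. */
    for (int i = 0; i < block; i++) {
        b[i] = (double)(rank * block + i);         /* initialize local data */
        a[i] = 2.0 * b[i];
    }

    printf("rank %d computed elements %d..%d\n",
           rank, rank * block, (rank + 1) * block - 1);

    free(a);
    free(b);
    MPI_Finalize();
    return 0;
}

By contrast, a loop with inter-iteration arcs in its dependence graph (for example, a[i] = a[i-1] + b[i]) cannot be distributed this way without inter-node data transfers.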
This work is licensed under a Creative Commons Attribution 4.0 International License.