Real-Time Generative Simulation of Game Environment
Abstract
This paper explores the potential of generative neural network simulations, focusing on the application of reinforcement learning methods and neural world models to creating interactive worlds. Key achievements in agent training with reinforcement learning are discussed. Special attention is given to neural world models, as well as to generative models such as Oasis, DIAMOND, Genie, and GameNGen, which employ diffusion networks to generate realistic and interactive game worlds. The opportunities and limitations of generative simulation models are examined, including error accumulation and memory constraints and their impact on generation quality. The paper concludes with suggestions for future research directions.
References
2. Mnih V. et al. Human-level control through deep reinforcement learning // Nature. 2015. Vol. 518. No. 7540. P. 529–533.
3. Silver D. et al. Mastering the game of Go without human knowledge // Nature. 2017. Vol. 550. No. 7676. P. 354–359.
4. Silver D. et al. Mastering the game of Go with deep neural networks and tree search // Nature. 2016. Vol. 529. No. 7587. P. 484–489.
5. Vinyals O. et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning // Nature. 2019. Vol. 575. No. 7782. P. 350–354.
6. Berner C. et al. Dota 2 with large scale deep reinforcement learning // arXiv preprint arXiv:1912.06680. 2019.
7. Sakhibgareeva G.F., Kugurakova V.V., Bolshakov E.S. Game balancing tools // Russian Digital Libraries Journal. 2023. Vol. 26. No. 2. P. 225–251. (In Russian.)
8. Rani G. et al. A deep reinforcement learning technique for bug detection in video games // International Journal of Information Technology. 2023. Vol. 15. No. 1. P. 355–367.
9. Wiering M.A., Van Otterlo M. Reinforcement learning // Adaptation, learning, and optimization. 2012. Vol. 12. No. 3. P. 729.
10. Mine H., Osaki S. Markovian Decision Processes. 1977. (Russian translation.)
11. Spaan M.T.J. Partially observable Markov decision processes // Reinforcement learning: State-of-the-art. Berlin, Heidelberg: Springer Berlin Heidelberg. 2012. P. 387–414.
12. Cai Q. et al. A survey on deep reinforcement learning for data processing and analytics // IEEE Transactions on Knowledge and Data Engineering. 2022. Vol. 35. No. 5. P. 4446–4465.
13. Moerland T.M. et al. Model-based reinforcement learning: A survey // Foundations and Trends in Machine Learning. 2023. Vol. 16. No. 1. P. 1–118.
14. Ha D., Schmidhuber J. World models // arXiv preprint arXiv:1803.10122. 2018.
15. Pinheiro Cinelli L. et al. Variational autoencoder // Variational methods for machine learning with applications to deep networks. Cham: Springer International Publishing. 2021. P. 111–149.
16. Hafner D. et al. Learning latent dynamics for planning from pixels // International conference on machine learning. PMLR. 2019. P. 2555–2565.
17. Micheli V., Alonso E., Fleuret F. Transformers are sample-efficient world models // arXiv preprint arXiv:2209.00588. 2022.
18. Kaiser L. et al. Model-based reinforcement learning for Atari // arXiv preprint arXiv:1903.00374. 2019.
19. Vaswani A. et al. Attention is all you need // Advances in neural information processing systems. 2017. Vol. 30.
20. Ye W. et al. Mastering Atari games with limited data // Advances in neural information processing systems. 2021. Vol. 34. P. 25476–25488.
21. Alonso E. et al. Diffusion for world modeling: Visual details matter in Atari // Advances in Neural Information Processing Systems. 2024. Vol. 37. P. 58757–58791.
22. Ho J., Jain A., Abbeel P. Denoising diffusion probabilistic models // Advances in neural information processing systems. 2020. Vol. 33. P. 6840–6851.
23. Karras T. et al. Elucidating the design space of diffusion-based generative models // Advances in neural information processing systems. 2022. Vol. 35. P. 26565–26577.
24. Pearce T., Zhu J. Counter-Strike deathmatch with large-scale behavioural cloning // 2022 IEEE Conference on Games (CoG). IEEE, 2022. P. 104–111.
25. Bruce J. et al. Genie: Generative interactive environments // Forty-first International Conference on Machine Learning. 2024.
26. Xu M. et al. Spatial-temporal transformer networks for traffic flow forecasting // arXiv preprint arXiv:2001.02908. 2020.
27. Dosovitskiy A. et al. An image is worth 16x16 words: Transformers for image recognition at scale // arXiv preprint arXiv:2010.11929. 2020.
28. Valevski D. et al. Diffusion models are real-time game engines // The Thirteenth International Conference on Learning Representations. 2025.
29. Rombach R. et al. High-resolution image synthesis with latent diffusion models // Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. P. 10684–10695.
30. Yu H. et al. Uncovering the text embedding in text-to-image diffusion models // arXiv preprint arXiv:2404.01154. 2024.
31. Zhang R. et al. The unreasonable effectiveness of deep features as a perceptual metric // Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. P. 586–595.
32. Decart, Etched. Oasis: A universe in a transformer. URL: https://oasis-model.github.io/
33. Choi B., Jeong J. ViV-Ano: Anomaly detection and localization combining vision transformer and variational autoencoder in the manufacturing process // Electronics. 2022. Vol. 11. No. 15. P. 2306.
34. Pasini M. et al. Continuous autoregressive models with noise augmentation avoid error accumulation // arXiv preprint arXiv:2411.18447. 2024.
35. Parker-Holder J. et al. Genie 2: A Large-Scale Foundation World Model. URL: https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/

This work is licensed under a Creative Commons Attribution 4.0 International License.
By submitting an article for publication in the Russian Digital Libraries Journal (RDLJ), the authors automatically consent to grant Kazan (Volga Region) Federal University (KFU) a limited license to use the materials, provided the article is accepted for publication. This means that KFU has the right to publish the article in the next issue of the journal (on the website or in printed form), to reprint it in the RDLJ archive CDs, and to include it in any information system or database produced by KFU.
All copyrighted materials are placed in RDLJ with the consent of the authors. If any author objects to the publication of their materials on this site, the material can be removed upon written notification to the Editor.
Documents published in RDLJ are protected by copyright, and all rights are reserved by the authors. Authors independently monitor compliance with their rights to reproduce or translate their papers published in the journal. If material published in RDLJ is reprinted with permission by another publisher or translated into another language, a reference to the original publication must be given.
By submitting an article for publication in RDLJ, authors should take into account that online publication, on the one hand, provides unique opportunities for access to their content, but on the other hand, is a new form of information exchange in the global information society, in which authors and publishers are not always protected against unauthorized copying or other use of copyrighted materials.
RDLJ is copyrighted. When using materials from the journal, the URL must be indicated: index.phtml page = elbib / rus / journal?. No change, addition, or editing of the author's text is allowed. Copying individual fragments of articles from the journal is allowed: readers may distribute, remix, adapt, and build upon an article, even commercially, as long as they credit the original publication.
Requests for the right to reproduce or use any of the materials published in RDLJ should be addressed to the Editor-in-Chief A.M. Elizarov at the following address: amelizarov@gmail.com.
The publishers of RDLJ are not responsible for the views set out in the published articles.
We suggest that authors download the copyright agreement on the transfer of non-exclusive rights to use the work from this page, sign it, and send a scanned copy to the journal publisher's address by e-mail.