MetaHuman Synthetic Dataset for Optimizing 3D Model Skinning
Abstract
In this study, we present a method for creating a synthetic dataset with the MetaHuman framework to optimize the skinning of 3D models. The research focuses on improving the quality of skeletal deformation (skinning) by leveraging a diverse set of high-fidelity virtual human models. Using MetaHuman, we generated an extensive dataset comprising dozens of virtual characters with varied anthropometric features and precisely defined skinning weights. This data was used to train an algorithm that optimizes the distribution of skinning weights between the bones and the character mesh.
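For context, the skinning weights in question are the coefficients of linear blend skinning (LBS), the standard deformation model in real-time engines; the formulation below is the textbook one, restated in our own notation rather than drawn from the article itself:

\[
  v_i' = \sum_{j=1}^{B} w_{ij}\, T_j\, v_i,
  \qquad
  \sum_{j=1}^{B} w_{ij} = 1, \quad w_{ij} \ge 0,
\]

where \(v_i\) is the rest-pose position of vertex \(i\) in homogeneous coordinates, \(T_j\) is the current transform of bone \(j\) composed with its inverse bind-pose transform, \(B\) is the number of bones, and \(w_{ij}\) is the weight binding vertex \(i\) to bone \(j\). Optimizing the weight distribution therefore amounts to choosing the \(w_{ij}\) so that the blended bone transforms deform the mesh without artifacts such as volume collapse near joints.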
The proposed approach automates the weight rigging process, significantly reducing manual effort for riggers and increasing the accuracy of deformations during animation. Experimental results show that leveraging synthetic data reduces skinning errors and produces smoother character movements compared to traditional methods. The outcomes have direct applications in the video game, animation, virtual reality, and simulation industries, where rapid and high-quality rigging of numerous characters is required. The method can be integrated into existing graphics engines and development pipelines (such as Unreal Engine or Unity) as a plugin or tool, facilitating the adoption of this technology in practical projects.
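To make the reported reduction in skinning errors concrete, the sketch below illustrates one plausible way to evaluate such a system: scoring predicted skinning weights against ground-truth weights exported from a rig, using the mean per-vertex L1 distance. This is a minimal illustration under our own assumptions; the function and variable names are ours, not code from the described method.

# Illustrative sketch (not the authors' implementation): scoring predicted
# skinning weights against ground-truth weights from a rigged character.
import numpy as np


def normalize_weights(w: np.ndarray) -> np.ndarray:
    """Clamp weights to be non-negative and renormalize each vertex row to sum to 1."""
    w = np.clip(w, 0.0, None)
    totals = w.sum(axis=1, keepdims=True)
    totals[totals == 0.0] = 1.0  # avoid division by zero for unbound vertices
    return w / totals


def skinning_weight_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-vertex L1 distance between predicted and ground-truth weight rows.

    pred, gt: (num_vertices, num_bones) skinning weight matrices.
    """
    return float(np.abs(normalize_weights(pred) - normalize_weights(gt)).sum(axis=1).mean())


# Toy usage: 4 vertices bound to 3 bones, prediction perturbed by small noise.
rng = np.random.default_rng(0)
gt = normalize_weights(rng.random((4, 3)))
pred = gt + 0.05 * rng.standard_normal((4, 3))
print(f"mean L1 weight error: {skinning_weight_error(pred, gt):.4f}")

Lower values indicate predicted weights that deform the mesh more like the ground-truth rig under LBS.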

This work is licensed under a Creative Commons Attribution 4.0 International License.