Generation of Three-Dimensional Synthetic Datasets


Vlada Vladimirovna Kugurakova
Vitaly Denisovich Abramov
Daniil Ivanovich Kostyuk
Regina Ayratovna Sharaeva
Rim Radikovich Gazizov
Murad Rustemovich Khafizov

Abstract

This paper describes the development of a universal toolkit for generating synthetic data used to train various neural networks. The approach has proven successful and effective across a range of tasks, in particular, training a neural network to recognize shopper behavior inside stores from surveillance camera footage, and recognizing spaces with augmented reality devices without the use of auxiliary infrared cameras. The concluding generalizations make it possible to plan the further development of technologies for generating three-dimensional synthetic data.
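To make the idea behind such generators concrete, here is a minimal sketch, not the authors' actual toolkit: it reduces a "scene" to a randomly placed rectangle, renders it, and writes the exact ground-truth annotation alongside each image. Every name in it (generate_sample, annotations.json, the dataset/ directory) is an assumption introduced for illustration; a real pipeline would drive a 3D engine rather than PIL.

    # Minimal sketch of a synthetic dataset generator. Assumptions: the
    # "3D scene" is stood in for by one randomly placed rectangle, and all
    # names (generate_sample, annotations.json) are illustrative only.
    import json
    import os
    import random

    from PIL import Image, ImageDraw

    def generate_sample(index, width=320, height=240, out_dir="dataset"):
        """Render one randomized scene and return its ground-truth record."""
        image = Image.new("RGB", (width, height), color=(200, 200, 200))
        draw = ImageDraw.Draw(image)

        # Randomize the "scene": position, size and color of a single object.
        w, h = random.randint(20, 80), random.randint(20, 80)
        x = random.randint(0, width - w)
        y = random.randint(0, height - h)
        color = tuple(random.randint(0, 255) for _ in range(3))
        draw.rectangle([x, y, x + w, y + h], fill=color)

        # Because we control the scene, the bounding box is known exactly:
        # no manual labeling step, which is the main appeal of synthetic data.
        filename = f"img_{index:05d}.png"
        image.save(os.path.join(out_dir, filename))
        return {"image": filename, "bbox": [x, y, w, h], "label": "object"}

    if __name__ == "__main__":
        os.makedirs("dataset", exist_ok=True)
        records = [generate_sample(i) for i in range(100)]
        with open(os.path.join("dataset", "annotations.json"), "w") as f:
            json.dump(records, f, indent=2)

The loop structure (randomize the scene, render it, record the ground truth that is known by construction) is what carries over unchanged to a full 3D setting.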



