Mechanisms of Realistic Facial Expressions for Anthropomorphic Social Agents
References
Bednarski R., Pszczoła P. Comparison of face animation methods // Computer Game Innovations. 2017. P. 29–40.
Zoss G., Beeler T., Gross M., Bradley D. Accurate markerless jaw tracking for facial performance capture // ACM Transactions on Graphics. 2019. Vol. 38. No. 4. Article 50.
Zollhöfer M., Thies J., Garrido P., Bradley D., Beeler T., Pérez P., Stamminger M., Nießner M., Theobalt C. State of the art on monocular 3D face reconstruction, tracking, and applications // Computer Graphics Forum. 2018. Vol. 37. No. 2. P. 523–550.
Kugurakova V.V., Talanov M.O., Manakhov N.R. Anthropomorphic artificial social agent with simulated emotions and its implementation // 6th Annual International Conference on Biologically Inspired Cognitive Architectures (BICA 2015). 2015. Vol. 71. P. 112–118.
Zinnatov A.A. Development of algorithms for automatic facial expression capture with real-time mapping onto avatars, implemented in Unreal Engine 4 / Graduation qualification thesis // Kazan Federal University. Higher School of Information Technology and Intelligent Systems. 2018. 41 p. URL: https://kpfu.ru/student_diplom/10.160.178.20_5299872_F_zinnatov.pdf
Wan V., Anderson R., Blokland A., Braunschweiler N., Chen L., Kolluru B., Latorre J., Maia R., Stenger B., Yanagisawa K., Stylianou Y., Akamine M., Gales M.J.F., Cipolla R. Photo-realistic expressive text to talking head synthesis // Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. 2013. P. 2667.
Zhang X., Wang L., Li G., Seide F., Soong F.K. A new language independent, photo-realistic talking head driven by voice only // Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. 2013. P. 2743.
Cosker D., Marshall D., Rosin P.L., Hicks Y. Speech driven facial animation using a hidden Markov coarticulation model // Proceedings – International Conference on Pattern Recognition. 2004. P. 128.
Eskimez S.E., Maddox R.K., Xu C., Duan Z. Generating talking face landmarks from speech // Lecture Notes in Computer Science. 2018. Vol. 10891. P. 372–381.
Eskimez S.E., Maddox R.K., Xu C., Duan Z. Noise-resilient training method for face landmark generation from speech // IEEE/ACM Transactions on Audio Speech and Language Processing. 2020. Vol. 28. P. 27–38.
Karras T., Aila T., Laine S., Herva A., Lehtinen J. Audio-driven facial animation by joint end-to-end learning of pose and emotion // ACM Transactions on Graphics. 2017. Vol. 36. Is. 4. Article 94.
Cudeiro D., Bolkart T., Laidlaw C., Ranjan A., Black M.J. Capture, learning, and synthesis of 3D speaking styles // Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2019. P. 10093.
Ekman P. Facial expression and emotion // American Psychologist. 1993. Vol. 48. No. 4. P. 384–392.
Kerkeni L., Serrestou Y., Raoof K., Cleder C., Mahjoub M., Mbarki M. Automatic speech emotion recognition using machine learning // Social Media and Machine Learning. IntechOpen. 2019. URL: https://www.intechopen.com/books/social-media-and-machine-learning/automatic-speech-emotion-recognition-using-machine-learning
Venkataramanan K., Rajamohan H.R. Emotion recognition from speech // arXiv preprint. 2019. P. 1–14. URL: https://arxiv.org/pdf/1912.10458.pdf
Nithya Roopa S., Prabhakaran M., Betty P. Speech emotion recognition using deep learning // International Journal of Recent Technology and Engineering. 2019. Vol. 7. No. 4S. P. 247–250.
Chatterjee A., Gupta U., Chinnakotla M.K., Srikanth R., Galley M., Agrawal P. Understanding Emotions in Text Using Deep Learning and Big Data // Computers in Human Behavior. 2019. Vol. 93. P. 309–317.
Ramalingam V.V., Pandian A., Jaiswal A., Bhatia N. Emotion detection from text // Journal of Physics: Conference Series. 2018. Vol. 1000. No. 1. Article 012027.
Alekseev A.A., Kugurakova V.V., Ivanov D.S. Identification of a psychological portrait based on sentiment analysis of messages for an anthropomorphic social agent // Russian Digital Libraries Journal. 2016. Vol. 19. No. 3. P. 149–165.
Ruhland K., Peters C.E., Andrist S., Badler J.B., Badler N.I., Gleicher M., Mutlu B., McDonnell R. A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception // Computer Graphics Forum. 2015. Vol. 34. No. 6. P. 299–326.
Hoppe S., Loetscher T., Morey S.A., Bulling A. Eye movements during everyday behavior predict personality traits // Frontiers in Human Neuroscience. 2018. Vol. 12. Article 105.
King D.E. DLib / Open-source library // URL: http://dlib.net
Mallick S. Face morph using OpenCV C++/Python / Open-source library // 2016. URL: http://www.learnopencv.com/face-morph-using-opencv-cpp-python/
Sheng G., Kai W. SDK-based real-time face tracking and animation / Archived // Intel RealSense. 2016. URL: https://software.intel.com/en-us/articles/intel-realsense-sdk-based-real-time-face-tracking-and-animation
Zinnatov A.A. Mechanisms of realistic facial expressions for anthropomorphic social agents / Demo video // YouTube. 2020. URL: https://youtu.be/vljrw9R5Yuc?list=PLIY6UcIDS7wKyVAWBklsESdA0fteFL0Y-
Zinnatov A.A. FaceAnimation_UE4 / Source code // GitHub. 2020. URL: https://github.com/ainur-zinnatov/FaceAnimation_UE4.git