On Neural Network Model Robustness Through Generating Attribute-Invariant Embeddings

Abstract

Model robustness to minor deviations in the distribution of the input data is an important criterion in many tasks. Neural networks show high accuracy on the training sample, but the quality on the test sample can drop dramatically when the data distributions differ, and this effect is exacerbated at the level of subgroups within each category. In this article we show how model robustness at the subgroup level can be significantly improved with the help of a domain adaptation approach applied to image embeddings. We found that applying an adversarial constraint to the embeddings gives a significant increase in accuracy on the most difficult subgroup compared with previous models. The method was tested on two independent datasets: the worst-group accuracy is 90.3 on the Waterbirds dataset (subgroup {y: waterbird; a: land background}) and 92.22 on the CelebA dataset (subgroup {y: blond hair; a: male}).
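
Below is a minimal sketch of the attribute-invariant embedding idea described above, assuming a gradient reversal layer in the spirit of unsupervised domain adaptation by backpropagation [20]: the encoder is trained so that the class label y remains predictable from the embedding while the spurious attribute a does not. The names GradReverse and InvariantClassifier, the linear heads, and the lambd weighting are illustrative assumptions, not the architecture used in the article.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing from the attribute head into the encoder.
        return -ctx.lambd * grad_output, None


class InvariantClassifier(nn.Module):
    """Encoder with a label head and an adversarial attribute head."""

    def __init__(self, encoder, emb_dim, n_classes, n_attrs, lambd=1.0):
        super().__init__()
        self.encoder = encoder                        # e.g. a CNN trunk producing emb_dim features
        self.label_head = nn.Linear(emb_dim, n_classes)
        self.attr_head = nn.Linear(emb_dim, n_attrs)  # tries to predict the spurious attribute a
        self.lambd = lambd

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.label_head(z)
        a_logits = self.attr_head(GradReverse.apply(z, self.lambd))
        return y_logits, a_logits


def training_step(model, optimizer, x, y, a):
    # Both heads are trained to predict their targets; because of the reversed
    # gradient, the encoder is simultaneously pushed to remove attribute
    # information from the embedding z, making it attribute-invariant.
    y_logits, a_logits = model(x)
    loss = F.cross_entropy(y_logits, y) + F.cross_entropy(a_logits, a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At evaluation time, accuracy would then be reported separately for each (y, a) subgroup, and the worst of these values corresponds to the "most difficult subgroup" accuracy quoted in the abstract.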

References

1. Vladimir Vapnik. Principles of risk minimization for learning theory // Advances in Neural Information Processing Systems. 1992. P. 831–838.
2. Christian Szegedy et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning // Thirty-First AAAI Conference on Artificial Intelligence, 2017.
3. Dirk Hovy, Anders Søgaard. Tagging performance correlates with author age // Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). 2015. P. 483–488.
4. Nicole Shadowen. Ethics and bias in machine learning: A technical study of what makes us “good”. The Transhumanism Handbook. Springer, 2019. P. 247–261.
5. Osonde A Osoba, William Welser IV. An intelligence in our image: The risks of bias and errors in artificial intelligence. Rand Corporation, 2017.
6. Shai Danziger, Jonathan Levav, Liora Avnaim-Pesso. Extraneous factors in judicial decisions // Proceedings of the National Academy of Sciences 108.17. 2011. P. 6889–6892.
7. Amitabha Mukerjee et al. Multi-objective evolutionary algorithms for the risk-return trade-off in bank loan management // International Transactions in Operational Research 9.5. 2002. P. 583–597.
8. Julia K. Winkler et al. Association between surgical skin markings in dermoscopic images and diagnostic performance of a deep learning convolutional neural network for melanoma recognition // JAMA Dermatology 155.10. 2019. P. 1135–1141.
9. Philipp Tschandl, Cliff Rosendahl, Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions // Scientific Data 5. 2018. P. 180161.
10. Noel C. F. Codella et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC) // 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE. 2018. P. 168–172.
11. Marc Combalia et al. BCN20000: Dermoscopic lesions in the wild // arXiv preprint arXiv:1908.02288, 2019.
12. Shiori Sagawa et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization // arXiv preprint arXiv:1911.08731, 2019.
13. Sharon Li, Karan Goel, Albert Gu, Chris Ré. Automating the Art of Data Augmentation. CLAMP: An Instantiation of Model Patching, 2020.
URL: http://hazyresearch.stanford.edu/data-aug-part-4.
14. Jun-Yan Zhu et al. Unpaired image-to-image translation using cycle-consistent adversarial networks // Proceedings of the IEEE International Conference on Computer Vision, 2017. P. 2223–2232.
15. Phillip Isola et al. Image-to-image translation with conditional adversarial networks // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. P. 1125–1134.
16. Soumya Tripathy, Juho Kannala, Esa Rahtu. Learning image-to-image translation using paired and unpaired training samples // Asian Conference on Computer Vision. Springer, 2018. P. 51–66.
17. Ivan Anokhin et al. High-Resolution Daytime Translation Without Domain Labels // arXiv preprint arXiv:2003.08791, 2020.
18. Tero Karras et al. Analyzing and improving the image quality of StyleGAN // arXiv preprint arXiv:1912.04958, 2019.
19. Sangwoo Mo, Minsu Cho, Jinwoo Shin. InstaGAN: Instance-aware image-to-image translation // arXiv preprint arXiv:1812.10889, 2018.
20. Yaroslav Ganin, Victor Lempitsky. Unsupervised domain adaptation by backpropagation // arXiv preprint arXiv:1409.7495, 2014.
21. Ying Tai, Jian Yang, Xiaoming Liu. Image super-resolution via deep recursive residual network // Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. P. 3147–3155.
22. Jia Deng et al. ImageNet: A large-scale hierarchical image database // 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE. 2009. P. 248–255.
23. Diederik P. Kingma, Jimmy Ba. Adam: A method for stochastic optimization // arXiv preprint arXiv:1412.6980, 2014.