Analysing Machine Learning Models based on Explainable Artificial Intelligence Methods in Educational Analytics
Abstract
Predicting early student dropout at Russian universities is a pressing problem that calls for new approaches. One way to address it is to build predictive systems that use the student data already available in university information systems. This paper investigates machine learning models for predicting early student dropout, trained on student characteristics and performance data. The main scientific novelty of the work lies in the use of explainable AI (XAI) methods to interpret and explain the behavior of the trained models. These methods reveal which input features (student characteristics) most strongly influence the prediction results and help explain why the models make particular decisions. The findings expand our understanding of how various factors influence early student dropout.
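The feature-attribution idea described in the abstract can be sketched with a model-agnostic XAI technique such as permutation importance. The sketch below is illustrative only: the feature names (`gpa`, `attendance`, `entry_score`), the synthetic data, and the model choice are placeholder assumptions, not the study's actual dataset or models.

```python
# A minimal sketch of ranking input features (student characteristics) by
# their influence on a dropout-prediction model, using permutation importance.
# All data here is synthetic; features and labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic "student characteristics"
gpa = rng.uniform(2.0, 5.0, n)          # grade point average
attendance = rng.uniform(0.0, 1.0, n)   # share of classes attended
entry_score = rng.uniform(120, 300, n)  # admission exam score
X = np.column_stack([gpa, attendance, entry_score])
# In this toy setup, dropout depends only on GPA and attendance
y = ((gpa < 3.0) & (attendance < 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: the drop in test score when one feature is shuffled,
# i.e. how much the model relies on that feature for its predictions.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["gpa", "attendance", "entry_score"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this toy example the shuffled `entry_score` barely changes the score, while shuffling `gpa` or `attendance` degrades it, matching how the label was constructed; attribution methods such as SHAP or Integrated Gradients serve the same purpose for the models discussed in the paper.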
This work is licensed under a Creative Commons Attribution 4.0 International License.
By submitting an article for publication in the Russian Digital Libraries Journal (RDLJ), the authors automatically consent to grant Kazan (Volga Region) Federal University (KFU) a limited license to use the article (provided, of course, that it is accepted for publication). This means that KFU has the right to publish the article in the next issue of the journal (on the website or in printed form), to reprint it in the RDLJ CD archives, and to include it in information systems or databases produced by KFU.
All copyrighted materials are placed in RDLJ with the consent of the authors. If any author objects to the publication of their materials on this site, the materials can be removed upon written notification to the Editor.
Documents published in RDLJ are protected by copyright, and all rights are reserved by the authors. Authors independently monitor compliance with their rights to reproduce or translate their papers published in the journal. If material published in RDLJ is reprinted with permission by another publisher or translated into another language, a reference to the original publication must be given.
By submitting an article for publication in RDLJ, authors should take into account that Internet publication, on the one hand, provides unique opportunities for access to their content but, on the other hand, represents a new form of information exchange in the global information society, in which authors and publishers are not always protected against unauthorized copying or other use of copyrighted materials.
RDLJ is copyrighted. When using materials from the journal, the URL must be indicated: index.phtml page = elbib / rus / journal?. Any change, addition, or editing of the author's text is not allowed. Copying individual fragments of articles from the journal is allowed: readers may distribute, remix, adapt, and build upon an article, even commercially, as long as they credit the original article.
Requests for the right to reproduce or use any materials published in RDLJ should be addressed to the Editor-in-Chief A.M. Elizarov at the following address: amelizarov@gmail.com.
The publishers of RDLJ are not responsible for the views set out in published opinion articles.
We suggest that authors download the copyright agreement on the transfer of non-exclusive rights to use the work from this page, sign it, and send a scanned copy to the publisher's e-mail address.