Verified Explainability Core: A GD-ANFIS/SHAP Hybrid Architecture for XAI 2.0

Y. V. Trofimov
A. D. Lebedev
Andrei Sergeevich Ilin
Alexey Nikolaevich Averkin

Abstract

This paper proposes a hybrid explainable AI architecture that fuses a fully differentiable neuro-fuzzy GD-ANFIS model with the post-hoc SHAP method. The integration is designed to meet XAI 2.0 principles, which call for explanations that are simultaneously transparent, verifiable, and adaptable. GD-ANFIS produces human-readable Takagi-Sugeno rules, ensuring structural interpretability, whereas SHAP delivers quantitative feature contributions derived from Shapley theory. To merge these layers, we introduce a comparative-audit mechanism that automatically matches the sets of key features identified by both methods, checks whether their directions of influence coincide, and assesses the consistency between SHAP numerical scores and GD-ANFIS linguistic rules. This dual-loop verification was evaluated on global land-subsidence mapping and achieved RMSE of 2.30 and 2.36 on the Boston Housing and surface-water-quality monitoring tasks respectively, all with full interpretability preserved. In every case, the top-feature overlap between the two explanation layers exceeded 60%, demonstrating strong agreement between structural and numerical interpretations. The proposed architecture therefore offers a practical foundation for responsible XAI 2.0 deployment in critical domains ranging from medicine and ecology to geoinformation systems and finance.
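To make the comparative audit concrete: in a first-order Takagi-Sugeno system each rule contributes a linear consequent y_i = a_i · x + b_i, and the model output is the firing-strength-weighted average of these terms, so both the rule structure and the SHAP attributions can be reduced to per-feature importance vectors and compared directly. The sketch below is a minimal, hypothetical illustration of that comparison, not the authors' implementation; it assumes per-sample SHAP values have already been computed (for example with the shap package) and that the trained GD-ANFIS exposes its rule firing strengths and consequent coefficients. All function and variable names are illustrative.

    import numpy as np

    def comparative_audit(shap_values, rule_weights, rule_coeffs, k=5):
        # shap_values : (n_samples, n_features) SHAP attributions per sample.
        # rule_weights: (n_rules,) mean normalized firing strength of each rule.
        # rule_coeffs : (n_rules, n_features) linear Takagi-Sugeno consequents.

        # Global SHAP importance: mean absolute contribution of each feature.
        shap_imp = np.abs(shap_values).mean(axis=0)

        # GD-ANFIS importance: consequent slopes aggregated over rules,
        # weighted by how strongly each rule fires on average.
        anfis_imp = np.abs(rule_coeffs).T @ rule_weights
        anfis_signed = rule_coeffs.T @ rule_weights

        # Check 1: do both layers single out the same top-k features?
        top_shap = set(np.argsort(shap_imp)[-k:])
        top_anfis = set(np.argsort(anfis_imp)[-k:])
        overlap = len(top_shap & top_anfis) / k

        # Check 2: do the directions of influence coincide on shared features?
        shared = sorted(top_shap & top_anfis)
        if shared:
            shap_sign = np.sign(shap_values.mean(axis=0))[shared]
            anfis_sign = np.sign(anfis_signed)[shared]
            sign_agreement = float(np.mean(shap_sign == anfis_sign))
        else:
            sign_agreement = float("nan")

        return overlap, sign_agreement

Under these assumptions, an overlap at or above the 60% level reported in the abstract would count as agreement between the structural (rule-based) and numerical (Shapley-based) explanation layers. The third check described in the abstract, consistency between SHAP magnitudes and the linguistic labels of the fuzzy rules, requires access to the fitted membership functions and is omitted from this sketch.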

Article Details

How to Cite
Trofimov, Y. V., A. D. Lebedev, A. S. Ilin, and A. N. Averkin. “Verified Explainability Core: A GD-ANFIS/SHAP Hybrid Architecture for XAI 2.0”. Russian Digital Libraries Journal, vol. 28, no. 5, Dec. 2025, pp. 1230-52, doi:10.26907/1562-5419-2025-28-5-1230-1252.

References

1. Trofimov Y.V., Shevchenko A.V., Averkin A.N., Muravyov I.P., Kuznetsov E.M. Concept of hierarchically organized explainable intelligent systems: synthesis of deep neural networks, fuzzy logic and incremental learning in medical diagnostics // Proceedings of the VI International Conference on Neural Networks and Neurotechnologies (NeuroNT). 2025. P. 14–17. https://doi.org/10.1109/NeuroNT66873.2025.11049976
2. Rudin C. Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead // Nature Machine Intelligence. 2019. Vol. 1, No. 5. P. 206–215. https://doi.org/10.1038/s42256-019-0048-x
3. Lundberg S.M., Lee S.-I. A unified approach to interpreting model predictions // Advances in Neural Information Processing Systems. 2017. Vol. 30. P. 4765–4774. https://doi.org/10.48550/arXiv.1705.07874
4. Ribeiro M.T., Singh S., Guestrin C. “Why Should I Trust You?” Explaining the predictions of any classifier // Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016. P. 1135–1144. https://doi.org/10.1145/2939672.2939778
5. Lipton Z.C. The mythos of model interpretability // Communications of the ACM. 2018. Vol. 61, No. 10. P. 36–43. https://doi.org/10.1145/3233231
6. Doshi-Velez F., Kim B. Towards a rigorous science of interpretable machine learning // arXiv preprint. 2017. arXiv:1702.08608. https://doi.org/10.48550/arXiv.1702.08608
7. Jang J.S.R. ANFIS: Adaptive-network-based fuzzy inference system // IEEE Transactions on Systems, Man, and Cybernetics. 1993. Vol. 23, No. 3. P. 665–685. https://doi.org/10.1109/21.256541
8. Zadeh L.A. Fuzzy sets // Information and Control. 1965. Vol. 8, No. 3. P. 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X
9. Trofimov Y.V., Averkin A.N. The relationship between trusted artificial intelligence and XAI 2.0: theory and frameworks // Soft Measurements and Computing. 2025. Vol. 90, No. 5. P. 68–84. https://doi.org/10.36871/2618-9976.2025.05.006
10. Takagi T., Sugeno M. Fuzzy identification of systems and its applications to modeling and control // IEEE Transactions on Systems, Man, and Cybernetics. 1985. Vol. 15, No. 1. P. 116–132. https://doi.org/10.1109/TSMC.1985.6313399
11. Nguyen T., Mirjalili S. X-ANFIS: explainable adaptive neuro-fuzzy inference system: repository [Electronic resource] // GitHub. 2023. Accessed: 15.01.2025.
12. Shapley L.S. A value for n-person games // Contributions to the Theory of Games. Vol. 2. Princeton University Press, 1953. P. 307–317. https://doi.org/10.1515/9781400881970-018
13. Breiman L. Random forests // Machine Learning. 2001. Vol. 45, No. 1. P. 5–32. https://doi.org/10.1023/A:1010933404324
14. Comprehensive surface water quality monitoring dataset (1940–2023): dataset [Electronic resource] // Figshare. 2025. https://doi.org/10.6084/m9.figshare.27800394. Accessed: July 2025.
15. Hasan M.F., Smith R., Vajedian S., Majumdar S., Pommerenke R. Global land subsidence mapping reveals widespread loss of aquifer storage capacity // Nature Communications. 2023. Vol. 14. Art. 6180. https://doi.org/10.1038/s41467-023-41933-z
