Abstract:
This paper proposes a hybrid Explainable AI architecture that fuses a fully differentiable neuro-fuzzy GD-ANFIS model with the post-hoc SHAP method. The integration is designed to meet XAI 2.0 principles, which call for explanations that are simultaneously transparent, verifiable, and adaptable. GD-ANFIS produces human-readable Takagi-Sugeno rules, ensuring structural interpretability, whereas SHAP delivers quantitative feature contributions derived from Shapley theory. To merge these layers, we introduce a comparative-audit mechanism that automatically matches the sets of key features identified by both methods, checks whether the directions of influence coincide, and assesses the consistency between SHAP numerical scores and GD-ANFIS linguistic rules. This dual-loop audit was evaluated on three case studies: global soil-subsidence mapping, Boston Housing, and surface-water-quality monitoring; the latter two achieved RMSE values of 2.30 and 2.36, respectively, with full interpretability preserved in all cases. In every case, top-feature overlap between the two explanation layers exceeded 60%, demonstrating strong agreement between structural and numerical interpretations. The proposed architecture therefore offers a practical foundation for responsible XAI 2.0 deployment in critical domains ranging from medicine and ecology to geoinformation systems and finance.
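To make the comparative-audit mechanism concrete, the following is a minimal illustrative sketch (not the authors' implementation): it assumes a per-sample SHAP matrix `shap_values` and a hypothetical vector `rule_coeffs` of per-feature consequent coefficients aggregated from the GD-ANFIS Takagi-Sugeno rules, then reports top-feature overlap and sign (direction-of-influence) agreement.

```python
import numpy as np

def comparative_audit(shap_values, rule_coeffs, feature_names, top_k=5):
    """Sketch of the dual-layer audit: match top features and check directions.

    shap_values : (n_samples, n_features) SHAP contributions.
    rule_coeffs : (n_features,) aggregated GD-ANFIS consequent coefficients
                  (an assumed summary of the linguistic rule base).
    """
    # Global SHAP importance: mean absolute contribution per feature.
    shap_importance = np.abs(shap_values).mean(axis=0)
    # ANFIS-side importance: magnitude of aggregated rule coefficients (assumption).
    anfis_importance = np.abs(rule_coeffs)

    shap_top = set(np.argsort(shap_importance)[::-1][:top_k])
    anfis_top = set(np.argsort(anfis_importance)[::-1][:top_k])
    overlap = len(shap_top & anfis_top) / top_k  # fraction of shared top features

    # Direction check: sign of mean SHAP value vs. sign of rule coefficient.
    sign_agreement = float(np.mean(
        np.sign(shap_values.mean(axis=0)) == np.sign(rule_coeffs)
    ))

    return {
        "top_feature_overlap": overlap,
        "sign_agreement": sign_agreement,
        "shared_features": [feature_names[i] for i in shap_top & anfis_top],
    }
```

Under this sketch, an overlap above 0.6 for the top features would correspond to the agreement threshold reported in the abstract; the aggregation of rule coefficients into a single per-feature vector is an assumption made purely for illustration.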