TY - JOUR
T1 - A Maturity Model for Practical Explainability in Artificial Intelligence-Based Applications
T2 - Integrating Analysis and Evaluation (MM4XAI-AE) Models
AU - Muñoz-Ordóñez, Julián
AU - Cobos, Carlos
AU - Vidal-Rojas, Juan C.
AU - Herrera, Francisco
N1 - Publisher Copyright:
Copyright © 2025 Julián Muñoz-Ordóñez et al. International Journal of Intelligent Systems published by John Wiley & Sons Ltd.
PY - 2025
Y1 - 2025
AB - The increasing adoption of artificial intelligence (AI) in critical domains such as healthcare, law, and defense demands robust mechanisms to ensure transparency and explainability in decision-making processes. While machine learning and deep learning algorithms have advanced significantly, their growing complexity presents persistent interpretability challenges. Existing maturity frameworks, such as Capability Maturity Model Integration, fall short of addressing the distinct requirements of explainability in AI systems, particularly where ethical compliance and public trust are paramount. To address this gap, we propose the Maturity Model for eXplainable Artificial Intelligence: Analysis and Evaluation (MM4XAI-AE), a domain-agnostic maturity model tailored to assess and guide the practical deployment of explainability in AI-based applications. The model integrates two complementary components, an analysis model and an evaluation model, structured across four maturity levels (operational, justified, formalized, and managed). It evaluates explainability across three critical dimensions: technical foundations, structured design, and human-centered explainability. MM4XAI-AE is grounded in the PAG-XAI framework, emphasizing the interrelated dimensions of practicality, auditability, and governance, and thereby aligns with current perspectives on responsible and trustworthy AI. The model is empirically validated through a structured evaluation of thirteen published AI applications from diverse sectors, whose design and deployment practices are analyzed. The results show a wide distribution across maturity levels, underscoring the model’s capacity to identify strengths, gaps, and actionable pathways for improving explainability. This work offers a structured and scalable framework for standardizing explainability practices and supports researchers, developers, and policymakers in fostering more transparent, ethical, and trustworthy AI systems.
KW - AI governance
KW - explainable artificial intelligence (XAI)
KW - maturity model
KW - practical explainability
KW - responsible AI
KW - trustworthy AI
UR - http://www.scopus.com/inward/record.url?scp=105009270958&partnerID=8YFLogxK
U2 - 10.1155/int/4934696
DO - 10.1155/int/4934696
M3 - Article
AN - SCOPUS:105009270958
SN - 0884-8173
VL - 2025
JO - International Journal of Intelligent Systems
JF - International Journal of Intelligent Systems
IS - 1
M1 - 4934696
ER -