A Maturity Model for Practical Explainability in Artificial Intelligence-Based Applications: Integrating Analysis and Evaluation (MM4XAI-AE) Models

Julián Muñoz-Ordóñez, Carlos Cobos, Juan C. Vidal-Rojas, Francisco Herrera

Research output: Contribution to journal › Article › peer-review

Abstract

The increasing adoption of artificial intelligence (AI) in critical domains such as healthcare, law, and defense demands robust mechanisms to ensure transparency and explainability in decision-making processes. While machine learning and deep learning algorithms have advanced significantly, their growing complexity presents persistent interpretability challenges. Existing maturity frameworks, such as Capability Maturity Model Integration, fall short in addressing the distinct requirements of explainability in AI systems, particularly where ethical compliance and public trust are paramount. To address this gap, we propose the Maturity Model for eXplainable Artificial Intelligence: Analysis and Evaluation (MM4XAI-AE), a domain-agnostic maturity model tailored to assess and guide the practical deployment of explainability in AI-based applications. The model integrates two complementary components: an analysis model and an evaluation model, structured across four maturity levels—operational, justified, formalized, and managed. It evaluates explainability across three critical dimensions: technical foundations, structured design, and human-centered explainability. MM4XAI-AE is grounded in the PAG-XAI framework, emphasizing the interrelated dimensions of practicality, auditability, and governance, thereby aligning with current reflections on responsible and trustworthy AI. The MM4XAI-AE model is empirically validated through a structured evaluation of thirteen published AI applications from diverse sectors, analyzing their design and deployment practices. The results show a wide distribution across maturity levels, underscoring the model’s capacity to identify strengths, gaps, and actionable pathways for improving explainability. This work offers a structured and scalable framework to standardize explainability practices and supports researchers, developers, and policymakers in fostering more transparent, ethical, and trustworthy AI systems.
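To make the abstract's description of the model concrete, the sketch below encodes its stated structure: four maturity levels (operational, justified, formalized, managed) applied across three dimensions (technical foundations, structured design, human-centered explainability). The paper does not publish code; the class names, the weakest-dimension aggregation rule, and the example scores are hypothetical illustrations, not the authors' actual assessment procedure.

# Illustrative sketch only. All names and the aggregation rule are
# assumptions; only the level and dimension names come from the abstract.
from dataclasses import dataclass
from enum import IntEnum


class MaturityLevel(IntEnum):
    """The four MM4XAI-AE maturity levels, lowest to highest."""
    OPERATIONAL = 1
    JUSTIFIED = 2
    FORMALIZED = 3
    MANAGED = 4


@dataclass
class Assessment:
    """Per-dimension levels assigned to one AI application under review."""
    technical_foundations: MaturityLevel
    structured_design: MaturityLevel
    human_centered: MaturityLevel

    def overall(self) -> MaturityLevel:
        # Hypothetical aggregation rule: an application is only as mature
        # as its weakest dimension (the paper's actual rule may differ).
        return min(
            self.technical_foundations,
            self.structured_design,
            self.human_centered,
        )


if __name__ == "__main__":
    # A fictitious application with uneven explainability practices.
    app = Assessment(
        technical_foundations=MaturityLevel.FORMALIZED,
        structured_design=MaturityLevel.JUSTIFIED,
        human_centered=MaturityLevel.OPERATIONAL,
    )
    print(f"Overall maturity: {app.overall().name}")  # -> OPERATIONAL

Under these assumptions, a weak human-centered dimension caps the overall rating, which mirrors the abstract's point that the model surfaces gaps and actionable pathways rather than a single averaged score.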

Original language: English
Article number: 4934696
Journal: International Journal of Intelligent Systems
Volume: 2025
Issue number: 1
DOIs
State: Published - 2025
Externally published: Yes

Keywords

  • AI governance
  • explainable artificial intelligence (XAI)
  • maturity model
  • practical explainability
  • responsible AI
  • trustworthy AI
