arXiv:2412.03884v3 Announce Type: replace
Abstract: Explainable Artificial Intelligence (XAI) methods are increasingly used in safety-critical domains, yet there is no unified framework to jointly evaluate fidelity, interpretability, robustness, fairness, and completeness. We address this gap through two contributions. First, we propose a multi-criteria evaluation framework that formalizes these five criteria using principled metrics: fidelity via prediction-gap analysis; interpretability via a composite concentration-coherence-contrast score; robustness via cosine-similarity perturbation stability; fairness via Jensen-Shannon divergence across demographic groups; and completeness via feature-ablation coverage. These are integrated using an entropy-weighted dynamic scoring scheme that adapts to domain-specific priorities. Second, we introduce Perturbation-Gradient Consensus Attribution (PGCA), which fuses grid-based perturbation importance with Grad-CAM++ through consensus amplification and adaptive contrast enhancement, combining perturbation fidelity with gradient-based spatial precision. We evaluate across five domains (brain tumor MRI, plant disease, security screening, gender, and sunglass detection) using fine-tuned ResNet-50 models. PGCA achieves the best performance in fidelity ($2.22 \pm 1.62$), interpretability ($3.89 \pm 0.33$), and fairness ($4.95 \pm 0.03$), with statistically significant improvements over baselines ($p < 10^{-7}$). Sensitivity analysis shows stable rankings (Kendall's $\tau \geq 0.88$). Code and results are publicly available.
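The abstract does not spell out the entropy-weighted integration step; a minimal sketch of the standard entropy-weight method for aggregating the five criterion scores, assuming higher-is-better scores and illustrative function names (not taken from the paper):

```python
import numpy as np

def entropy_weights(scores: np.ndarray) -> np.ndarray:
    """Entropy-weight method: criteria that discriminate more between
    methods (lower normalized entropy) receive higher weight.
    scores: (n_methods, n_criteria) matrix of positive scores."""
    # Normalize each criterion column into a probability distribution.
    p = scores / scores.sum(axis=0, keepdims=True)
    n = scores.shape[0]
    # Shannon entropy per criterion, normalized to [0, 1] by ln(n).
    e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
    d = 1.0 - e  # degree of diversification per criterion
    return d / d.sum()

def aggregate(scores: np.ndarray) -> np.ndarray:
    """Weighted sum of criterion scores, one overall score per method."""
    return scores @ entropy_weights(scores)
```

A criterion on which all methods tie carries no ranking information and so gets (near-)zero weight, which is what makes the scheme adapt to whichever criteria actually separate the methods in a given domain.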
Behavior change beyond intervention: an activity-theoretical perspective on human-centered design of personal health technology
Introduction: Modern personal technologies, such as smartphone apps with artificial intelligence (AI) capabilities, have significant potential for helping people make necessary changes in their behavior.

