arXiv:2410.18560v2 Announce Type: replace
Abstract: Explainable Artificial Intelligence (XAI) methods in text summarization are essential for understanding model behavior and fostering trust in model-generated summaries. Despite the effectiveness of XAI methods, recent studies have highlighted a key challenge in this area known as the “disagreement problem”, which occurs when different XAI methods yield conflicting explanations for the same model outcome. Such discrepancies raise concerns about the consistency of explanations and reduce confidence in model interpretations, which is crucial for secure and accountable AI applications. This work is among the first to empirically investigate the disagreement problem in text summarization, demonstrating that such discrepancies are widespread in state-of-the-art summarization models. To address this gap, we propose Regional Explainable AI (RXAI), a novel segmentation-based approach in which each article is divided into smaller, coherent segments using sentence transformers and clustering. We then apply XAI methods to these segments to produce localized explanations, which reduce disagreement between different XAI methods and thereby enhance the trustworthiness of AI-generated summaries. Our results show that localized explanations are more consistent than full-text explanations. The proposed approach is validated on two benchmark summarization datasets, Extreme Summarization (XSum) and CNN/Daily Mail, showing a substantial decrease in disagreement. Additionally, an interactive JavaScript visualization tool is developed to enable easy, color-coded exploration of attribution scores at the sentence level, enhancing user comprehension of model explanations.
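To illustrate the segmentation step described above, the following is a minimal sketch, not the authors' implementation: it pairs a sentence-transformer encoder with clustering to split an article into coherent segments that can then be explained separately. The model name, cluster count, and helper function names are illustrative assumptions.

```python
# Sketch of segmenting an article before computing localized XAI attributions.
# Model choice, cluster count, and function names are assumptions for
# illustration only, not the paper's released code.
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

def segment_article(article: str, n_segments: int = 4) -> list[list[str]]:
    # Split the article into sentences.
    sentences = sent_tokenize(article)
    # Embed each sentence with a sentence-transformer encoder.
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(sentences)
    # Group semantically similar sentences into coherent segments.
    labels = AgglomerativeClustering(n_clusters=n_segments).fit_predict(embeddings)
    segments = [[] for _ in range(n_segments)]
    for sentence, label in zip(sentences, labels):
        segments[label].append(sentence)
    return segments

# Each segment can then be passed to attribution methods (e.g., Integrated
# Gradients or LIME) to obtain localized explanations whose agreement across
# XAI methods can be compared against full-text explanations.
```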