A Deep Learning Framework for Improving Explainability in AI-Generated Summaries
Abstract
The burgeoning field of artificial intelligence has witnessed significant advances in natural language processing, particularly in automated text summarization. Despite this progress, the opacity of deep learning models poses a significant challenge to the interpretability and trustworthiness of AI-generated summaries. This paper proposes a novel deep learning framework designed to enhance the explainability of AI-generated summaries, thereby bridging the gap between model performance and user trust. Our framework leverages transformer-based architectures and attention mechanisms not only to generate high-quality summaries but also to provide interpretable insights into the models' decision-making processes. By integrating layer-wise relevance propagation (LRP) with attention distributions, our approach elucidates the contribution of individual tokens and sentences to the final summary output. This dual mechanism facilitates a granular understanding of how input text is transformed into concise, coherent summaries, offering end-users a more transparent view of the model's behavior. We evaluate the framework on several benchmark datasets, including CNN/Daily Mail and XSum, to demonstrate its efficacy in producing both accurate and explainable summaries. Our experiments indicate that the approach maintains competitive summarization performance while significantly improving the explainability metrics introduced in this work. Finally, we discuss the implications of improved explainability for application domains such as legal document analysis and medical report synthesis, where transparency of the decision-making process is crucial.
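The attribution mechanism described above can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes a simplified setting in which each layer is represented by a single row-stochastic attention map (rows sum to 1), and relevance assigned to the summary-side tokens is redistributed backwards through the layers, LRP-style, so that the total relevance is conserved at every step. The function name `attention_lrp_attribution` and the layer-averaged attention format are illustrative assumptions.

```python
import numpy as np

def attention_lrp_attribution(attentions, output_relevance):
    """Propagate relevance from the output tokens back to the input tokens
    by redistributing it through each layer's attention weights.

    attentions: list of (n_tokens, n_tokens) row-stochastic attention maps,
                ordered from the first layer to the last (illustrative format:
                one head-averaged map per layer).
    output_relevance: (n_tokens,) relevance assigned to the final layer's tokens.
    Returns a (n_tokens,) vector of per-input-token relevance scores.
    """
    relevance = np.asarray(output_relevance, dtype=float)
    # Walk backwards through the layers, spreading each token's relevance
    # over the tokens it attended to. Because each row of the attention
    # map sums to 1, the total relevance is conserved at every step,
    # mirroring the conservation property of LRP.
    for attn in reversed(attentions):
        relevance = relevance @ attn
    return relevance

# Toy usage: two layers over two tokens.
attns = [np.array([[0.5, 0.5], [0.2, 0.8]]),
         np.array([[1.0, 0.0], [0.3, 0.7]])]
scores = attention_lrp_attribution(attns, np.array([0.6, 0.4]))
```

In a real transformer one would average or weight the per-head attention maps and combine them with LRP relevances computed through the feed-forward sublayers; the sketch only captures the conservation-preserving backward redistribution that makes per-token contributions interpretable.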
This research contributes to the growing body of literature advocating for explainable AI, paving the way for more trustworthy and user-centric AI applications in natural language processing. In conclusion, the proposed deep learning framework represents a significant step towards reconciling the often conflicting goals of performance and explainability in AI-generated text summarization, offering a robust solution to enhance user trust and model transparency.