This review provides a critical analysis of the evolution of Text Summarization (TS) techniques, ranging from early statistical methods to modern Transformer-based Large Language Models (LLMs). The paper evaluates the architectural differences between extractive and abstractive summarization, emphasizing the importance of linguistic coherence and factual accuracy. It identifies current research gaps, such as the challenges of multi-document summarization and the limitations of traditional evaluation metrics like ROUGE. By synthesizing recent breakthroughs in Deep Learning, the study offers a comprehensive roadmap for future research in Natural Language Processing (NLP). This work is essential reading for researchers aiming to develop trustworthy AI systems for automated content distillation and information retrieval.
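To illustrate the ROUGE limitation the review highlights (this sketch is not taken from the paper itself), the short Python snippet below scores a faithful paraphrase and a factually wrong summary against the same reference using the rouge-score package. Because ROUGE only measures n-gram overlap, the summary that flips a key fact can score as well as, or better than, the faithful one.

    # Illustrative sketch only, assuming the `rouge-score` package is installed
    # (pip install rouge-score); the example sentences are invented.
    from rouge_score import rouge_scorer

    reference = "the company reported a profit increase of ten percent in 2024"
    faithful = "the company said profits rose ten percent in 2024"
    unfaithful = "the company reported a profit decrease of ten percent in 2024"

    # ROUGE-1 and ROUGE-L F-scores, computed from n-gram / subsequence overlap.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)

    for name, candidate in [("faithful", faithful), ("unfaithful", unfaithful)]:
        scores = scorer.score(reference, candidate)
        print(name, {k: round(v.fmeasure, 3) for k, v in scores.items()})

    # The factually wrong summary differs from the reference by a single token,
    # so it outscores the faithful paraphrase despite reversing the claim:
    # lexical overlap does not capture factual accuracy.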

For more information, see the full paper (2025):
https://onlinelibrary.wiley.com/doi/10.1111/exsy.13833
