arxiv:2406.11289

A Systematic Survey of Text Summarization: From Statistical Methods to Large Language Models

Published on Jun 17 · Submitted by hpzhang on Jun 21

Abstract

Text summarization research has undergone several significant transformations with the advent of deep neural networks, pre-trained language models (PLMs), and recent large language models (LLMs). This survey provides a comprehensive review of the research progress and evolution in text summarization through the lens of these paradigm shifts. It is organized into two main parts: (1) a detailed overview of datasets, evaluation metrics, and summarization methods before the LLM era, encompassing traditional statistical methods, deep learning approaches, and PLM fine-tuning techniques, and (2) the first detailed examination of recent advancements in benchmarking, modeling, and evaluating summarization in the LLM era. By synthesizing existing literature into a cohesive overview, this survey also discusses research trends and open challenges, and proposes promising research directions in summarization, aiming to guide researchers through the evolving landscape of summarization research.
