---
language:
- it
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
---
# Dataset Card for fanpage

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: [Needs More Information]
- Repository: [Needs More Information]
- Paper: [Needs More Information]
- Leaderboard: [Needs More Information]
- Point of Contact: [Needs More Information]
### Dataset Summary

The Fanpage dataset contains news articles taken from the Italian news website Fanpage.

There are two features:

- source: the input news article.
- target: the summary of the article.
### Supported Tasks and Leaderboards

- abstractive-summarization
- summarization
### Languages

The text in the dataset is in Italian.
## Licensing Information

The Fanpage text summarization dataset by Nicola Landro, Ignazio Gallo, Riccardo La Grassa, and Edoardo Federici, derived from Fanpage, is licensed under Creative Commons Attribution 4.0 International.
## Citation Information

More details and results can be found in the published work:
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
```