---
language:
- it
tags:
- summarization
datasets:
- ARTeLab/fanpage
metrics:
- rouge
base_model: gsarti/it5-base
model-index:
- name: summarization_fanpage128
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: cnn_dailymail
      type: cnn_dailymail
      config: 3.0.0
      split: test
    metrics:
    - type: rouge
      value: 13.2951
      name: ROUGE-1
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2JmMjEwNzVhZTIwYTczMDY4NzE5OGViYTNiNjdiNTZjMTY1NDhmZjhiMDZhYzY2NjI4MzVhMTUzNDVmNjFmZiIsInZlcnNpb24iOjF9.hCyv0ZQL-NC4NxOiR2n4GqY9zeT4bYnBBdembrYeQUTaS87W6f30H7VSQ-aYaGGe9P_GBG3iFGCKtvgn5rQnDw
    - type: rouge
      value: 5.1775
      name: ROUGE-2
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDY4MmFiYjQ4MzQxNzM4ZjI3ZGViNGM1ZTIzODFhOWJhZTU5OGEyNDk5M2VhNTNjY2YzMmZjNWFlYWVjMTIwZiIsInZlcnNpb24iOjF9.1nA0y5Cpvm14MvUoJ19L34Ic8tvsCAkPr6wdp7PMF6MW10wRe6F7k6EybzgG8RllV1zf26Z2L3rzt2wPE4fiCQ
    - type: rouge
      value: 11.541
      name: ROUGE-L
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOThmZDZmZWQxZWM5NWVkZTliMzNjOGMxNDI1YTExOWU0ZTk4NDMxZjc5ZDllNDVkNTZhZDI2ZjAzMjE5NDZlNCIsInZlcnNpb24iOjF9.fKSomEvGq5Gcf-lnyhlVWzlx3qZvgyWdO0gPojVwmK7EM6m-TBrCQABGXkaMxIgq9vQHo9gcia3gGQZMGl2YCQ
    - type: rouge
      value: 12.5922
      name: ROUGE-LSUM
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmU3NmQzMTM0YjgzMjMzMGIxM2Q5OWY4YjI2ZmEwNmEwYWQzOWNmODIzMjJjMWJlNzI0MjU1YTI5ZDhmNzYwZCIsInZlcnNpb24iOjF9.8l7ikfY1uQ7iXcktX1TMBz6wmXx1tyNlxk2YqKD1yq_z9a7v83g6WbSWDIyoHb18qGkd02Elt7cUgCAvobsjCA
    - type: loss
      value: 2.448732614517212
      name: loss
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGQxOGI0MjA4ZDBlNDA4YjhmNWEwOTZiMDFmNmMzOTE0NWRlNTlkZjkzNjQ3N2EyNzkwNDM0OWVhMzlmZmE4OSIsInZlcnNpb24iOjF9.40ugMNqcd_j2-i7DowTK4pC71TY0lrAz-cm05IpK7M2m-Rz7GkexW54zL6xzgz677KSJ_Lpz9ZlAWkKTTm3BBA
    - type: gen_len
      value: 19
      name: gen_len
      verified: true
      verifyToken: >-
        eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGVlYWZlZjc2MWEyYzE1NzRiYzJjZjM5Mzk1OWNhZGFkMDRjYTFjZTNhZTNiNGIyMmJiMWY3NTRlYjUyMjFmMCIsInZlcnNpb24iOjF9.9vVczqSCxDH-5CbGuDDzA1x5-Jj8wtebMw_lJbPm9DKNaMRbxk4GjmWpxjFeBrwK8jxv-8T3oJ-YSY6c1Yz6Dg
---
# summarization_fanpage128
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on the Fanpage dataset for abstractive summarization.

It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
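The ROUGE scores above measure n-gram overlap between generated and reference summaries. As a didactic sketch only (the verified scores above come from a full ROUGE implementation with stemming and bootstrap aggregation, not this function), ROUGE-1 F1 can be illustrated in pure Python:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each unigram counts at most as often as in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("il gatto dorme sul divano", "il gatto dorme"))  # → 0.75
```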
## Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-fanpage-128")
model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-fanpage-128")
```
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
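The hyperparameters above correspond to the standard flags of the 🤗 Transformers `Seq2SeqTrainer` example scripts. Assuming training was launched through `run_summarization.py` (the card does not state this, and `--output_dir` below is hypothetical), an equivalent invocation might look like:

```shell
python run_summarization.py \
  --model_name_or_path gsarti/it5-base \
  --dataset_name ARTeLab/fanpage \
  --do_train --do_eval \
  --learning_rate 5e-5 \
  --per_device_train_batch_size 3 \
  --per_device_eval_batch_size 3 \
  --seed 42 \
  --lr_scheduler_type linear \
  --num_train_epochs 4.0 \
  --output_dir ./summarization_fanpage128
```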
## Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation

More details and results can be found in the published work:
```bibtex
@Article{info13050228,
  AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
  TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
  JOURNAL = {Information},
  VOLUME = {13},
  YEAR = {2022},
  NUMBER = {5},
  ARTICLE-NUMBER = {228},
  URL = {https://www.mdpi.com/2078-2489/13/5/228},
  ISSN = {2078-2489},
  ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
  DOI = {10.3390/info13050228}
}
```