---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_fanpage128
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: scan
      type: scan
      config: simple
      split: train
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 18.018
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 1.0789
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 15.9721
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 15.9668
      verified: true
    - name: loss
      type: loss
      value: 2.6717886924743652
      verified: true
    - name: gen_len
      type: gen_len
      value: 18.9985
      verified: true
datasets:
- ARTeLab/fanpage
---

# summarization_fanpage128

This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on the Fanpage dataset for abstractive summarization.

It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041

## Usage 

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the fine-tuned checkpoint from the Hugging Face Hub
tokenizer = T5Tokenizer.from_pretrained("ARTeLab/it5-summarization-fanpage-128")
model = T5ForConditionalGeneration.from_pretrained("ARTeLab/it5-summarization-fanpage-128")
```
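A minimal generation sketch follows; the example article and the `max_length`/`num_beams` values are illustrative choices, not settings documented for this model.

```python
# Illustrative end-to-end example (assumed generation settings, not from the model card)
article = "Il governo ha approvato una nuova legge sulla scuola, che entrerà in vigore il prossimo anno."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```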

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
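For reference, here is a hedged sketch of how these values map onto `Seq2SeqTrainingArguments`; the `output_dir` name and the use of the `Seq2SeqTrainer` API are assumptions, not details stated in this card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above; output_dir is illustrative.
training_args = Seq2SeqTrainingArguments(
    output_dir="summarization_fanpage128",
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4.0,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Transformers default optimizer.
)
```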

### Framework versions

- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3

# Citation

More details and results are available in the [published work](https://www.mdpi.com/2078-2489/13/5/228).

```bibtex
@Article{info13050228,
    AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
    TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
    JOURNAL = {Information},
    VOLUME = {13},
    YEAR = {2022},
    NUMBER = {5},
    ARTICLE-NUMBER = {228},
    URL = {https://www.mdpi.com/2078-2489/13/5/228},
    ISSN = {2078-2489},
    ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
    DOI = {10.3390/info13050228}
}
```