---
tags:
- summarization
language:
- it
metrics:
- rouge
model-index:
- name: summarization_mbart_mlsum
results: []
datasets:
- ARTeLab/mlsum-it
---
# mbart_summarization_mlsum
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the [ARTeLab/mlsum-it](https://huggingface.co/datasets/ARTeLab/mlsum-it) dataset for abstractive summarization of Italian news articles.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen Len: 33.5945
## Usage
```python
from transformers import MBartTokenizer, MBartForConditionalGeneration
tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-mlsum")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-mlsum")
```
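Beyond loading the model, a minimal inference sketch is shown below. The input text, maximum lengths, and beam-search settings are illustrative assumptions, not values taken from this card.

```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-mlsum")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-mlsum")

# Placeholder Italian article text; replace with the article you want to summarize.
text = "Il governo ha annunciato nuove misure economiche per sostenere le imprese..."

# Tokenize the article, truncating to the model's maximum input length.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# Generate a summary with beam search; these generation parameters are assumptions.
summary_ids = model.generate(**inputs, num_beams=4, max_length=130, early_stopping=True)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```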
### Training hyperparameters
The following hyperparameters were used during training (a sketch of how they might map to `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
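For orientation, the hyperparameters above could be expressed as `Seq2SeqTrainingArguments` roughly as in the sketch below. The training script, `output_dir`, and `predict_with_generate` setting are not stated on this card and are assumptions here.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the reported hyperparameters onto Seq2SeqTrainingArguments.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-summarization-mlsum",  # placeholder output path
    learning_rate=5e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=4.0,
    predict_with_generate=True,  # assumption: ROUGE is computed on generated summaries
)
```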
### Framework versions
- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3