efederici committed on
Commit c03b749
1 Parent(s): e10840f

Update README.md

Files changed (1)
  1. README.md +15 -27
README.md CHANGED
@@ -1,23 +1,22 @@
 ---
-language:
-- it_IT
-- it_IT
 tags:
-- generated_from_trainer
+- summarization
+language:
+- it
 metrics:
 - rouge
 model-index:
 - name: summarization_mbart_ilpost
   results: []
+datasets:
+- ARTeLab/ilpost
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
+# mbart_summarization_ilpost
 
-# summarization_mbart_ilpost
+This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on the IlPost dataset for abstractive summarization.
 
-This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
-It achieves the following results on the evaluation set:
+It achieves the following results:
 - Loss: 2.3640
 - Rouge1: 38.9101
 - Rouge2: 21.384
@@ -25,19 +24,12 @@ It achieves the following results on the evaluation set:
 - Rougelsum: 35.0743
 - Gen Len: 39.8843
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+## Usage
+```python
+from transformers import MBartTokenizer, MBartForConditionalGeneration
+tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-ilpost")
+model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-ilpost")
+```
 
 ### Training hyperparameters
 
@@ -50,13 +42,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 4.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.15.0.dev0
 - Pytorch 1.10.0+cu102
 - Datasets 1.15.1
-- Tokenizers 0.10.3
+- Tokenizers 0.10.3
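The `Usage` snippet added in this commit stops at loading the tokenizer and model. A minimal sketch of actually generating a summary with that checkpoint follows; the input text, beam size, and length limits are illustrative assumptions, not values from the model card:

```python
from transformers import MBartTokenizer, MBartForConditionalGeneration

tokenizer = MBartTokenizer.from_pretrained("ARTeLab/mbart-summarization-ilpost")
model = MBartForConditionalGeneration.from_pretrained("ARTeLab/mbart-summarization-ilpost")

# Illustrative Italian input; any article-length text works the same way.
text = "Il consiglio dei ministri ha approvato nuove misure per il trasporto pubblico locale."

inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,      # assumed beam size; the card does not specify one
    max_length=130,   # assumed cap; the card reports an average Gen Len of ~40 tokens
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```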
 
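On the metrics side, the card lists `rouge` in its metadata, and the Rouge1/Rouge2/Rougelsum values above are, by the usual Trainer convention, mid F-measures scaled by 100. A sketch of computing such scores with the `datasets` release pinned under Framework versions; the predictions and references below are placeholders, not the card's actual evaluation data:

```python
from datasets import load_metric  # datasets 1.15.x; also requires the rouge_score package

rouge = load_metric("rouge")

# Placeholder texts; the card's scores come from the IlPost evaluation data, not from these.
predictions = ["Il consiglio dei ministri approva le nuove misure."]
references = ["Il consiglio dei ministri ha approvato nuove misure per il trasporto pubblico."]

scores = rouge.compute(predictions=predictions, references=references)

# Each entry is an AggregateScore; report the mid F-measure scaled by 100.
for name in ("rouge1", "rouge2", "rougeL", "rougeLsum"):
    print(name, round(scores[name].mid.fmeasure * 100, 4))
```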