sara-nabhani committed
Commit 42f3566
1 Parent(s): a5b29e5

update model card README.md

Files changed (1)
  1. README.md +101 -0

README.md ADDED
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- esnli
metrics:
- accuracy
- f1
- rouge
- bleu
model-index:
- name: t5-small-e-snli-generation-label_and_explanation-selected-b48
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: esnli
      type: esnli
      config: plain_text
      split: validation
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8657793131477342
    - name: F1
      type: f1
      value: 0.8658628497423001
    - name: Rouge1
      type: rouge
      value: 0.6049779979620054
    - name: Bleu
      type: bleu
      value: 0.4039391893498565
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-small-e-snli-generation-label_and_explanation-selected-b48

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the esnli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9091
- Accuracy: 0.8658
- F1: 0.8659
- Bertscore F1: 0.9337
- Rouge1: 0.6050
- Rouge2: 0.3983
- Rougel: 0.5492
- Rougelsum: 0.5513
- Bleu: 0.4039

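The card does not include an inference example. Below is a minimal sketch of how the checkpoint could be loaded with the `transformers` text2text-generation pipeline; the repository path and the premise/hypothesis prompt format are assumptions rather than details taken from the training code, so adjust them to match how the model was actually fine-tuned.

```python
from transformers import pipeline

# Assumed repository path for this checkpoint; replace with the actual model id.
model_id = "sara-nabhani/t5-small-e-snli-generation-label_and_explanation-selected-b48"

generator = pipeline("text2text-generation", model=model_id)

# Assumed prompt template: the exact premise/hypothesis formatting used during
# fine-tuning is not documented in this card, so this is only illustrative.
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
prompt = f"premise: {premise} hypothesis: {hypothesis}"

# The model name suggests the output contains both an entailment label and an explanation.
outputs = generator(prompt, max_new_tokens=64)
print(outputs[0]["generated_text"])
```
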
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10

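For reference, these settings map roughly onto `Seq2SeqTrainingArguments` as sketched below. This is a reconstruction, not the original training script; `output_dir` and the evaluation cadence (every 2000 steps, inferred from the results table) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed above; not the
# original training configuration.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-e-snli-generation-label_and_explanation-selected-b48",  # assumed
    learning_rate=1e-3,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
    evaluation_strategy="steps",   # assumption: the results table logs eval every 2000 steps
    eval_steps=2000,               # assumption
    predict_with_generate=True,    # assumption: needed to compute ROUGE/BLEU during eval
)
```
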
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Bertscore F1 | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu   |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------------:|:------:|:------:|:------:|:---------:|:------:|
| 1.7285        | 0.17  | 2000  | 1.9945          | 0.7799   | 0.7792 | 0.9249       | 0.5631 | 0.3517 | 0.5091 | 0.5116    | 0.3617 |
| 1.3318        | 0.35  | 4000  | 1.9494          | 0.7980   | 0.7971 | 0.9295       | 0.5766 | 0.3656 | 0.5218 | 0.5234    | 0.3785 |
| 1.2662        | 0.52  | 6000  | 1.8983          | 0.8322   | 0.8331 | 0.9289       | 0.5769 | 0.3656 | 0.5205 | 0.5225    | 0.3727 |
| 1.2285        | 0.7   | 8000  | 1.9078          | 0.8391   | 0.8396 | 0.9313       | 0.5833 | 0.3734 | 0.5304 | 0.5321    | 0.3884 |
| 1.1973        | 0.87  | 10000 | 1.9246          | 0.8485   | 0.8470 | 0.9303       | 0.5888 | 0.3782 | 0.5322 | 0.5339    | 0.3868 |
| 1.1715        | 1.05  | 12000 | 1.9262          | 0.8561   | 0.8565 | 0.9331       | 0.6020 | 0.3950 | 0.5464 | 0.5479    | 0.4039 |
| 1.1368        | 1.22  | 14000 | 1.9155          | 0.8621   | 0.8612 | 0.9313       | 0.6027 | 0.3918 | 0.5442 | 0.5463    | 0.3889 |
| 1.1281        | 1.4   | 16000 | 1.9091          | 0.8658   | 0.8659 | 0.9337       | 0.6050 | 0.3983 | 0.5492 | 0.5513    | 0.4039 |

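The table mixes classification metrics (accuracy, F1) with generation metrics (BERTScore, ROUGE, BLEU), which suggests the generated sequence is split into a predicted label and an explanation before scoring. The sketch below shows one plausible way to compute such metrics with the `evaluate` library; the `label: explanation` output format and the parsing are assumptions, not the author's actual evaluation code.

```python
import evaluate

# Hypothetical model outputs and references in an assumed "label: explanation" format.
predictions = ["entailment: a person playing a guitar on stage is performing music."]
references = ["entailment: playing a guitar on stage means performing music."]

def split_label(text):
    # Assumed parsing: everything before the first colon is the label.
    label, _, explanation = text.partition(":")
    return label.strip().lower(), explanation.strip()

pred_labels, pred_expls = zip(*(split_label(p) for p in predictions))
ref_labels, ref_expls = zip(*(split_label(r) for r in references))

# Classification metric on the parsed labels.
accuracy = sum(p == r for p, r in zip(pred_labels, ref_labels)) / len(pred_labels)

# Generation metrics on the explanation part.
rouge = evaluate.load("rouge").compute(predictions=list(pred_expls), references=list(ref_expls))
bleu = evaluate.load("bleu").compute(predictions=list(pred_expls), references=[[r] for r in ref_expls])

print(accuracy, rouge["rouge1"], bleu["bleu"])
```
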
### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2