---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- esnli
metrics:
- accuracy
- f1
- rouge
- bleu
model-index:
- name: t5-small-e-snli-generation-label_and_explanation-selected-b64
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: esnli
      type: esnli
      config: plain_text
      split: validation
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8732981101402154
    - name: F1
      type: f1
      value: 0.8729633394714756
    - name: Rouge1
      type: rouge
      value: 0.6144211309547953
    - name: Bleu
      type: bleu
      value: 0.4223746159966924
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-small-e-snli-generation-label_and_explanation-selected-b64

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the esnli dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9257
- Accuracy: 0.8733
- F1: 0.8730
- Bertscore F1: 0.9356
- Rouge1: 0.6144
- Rouge2: 0.4096
- Rougel: 0.5592
- Rougelsum: 0.5611
- Bleu: 0.4224

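As a quick illustration of what these scores correspond to, here is a minimal inference sketch. It assumes the checkpoint is published on the Hub as `sara-nabhani/t5-small-e-snli-generation-label_and_explanation-selected-b64` (repository name inferred from the commit author, not confirmed by this card) and that inputs use a `premise: ... hypothesis: ...` prompt, which is likewise an assumption about the training format.

```python
# Minimal inference sketch: the repo id and the prompt format are
# assumptions, not documented in this card.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="sara-nabhani/t5-small-e-snli-generation-label_and_explanation-selected-b64",
)

out = generator(
    "premise: A man is playing a guitar on stage. "
    "hypothesis: A person is performing music.",
    max_length=64,
)
print(out[0]["generated_text"])  # expected: a label followed by an explanation
```
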
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

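Pending a fuller description from the authors, the sketch below shows one plausible way the e-SNLI examples could be flattened into text2text pairs. The `source`/`target` format is a guess based on the model name (joint label-and-explanation generation), not the authors' actual preprocessing code.

```python
# Hedged data-formatting sketch: only the dataset id ("esnli") comes from
# this card; the prompt and target templates are hypothetical.
from datasets import load_dataset

ds = load_dataset("esnli")  # splits: train / validation / test
label_names = ["entailment", "neutral", "contradiction"]

def to_text2text(example):
    # Hypothetical templates; the card only states that the model
    # generates the label and an explanation.
    source = f"premise: {example['premise']} hypothesis: {example['hypothesis']}"
    target = f"{label_names[example['label']]} explanation: {example['explanation_1']}"
    return {"source": source, "target": target}

ds = ds.map(to_text2text)
print(ds["train"][0]["source"])
print(ds["train"][0]["target"])
```
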
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged training-arguments sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10

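For readers who want to approximate this setup, the sketch below restates the hyperparameters as `Seq2SeqTrainingArguments` (valid for the Transformers version pinned under Framework versions). Everything not listed above, such as `output_dir`, the evaluation cadence, and `predict_with_generate`, is an assumption; the 2000-step cadence is inferred from the results table.

```python
# Hedged reconstruction of the training configuration from the list above;
# unlisted arguments are assumptions, not the authors' confirmed settings.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-e-snli-generation-label_and_explanation-selected-b64",
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
    evaluation_strategy="steps",  # the results table reports eval every 2000 steps
    eval_steps=2000,
    predict_with_generate=True,   # assumed: needed for ROUGE/BLEU during eval
)
```
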
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Bertscore F1 | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu   |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------------:|:------:|:------:|:------:|:---------:|:------:|
| 1.6638        | 0.23  | 2000  | 2.0039          | 0.7883   | 0.7869 | 0.9274       | 0.5705 | 0.3601 | 0.5175 | 0.5192    | 0.3730 |
| 1.2998        | 0.47  | 4000  | 1.9378          | 0.8283   | 0.8293 | 0.9303       | 0.5861 | 0.3748 | 0.5310 | 0.5329    | 0.3854 |
| 1.2351        | 0.7   | 6000  | 1.8752          | 0.8431   | 0.8437 | 0.9321       | 0.5951 | 0.3880 | 0.5411 | 0.5430    | 0.3954 |
| 1.1948        | 0.93  | 8000  | 1.9346          | 0.8536   | 0.8529 | 0.9333       | 0.6018 | 0.3931 | 0.5451 | 0.5472    | 0.4006 |
| 1.1537        | 1.16  | 10000 | 1.8881          | 0.8654   | 0.8647 | 0.9332       | 0.6070 | 0.4023 | 0.5483 | 0.5506    | 0.4096 |
| 1.1298        | 1.4   | 12000 | 1.9265          | 0.8690   | 0.8685 | 0.9337       | 0.6053 | 0.3988 | 0.5507 | 0.5526    | 0.4093 |
| 1.1219        | 1.63  | 14000 | 1.9017          | 0.8713   | 0.8714 | 0.9332       | 0.6029 | 0.3941 | 0.5470 | 0.5489    | 0.4042 |
| 1.1088        | 1.86  | 16000 | 1.9257          | 0.8733   | 0.8730 | 0.9356       | 0.6144 | 0.4096 | 0.5592 | 0.5611    | 0.4224 |


### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2