debarghabhattofficial committed
Commit 1b5c472 · 1 Parent(s): f0bde62

update model card README.md

Files changed (1): README.md (+99, −0)
README.md ADDED
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- qg_squadshifts
metrics:
- bleu
model-index:
- name: t5-small-squad-qg-a2c-spt-test
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: qg_squadshifts
      type: qg_squadshifts
      config: new_wiki
      split: test
      args: new_wiki
    metrics:
    - name: Bleu
      type: bleu
      value: 0.23547421773993357
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-small-squad-qg-a2c-spt-test

This model is a fine-tuned version of [lmqg/t5-small-squad-qg](https://huggingface.co/lmqg/t5-small-squad-qg) on the qg_squadshifts dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4547
- Bleu: 0.2355
- Precisions: [0.5101032779524023, 0.27323701230961817, 0.1849309090909091, 0.13089521804906926]
- Brevity Penalty: 0.9770
- Length Ratio: 0.9773
- Translation Length: 42313
- Reference Length: 43296
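
No usage example is included in the generated card; the following is a minimal inference sketch, assuming (a) the checkpoint is published under the committer's namespace as `debarghabhattofficial/t5-small-squad-qg-a2c-spt-test` (a guess, not stated in the card) and (b) it keeps the highlight-based input format of its base model `lmqg/t5-small-squad-qg` (a `generate question:` prefix with the answer span wrapped in `<hl>` tokens).

```python
# Minimal inference sketch, not part of the generated card.
# The repository id below is a hypothetical guess; adjust it to the actual Hub id.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "debarghabhattofficial/t5-small-squad-qg-a2c-spt-test"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Passage with the target answer highlighted between <hl> tokens
# (input format assumed to match the lmqg/t5-small-squad-qg base model).
text = ("generate question: Beyoncé further expanded her acting career, starring as blues "
        "singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> .")
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
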
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

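The metadata above names `qg_squadshifts` (config `new_wiki`, `test` split) as the evaluation data. A minimal loading sketch with 🤗 Datasets, assuming the dataset id resolves on the Hub exactly as written in the metadata:

```python
# Sketch for pulling the evaluation split named in the card's metadata.
# If the dataset has moved under an organisation (e.g. lmqg/qg_squadshifts),
# adjust the id accordingly.
from datasets import load_dataset

eval_set = load_dataset("qg_squadshifts", "new_wiki", split="test")
print(eval_set)               # number of rows in the test split
print(eval_set.column_names)  # inspect the available fields rather than assuming them
```
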
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch mirroring them appears after the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- label_smoothing_factor: 0.15

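The training script itself is not part of this card; the sketch below only restates the listed values as `Seq2SeqTrainingArguments`. The output directory is a placeholder, the batch size is assumed to be per device, and `evaluation_strategy="epoch"` is inferred from the per-epoch validation rows in the results table below.

```python
# Sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-squad-qg-a2c-spt-test",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=64,  # card's train_batch_size, assumed per device
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    label_smoothing_factor=0.15,
    evaluation_strategy="epoch",     # inferred from the per-epoch results table
)
```
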
### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|
| 3.6461 | 1.0 | 42 | 3.4498 | 0.2316 | [0.5034767900770531, 0.2677271431902381, 0.18033496967946866, 0.12674081080199603] | 0.9830 | 0.9832 | 42568 | 43296 |
| 3.5941 | 2.0 | 84 | 3.4449 | 0.2334 | [0.5064935064935064, 0.2706766917293233, 0.1837192369302461, 0.1297833102812356] | 0.9761 | 0.9764 | 42273 | 43296 |
| 3.5535 | 3.0 | 126 | 3.4418 | 0.2351 | [0.5046481676618663, 0.27023957042544405, 0.183162194034573, 0.12935904928891487] | 0.9863 | 0.9863 | 42705 | 43296 |
| 3.5221 | 4.0 | 168 | 3.4405 | 0.2346 | [0.5067099872863399, 0.2710557070510323, 0.1838082001389854, 0.12971505218045604] | 0.9808 | 0.9810 | 42474 | 43296 |
| 3.4813 | 5.0 | 210 | 3.4413 | 0.2353 | [0.5066437760165565, 0.27207408175970116, 0.18422346239481827, 0.12990788528124386] | 0.9819 | 0.9821 | 42521 | 43296 |
| 3.4576 | 6.0 | 252 | 3.4423 | 0.2378 | [0.5084422914119086, 0.27441317598236287, 0.18654947088417279, 0.13218146781199988] | 0.9820 | 0.9822 | 42524 | 43296 |
| 3.4317 | 7.0 | 294 | 3.4461 | 0.2370 | [0.508597711994339, 0.27434549523759955, 0.1864933105029457, 0.1320847546575702] | 0.9790 | 0.9792 | 42395 | 43296 |
| 3.4282 | 8.0 | 336 | 3.4430 | 0.2372 | [0.5112069374022651, 0.2755184768679551, 0.18752188630792577, 0.13300769002277302] | 0.9745 | 0.9748 | 42206 | 43296 |
| 3.3954 | 9.0 | 378 | 3.4467 | 0.2365 | [0.5071792823067703, 0.27278146381919965, 0.18532428138090423, 0.13094694250473146] | 0.9827 | 0.9828 | 42553 | 43296 |
| 3.3874 | 10.0 | 420 | 3.4494 | 0.2368 | [0.5085979459153868, 0.2738351999584232, 0.18577968360665237, 0.13147814699623506] | 0.9803 | 0.9805 | 42452 | 43296 |
| 3.3818 | 11.0 | 462 | 3.4468 | 0.2364 | [0.5076105112099184, 0.2726918885256111, 0.18536839364748764, 0.13117647058823528] | 0.9816 | 0.9818 | 42507 | 43296 |
| 3.3453 | 12.0 | 504 | 3.4515 | 0.2364 | [0.5064182291788891, 0.2725908291067177, 0.18462869502523432, 0.13056080244903276] | 0.9841 | 0.9842 | 42613 | 43296 |
| 3.3249 | 13.0 | 546 | 3.4488 | 0.2361 | [0.5086831701807228, 0.27295143665481353, 0.18476184964407663, 0.13056981267775997] | 0.9814 | 0.9815 | 42496 | 43296 |
| 3.3347 | 14.0 | 588 | 3.4510 | 0.2360 | [0.5115360041647933, 0.27477148080438757, 0.18571719938230238, 0.13102925672113863] | 0.9758 | 0.9760 | 42259 | 43296 |
| 3.3197 | 15.0 | 630 | 3.4541 | 0.2357 | [0.5084278920853148, 0.27313095639980267, 0.18462651997683846, 0.1300251872689804] | 0.9809 | 0.9811 | 42478 | 43296 |
| 3.3182 | 16.0 | 672 | 3.4539 | 0.2357 | [0.5085926832713404, 0.2730530525331741, 0.18453966415749856, 0.1300251872689804] | 0.9809 | 0.9811 | 42478 | 43296 |
| 3.3139 | 17.0 | 714 | 3.4545 | 0.2354 | [0.5091441111923921, 0.27309414705269736, 0.18470338860013358, 0.13042336724647194] | 0.9785 | 0.9788 | 42377 | 43296 |
| 3.301 | 18.0 | 756 | 3.4545 | 0.2351 | [0.5107360147723775, 0.2733852424749164, 0.18494009270326212, 0.1307443792444122] | 0.9753 | 0.9756 | 42241 | 43296 |
| 3.2999 | 19.0 | 798 | 3.4549 | 0.2354 | [0.5107762189784476, 0.2735509138381201, 0.18508053945413766, 0.13082142151373427] | 0.9760 | 0.9763 | 42269 | 43296 |
| 3.3027 | 20.0 | 840 | 3.4547 | 0.2355 | [0.5101032779524023, 0.27323701230961817, 0.1849309090909091, 0.13089521804906926] | 0.9770 | 0.9773 | 42313 | 43296 |

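As a sanity check, the reported Bleu can be re-derived from the Precisions, Translation Length, and Reference Length of the final row (epoch 20) using the standard BLEU decomposition, BLEU = BP · exp(mean of log n-gram precisions):

```python
import math

# Values from the final (epoch 20) row of the table above.
precisions = [0.5101032779524023, 0.27323701230961817, 0.1849309090909091, 0.13089521804906926]
translation_len, reference_len = 42313, 43296

# Brevity penalty: exp(1 - ref/hyp) when the hypothesis is shorter than the reference.
bp = math.exp(1 - reference_len / translation_len) if translation_len < reference_len else 1.0

# BLEU = BP * geometric mean of the four modified n-gram precisions.
bleu = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))

print(round(bp, 4))    # 0.977, matching the Brevity Penalty column
print(round(bleu, 4))  # 0.2355, matching the Bleu column
```
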
### Framework versions

- Transformers 4.27.4
- Pytorch 1.9.0
- Datasets 2.9.0
- Tokenizers 0.13.2
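
For reproducibility, a quick check that a local environment matches the versions listed above (newer releases may also work; this is only the combination the card reports):

```python
# Print the installed versions of the libraries listed above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers", transformers.__version__)  # card reports 4.27.4
print("PyTorch     ", torch.__version__)         # card reports 1.9.0
print("Datasets    ", datasets.__version__)      # card reports 2.9.0
print("Tokenizers  ", tokenizers.__version__)    # card reports 0.13.2
```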