shafin committed on
Commit
53c8c6b
1 Parent(s): 21d7a49

update model card README.md

Files changed (1)
  1. README.md +86 -0
README.md ADDED
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gbert-large-finetuned-cust
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gbert-large-finetuned-cust

This model is a fine-tuned version of [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1846

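The card does not state the downstream task, so the following is only a minimal loading sketch. It assumes the checkpoint kept a masked-language-modeling head (plausible for a BERT model fine-tuned without a documented task head) and that the repository id is `shafin/gbert-large-finetuned-cust`; both are assumptions, not facts from the card.

```python
from transformers import pipeline

# Assumptions: the checkpoint keeps an MLM head and lives at the repo id
# below; swap in the matching AutoModelFor... class if the real head differs.
fill_mask = pipeline("fill-mask", model="shafin/gbert-large-finetuned-cust")

# gbert-large is a German BERT, so a German probe sentence with [MASK]
# ("The customer is very [MASK] with the service."):
for pred in fill_mask("Der Kunde ist mit dem Service sehr [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```
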
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the sketch after this list shows how they map onto `TrainingArguments`):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP

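A minimal sketch of these values expressed as `transformers.TrainingArguments`, matching the 4.28-era API listed under framework versions below; `output_dir` and the per-epoch evaluation schedule are assumptions, since the card does not record them.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gbert-large-finetuned-cust",  # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumed from the one-eval-per-epoch table
)
# Trainer's Adam defaults already match betas=(0.9, 0.999) and
# epsilon=1e-08, so no extra optimizer arguments are needed.
```
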
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8251 | 1.0 | 157 | 0.5204 |
| 0.508 | 2.0 | 314 | 0.3953 |
| 0.4009 | 3.0 | 471 | 0.3242 |
| 0.3587 | 4.0 | 628 | 0.3300 |
| 0.3276 | 5.0 | 785 | 0.3137 |
| 0.302 | 6.0 | 942 | 0.2826 |
| 0.2777 | 7.0 | 1099 | 0.2768 |
| 0.2609 | 8.0 | 1256 | 0.2726 |
| 0.244 | 9.0 | 1413 | 0.2660 |
| 0.2274 | 10.0 | 1570 | 0.2391 |
| 0.2132 | 11.0 | 1727 | 0.2353 |
| 0.2014 | 12.0 | 1884 | 0.2134 |
| 0.1835 | 13.0 | 2041 | 0.2278 |
| 0.1896 | 14.0 | 2198 | 0.2110 |
| 0.1974 | 15.0 | 2355 | 0.2132 |
| 0.1775 | 16.0 | 2512 | 0.1973 |
| 0.1715 | 17.0 | 2669 | 0.1941 |
| 0.1777 | 18.0 | 2826 | 0.2105 |
| 0.1741 | 19.0 | 2983 | 0.2127 |
| 0.1607 | 20.0 | 3140 | 0.1762 |
| 0.1562 | 21.0 | 3297 | 0.2095 |
| 0.1548 | 22.0 | 3454 | 0.1805 |
| 0.1534 | 23.0 | 3611 | 0.1852 |
| 0.1484 | 24.0 | 3768 | 0.1773 |
| 0.1473 | 25.0 | 3925 | 0.1759 |
| 0.1354 | 26.0 | 4082 | 0.1734 |
| 0.136 | 27.0 | 4239 | 0.1902 |
| 0.1306 | 28.0 | 4396 | 0.1769 |
| 0.1353 | 29.0 | 4553 | 0.1705 |
| 0.1368 | 30.0 | 4710 | 0.1846 |


### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
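
To reproduce or load the checkpoint under the same stack, a quick environment check against the versions above (a convenience sketch, not part of the original card):

```python
# Prints each installed version next to the version this card was built with.
import datasets
import tokenizers
import torch
import transformers

expected = {transformers: "4.28.1", torch: "2.0.0+cu118",
            datasets: "2.11.0", tokenizers: "0.13.3"}
for module, version in expected.items():
    print(f"{module.__name__:>12}: installed {module.__version__}, card lists {version}")
```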