gokuls committed on
Commit 880b7ae
1 Parent(s): def511c

update model card README.md

Files changed (1)
  1. README.md +89 -0
README.md ADDED
---
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: sa_BERT_no_pretrain_cola
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      config: cola
      split: validation
      args: cola
    metrics:
    - name: Matthews Correlation
      type: matthews_correlation
      value: 0.0
    - name: Accuracy
      type: accuracy
      value: 0.6912751793861389
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# sa_BERT_no_pretrain_cola

This model is a fine-tuned version of [](https://huggingface.co/) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6310
- Matthews Correlation: 0.0
- Accuracy: 0.6913
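
The card ships without a usage snippet; a minimal inference sketch might look like the following, assuming the checkpoint is published as `gokuls/sa_BERT_no_pretrain_cola` (a hypothetical repo id built from the committer's namespace and the model name above):

```python
# Minimal inference sketch. The repo id is an assumption
# (committer namespace + model name); substitute the real one if it differs.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/sa_BERT_no_pretrain_cola",  # assumed repo id
)

# CoLA is binary acceptability classification (label 1 = acceptable).
print(classifier("The book was written by John."))
```

Given the Matthews correlation of 0.0, expect the classifier to return the same label for essentially every input.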
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

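This is not the exact training script, only a sketch of how the values above map onto `transformers.TrainingArguments` (parameter names valid for Transformers 4.29, per the versions section below; `output_dir` is a placeholder, `evaluation_strategy="epoch"` is inferred from the per-epoch validation rows, and the per-device batch sizes assume the reported 128 is per device rather than global):

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sa_BERT_no_pretrain_cola",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=128,  # card reports 128 under multi-GPU
    per_device_eval_batch_size=128,
    seed=10,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # inferred from the results table
)
```
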
### Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.8826        | 1.0   | 67   | 0.6624          | 0.0                  | 0.6913   |
| 0.616         | 2.0   | 134  | 0.6358          | 0.0                  | 0.6913   |
| 0.6134        | 3.0   | 201  | 0.6195          | 0.0                  | 0.6913   |
| 0.6139        | 4.0   | 268  | 0.6285          | 0.0                  | 0.6913   |
| 0.6117        | 5.0   | 335  | 0.6180          | 0.0                  | 0.6913   |
| 0.6099        | 6.0   | 402  | 0.6183          | 0.0                  | 0.6913   |
| 0.6113        | 7.0   | 469  | 0.6232          | 0.0                  | 0.6913   |
| 0.6135        | 8.0   | 536  | 0.6182          | 0.0                  | 0.6913   |
| 0.6094        | 9.0   | 603  | 0.6221          | 0.0                  | 0.6913   |
| 0.6096        | 10.0  | 670  | 0.6310          | 0.0                  | 0.6913   |

Note that the Matthews correlation sits at 0.0 and the accuracy at 0.6913 (the majority-class share of the CoLA validation split) at every epoch, which is consistent with the model predicting a single class throughout; training also stopped after 10 of the configured 50 epochs, presumably via early stopping.
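
The card does not state how the metrics were computed; the snippet below is a toy reproduction using the `evaluate` library (an assumption — any Matthews-correlation/accuracy implementation behaves the same way):

```python
# Toy demonstration of the two reported metrics via the `evaluate` library.
import evaluate

matthews = evaluate.load("matthews_correlation")
accuracy = evaluate.load("accuracy")

# With constant predictions, MCC is 0 by convention while accuracy
# equals the share of the predicted class in the references.
preds = [1, 1, 1, 1, 1]
refs = [1, 1, 1, 0, 0]
print(matthews.compute(predictions=preds, references=refs))  # {'matthews_correlation': 0.0}
print(accuracy.compute(predictions=preds, references=refs))  # {'accuracy': 0.6}
```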

### Framework versions

- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3