doyoungkim committed on
Commit 1a302ea
1 Parent(s): db5626a
Files changed (4):
  1. README.md +13 -13
  2. config.json +1 -1
  3. pytorch_model.bin +2 -2
  4. training_args.bin +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ model_index:
      metric:
        name: Accuracy
        type: accuracy
-       value: 0.9277522935779816
+       value: 0.930045871559633
  ---
  
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -26,8 +26,8 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.3194
- - Accuracy: 0.9278
+ - Loss: 0.2485
+ - Accuracy: 0.9300
  
  ## Model description
  
@@ -47,8 +47,8 @@ More information needed
  
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 32
+ - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -58,16 +58,16 @@ The following hyperparameters were used during training:
  
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:-----:|:---------------:|:--------:|
- | 0.1803 | 1.0 | 4210 | 0.3157 | 0.9117 |
- | 0.1249 | 2.0 | 8420 | 0.3171 | 0.9209 |
- | 0.0803 | 3.0 | 12630 | 0.3355 | 0.9232 |
- | 0.0699 | 4.0 | 16840 | 0.3194 | 0.9278 |
- | 0.0371 | 5.0 | 21050 | 0.3965 | 0.9266 |
+ | 0.1662 | 1.0 | 2105 | 0.2485 | 0.9300 |
+ | 0.1102 | 2.0 | 4210 | 0.2777 | 0.9266 |
+ | 0.0835 | 3.0 | 6315 | 0.3368 | 0.9232 |
+ | 0.0529 | 4.0 | 8420 | 0.3310 | 0.9255 |
+ | 0.035 | 5.0 | 10525 | 0.3855 | 0.9278 |
  
  
  ### Framework versions
  
- - Transformers 4.9.2
- - Pytorch 1.9.0+cu102
+ - Transformers 4.9.1
+ - Pytorch 1.8.1
  - Datasets 1.11.0
- - Tokenizers 0.10.3
+ - Tokenizers 0.10.1
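
The halved per-epoch step count (2105 vs. 4210 before) is consistent with the doubled batch size over the same training split: 2105 × 32 ≈ 4210 × 16 ≈ 67k examples, which matches the GLUE SST-2 train set, although the card itself only says "glue". As a minimal sketch, the updated hyperparameters map onto `transformers` `TrainingArguments` roughly as below; the output directory and the per-epoch evaluation strategy are assumptions, and the 5-epoch count is read off the results table rather than stated in the card.

```python
# Sketch only, assuming the 4.9-era Trainer API: the hyperparameters from the
# updated model card expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./bert-base-uncased-glue",  # placeholder path, not from the commit
    learning_rate=2e-5,
    per_device_train_batch_size=32,         # was 16 before this commit
    per_device_eval_batch_size=32,          # was 16 before this commit
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,                     # inferred: the table reports epochs 1.0-5.0
    evaluation_strategy="epoch",            # assumption: per-epoch eval, as in the table
)
```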
config.json CHANGED
@@ -20,7 +20,7 @@
   "pretrained_model_name_or_path": "bert-base-uncased",
   "problem_type": "single_label_classification",
   "torch_dtype": "float32",
-  "transformers_version": "4.9.2",
+  "transformers_version": "4.9.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 30522
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:194e6486b81086f314b18acff889a3f92088111262ecd28a7c5a50f1d85401be
- size 438019245
+ oid sha256:2fb1ed9f505fd6850ca4e7dc62090e574b21bddc86f90dc84244483c55ff00e6
+ size 438024457
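
pytorch_model.bin is tracked with Git LFS, so the commit only rewrites the pointer file: the sha256 oid and byte size of the new weights. A minimal sketch for checking a downloaded weights file against those pointer values using only the standard library; the local file path is an assumption.

```python
import hashlib
import os

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so a large checkpoint never has to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Values recorded in the new LFS pointer above.
expected_oid = "2fb1ed9f505fd6850ca4e7dc62090e574b21bddc86f90dc84244483c55ff00e6"
expected_size = 438024457

path = "pytorch_model.bin"  # assumed local download location
assert os.path.getsize(path) == expected_size, "size mismatch"
assert sha256_of(path) == expected_oid, "sha256 mismatch"
```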
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a7eec076c0b1268d47b1992f691b17e0e136d129c5dd932e03b59c9596f0f1e9
+ oid sha256:f4bd401f7f9e1dec32cdf3725c9316fc6e4df3e3cde119445a4a25eb2c1b8e33
  size 2671
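
training_args.bin is also an LFS pointer; the underlying file is the pickled `TrainingArguments` object the Trainer saved alongside the weights. A minimal sketch for reading it back, assuming a `transformers` install compatible with the 4.9.x line that wrote it.

```python
import torch

# training_args.bin is a pickled TrainingArguments object, so torch.load unpickles it.
# On PyTorch >= 2.6 pass weights_only=False explicitly, since arbitrary pickled
# objects are no longer loaded by default.
args = torch.load("training_args.bin")

print(args.learning_rate)                # 2e-05
print(args.per_device_train_batch_size)  # 32 after this commit
print(args.seed)                         # 42
```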