arsalanu committed on
Commit
b30044d
1 Parent(s): 1a6a44d

update model card README.md

Files changed (1)
  1. README.md +20 -8
README.md CHANGED

```diff
@@ -1,6 +1,9 @@
 ---
+license: apache-2.0
 tags:
 - generated_from_trainer
+metrics:
+- accuracy
 model-index:
 - name: bert-base-uncased-sst2
   results: []
@@ -11,7 +14,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # bert-base-uncased-sst2
 
-This model is a fine-tuned version of [./bert-base-uncased-finetuned-sst2](https://huggingface.co/./bert-base-uncased-finetuned-sst2) on an unknown dataset.
+This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.2241
+- Accuracy: 0.9230
 
 ## Model description
 
@@ -30,18 +36,24 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 1
-- eval_batch_size: 8
+- learning_rate: 9e-05
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
 - distributed_type: IPU
-- total_train_batch_size: 1024
-- total_eval_batch_size: 1024
+- gradient_accumulation_steps: 32
+- total_train_batch_size: 2048
+- total_eval_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 3.0
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 2
 - training precision: Mixed Precision
 
+### Training results
+
+
+
 ### Framework versions
 
 - Transformers 4.25.1
```
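As a sanity check on the updated hyperparameters, the total train batch size should factor into the per-device micro-batch, the gradient-accumulation steps, and a data-parallel replication factor across IPUs. The replication factor is not stated in the card; the sketch below infers it from the other three values, so treat it as an assumption rather than a documented setting.

```python
# Values taken directly from the updated model card.
train_batch_size = 2             # micro-batch per device
gradient_accumulation_steps = 32
total_train_batch_size = 2048

# Inferred data-parallel replication factor (assumption: total batch size
# = micro-batch * grad-accum steps * number of replicas).
replication_factor = total_train_batch_size // (
    train_batch_size * gradient_accumulation_steps
)
print(replication_factor)  # -> 32
```

If the inferred factor did not multiply back to the stated total, the card's batch-size fields would be inconsistent, which is worth checking after an auto-generated update like this one.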