heedou committed
Commit 20a1f17
1 Parent(s): 9b7a9c2

update model card README.md

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [klue/roberta-large](https://huggingface.co/klue/roberta-large) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0189
- - Precision: 0.9763
- - Recall: 0.9736
- - F1: 0.9749
- - Accuracy: 0.9964
+ - Loss: 0.0290
+ - Precision: 0.9626
+ - Recall: 0.9588
+ - F1: 0.9607
+ - Accuracy: 0.9936
 
 ## Model description
 
@@ -43,8 +43,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 2
+ - eval_batch_size: 2
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -54,14 +54,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | No log | 1.0 | 155 | 0.0265 | 0.9538 | 0.9599 | 0.9569 | 0.9943 |
- | No log | 2.0 | 310 | 0.0194 | 0.9589 | 0.9772 | 0.9680 | 0.9951 |
- | No log | 3.0 | 465 | 0.0189 | 0.9763 | 0.9736 | 0.9749 | 0.9964 |
+ | 0.0682 | 1.0 | 1240 | 0.0561 | 0.9327 | 0.9392 | 0.9359 | 0.9898 |
+ | 0.0359 | 2.0 | 2480 | 0.0408 | 0.9701 | 0.9539 | 0.9619 | 0.9939 |
+ | 0.0157 | 3.0 | 3720 | 0.0290 | 0.9626 | 0.9588 | 0.9607 | 0.9936 |
 
 
 ### Framework versions
 
- - Transformers 4.37.2
- - Pytorch 2.1.0+cu121
- - Datasets 2.17.1
- - Tokenizers 0.15.2
+ - Transformers 4.31.0.dev0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.13.2.dev0
+ - Tokenizers 0.13.3
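
For context, the updated hyperparameter list maps onto `transformers.TrainingArguments` roughly as in the minimal sketch below. Assumptions not stated in the diff: `output_dir` is a placeholder, per-epoch evaluation and `num_train_epochs=3` are inferred from the three-epoch results table, and the Adam betas/epsilon are the optimizer defaults the card lists.

```python
# Minimal sketch of the updated training configuration from the card:
# lr 2e-5, train/eval batch size 2, seed 42, Adam(0.9, 0.999, eps=1e-8),
# linear LR schedule. num_train_epochs=3 is inferred from the results table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="klue-roberta-large-finetuned",  # placeholder, not from the commit
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: one evaluation per epoch, matching the table
)
```

For an exact reproduction, the framework versions from the updated card (Transformers 4.31.0.dev0, PyTorch 2.0.1+cu117, Datasets 2.13.2.dev0, Tokenizers 0.13.3) would also need to be pinned.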
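
The precision/recall/F1/accuracy metric set matches the auto-generated token-classification card template, so loading the checkpoint for inference would likely look like the sketch below; the token-classification task is an assumption (the hunks shown here name neither the task nor the dataset), and the repository id is a placeholder.

```python
# Rough usage sketch. Assumptions: the checkpoint is a Korean token-classification
# (NER-style) fine-tune of klue/roberta-large, and "<this-repo-id>" stands in for
# the actual Hugging Face repository id of this model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="<this-repo-id>",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)
print(tagger("이순신은 조선의 장군이다."))  # example Korean input: "Yi Sun-sin was a general of Joseon."
```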