kt220 committed
Commit 1216eb1
1 Parent(s): 1ab50fa

End of training

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
-license: apache-2.0
-base_model: distilbert-base-uncased
+license: cc-by-sa-4.0
+base_model: cl-tohoku/bert-base-japanese-whole-word-masking
 tags:
 - generated_from_trainer
 metrics:
@@ -15,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # my_review_model
 
-This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
+This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-whole-word-masking](https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4664
-- Accuracy: 0.7940
+- Loss: 0.4346
+- Accuracy: 0.9332
 
 ## Model description
 
@@ -43,19 +43,22 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 2
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log | 1.0 | 101 | 0.4857 | 0.7996 |
-| No log | 2.0 | 202 | 0.4664 | 0.7940 |
+| 0.2084 | 1.0 | 678 | 0.2145 | 0.9363 |
+| 0.1852 | 2.0 | 1356 | 0.2198 | 0.9384 |
+| 0.0955 | 3.0 | 2034 | 0.2944 | 0.9362 |
+| 0.0469 | 4.0 | 2712 | 0.3892 | 0.9359 |
+| 0.0318 | 5.0 | 3390 | 0.4346 | 0.9332 |
 
 
 ### Framework versions
 
-- Transformers 4.34.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.14.5
-- Tokenizers 0.14.1
+- Transformers 4.35.2
+- Pytorch 2.1.0+cu118
+- Datasets 2.15.0
+- Tokenizers 0.15.0
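For context, a minimal sketch of how the hyperparameters in the updated card might map onto a `TrainingArguments`/`Trainer` setup. The learning rate and batch sizes are not visible in this hunk, so the values marked as placeholders below are assumptions, as are `num_labels=2` and the `train_ds`/`eval_ds` datasets; this is not the author's actual training script.

```python
# Fine-tuning sketch matching the hyperparameters listed in the updated card.
# Assumptions: learning_rate and batch size are placeholders (not shown in this
# diff), num_labels=2 is guessed from the review-classification setting, and
# train_ds / eval_ds are pre-tokenized datasets you provide yourself.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "cl-tohoku/bert-base-japanese-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(model_name)  # needs fugashi + ipadic installed
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(
    output_dir="my_review_model",
    num_train_epochs=5,              # num_epochs: 5 (from the card)
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    learning_rate=2e-5,              # placeholder: not shown in this hunk
    per_device_train_batch_size=16,  # placeholder: not shown in this hunk
    evaluation_strategy="epoch",     # evaluate once per epoch, as in the results table
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer setting.

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # hypothetical tokenized training split
    eval_dataset=eval_ds,    # hypothetical tokenized evaluation split
    tokenizer=tokenizer,
)
trainer.train()
```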
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4aa32639e6d0aa6d7df9483ff6c0e12863823cbeb47f9b8bb317f14a14a21160
+oid sha256:2c8c4549d4421950a9cfaaf01911b042dd0c5c32d9e2f5d85e15b4e1cc013b02
 size 442499064
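The new safetensors blob is the checkpoint the card's final epoch refers to. A minimal inference sketch, assuming the repo id is `kt220/my_review_model` (inferred from the committer name and card title, not stated anywhere in this diff):

```python
# Inference sketch. The repo id is an assumption; replace it with the actual
# repository. The bundled Japanese tokenizer requires fugashi and ipadic.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kt220/my_review_model",  # assumption: <committer>/<card title>
)
print(classifier("とても良い商品でした。また購入したいです。"))  # example Japanese review
```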
runs/Nov23_10-14-36_4b70a48305f4/events.out.tfevents.1700734483.4b70a48305f4.34123.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6bf80a2a5c26939e1036ed258b162b3e43f2bd64e5250702e302fd2aefad6562
-size 6800
+oid sha256:a7548f3f6239f060f917ef7e420c0f53a825c8ab9958d2aaec69641592786aa3
+size 7154
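The updated tfevents file can be inspected directly with TensorBoard's event reader; a small sketch follows. The scalar tag names are assumptions based on the Trainer's usual logging convention, not read from this file.

```python
# Sketch: read scalars from the updated run directory with TensorBoard's
# event reader. Tags such as "eval/accuracy" are assumed, not confirmed here;
# print the available tags first to see what the file actually contains.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Nov23_10-14-36_4b70a48305f4")
acc.Reload()
print(acc.Tags()["scalars"])          # list the scalar tags actually present
for event in acc.Scalars("eval/accuracy"):
    print(event.step, event.value)
```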