pythonist committed
Commit 134d765
1 Parent(s): bd5ea79

update model card README.md

Files changed (1):
  1. README.md +24 -14
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.3973
+- Loss: 4.1282
 
 ## Model description
 
@@ -39,27 +39,37 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 20
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log | 1.0 | 21 | 3.8598 |
-| No log | 2.0 | 42 | 3.3527 |
-| No log | 3.0 | 63 | 3.2951 |
-| No log | 4.0 | 84 | 3.2875 |
-| No log | 5.0 | 105 | 3.2971 |
-| No log | 6.0 | 126 | 3.3236 |
-| No log | 7.0 | 147 | 3.3624 |
-| No log | 8.0 | 168 | 3.3717 |
-| No log | 9.0 | 189 | 3.3847 |
-| No log | 10.0 | 210 | 3.3973 |
+| No log | 1.0 | 21 | 4.8068 |
+| No log | 2.0 | 42 | 4.2013 |
+| No log | 3.0 | 63 | 4.1314 |
+| No log | 4.0 | 84 | 4.1801 |
+| No log | 5.0 | 105 | 4.0462 |
+| No log | 6.0 | 126 | 4.0440 |
+| No log | 7.0 | 147 | 4.0000 |
+| No log | 8.0 | 168 | 4.1003 |
+| No log | 9.0 | 189 | 4.0352 |
+| No log | 10.0 | 210 | 4.0905 |
+| No log | 11.0 | 231 | 4.0861 |
+| No log | 12.0 | 252 | 4.0542 |
+| No log | 13.0 | 273 | 4.0622 |
+| No log | 14.0 | 294 | 4.1420 |
+| No log | 15.0 | 315 | 4.1125 |
+| No log | 16.0 | 336 | 4.1990 |
+| No log | 17.0 | 357 | 4.1098 |
+| No log | 18.0 | 378 | 4.1217 |
+| No log | 19.0 | 399 | 4.1306 |
+| No log | 20.0 | 420 | 4.1282 |
 
 
 ### Framework versions
 
-- Transformers 4.24.0
+- Transformers 4.25.1
 - Pytorch 1.12.1+cu113
-- Datasets 2.6.1
+- Datasets 2.7.1
 - Tokenizers 0.13.2
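For readers comparing the two runs in this diff: the step column is consistent with 21 optimizer steps per epoch in both tables, and, assuming the fine-tuning objective is language modeling (the card does not say), the cross-entropy losses can be read as perplexities. A minimal sketch of that arithmetic:

```python
import math

# Step counts from the tables: 21 steps per epoch in both runs.
steps_per_epoch = 21
assert steps_per_epoch * 10 == 210  # final step, old run (10 epochs)
assert steps_per_epoch * 20 == 420  # final step, new run (20 epochs)

# If the loss is a language-modeling cross-entropy (an assumption --
# the dataset and task are not stated), perplexity is exp(loss):
old_ppl = math.exp(3.3973)  # old final validation loss -> ~29.9
new_ppl = math.exp(4.1282)  # new final validation loss -> ~62.1

print(f"old run: loss 3.3973 -> perplexity {old_ppl:.1f}")
print(f"new run: loss 4.1282 -> perplexity {new_ppl:.1f}")
```

Note that in the new run the best validation loss (4.0000 at epoch 7) is well below the final value at epoch 20, so the reported 4.1282 reflects the last checkpoint, not the best one.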