yemen2016 committed
Commit 2d8cd0f · verified · 1 Parent(s): e5fedde

End of training

README.md CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [NbAiLab/nb-bert-base](https://huggingface.co/NbAiLab/nb-bert-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Accuracy: 0.8516
- - Precision: 0.8584
- - Recall: 0.8516
- - F1: 0.8481
- - Loss: 0.5580
+ - Accuracy: 0.8538
+ - Precision: 0.8586
+ - Recall: 0.8538
+ - F1: 0.8518
+ - Loss: 0.6499
 
 ## Model description
 
@@ -59,29 +59,29 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Accuracy | Precision | Recall | F1 | Validation Loss |
 |:-------------:|:-----:|:----:|:--------:|:---------:|:------:|:------:|:---------------:|
- | No log | 1.0 | 13 | 0.5603 | 0.6393 | 0.5603 | 0.4202 | 0.8848 |
- | No log | 2.0 | 26 | 0.7465 | 0.7387 | 0.7465 | 0.7388 | 0.6107 |
- | No log | 3.0 | 39 | 0.7477 | 0.7414 | 0.7477 | 0.7442 | 0.7030 |
- | No log | 4.0 | 52 | 0.7981 | 0.7980 | 0.7981 | 0.7945 | 0.5344 |
- | No log | 5.0 | 65 | 0.8123 | 0.8183 | 0.8123 | 0.8087 | 0.4756 |
- | No log | 6.0 | 78 | 0.7888 | 0.7790 | 0.7888 | 0.7818 | 0.5430 |
- | No log | 7.0 | 91 | 0.8123 | 0.8030 | 0.8123 | 0.8075 | 0.5115 |
- | No log | 8.0 | 104 | 0.8066 | 0.8012 | 0.8066 | 0.8021 | 0.5513 |
- | No log | 9.0 | 117 | 0.8370 | 0.8456 | 0.8370 | 0.8371 | 0.4638 |
- | No log | 10.0 | 130 | 0.8421 | 0.8377 | 0.8421 | 0.8379 | 0.5429 |
- | No log | 11.0 | 143 | 0.8519 | 0.8554 | 0.8519 | 0.8496 | 0.4703 |
- | No log | 12.0 | 156 | 0.8480 | 0.8428 | 0.8480 | 0.8437 | 0.5025 |
- | No log | 13.0 | 169 | 0.8504 | 0.8607 | 0.8504 | 0.8499 | 0.5898 |
- | No log | 14.0 | 182 | 0.8409 | 0.8342 | 0.8409 | 0.8366 | 0.5546 |
- | No log | 15.0 | 195 | 0.8365 | 0.8335 | 0.8365 | 0.8339 | 0.5665 |
- | No log | 16.0 | 208 | 0.8489 | 0.8503 | 0.8489 | 0.8463 | 0.5506 |
- | No log | 17.0 | 221 | 0.8553 | 0.8642 | 0.8553 | 0.8521 | 0.5503 |
- | No log | 18.0 | 234 | 0.8511 | 0.8577 | 0.8511 | 0.8476 | 0.5557 |
- | No log | 18.48 | 240 | 0.8516 | 0.8584 | 0.8516 | 0.8481 | 0.5580 |
+ | No log | 1.0 | 13 | 0.5937 | 0.6746 | 0.5937 | 0.4918 | 0.7939 |
+ | No log | 2.0 | 26 | 0.7974 | 0.8004 | 0.7974 | 0.7938 | 0.5314 |
+ | No log | 3.0 | 39 | 0.7306 | 0.7926 | 0.7306 | 0.7241 | 0.7069 |
+ | No log | 4.0 | 52 | 0.7986 | 0.7908 | 0.7986 | 0.7942 | 0.5295 |
+ | No log | 5.0 | 65 | 0.8152 | 0.8272 | 0.8152 | 0.8116 | 0.5823 |
+ | No log | 6.0 | 78 | 0.8018 | 0.7923 | 0.8018 | 0.7969 | 0.5561 |
+ | No log | 7.0 | 91 | 0.8370 | 0.8513 | 0.8370 | 0.8330 | 0.5735 |
+ | No log | 8.0 | 104 | 0.8360 | 0.8400 | 0.8360 | 0.8320 | 0.5175 |
+ | No log | 9.0 | 117 | 0.8340 | 0.8515 | 0.8340 | 0.8371 | 0.6198 |
+ | No log | 10.0 | 130 | 0.8323 | 0.8480 | 0.8323 | 0.8288 | 0.5806 |
+ | No log | 11.0 | 143 | 0.8494 | 0.8596 | 0.8494 | 0.8459 | 0.5858 |
+ | No log | 12.0 | 156 | 0.8509 | 0.8595 | 0.8509 | 0.8497 | 0.6112 |
+ | No log | 13.0 | 169 | 0.8484 | 0.8473 | 0.8484 | 0.8466 | 0.5477 |
+ | No log | 14.0 | 182 | 0.8448 | 0.8471 | 0.8448 | 0.8446 | 0.6023 |
+ | No log | 15.0 | 195 | 0.8546 | 0.8684 | 0.8546 | 0.8517 | 0.6594 |
+ | No log | 16.0 | 208 | 0.8472 | 0.8532 | 0.8472 | 0.8478 | 0.6293 |
+ | No log | 17.0 | 221 | 0.8538 | 0.8650 | 0.8538 | 0.8519 | 0.6870 |
+ | No log | 18.0 | 234 | 0.8541 | 0.8589 | 0.8541 | 0.8520 | 0.6491 |
+ | No log | 18.48 | 240 | 0.8538 | 0.8586 | 0.8538 | 0.8518 | 0.6499 |
 
 
 ### Framework versions
 
- - Transformers 4.47.1
+ - Transformers 4.48.2
 - Pytorch 2.5.1+cu124
 - Tokenizers 0.21.0
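The updated card reports accuracy, precision, recall, and F1 for a fine-tune of NbAiLab/nb-bert-base, which suggests a sequence-classification head. Below is a minimal inference sketch under that assumption; the repository id `yemen2016/<model-name>` is a hypothetical placeholder (this diff does not name the repo), and the example sentence is arbitrary Norwegian text.

```python
# Minimal sketch, assuming the checkpoint carries a sequence-classification head.
# "yemen2016/<model-name>" is a hypothetical placeholder; the diff does not name the repo.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "yemen2016/<model-name>"  # replace with the actual Hub repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "Dette er en test."  # arbitrary Norwegian example; nb-bert-base is a Norwegian BERT
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = int(logits.argmax(dim=-1))
print(pred, model.config.id2label.get(pred, pred))
```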
config.json CHANGED
@@ -39,7 +39,7 @@
 "pooler_type": "first_token_transform",
 "position_embedding_type": "absolute",
 "torch_dtype": "float32",
- "transformers_version": "4.47.1",
+ "transformers_version": "4.48.2",
 "type_vocab_size": 2,
 "use_cache": true,
 "vocab_size": 119547
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:94b23f09d9f306397e5c036422c49b0e74809f37932bfab6473d756ec7ae99e6
+ oid sha256:9f95772c40b3adc3bf0743f9ac2e967c72a6fc25e91399018cc0ece6b403f416
 size 709090132
runs/Feb07_12-30-38_32d4dd9baa07/events.out.tfevents.1738931441.32d4dd9baa07.2176.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ff862e086be27b757ca1ab2ba6f288220b2082bc6ae8067946ed08967fc9118
+ size 14888
runs/Feb07_12-30-38_32d4dd9baa07/events.out.tfevents.1738931587.32d4dd9baa07.2176.3 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3dada2acaaf06dcf104c1e91f4f8d24f0c6845f952e2e2ddf0dadf8dd5b16729
+ size 560
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:d8113bca7dcf20fa87e2038017d2a83275f3bc18a956ce8c205a62c87031aa68
+ oid sha256:86342b99fdf98539f0170b0b945c1a3ce72841e3c44b614a128001a4c22725c2
 size 5496
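The `model.safetensors`, event-log, and `training_args.bin` entries in this commit are Git LFS pointer files; the `oid sha256:` field is the hash of the actual payload. Below is a minimal verification sketch, assuming the real file has been downloaded locally; the expected hash is copied from the new `model.safetensors` pointer above, so swap in the oid of whichever file you want to check.

```python
# Minimal sketch: verify a downloaded LFS object against the sha256 oid in its pointer file.
import hashlib

expected = "9f95772c40b3adc3bf0743f9ac2e967c72a6fc25e91399018cc0ece6b403f416"  # from this commit
path = "model.safetensors"  # assumes the payload was downloaded to the working directory

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)

digest = sha.hexdigest()
print("OK" if digest == expected else f"MISMATCH: {digest}")
```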