Ermira committed
Commit f986c37
1 Parent(s): 25900cc

Training in progress epoch 0

Files changed (3)
  1. README.md +11 -13
  2. config.json +1 -1
  3. tf_model.h5 +2 -2
README.md CHANGED
@@ -15,13 +15,13 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 0.0288
- - Validation Loss: 0.0564
- - Train Precision: 0.9325
- - Train Recall: 0.9411
- - Train F1: 0.9368
- - Train Accuracy: 0.9852
- - Epoch: 2
+ - Train Loss: 0.1485
+ - Validation Loss: 0.0642
+ - Train Precision: 0.9083
+ - Train Recall: 0.9288
+ - Train F1: 0.9184
+ - Train Accuracy: 0.9820
+ - Epoch: 0
 
  ## Model description
 
@@ -47,14 +47,12 @@ The following hyperparameters were used during training:
 
  | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
  |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
- | 0.1600 | 0.0623 | 0.9182 | 0.9287 | 0.9234 | 0.9824 | 0 |
- | 0.0487 | 0.0547 | 0.9322 | 0.9357 | 0.9339 | 0.9848 | 1 |
- | 0.0288 | 0.0564 | 0.9325 | 0.9411 | 0.9368 | 0.9852 | 2 |
+ | 0.1485 | 0.0642 | 0.9083 | 0.9288 | 0.9184 | 0.9820 | 0 |
 
 
  ### Framework versions
 
- - Transformers 4.39.3
+ - Transformers 4.41.1
  - TensorFlow 2.15.0
- - Datasets 2.18.0
- - Tokenizers 0.15.2
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
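The updated card keeps only the epoch-0 numbers, as expected for a training-in-progress checkpoint. Since the repository ships TensorFlow weights (tf_model.h5) and reports precision/recall/F1/accuracy, the checkpoint is presumably a token-classification fine-tune; the following is a minimal loading sketch under that assumption, with a placeholder repo id (the actual repository name is not shown in this diff):

```python
# Minimal loading sketch. The repo id is a placeholder (assumption), and
# token classification is inferred from the reported metrics, not stated in the card.
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

repo_id = "Ermira/bert-finetuned-ner"  # hypothetical name, not from the diff

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForTokenClassification.from_pretrained(repo_id)  # loads tf_model.h5

inputs = tokenizer("Hugging Face est basé à New York.", return_tensors="tf")
logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)
predicted_ids = logits.numpy().argmax(-1)  # per-token label ids
print(predicted_ids)
```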
config.json CHANGED
@@ -45,7 +45,7 @@
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.39.3",
+ "transformers_version": "4.41.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 105879
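The only change in config.json is the recorded `transformers_version`, which `save_pretrained` writes as metadata; it does not affect the weights themselves. A small sketch, reusing the same hypothetical repo id as above, to inspect that metadata without loading the model:

```python
# Inspect version metadata stored in config.json (repo id is a placeholder).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Ermira/bert-finetuned-ner")
print(config.transformers_version)  # expected "4.41.1" after this commit
print(config.vocab_size)            # 105879, the multilingual-uncased vocabulary
```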
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e5cd3ab469a6228413fd615024462fdf4a056db1763a9104168f91f375f57e13
- size 667364332
+ oid sha256:bea99590511d80b81dee1097e5cbc7f8c80f8e236a9666b326d9f3fa20aa7223
+ size 667376620
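tf_model.h5 is stored via Git LFS, so the diff only touches the pointer file: `oid sha256:...` is the SHA-256 digest of the new weights and `size` is their byte count. A sketch, assuming the weights file has already been downloaded locally, that checks a copy against this pointer:

```python
# Verify a downloaded tf_model.h5 against the LFS pointer from this commit.
# The local path is an assumption; point it at wherever the file lives.
import hashlib
import os

expected_oid = "bea99590511d80b81dee1097e5cbc7f8c80f8e236a9666b326d9f3fa20aa7223"
expected_size = 667376620
path = "tf_model.h5"  # placeholder path

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("tf_model.h5 matches the LFS pointer")
```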