Rocketknight1 (HF staff) committed
Commit 981e2ca
Parent: 4558662

Training in progress epoch 0

README.md CHANGED
@@ -14,11 +14,11 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.3899
-- Train Sparse Categorical Accuracy: 0.8548
-- Validation Loss: 0.6135
-- Validation Sparse Categorical Accuracy: 0.7796
-- Epoch: 1
+- Train Loss: 0.8585
+- Train Sparse Categorical Accuracy: 0.6547
+- Validation Loss: 0.6112
+- Validation Sparse Categorical Accuracy: 0.7589
+- Epoch: 0
 
 ## Model description
 
@@ -44,13 +44,12 @@ The following hyperparameters were used during training:
 
 | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
 |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
-| 0.8763     | 0.6428                            | 0.6313          | 0.7518                                 | 0     |
-| 0.3899     | 0.8548                            | 0.6135          | 0.7796                                 | 1     |
+| 0.8585     | 0.6547                            | 0.6112          | 0.7589                                 | 0     |
 
 
 ### Framework versions
 
-- Transformers 4.19.0.dev0
-- TensorFlow 2.8.0-rc0
-- Datasets 2.1.1.dev0
+- Transformers 4.21.0.dev0
+- TensorFlow 2.9.1
+- Datasets 2.3.3.dev0
 - Tokenizers 0.11.0
config.json CHANGED
@@ -18,7 +18,7 @@
   "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.19.0.dev0",
+  "transformers_version": "4.21.0.dev0",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 28996
logs/train/events.out.tfevents.1651825376.matt-TRX40-AORUS-PRO-WIFI.11529.0.v2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e57af9d2224867bb93191598a75f3dd4392b02e899c45cb8c7550dd81e4d05d8
+size 2755943
logs/train/events.out.tfevents.1656604604.matt-TRX40-AORUS-PRO-WIFI.51929.0.v2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f50d994f359262ddd2bced410b5d709eb9e3ff6b40906c6304e4a5650d8466ba
+size 2761804
logs/validation/events.out.tfevents.1656605348.matt-TRX40-AORUS-PRO-WIFI.51929.1.v2 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6062d4c7275f7e993f4cc8ddc245f7d95dd7edd11feb63c5a6f7e3c5a6d897a
+size 394
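The added log files are not stored in Git directly; each is a three-line Git LFS pointer (`version`, `oid`, `size`) that stands in for the real content. A minimal sketch of parsing such a pointer, using the first log file's pointer from this commit as sample input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields.

    Each line has the form "<key> <value>", e.g. "size 2755943".
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer contents copied from the first ADDED log file above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e57af9d2224867bb93191598a75f3dd4392b02e899c45cb8c7550dd81e4d05d8
size 2755943
"""

info = parse_lfs_pointer(pointer)
# info["oid"] is the SHA-256 of the real file; info["size"] its byte length.
```

The `oid` lets LFS verify the downloaded object, and `size` is the byte count of the real file (here about 2.7 MB of TensorBoard events).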
special_tokens_map.json CHANGED
@@ -1 +1,7 @@
-{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
+{
+  "cls_token": "[CLS]",
+  "mask_token": "[MASK]",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "unk_token": "[UNK]"
+}
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8186a9e09f23b0c1648f0ab7c39cd96c5f74ed673f0c6e08119a493b3d4777ca
+oid sha256:9e4bdf1e0b4b1ec3fa8a2b8b85cf62133190d788fe386afe9c6479cdd3d3bb48
 size 433515860
tokenizer_config.json CHANGED
@@ -1 +1,14 @@
-{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-cased", "tokenizer_class": "BertTokenizer"}
+{
+  "cls_token": "[CLS]",
+  "do_lower_case": false,
+  "mask_token": "[MASK]",
+  "model_max_length": 512,
+  "name_or_path": "bert-base-cased",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "special_tokens_map_file": null,
+  "strip_accents": null,
+  "tokenize_chinese_chars": true,
+  "tokenizer_class": "BertTokenizer",
+  "unk_token": "[UNK]"
+}
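Both tokenizer JSON files change only in serialization: the content is identical, but the single-line dump is rewritten as pretty-printed JSON with alphabetically sorted keys. A minimal sketch reproducing that layout with the standard library, using the old `tokenizer_config.json` line from this diff (the exact indent width is an assumption; the resulting key order matches the new version above):

```python
import json

# The old single-line tokenizer_config.json, as removed by this commit.
old = ('{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", '
      '"pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", '
      '"tokenize_chinese_chars": true, "strip_accents": null, '
      '"model_max_length": 512, "special_tokens_map_file": null, '
      '"name_or_path": "bert-base-cased", "tokenizer_class": "BertTokenizer"}')

config = json.loads(old)
# Re-serialize with sorted keys and indentation, matching the new on-disk form.
new = json.dumps(config, indent=2, sort_keys=True)
```

Because only whitespace and key order change, `json.loads(new)` equals `config` exactly; the diff is cosmetic, not functional.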