syedyusufali committed on
Commit e944680
1 Parent(s): d584f8d

Training in progress epoch 0

Files changed (3)
  1. README.md +3 -3
  2. tf_model.h5 +1 -1
  3. tokenizer_config.json +1 -1
README.md CHANGED
@@ -14,8 +14,8 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.3030
-- Validation Loss: 0.1539
+- Train Loss: 0.2904
+- Validation Loss: 0.1482
 - Epoch: 0
 
 ## Model description
@@ -42,7 +42,7 @@ The following hyperparameters were used during training:
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 0.3030     | 0.1539          | 0     |
+| 0.2904     | 0.1482          | 0     |
 
 
 ### Framework versions
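The README records a standard TF/Keras fine-tune of bert-base-uncased, with the first-epoch losses updated by this commit. As a minimal sketch (not part of the commit itself), this is roughly how such a checkpoint would be loaded with the Transformers TF API; the repo id below is a placeholder, since the commit view does not show the repository name, and no task-specific head is assumed:

```python
from transformers import AutoTokenizer, TFAutoModel

# Placeholder repo id -- the actual repository name is not visible in this commit view.
repo_id = "syedyusufali/<model-name>"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # reads tokenizer_config.json
model = TFAutoModel.from_pretrained(repo_id)        # loads weights from tf_model.h5

inputs = tokenizer("An example sentence.", return_tensors="tf")
outputs = model(inputs)
```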
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a3eaa515db55a260fccc742d35c0a5286f4a8ddd5f70ba833ba725abc3510ee8
+oid sha256:1f24ff4c46fc7b1519047dd1415ac2c2f117373cf8e868403f250fda61590ca9
 size 435861524
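tf_model.h5 is tracked with Git LFS, so the repository holds only this three-line pointer: a spec version, the sha256 oid of the actual weight file, and its size in bytes (unchanged at ~436 MB, since only the weight values differ after an epoch of training). A minimal sketch of checking a downloaded file against the new pointer; the helper name is my own:

```python
import hashlib

def lfs_sha256(path, chunk_size=1 << 20):
    """Compute the sha256 digest that a Git LFS pointer records as its oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# After this commit, the downloaded weights should match the new pointer's oid:
expected = "1f24ff4c46fc7b1519047dd1415ac2c2f117373cf8e868403f250fda61590ca9"
assert lfs_sha256("tf_model.h5") == expected
```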
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "from_pt": true, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased", "tokenizer_class": "BertTokenizer"}
+{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "bert-base-uncased", "tokenizer_class": "BertTokenizer"}