mekjr1 committed
Commit bee1482
1 Parent(s): 5259552

Training in progress epoch 0

README.md CHANGED
@@ -1,5 +1,6 @@
  ---
  license: apache-2.0
+ base_model: bert-base-uncased
  tags:
  - generated_from_keras_callback
  model-index:
@@ -14,9 +15,9 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 1.9628
- - Validation Loss: 1.8647
- - Epoch: 4
+ - Train Loss: 2.0970
+ - Validation Loss: 1.8569
+ - Epoch: 0
 
  ## Model description
 
@@ -42,16 +43,12 @@ The following hyperparameters were used during training:
 
  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
- | 2.0947 | 1.8478 | 0 |
- | 1.9615 | 1.8739 | 1 |
- | 1.9655 | 1.8581 | 2 |
- | 1.9618 | 1.8774 | 3 |
- | 1.9628 | 1.8647 | 4 |
+ | 2.0970 | 1.8569 | 0 |
 
 
  ### Framework versions
 
- - Transformers 4.26.1
- - TensorFlow 2.11.0
- - Datasets 2.10.1
- - Tokenizers 0.13.2
+ - Transformers 4.31.0
+ - TensorFlow 2.12.0
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3
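The updated card now reports only the epoch-0 results and the newer framework versions (Transformers 4.31.0, TensorFlow 2.12.0). A minimal sketch of loading such an in-progress TF checkpoint for a quick sanity check, assuming a masked-LM head (the card does not state the fine-tuning task) and a placeholder repo id:

```python
# Minimal sketch only. The repo id is a placeholder and the masked-LM head is an
# assumption -- the model card does not state which task the model was fine-tuned for.
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

model_id = "mekjr1/<model-name>"  # hypothetical placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

inputs = tokenizer("Paris is the capital of [MASK].", return_tensors="tf")
logits = model(**inputs).logits  # shape: (1, sequence_length, 30522)
```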
checkpoint/extra_data.pickle CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0bbccb8cc419d4bb75ba4af607f786a3277c17a6426c01354508eb0dc66c1752
+ oid sha256:dccceae5178f18725b48dadb8b4c414030fc7cdecfe3637b18b97ae374e9eeb6
  size 876130159
checkpoint/weights.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:77fb5d659cf487d6217b95873c56af65a2785fc6b4949a640aa10aeb867b2662
- size 533687680
+ oid sha256:9957f259e997077ee3d1d3120f80601c056e33603d2de7ee426f2dd6ec808d0e
+ size 533687616
config.json CHANGED
@@ -18,7 +18,7 @@
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
- "transformers_version": "4.26.1",
+ "transformers_version": "4.31.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f56618403cdf5727b87ba4b8a0d02f46d06c9a37f3ef107ebb5fa8a86f64d85b
- size 533687680
+ oid sha256:9957f259e997077ee3d1d3120f80601c056e33603d2de7ee426f2dd6ec808d0e
+ size 533687616
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1,12 +1,11 @@
  {
+ "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
- "name_or_path": "bert-base-uncased",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
- "special_tokens_map_file": null,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
vocab.txt CHANGED
The diff for this file is too large to render. See raw diff