Okyx committed
Commit c54b78a
1 Parent(s): 7064eb6

Upload TFBertForTokenClassification

Files changed (3):
  1. README.md +12 -10
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -14,9 +14,9 @@ probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Train Loss: 0.0030
-- Validation Loss: 0.0052
-- Epoch: 4
+- Train Loss: 0.0011
+- Validation Loss: 0.0065
+- Epoch: 6
 
 ## Model description
 
@@ -35,23 +35,25 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2140, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8550, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
 - training_precision: mixed_float16
 
 ### Training results
 
 | Train Loss | Validation Loss | Epoch |
 |:----------:|:---------------:|:-----:|
-| 0.0641 | 0.0078 | 0 |
-| 0.0052 | 0.0062 | 1 |
-| 0.0032 | 0.0052 | 2 |
-| 0.0029 | 0.0052 | 3 |
-| 0.0030 | 0.0052 | 4 |
+| 0.0661 | 0.0177 | 0 |
+| 0.0105 | 0.0123 | 1 |
+| 0.0062 | 0.0094 | 2 |
+| 0.0038 | 0.0090 | 3 |
+| 0.0023 | 0.0069 | 4 |
+| 0.0015 | 0.0073 | 5 |
+| 0.0011 | 0.0065 | 6 |
 
 
 ### Framework versions
 
-- Transformers 4.22.1
+- Transformers 4.22.2
 - TensorFlow 2.8.2
 - Datasets 2.5.1
 - Tokenizers 0.12.1
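The optimizer entry in the card above is the serialized Keras config of transformers' AdamWeightDecay wrapped around a linear PolynomialDecay schedule. A minimal, non-authoritative sketch of recreating an equivalent setup with transformers' create_optimizer helper follows; num_train_steps=8550 mirrors the decay_steps value in the stored config, and num_warmup_steps=0 is an assumption since no warmup wrapper appears there.

```python
# Sketch: rebuild an optimizer roughly equivalent to the serialized config in the model card.
# Assumes the TensorFlow side of transformers is installed.
import tensorflow as tf
from transformers import create_optimizer

# Matches the card's training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate
    num_train_steps=8550,    # decay_steps of the linear PolynomialDecay (power=1.0)
    num_warmup_steps=0,      # assumption: the stored config shows no warmup
    weight_decay_rate=0.01,  # weight_decay_rate
)
```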
config.json CHANGED
@@ -52,7 +52,7 @@
   "num_hidden_layers": 12,
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
-  "transformers_version": "4.22.1",
+  "transformers_version": "4.22.2",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 28996
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0820bd4f377ae8fbb5024f1e97ce05d3b479572f13952290758a7329d5dff2d9
+oid sha256:f0c7596f39986de749e76b65aea39b8d0858ad8e17d1f5c74ec9d18cf1b6e28b
 size 431198276
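The commit itself uploads the TensorFlow weights (tf_model.h5, tracked via Git LFS) for a TFBertForTokenClassification fine-tuned from bert-base-cased. A minimal loading sketch, assuming the weights are pushed to a Hub repository whose id is not part of this diff ("Okyx/<repo-name>" below is a placeholder):

```python
# Sketch: load the uploaded TF checkpoint from the Hub.
from transformers import AutoTokenizer, TFBertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # fine-tune started from bert-base-cased
model = TFBertForTokenClassification.from_pretrained("Okyx/<repo-name>")  # placeholder repo id

inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="tf")
logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)
```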