amrisaurus committed
Commit
e9a261a
1 Parent(s): 9c60a54

Upload TFBertForPreTraining

Files changed (3)
  1. README.md +151 -0
  2. config.json +24 -0
  3. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,151 @@
+ ---
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: pretrained-bert-uncased-100
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # pretrained-bert-uncased-100
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: nan
+ - Validation Loss: nan
+ - Epoch: 99
+
+ Both losses are NaN at the final epoch because training diverged from epoch 92 onward (see the training results table below).
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
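+
+ Until the card is completed, here is a minimal, untested usage sketch. It assumes this checkpoint is published as `amrisaurus/pretrained-bert-uncased-100` (inferred from the committer and model name, not stated in this commit) and borrows the stock `bert-base-uncased` tokenizer, since no tokenizer files are included in this upload:
+
+ ```python
+ from transformers import BertTokenizer, TFBertForPreTraining
+
+ # Hypothetical repo id; adjust to wherever this checkpoint actually lives.
+ repo_id = "amrisaurus/pretrained-bert-uncased-100"
+
+ # No tokenizer ships with this commit, so use the standard uncased one,
+ # which matches the 30522-token vocab in config.json.
+ tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+ model = TFBertForPreTraining.from_pretrained(repo_id)
+
+ inputs = tokenizer("Hello, world!", return_tensors="tf")
+ outputs = model(inputs)
+ print(outputs.prediction_logits.shape)        # MLM head: (1, seq_len, 30522)
+ print(outputs.seq_relationship_logits.shape)  # NSP head: (1, 2)
+ ```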
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a tf.keras reconstruction is sketched after this list):
+ - optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
+ - training_precision: float32
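+
+ A sketch (assumed, not taken from the original training script) of the same optimizer in tf.keras; the keyword values mirror the dict above:
+
+ ```python
+ import tensorflow as tf
+
+ # Values copied from the hyperparameter dict above; 'decay': 0.0 is the
+ # default (no learning-rate decay), so it is omitted here.
+ optimizer = tf.keras.optimizers.Adam(
+     learning_rate=1e-4,
+     beta_1=0.9,
+     beta_2=0.999,
+     epsilon=1e-7,
+     amsgrad=False,
+ )
+ ```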
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Epoch |
+ |:----------:|:---------------:|:-----:|
+ | 8.9143 | 9.5645 | 0 |
+ | 7.0594 | 9.6348 | 1 |
+ | 6.5846 | 10.4810 | 2 |
+ | 6.1703 | 10.3871 | 3 |
+ | 6.2204 | 10.4579 | 4 |
+ | 5.9912 | 11.0864 | 5 |
+ | 6.1670 | 10.8475 | 6 |
+ | 5.9434 | 11.3010 | 7 |
+ | 5.7437 | 10.9369 | 8 |
+ | 5.8586 | 11.1204 | 9 |
+ | 5.8595 | 11.5750 | 10 |
+ | 5.7608 | 12.0360 | 11 |
+ | 5.7219 | 11.4627 | 12 |
+ | 5.8005 | 11.6418 | 13 |
+ | 5.8040 | 11.7480 | 14 |
+ | 5.6217 | 11.9184 | 15 |
+ | 5.7095 | 11.9534 | 16 |
+ | 5.7426 | 12.3718 | 17 |
+ | 5.6365 | 11.7388 | 18 |
+ | 5.6291 | 12.1911 | 19 |
+ | 5.7437 | 12.3435 | 20 |
+ | 5.6589 | 12.1656 | 21 |
+ | 5.6835 | 12.1785 | 22 |
+ | 5.7466 | 12.1934 | 23 |
+ | 5.5470 | 12.4191 | 24 |
+ | 5.4824 | 12.3100 | 25 |
+ | 5.7151 | 12.7568 | 26 |
+ | 5.6431 | 12.3509 | 27 |
+ | 5.6188 | 12.4724 | 28 |
+ | 5.8535 | 12.3254 | 29 |
+ | 5.6457 | 12.5312 | 30 |
+ | 5.5622 | 12.6608 | 31 |
+ | 5.8279 | 12.6227 | 32 |
+ | 5.5720 | 13.0677 | 33 |
+ | 5.6025 | 12.5078 | 34 |
+ | 5.6830 | 12.2650 | 35 |
+ | 5.6089 | 12.5420 | 36 |
+ | 5.5842 | 12.4246 | 37 |
+ | 5.5823 | 12.8527 | 38 |
+ | 5.6300 | 12.9306 | 39 |
+ | 5.4753 | 13.0506 | 40 |
+ | 5.5736 | 12.8653 | 41 |
+ | 5.6237 | 13.1275 | 42 |
+ | 5.5517 | 12.6621 | 43 |
+ | 5.6275 | 13.0830 | 44 |
+ | 5.4596 | 13.0334 | 45 |
+ | 5.5789 | 13.1859 | 46 |
+ | 5.5381 | 13.4193 | 47 |
+ | 5.4859 | 13.3659 | 48 |
+ | 5.6895 | 13.1080 | 49 |
+ | 5.7514 | 13.1102 | 50 |
+ | 5.5406 | 13.3108 | 51 |
+ | 5.6091 | 13.3639 | 52 |
+ | 5.5978 | 13.3497 | 53 |
+ | 5.6819 | 13.3768 | 54 |
+ | 5.6097 | 12.7091 | 55 |
+ | 5.7161 | 13.1289 | 56 |
+ | 5.5263 | 13.5468 | 57 |
+ | 5.5250 | 13.4408 | 58 |
+ | 5.5677 | 13.7557 | 59 |
+ | 5.6698 | 13.0456 | 60 |
+ | 5.5404 | 13.2635 | 61 |
+ | 5.5019 | 13.5839 | 62 |
+ | 5.5027 | 13.3752 | 63 |
+ | 5.5898 | 13.5648 | 64 |
+ | 5.5646 | 13.1321 | 65 |
+ | 5.6336 | 13.4493 | 66 |
+ | 5.6596 | 13.0167 | 67 |
+ | 5.7952 | 13.6155 | 68 |
+ | 5.7404 | 13.4588 | 69 |
+ | 5.5704 | 13.3443 | 70 |
+ | 5.4927 | 13.6466 | 71 |
+ | 5.7395 | 13.6197 | 72 |
+ | 5.5994 | 13.3629 | 73 |
+ | 5.5902 | 14.0645 | 74 |
+ | 5.7020 | 13.7721 | 75 |
+ | 5.6767 | 13.4775 | 76 |
+ | 5.6172 | 13.7838 | 77 |
+ | 5.6232 | 13.7676 | 78 |
+ | 5.6285 | 13.4416 | 79 |
+ | 5.6174 | 13.5767 | 80 |
+ | 5.5878 | 13.7731 | 81 |
+ | 5.6670 | 14.1654 | 82 |
+ | 5.5013 | 14.2273 | 83 |
+ | 5.6745 | 13.9600 | 84 |
+ | 5.6135 | 13.8017 | 85 |
+ | 5.6932 | 13.6257 | 86 |
+ | 5.4745 | 13.9570 | 87 |
+ | 5.6542 | 14.0449 | 88 |
+ | 5.5748 | 13.6820 | 89 |
+ | 5.6025 | 13.7910 | 90 |
+ | 5.6333 | 14.4047 | 91 |
+ | nan | nan | 92 |
+ | nan | nan | 93 |
+ | nan | nan | 94 |
+ | nan | nan | 95 |
+ | nan | nan | 96 |
+ | nan | nan | 97 |
+ | nan | nan | 98 |
+ | nan | nan | 99 |
+
+
+ ### Framework versions
+
+ - Transformers 4.27.0.dev0
+ - TensorFlow 2.9.2
+ - Datasets 2.9.0
+ - Tokenizers 0.13.2
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "BertForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.27.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
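This configuration matches the standard BERT-base uncased architecture (12 layers, 12 heads, hidden size 768, 30522-token vocab). A sketch of building the same config in Python with `transformers.BertConfig`; all values are copied from config.json above, and a model instantiated from it has random weights until the checkpoint in tf_model.h5 is loaded via `from_pretrained`:

```python
from transformers import BertConfig, TFBertForPreTraining

# All values mirror config.json above (BERT-base uncased dimensions).
config = BertConfig(
    vocab_size=30522,
    hidden_size=768,
    num_hidden_layers=12,
    num_attention_heads=12,
    intermediate_size=3072,
    hidden_act="gelu",
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    max_position_embeddings=512,
    type_vocab_size=2,
    initializer_range=0.02,
    layer_norm_eps=1e-12,
    pad_token_id=0,
    position_embedding_type="absolute",
)

# Architecture only; weights are randomly initialized here.
model = TFBertForPreTraining(config)
```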
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23a877b09e4e3dcbe0e0238c240824aaa9637a091467cb190107a4f2a029e400
+ size 536063536
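This is a Git LFS pointer file, not the weights themselves; the actual ~536 MB `tf_model.h5` is fetched by `git lfs pull` or by the Hub download machinery. A small sketch (assuming the real file has already been downloaded locally) for checking a download against the pointer's oid and size:

```python
import hashlib
import os

# Values copied from the LFS pointer above.
EXPECTED_SHA256 = "23a877b09e4e3dcbe0e0238c240824aaa9637a091467cb190107a4f2a029e400"
EXPECTED_SIZE = 536063536

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so large weights fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "tf_model.h5"  # assumed local path to the downloaded weights
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256_of_file(path) == EXPECTED_SHA256, "sha256 mismatch"
print("tf_model.h5 matches the LFS pointer")
```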