Danielwei0214 committed on
Commit ca7b335
1 Parent(s): 0ba31d1

End of training

README.md ADDED
@@ -0,0 +1,99 @@
+ ---
+ license: apache-2.0
+ base_model: ethanyt/guwenbert-large
+ tags:
+ - generated_from_trainer
+ datasets:
+ - ched_ner
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: guwenbert-large-CHED-ner
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: ched_ner
+       type: ched_ner
+       config: ched_ner
+       split: validation
+       args: ched_ner
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.7442799461641992
+     - name: Recall
+       type: recall
+       value: 0.8069066147859922
+     - name: F1
+       type: f1
+       value: 0.7743290548424737
+     - name: Accuracy
+       type: accuracy
+       value: 0.9666064635130461
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # guwenbert-large-CHED-ner
+
+ This model is a fine-tuned version of [ethanyt/guwenbert-large](https://huggingface.co/ethanyt/guwenbert-large) on the ched_ner dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1905
+ - Precision: 0.7443
+ - Recall: 0.8069
+ - F1: 0.7743
+ - Accuracy: 0.9666
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
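For reference, the hyperparameters above map onto the `transformers` Trainer roughly as follows. This is a minimal sketch, not the training script used for this commit; dataset preparation for ched_ner is omitted, and `num_labels=135` is taken from the config.json added below.

```python
# A minimal sketch (not the author's exact script): the README's
# hyperparameters expressed as transformers TrainingArguments.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-large")
model = AutoModelForTokenClassification.from_pretrained(
    "ethanyt/guwenbert-large", num_labels=135  # 135 labels per config.json
)

args = TrainingArguments(
    output_dir="guwenbert-large-CHED-ner",
    learning_rate=2e-5,              # learning_rate: 2e-05
    per_device_train_batch_size=16,  # train_batch_size: 16
    per_device_eval_batch_size=16,   # eval_batch_size: 16
    seed=42,                         # seed: 42
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=10,             # num_epochs: 10
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)

# Trainer wiring left out because the ched_ner loading code is not part
# of this commit:
# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=...)
```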
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log        | 1.0   | 356  | 0.1420          | 0.6862    | 0.7573 | 0.72   | 0.9609   |
+ | 0.2304        | 2.0   | 712  | 0.1324          | 0.6907    | 0.7972 | 0.7401 | 0.9624   |
+ | 0.095         | 3.0   | 1068 | 0.1314          | 0.7268    | 0.7918 | 0.7579 | 0.9656   |
+ | 0.095         | 4.0   | 1424 | 0.1348          | 0.7248    | 0.7967 | 0.7590 | 0.9659   |
+ | 0.0613        | 5.0   | 1780 | 0.1525          | 0.7088    | 0.8147 | 0.7581 | 0.9635   |
+ | 0.0397        | 6.0   | 2136 | 0.1635          | 0.7224    | 0.8127 | 0.7649 | 0.9648   |
+ | 0.0397        | 7.0   | 2492 | 0.1693          | 0.7416    | 0.7986 | 0.7691 | 0.9662   |
+ | 0.0261        | 8.0   | 2848 | 0.1809          | 0.7338    | 0.8059 | 0.7682 | 0.9657   |
+ | 0.0164        | 9.0   | 3204 | 0.1904          | 0.7291    | 0.8127 | 0.7686 | 0.9655   |
+ | 0.0124        | 10.0  | 3560 | 0.1905          | 0.7443    | 0.8069 | 0.7743 | 0.9666   |
+
+
+ ### Framework versions
+
+ - Transformers 4.43.4
+ - Pytorch 2.3.1+cu121
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
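A hedged inference sketch for the uploaded checkpoint. The Hub repo id is assumed from this commit's context, and the example sentence is purely illustrative.

```python
# Hedged usage sketch; adjust the repo id to wherever the checkpoint
# actually lives.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Danielwei0214/guwenbert-large-CHED-ner",
    aggregation_strategy="simple",  # merge tagged pieces into entity spans
)

# Illustrative classical Chinese input; output entities will carry the
# generic LABEL_n names stored in config.json.
print(ner("晉侯秦伯圍鄭，以其無禮於晉。"))
```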
config.json ADDED
@@ -0,0 +1,302 @@
+ {
+   "_name_or_path": "ethanyt/guwenbert-large",
+   "architectures": [
+     "RobertaForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2",
+     "3": "LABEL_3",
+     "4": "LABEL_4",
+     "5": "LABEL_5",
+     "6": "LABEL_6",
+     "7": "LABEL_7",
+     "8": "LABEL_8",
+     "9": "LABEL_9",
+     "10": "LABEL_10",
+     "11": "LABEL_11",
+     "12": "LABEL_12",
+     "13": "LABEL_13",
+     "14": "LABEL_14",
+     "15": "LABEL_15",
+     "16": "LABEL_16",
+     "17": "LABEL_17",
+     "18": "LABEL_18",
+     "19": "LABEL_19",
+     "20": "LABEL_20",
+     "21": "LABEL_21",
+     "22": "LABEL_22",
+     "23": "LABEL_23",
+     "24": "LABEL_24",
+     "25": "LABEL_25",
+     "26": "LABEL_26",
+     "27": "LABEL_27",
+     "28": "LABEL_28",
+     "29": "LABEL_29",
+     "30": "LABEL_30",
+     "31": "LABEL_31",
+     "32": "LABEL_32",
+     "33": "LABEL_33",
+     "34": "LABEL_34",
+     "35": "LABEL_35",
+     "36": "LABEL_36",
+     "37": "LABEL_37",
+     "38": "LABEL_38",
+     "39": "LABEL_39",
+     "40": "LABEL_40",
+     "41": "LABEL_41",
+     "42": "LABEL_42",
+     "43": "LABEL_43",
+     "44": "LABEL_44",
+     "45": "LABEL_45",
+     "46": "LABEL_46",
+     "47": "LABEL_47",
+     "48": "LABEL_48",
+     "49": "LABEL_49",
+     "50": "LABEL_50",
+     "51": "LABEL_51",
+     "52": "LABEL_52",
+     "53": "LABEL_53",
+     "54": "LABEL_54",
+     "55": "LABEL_55",
+     "56": "LABEL_56",
+     "57": "LABEL_57",
+     "58": "LABEL_58",
+     "59": "LABEL_59",
+     "60": "LABEL_60",
+     "61": "LABEL_61",
+     "62": "LABEL_62",
+     "63": "LABEL_63",
+     "64": "LABEL_64",
+     "65": "LABEL_65",
+     "66": "LABEL_66",
+     "67": "LABEL_67",
+     "68": "LABEL_68",
+     "69": "LABEL_69",
+     "70": "LABEL_70",
+     "71": "LABEL_71",
+     "72": "LABEL_72",
+     "73": "LABEL_73",
+     "74": "LABEL_74",
+     "75": "LABEL_75",
+     "76": "LABEL_76",
+     "77": "LABEL_77",
+     "78": "LABEL_78",
+     "79": "LABEL_79",
+     "80": "LABEL_80",
+     "81": "LABEL_81",
+     "82": "LABEL_82",
+     "83": "LABEL_83",
+     "84": "LABEL_84",
+     "85": "LABEL_85",
+     "86": "LABEL_86",
+     "87": "LABEL_87",
+     "88": "LABEL_88",
+     "89": "LABEL_89",
+     "90": "LABEL_90",
+     "91": "LABEL_91",
+     "92": "LABEL_92",
+     "93": "LABEL_93",
+     "94": "LABEL_94",
+     "95": "LABEL_95",
+     "96": "LABEL_96",
+     "97": "LABEL_97",
+     "98": "LABEL_98",
+     "99": "LABEL_99",
+     "100": "LABEL_100",
+     "101": "LABEL_101",
+     "102": "LABEL_102",
+     "103": "LABEL_103",
+     "104": "LABEL_104",
+     "105": "LABEL_105",
+     "106": "LABEL_106",
+     "107": "LABEL_107",
+     "108": "LABEL_108",
+     "109": "LABEL_109",
+     "110": "LABEL_110",
+     "111": "LABEL_111",
+     "112": "LABEL_112",
+     "113": "LABEL_113",
+     "114": "LABEL_114",
+     "115": "LABEL_115",
+     "116": "LABEL_116",
+     "117": "LABEL_117",
+     "118": "LABEL_118",
+     "119": "LABEL_119",
+     "120": "LABEL_120",
+     "121": "LABEL_121",
+     "122": "LABEL_122",
+     "123": "LABEL_123",
+     "124": "LABEL_124",
+     "125": "LABEL_125",
+     "126": "LABEL_126",
+     "127": "LABEL_127",
+     "128": "LABEL_128",
+     "129": "LABEL_129",
+     "130": "LABEL_130",
+     "131": "LABEL_131",
+     "132": "LABEL_132",
+     "133": "LABEL_133",
+     "134": "LABEL_134"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_10": 10,
+     "LABEL_100": 100,
+     "LABEL_101": 101,
+     "LABEL_102": 102,
+     "LABEL_103": 103,
+     "LABEL_104": 104,
+     "LABEL_105": 105,
+     "LABEL_106": 106,
+     "LABEL_107": 107,
+     "LABEL_108": 108,
+     "LABEL_109": 109,
+     "LABEL_11": 11,
+     "LABEL_110": 110,
+     "LABEL_111": 111,
+     "LABEL_112": 112,
+     "LABEL_113": 113,
+     "LABEL_114": 114,
+     "LABEL_115": 115,
+     "LABEL_116": 116,
+     "LABEL_117": 117,
+     "LABEL_118": 118,
+     "LABEL_119": 119,
+     "LABEL_12": 12,
+     "LABEL_120": 120,
+     "LABEL_121": 121,
+     "LABEL_122": 122,
+     "LABEL_123": 123,
+     "LABEL_124": 124,
+     "LABEL_125": 125,
+     "LABEL_126": 126,
+     "LABEL_127": 127,
+     "LABEL_128": 128,
+     "LABEL_129": 129,
+     "LABEL_13": 13,
+     "LABEL_130": 130,
+     "LABEL_131": 131,
+     "LABEL_132": 132,
+     "LABEL_133": 133,
+     "LABEL_134": 134,
+     "LABEL_14": 14,
+     "LABEL_15": 15,
+     "LABEL_16": 16,
+     "LABEL_17": 17,
+     "LABEL_18": 18,
+     "LABEL_19": 19,
+     "LABEL_2": 2,
+     "LABEL_20": 20,
+     "LABEL_21": 21,
+     "LABEL_22": 22,
+     "LABEL_23": 23,
+     "LABEL_24": 24,
+     "LABEL_25": 25,
+     "LABEL_26": 26,
+     "LABEL_27": 27,
+     "LABEL_28": 28,
+     "LABEL_29": 29,
+     "LABEL_3": 3,
+     "LABEL_30": 30,
+     "LABEL_31": 31,
+     "LABEL_32": 32,
+     "LABEL_33": 33,
+     "LABEL_34": 34,
+     "LABEL_35": 35,
+     "LABEL_36": 36,
+     "LABEL_37": 37,
+     "LABEL_38": 38,
+     "LABEL_39": 39,
+     "LABEL_4": 4,
+     "LABEL_40": 40,
+     "LABEL_41": 41,
+     "LABEL_42": 42,
+     "LABEL_43": 43,
+     "LABEL_44": 44,
+     "LABEL_45": 45,
+     "LABEL_46": 46,
+     "LABEL_47": 47,
+     "LABEL_48": 48,
+     "LABEL_49": 49,
+     "LABEL_5": 5,
+     "LABEL_50": 50,
+     "LABEL_51": 51,
+     "LABEL_52": 52,
+     "LABEL_53": 53,
+     "LABEL_54": 54,
+     "LABEL_55": 55,
+     "LABEL_56": 56,
+     "LABEL_57": 57,
+     "LABEL_58": 58,
+     "LABEL_59": 59,
+     "LABEL_6": 6,
+     "LABEL_60": 60,
+     "LABEL_61": 61,
+     "LABEL_62": 62,
+     "LABEL_63": 63,
+     "LABEL_64": 64,
+     "LABEL_65": 65,
+     "LABEL_66": 66,
+     "LABEL_67": 67,
+     "LABEL_68": 68,
+     "LABEL_69": 69,
+     "LABEL_7": 7,
+     "LABEL_70": 70,
+     "LABEL_71": 71,
+     "LABEL_72": 72,
+     "LABEL_73": 73,
+     "LABEL_74": 74,
+     "LABEL_75": 75,
+     "LABEL_76": 76,
+     "LABEL_77": 77,
+     "LABEL_78": 78,
+     "LABEL_79": 79,
+     "LABEL_8": 8,
+     "LABEL_80": 80,
+     "LABEL_81": 81,
+     "LABEL_82": 82,
+     "LABEL_83": 83,
+     "LABEL_84": 84,
+     "LABEL_85": 85,
+     "LABEL_86": 86,
+     "LABEL_87": 87,
+     "LABEL_88": 88,
+     "LABEL_89": 89,
+     "LABEL_9": 9,
+     "LABEL_90": 90,
+     "LABEL_91": 91,
+     "LABEL_92": 92,
+     "LABEL_93": 93,
+     "LABEL_94": 94,
+     "LABEL_95": 95,
+     "LABEL_96": 96,
+     "LABEL_97": 97,
+     "LABEL_98": 98,
+     "LABEL_99": 99
+   },
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "tokenizer_class": "BertTokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.43.4",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 23292
+ }
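Note that `id2label` contains only the generic `LABEL_0` … `LABEL_134` names, so the actual CHED tag schema is not recoverable from the checkpoint alone. A quick sketch for inspecting what is stored (the repo id is assumed from the commit context):

```python
# Inspect the stored label space of the uploaded checkpoint.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Danielwei0214/guwenbert-large-CHED-ner")
print(config.num_labels)    # 135
print(config.id2label[0])   # 'LABEL_0'

# If you have the real CHED tag list, remap before publishing
# (real_tags is a hypothetical list of 135 tag names, not the actual schema):
# config.id2label = dict(enumerate(real_tags))
# config.label2id = {t: i for i, t in enumerate(real_tags)}
```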
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:445f9a505f64df92f6bec62a1b7c321d7201669ff9083066f0ed13894075390c
+ size 1307360620
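The three lines above are a Git LFS pointer: the weights themselves are stored out-of-band, and `oid` is the SHA-256 of the actual file contents. A small sketch for verifying a downloaded file against the pointer (the local path is an assumption):

```python
# Verify a downloaded LFS-tracked file against its pointer's oid.
import hashlib

def lfs_oid(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

expected = "445f9a505f64df92f6bec62a1b7c321d7201669ff9083066f0ed13894075390c"
# assert lfs_oid("model.safetensors") == expected
```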
runs/Aug06_13-07-05_0e9d90d47e2c/events.out.tfevents.1722949651.0e9d90d47e2c.301.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27ee2d0c27700b1506c3624e9fd963c9c373422e3faba391a192e8e6e3cefc7d
+ size 17497
runs/Aug06_13-07-05_0e9d90d47e2c/events.out.tfevents.1722952333.0e9d90d47e2c.301.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f5da64e61fc48c403f51455ab5e816c5eecb70461bf7950450378246c33f0e9
+ size 560
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "23291": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
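Worth noting: this checkpoint pairs a RoBERTa encoder with a BERT-style tokenizer, and `"tokenize_chinese_chars": true` splits Chinese text one character per token, which matches the granularity that character-level NER tags expect. A quick sketch (the repo id is assumed as above):

```python
# With tokenize_chinese_chars=true, Chinese input is split per character.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Danielwei0214/guwenbert-large-CHED-ner")
print(tok.tokenize("晉侯秦伯圍鄭"))  # typically one token per character
```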
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df6c4d3363d76c9668d4187a593a53b884420b47b18e8d91031b99e94f602fac
+ size 5304
vocab.txt ADDED
The diff for this file is too large to render. See raw diff