2022-10-26 15:28:10,168 ----------------------------------------------------------------------------------------------------
2022-10-26 15:28:10,173 Model: "SequenceTagger(
  (embeddings): TransformerWordEmbeddings(
    (model): XLMRobertaModel(
      (embeddings): RobertaEmbeddings(
        (word_embeddings): Embedding(250002, 768, padding_idx=1)
        (position_embeddings): Embedding(514, 768, padding_idx=1)
        (token_type_embeddings): Embedding(1, 768)
        (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        (dropout): Dropout(p=0.1, inplace=False)
      )
      (encoder): RobertaEncoder(
        (layer): ModuleList(
          (0): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (1): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (2): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (3): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (4): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (5): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (6): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (7): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (8): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (9): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (10): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
          (11): RobertaLayer(
            (attention): RobertaAttention(
              (self): RobertaSelfAttention(
                (query): Linear(in_features=768, out_features=768, bias=True)
                (key): Linear(in_features=768, out_features=768, bias=True)
                (value): Linear(in_features=768, out_features=768, bias=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
              (output): RobertaSelfOutput(
                (dense): Linear(in_features=768, out_features=768, bias=True)
                (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
                (dropout): Dropout(p=0.1, inplace=False)
              )
            )
            (intermediate): RobertaIntermediate(
              (dense): Linear(in_features=768, out_features=3072, bias=True)
              (intermediate_act_fn): GELUActivation()
            )
            (output): RobertaOutput(
              (dense): Linear(in_features=3072, out_features=768, bias=True)
              (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
              (dropout): Dropout(p=0.1, inplace=False)
            )
          )
        )
      )
      (pooler): RobertaPooler(
        (dense): Linear(in_features=768, out_features=768, bias=True)
        (activation): Tanh()
      )
    )
  )
  (word_dropout): WordDropout(p=0.05)
  (locked_dropout): LockedDropout(p=0.5)
  (embedding2nn): Linear(in_features=768, out_features=768, bias=True)
  (rnn): LSTM(768, 256, batch_first=True, bidirectional=True)
  (linear): Linear(in_features=512, out_features=15, bias=True)
  (loss_function): ViterbiLoss()
  (crf): CRF()
)"
2022-10-26 15:28:10,176 ----------------------------------------------------------------------------------------------------
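
The printout above is the standard Flair SequenceTagger stack: XLM-RoBERTa word embeddings feeding a bidirectional LSTM with a CRF decoding layer. A minimal sketch of how such a model is typically assembled follows; the corpus location, column format, and fine_tune setting are assumptions, since the log does not show the setup code.

    from flair.datasets import ColumnCorpus
    from flair.embeddings import TransformerWordEmbeddings
    from flair.models import SequenceTagger

    # Hypothetical data layout; the log does not show where the
    # 8551/1425/1425-sentence corpus comes from.
    corpus = ColumnCorpus("data/", {0: "text", 1: "ner"})
    tag_dictionary = corpus.make_label_dictionary(label_type="ner")

    # "xlm-roberta-base" yields the 12-layer, 768-dim XLMRobertaModel
    # shown above; fine_tune=False is an assumption (plain SGD with
    # lr 0.01 fits the frozen-embedding LSTM-CRF recipe).
    embeddings = TransformerWordEmbeddings("xlm-roberta-base", fine_tune=False)

    # hidden_size=256 gives the LSTM(768, 256, bidirectional=True) and the
    # Linear(512, 15) projection; use_crf=True adds the CRF and ViterbiLoss.
    tagger = SequenceTagger(
        hidden_size=256,
        embeddings=embeddings,
        tag_dictionary=tag_dictionary,
        tag_type="ner",
        use_crf=True,
    )
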
2022-10-26 15:28:10,180 Corpus: "Corpus: 8551 train + 1425 dev + 1425 test sentences"
2022-10-26 15:28:10,182 ----------------------------------------------------------------------------------------------------
2022-10-26 15:28:10,184 Parameters:
2022-10-26 15:28:10,186  - learning_rate: "0.010000"
2022-10-26 15:28:10,187  - mini_batch_size: "8"
2022-10-26 15:28:10,188  - patience: "3"
2022-10-26 15:28:10,189  - anneal_factor: "0.5"
2022-10-26 15:28:10,191  - max_epochs: "10"
2022-10-26 15:28:10,192  - shuffle: "True"
2022-10-26 15:28:10,193  - train_with_dev: "False"
2022-10-26 15:28:10,194  - batch_growth_annealing: "False"
2022-10-26 15:28:10,196 ----------------------------------------------------------------------------------------------------
2022-10-26 15:28:10,197 Model training base path: "/content/model/xlmr_ner"
2022-10-26 15:28:10,198 ----------------------------------------------------------------------------------------------------
2022-10-26 15:28:10,199 Device: cuda:0
2022-10-26 15:28:10,201 ----------------------------------------------------------------------------------------------------
2022-10-26 15:28:10,202 Embeddings storage mode: none
2022-10-26 15:28:10,203 ----------------------------------------------------------------------------------------------------
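
The parameter block above maps one-to-one onto the arguments of Flair's ModelTrainer.train(). A sketch, assuming the tagger and corpus from the previous snippet:

    from flair.trainers import ModelTrainer

    trainer = ModelTrainer(tagger, corpus)

    # Reproduces the logged configuration. With
    # embeddings_storage_mode="none", embeddings are recomputed every
    # epoch rather than cached in CPU or GPU memory.
    trainer.train(
        "/content/model/xlmr_ner",
        learning_rate=0.01,
        mini_batch_size=8,
        max_epochs=10,
        patience=3,
        anneal_factor=0.5,
        shuffle=True,
        train_with_dev=False,
        embeddings_storage_mode="none",
    )

patience=3 and anneal_factor=0.5 drive the learning-rate annealing: three consecutive epochs without dev-score improvement halve the learning rate. In the run below the bad-epoch counter never exceeds 1 (epochs 4 and 9), so the learning rate stays at 0.010000 throughout.
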
2022-10-26 15:30:29,962 epoch 1 - iter 106/1069 - loss 0.55101171 - samples/sec: 6.07 - lr: 0.010000
2022-10-26 15:32:28,714 epoch 1 - iter 212/1069 - loss 0.35636418 - samples/sec: 7.14 - lr: 0.010000
2022-10-26 15:34:23,625 epoch 1 - iter 318/1069 - loss 0.28047260 - samples/sec: 7.38 - lr: 0.010000
2022-10-26 15:36:24,015 epoch 1 - iter 424/1069 - loss 0.23890211 - samples/sec: 7.04 - lr: 0.010000
2022-10-26 15:38:21,987 epoch 1 - iter 530/1069 - loss 0.21322222 - samples/sec: 7.19 - lr: 0.010000
2022-10-26 15:40:22,521 epoch 1 - iter 636/1069 - loss 0.19431796 - samples/sec: 7.04 - lr: 0.010000
2022-10-26 15:42:18,754 epoch 1 - iter 742/1069 - loss 0.18084010 - samples/sec: 7.30 - lr: 0.010000
2022-10-26 15:44:18,344 epoch 1 - iter 848/1069 - loss 0.16975329 - samples/sec: 7.09 - lr: 0.010000
2022-10-26 15:46:14,738 epoch 1 - iter 954/1069 - loss 0.16158584 - samples/sec: 7.29 - lr: 0.010000
2022-10-26 15:48:14,067 epoch 1 - iter 1060/1069 - loss 0.15491697 - samples/sec: 7.11 - lr: 0.010000
2022-10-26 15:48:24,569 ----------------------------------------------------------------------------------------------------
2022-10-26 15:48:24,577 EPOCH 1 done: loss 0.1543 - lr 0.010000
2022-10-26 15:50:17,480 Evaluating as a multi-label problem: False
2022-10-26 15:50:17,512 DEV : loss 0.060714565217494965 - f1-score (micro avg)  0.7908
2022-10-26 15:50:17,553 BAD EPOCHS (no improvement): 0
2022-10-26 15:50:17,554 saving best model
2022-10-26 15:50:23,470 ----------------------------------------------------------------------------------------------------
2022-10-26 15:52:24,219 epoch 2 - iter 106/1069 - loss 0.08869057 - samples/sec: 7.02 - lr: 0.010000
2022-10-26 15:54:21,594 epoch 2 - iter 212/1069 - loss 0.08600343 - samples/sec: 7.23 - lr: 0.010000
2022-10-26 15:56:19,809 epoch 2 - iter 318/1069 - loss 0.08546665 - samples/sec: 7.17 - lr: 0.010000
2022-10-26 15:58:17,214 epoch 2 - iter 424/1069 - loss 0.08476718 - samples/sec: 7.22 - lr: 0.010000
2022-10-26 16:00:16,114 epoch 2 - iter 530/1069 - loss 0.08542624 - samples/sec: 7.13 - lr: 0.010000
2022-10-26 16:02:13,540 epoch 2 - iter 636/1069 - loss 0.08522910 - samples/sec: 7.22 - lr: 0.010000
2022-10-26 16:04:12,854 epoch 2 - iter 742/1069 - loss 0.08502467 - samples/sec: 7.11 - lr: 0.010000
2022-10-26 16:06:13,219 epoch 2 - iter 848/1069 - loss 0.08373459 - samples/sec: 7.05 - lr: 0.010000
2022-10-26 16:08:09,808 epoch 2 - iter 954/1069 - loss 0.08316639 - samples/sec: 7.27 - lr: 0.010000
2022-10-26 16:10:11,036 epoch 2 - iter 1060/1069 - loss 0.08215396 - samples/sec: 7.00 - lr: 0.010000
2022-10-26 16:10:21,246 ----------------------------------------------------------------------------------------------------
2022-10-26 16:10:21,249 EPOCH 2 done: loss 0.0821 - lr 0.010000
2022-10-26 16:12:13,875 Evaluating as a multi-label problem: False
2022-10-26 16:12:13,905 DEV : loss 0.05180404335260391 - f1-score (micro avg)  0.8408
2022-10-26 16:12:13,947 BAD EPOCHS (no improvement): 0
2022-10-26 16:12:13,948 saving best model
2022-10-26 16:12:19,344 ----------------------------------------------------------------------------------------------------
2022-10-26 16:14:19,879 epoch 3 - iter 106/1069 - loss 0.06627178 - samples/sec: 7.04 - lr: 0.010000
2022-10-26 16:16:18,272 epoch 3 - iter 212/1069 - loss 0.07094348 - samples/sec: 7.16 - lr: 0.010000
2022-10-26 16:18:18,453 epoch 3 - iter 318/1069 - loss 0.07194093 - samples/sec: 7.06 - lr: 0.010000
2022-10-26 16:20:15,802 epoch 3 - iter 424/1069 - loss 0.07242840 - samples/sec: 7.23 - lr: 0.010000
2022-10-26 16:22:12,248 epoch 3 - iter 530/1069 - loss 0.07171872 - samples/sec: 7.28 - lr: 0.010000
2022-10-26 16:24:12,231 epoch 3 - iter 636/1069 - loss 0.07162092 - samples/sec: 7.07 - lr: 0.010000
2022-10-26 16:26:10,382 epoch 3 - iter 742/1069 - loss 0.07130310 - samples/sec: 7.18 - lr: 0.010000
2022-10-26 16:28:08,953 epoch 3 - iter 848/1069 - loss 0.07050136 - samples/sec: 7.15 - lr: 0.010000
2022-10-26 16:30:09,728 epoch 3 - iter 954/1069 - loss 0.07070517 - samples/sec: 7.02 - lr: 0.010000
2022-10-26 16:32:08,721 epoch 3 - iter 1060/1069 - loss 0.07033198 - samples/sec: 7.13 - lr: 0.010000
2022-10-26 16:32:18,654 ----------------------------------------------------------------------------------------------------
2022-10-26 16:32:18,656 EPOCH 3 done: loss 0.0702 - lr 0.010000
2022-10-26 16:34:10,956 Evaluating as a multi-label problem: False
2022-10-26 16:34:10,986 DEV : loss 0.04575943946838379 - f1-score (micro avg)  0.8693
2022-10-26 16:34:11,026 BAD EPOCHS (no improvement): 0
2022-10-26 16:34:11,029 saving best model
2022-10-26 16:34:16,564 ----------------------------------------------------------------------------------------------------
2022-10-26 16:36:12,350 epoch 4 - iter 106/1069 - loss 0.06432601 - samples/sec: 7.32 - lr: 0.010000
2022-10-26 16:38:08,474 epoch 4 - iter 212/1069 - loss 0.06376094 - samples/sec: 7.30 - lr: 0.010000
2022-10-26 16:40:03,219 epoch 4 - iter 318/1069 - loss 0.06273795 - samples/sec: 7.39 - lr: 0.010000
2022-10-26 16:41:59,110 epoch 4 - iter 424/1069 - loss 0.06153989 - samples/sec: 7.32 - lr: 0.010000
2022-10-26 16:43:57,347 epoch 4 - iter 530/1069 - loss 0.06137878 - samples/sec: 7.17 - lr: 0.010000
2022-10-26 16:45:55,146 epoch 4 - iter 636/1069 - loss 0.06072772 - samples/sec: 7.20 - lr: 0.010000
2022-10-26 16:47:53,049 epoch 4 - iter 742/1069 - loss 0.06031769 - samples/sec: 7.19 - lr: 0.010000
2022-10-26 16:49:50,705 epoch 4 - iter 848/1069 - loss 0.06084099 - samples/sec: 7.21 - lr: 0.010000
2022-10-26 16:51:49,833 epoch 4 - iter 954/1069 - loss 0.06096388 - samples/sec: 7.12 - lr: 0.010000
2022-10-26 16:53:45,640 epoch 4 - iter 1060/1069 - loss 0.06061743 - samples/sec: 7.32 - lr: 0.010000
2022-10-26 16:53:54,974 ----------------------------------------------------------------------------------------------------
2022-10-26 16:53:54,976 EPOCH 4 done: loss 0.0606 - lr 0.010000
2022-10-26 16:55:45,518 Evaluating as a multi-label problem: False
2022-10-26 16:55:45,548 DEV : loss 0.04747875779867172 - f1-score (micro avg)  0.8627
2022-10-26 16:55:45,589 BAD EPOCHS (no improvement): 1
2022-10-26 16:55:45,590 ----------------------------------------------------------------------------------------------------
2022-10-26 16:57:41,259 epoch 5 - iter 106/1069 - loss 0.05285565 - samples/sec: 7.33 - lr: 0.010000
2022-10-26 16:59:40,296 epoch 5 - iter 212/1069 - loss 0.05049977 - samples/sec: 7.12 - lr: 0.010000
2022-10-26 17:01:35,184 epoch 5 - iter 318/1069 - loss 0.05297933 - samples/sec: 7.38 - lr: 0.010000
2022-10-26 17:03:34,028 epoch 5 - iter 424/1069 - loss 0.05293744 - samples/sec: 7.14 - lr: 0.010000
2022-10-26 17:05:29,295 epoch 5 - iter 530/1069 - loss 0.05359386 - samples/sec: 7.36 - lr: 0.010000
2022-10-26 17:07:25,593 epoch 5 - iter 636/1069 - loss 0.05307424 - samples/sec: 7.29 - lr: 0.010000
2022-10-26 17:09:22,893 epoch 5 - iter 742/1069 - loss 0.05323355 - samples/sec: 7.23 - lr: 0.010000
2022-10-26 17:11:22,602 epoch 5 - iter 848/1069 - loss 0.05272547 - samples/sec: 7.08 - lr: 0.010000
2022-10-26 17:13:22,960 epoch 5 - iter 954/1069 - loss 0.05280553 - samples/sec: 7.05 - lr: 0.010000
2022-10-26 17:15:20,527 epoch 5 - iter 1060/1069 - loss 0.05265360 - samples/sec: 7.21 - lr: 0.010000
2022-10-26 17:15:29,931 ----------------------------------------------------------------------------------------------------
2022-10-26 17:15:29,932 EPOCH 5 done: loss 0.0526 - lr 0.010000
2022-10-26 17:17:21,728 Evaluating as a multi-label problem: False
2022-10-26 17:17:21,760 DEV : loss 0.03879784420132637 - f1-score (micro avg)  0.8864
2022-10-26 17:17:21,803 BAD EPOCHS (no improvement): 0
2022-10-26 17:17:21,804 saving best model
2022-10-26 17:17:27,330 ----------------------------------------------------------------------------------------------------
2022-10-26 17:19:26,401 epoch 6 - iter 106/1069 - loss 0.04801558 - samples/sec: 7.12 - lr: 0.010000
2022-10-26 17:21:22,988 epoch 6 - iter 212/1069 - loss 0.05008290 - samples/sec: 7.27 - lr: 0.010000
2022-10-26 17:23:16,794 epoch 6 - iter 318/1069 - loss 0.04925649 - samples/sec: 7.45 - lr: 0.010000
2022-10-26 17:25:15,532 epoch 6 - iter 424/1069 - loss 0.04786643 - samples/sec: 7.14 - lr: 0.010000
2022-10-26 17:27:13,913 epoch 6 - iter 530/1069 - loss 0.04879792 - samples/sec: 7.16 - lr: 0.010000
2022-10-26 17:29:10,114 epoch 6 - iter 636/1069 - loss 0.04800786 - samples/sec: 7.30 - lr: 0.010000
2022-10-26 17:31:07,810 epoch 6 - iter 742/1069 - loss 0.04755361 - samples/sec: 7.21 - lr: 0.010000
2022-10-26 17:33:04,496 epoch 6 - iter 848/1069 - loss 0.04782375 - samples/sec: 7.27 - lr: 0.010000
2022-10-26 17:35:05,834 epoch 6 - iter 954/1069 - loss 0.04776160 - samples/sec: 6.99 - lr: 0.010000
2022-10-26 17:37:03,878 epoch 6 - iter 1060/1069 - loss 0.04743945 - samples/sec: 7.18 - lr: 0.010000
2022-10-26 17:37:14,466 ----------------------------------------------------------------------------------------------------
2022-10-26 17:37:14,468 EPOCH 6 done: loss 0.0475 - lr 0.010000
2022-10-26 17:39:07,562 Evaluating as a multi-label problem: False
2022-10-26 17:39:07,592 DEV : loss 0.03874654322862625 - f1-score (micro avg)  0.8908
2022-10-26 17:39:07,633 BAD EPOCHS (no improvement): 0
2022-10-26 17:39:07,635 saving best model
2022-10-26 17:39:13,242 ----------------------------------------------------------------------------------------------------
2022-10-26 17:41:11,924 epoch 7 - iter 106/1069 - loss 0.04334369 - samples/sec: 7.15 - lr: 0.010000
2022-10-26 17:43:11,382 epoch 7 - iter 212/1069 - loss 0.04192565 - samples/sec: 7.10 - lr: 0.010000
2022-10-26 17:45:08,087 epoch 7 - iter 318/1069 - loss 0.04115627 - samples/sec: 7.27 - lr: 0.010000
2022-10-26 17:47:06,615 epoch 7 - iter 424/1069 - loss 0.04114928 - samples/sec: 7.16 - lr: 0.010000
2022-10-26 17:49:03,863 epoch 7 - iter 530/1069 - loss 0.04105023 - samples/sec: 7.23 - lr: 0.010000
2022-10-26 17:51:02,216 epoch 7 - iter 636/1069 - loss 0.04125208 - samples/sec: 7.17 - lr: 0.010000
2022-10-26 17:53:04,293 epoch 7 - iter 742/1069 - loss 0.04151765 - samples/sec: 6.95 - lr: 0.010000
2022-10-26 17:55:01,446 epoch 7 - iter 848/1069 - loss 0.04170200 - samples/sec: 7.24 - lr: 0.010000
2022-10-26 17:56:59,848 epoch 7 - iter 954/1069 - loss 0.04180177 - samples/sec: 7.16 - lr: 0.010000
2022-10-26 17:58:56,175 epoch 7 - iter 1060/1069 - loss 0.04203413 - samples/sec: 7.29 - lr: 0.010000
2022-10-26 17:59:05,814 ----------------------------------------------------------------------------------------------------
2022-10-26 17:59:05,816 EPOCH 7 done: loss 0.0420 - lr 0.010000
2022-10-26 18:00:59,457 Evaluating as a multi-label problem: False
2022-10-26 18:00:59,486 DEV : loss 0.04413652420043945 - f1-score (micro avg)  0.8968
2022-10-26 18:00:59,527 BAD EPOCHS (no improvement): 0
2022-10-26 18:00:59,529 saving best model
2022-10-26 18:01:05,372 ----------------------------------------------------------------------------------------------------
2022-10-26 18:03:03,422 epoch 8 - iter 106/1069 - loss 0.03592615 - samples/sec: 7.18 - lr: 0.010000
2022-10-26 18:05:00,466 epoch 8 - iter 212/1069 - loss 0.03676863 - samples/sec: 7.25 - lr: 0.010000
2022-10-26 18:06:58,178 epoch 8 - iter 318/1069 - loss 0.03702258 - samples/sec: 7.20 - lr: 0.010000
2022-10-26 18:08:55,170 epoch 8 - iter 424/1069 - loss 0.03704658 - samples/sec: 7.25 - lr: 0.010000
2022-10-26 18:10:52,222 epoch 8 - iter 530/1069 - loss 0.03711348 - samples/sec: 7.25 - lr: 0.010000
2022-10-26 18:12:51,244 epoch 8 - iter 636/1069 - loss 0.03715815 - samples/sec: 7.13 - lr: 0.010000
2022-10-26 18:14:50,229 epoch 8 - iter 742/1069 - loss 0.03708747 - samples/sec: 7.13 - lr: 0.010000
2022-10-26 18:16:47,946 epoch 8 - iter 848/1069 - loss 0.03734575 - samples/sec: 7.20 - lr: 0.010000
2022-10-26 18:18:45,873 epoch 8 - iter 954/1069 - loss 0.03736843 - samples/sec: 7.19 - lr: 0.010000
2022-10-26 18:20:43,504 epoch 8 - iter 1060/1069 - loss 0.03737578 - samples/sec: 7.21 - lr: 0.010000
2022-10-26 18:20:53,262 ----------------------------------------------------------------------------------------------------
2022-10-26 18:20:53,265 EPOCH 8 done: loss 0.0374 - lr 0.010000
2022-10-26 18:22:46,256 Evaluating as a multi-label problem: False
2022-10-26 18:22:46,293 DEV : loss 0.03726610541343689 - f1-score (micro avg)  0.9117
2022-10-26 18:22:46,336 BAD EPOCHS (no improvement): 0
2022-10-26 18:22:46,337 saving best model
2022-10-26 18:22:51,847 ----------------------------------------------------------------------------------------------------
2022-10-26 18:24:50,402 epoch 9 - iter 106/1069 - loss 0.03606101 - samples/sec: 7.15 - lr: 0.010000
2022-10-26 18:26:47,577 epoch 9 - iter 212/1069 - loss 0.03466163 - samples/sec: 7.24 - lr: 0.010000
2022-10-26 18:28:47,029 epoch 9 - iter 318/1069 - loss 0.03420843 - samples/sec: 7.10 - lr: 0.010000
2022-10-26 18:30:43,235 epoch 9 - iter 424/1069 - loss 0.03406325 - samples/sec: 7.30 - lr: 0.010000
2022-10-26 18:32:41,132 epoch 9 - iter 530/1069 - loss 0.03393077 - samples/sec: 7.19 - lr: 0.010000
2022-10-26 18:34:35,953 epoch 9 - iter 636/1069 - loss 0.03438052 - samples/sec: 7.39 - lr: 0.010000
2022-10-26 18:36:33,872 epoch 9 - iter 742/1069 - loss 0.03435922 - samples/sec: 7.19 - lr: 0.010000
2022-10-26 18:38:30,457 epoch 9 - iter 848/1069 - loss 0.03351594 - samples/sec: 7.27 - lr: 0.010000
2022-10-26 18:40:26,775 epoch 9 - iter 954/1069 - loss 0.03363514 - samples/sec: 7.29 - lr: 0.010000
2022-10-26 18:42:26,040 epoch 9 - iter 1060/1069 - loss 0.03301736 - samples/sec: 7.11 - lr: 0.010000
2022-10-26 18:42:34,477 ----------------------------------------------------------------------------------------------------
2022-10-26 18:42:34,480 EPOCH 9 done: loss 0.0330 - lr 0.010000
2022-10-26 18:44:24,572 Evaluating as a multi-label problem: False
2022-10-26 18:44:24,602 DEV : loss 0.04557322338223457 - f1-score (micro avg)  0.9084
2022-10-26 18:44:24,644 BAD EPOCHS (no improvement): 1
2022-10-26 18:44:24,646 ----------------------------------------------------------------------------------------------------
2022-10-26 18:46:21,774 epoch 10 - iter 106/1069 - loss 0.02992093 - samples/sec: 7.24 - lr: 0.010000
2022-10-26 18:48:20,730 epoch 10 - iter 212/1069 - loss 0.02886380 - samples/sec: 7.13 - lr: 0.010000
2022-10-26 18:50:20,679 epoch 10 - iter 318/1069 - loss 0.03109654 - samples/sec: 7.07 - lr: 0.010000
2022-10-26 18:52:14,564 epoch 10 - iter 424/1069 - loss 0.03091892 - samples/sec: 7.45 - lr: 0.010000
2022-10-26 18:54:14,888 epoch 10 - iter 530/1069 - loss 0.02977117 - samples/sec: 7.05 - lr: 0.010000
2022-10-26 18:56:13,992 epoch 10 - iter 636/1069 - loss 0.02969566 - samples/sec: 7.12 - lr: 0.010000
2022-10-26 18:58:12,618 epoch 10 - iter 742/1069 - loss 0.02979601 - samples/sec: 7.15 - lr: 0.010000
2022-10-26 19:00:10,398 epoch 10 - iter 848/1069 - loss 0.03040781 - samples/sec: 7.20 - lr: 0.010000
2022-10-26 19:02:06,063 epoch 10 - iter 954/1069 - loss 0.03029135 - samples/sec: 7.33 - lr: 0.010000
2022-10-26 19:04:05,626 epoch 10 - iter 1060/1069 - loss 0.03035206 - samples/sec: 7.09 - lr: 0.010000
2022-10-26 19:04:15,538 ----------------------------------------------------------------------------------------------------
2022-10-26 19:04:15,540 EPOCH 10 done: loss 0.0303 - lr 0.010000
2022-10-26 19:06:06,586 Evaluating as a multi-label problem: False
2022-10-26 19:06:06,621 DEV : loss 0.03892701491713524 - f1-score (micro avg)  0.9132
2022-10-26 19:06:06,663 BAD EPOCHS (no improvement): 0
2022-10-26 19:06:06,665 saving best model
2022-10-26 19:06:17,597 ----------------------------------------------------------------------------------------------------
2022-10-26 19:06:17,723 loading file /content/model/xlmr_ner/best-model.pt
2022-10-26 19:06:24,597 SequenceTagger predicts: Dictionary with 15 tags: O, S-PER, B-PER, E-PER, I-PER, S-MISC, B-MISC, E-MISC, I-MISC, S-LOC, B-LOC, E-LOC, I-LOC, <START>, <STOP>
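The 15 tags follow the BIOES scheme: O marks non-entity tokens, S- a single-token entity, and B-/I-/E- the beginning, inside, and end of a multi-token entity, so "New York" would be tagged B-LOC E-LOC while "Paris" alone would be S-LOC; <START> and <STOP> are bookkeeping states for the CRF's transition matrix.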
2022-10-26 19:08:17,003 Evaluating as a multi-label problem: False
2022-10-26 19:08:17,040 precision 0.9053 - recall 0.9316 - f1-score (micro) 0.9182 - accuracy 0.8955
2022-10-26 19:08:17,041 
Results:
- F-score (micro) 0.9182
- F-score (macro) 0.8875
- Accuracy 0.8955

By class:
              precision    recall  f1-score   support

         PER     0.9339    0.9633    0.9484      2127
        MISC     0.8469    0.9250    0.8842       933
         LOC     0.8955    0.7732    0.8299       388

   micro avg     0.9053    0.9316    0.9182      3448
   macro avg     0.8921    0.8872    0.8875      3448
weighted avg     0.9060    0.9316    0.9177      3448

2022-10-26 19:08:17,045 ----------------------------------------------------------------------------------------------------
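
Reloading the saved best model for inference follows the same path the log uses for the final test evaluation. A minimal usage sketch; the example sentence is hypothetical:

    from flair.data import Sentence
    from flair.models import SequenceTagger

    tagger = SequenceTagger.load("/content/model/xlmr_ner/best-model.pt")

    # Hypothetical input; any tokenizable string works.
    sentence = Sentence("George Washington went to Washington.")
    tagger.predict(sentence)

    # Spans carry the BIOES-decoded entity labels (PER, MISC, LOC)
    # together with confidence scores.
    for entity in sentence.get_spans("ner"):
        print(entity)

Note that the micro average (0.9182) weights each of the 3448 test entities equally, while the macro average (0.8875) weights the three classes equally, which is why the weaker LOC class (f1 0.8299 on only 388 entities) pulls the macro score below the micro score.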