amtam0 committed
Commit e107dad
Parent: 5a2318a

new training + add new token

Files changed (6)
  1. README.md +3 -2
  2. dev.tsv +0 -0
  3. loss.tsv +8 -6
  4. pytorch_model.bin +2 -2
  5. test.tsv +0 -0
  6. training.log +164 -132
README.md CHANGED
@@ -9,7 +9,7 @@ widget:
9
 
10
  ---
11
 
12
- 6-class NER French model using [Flair TransformerWordEmbeddings - camembert-base](https://github.com/flairNLP/flair/).
13
 
14
  | **tag** | **meaning** |
15
  |---------------------------------|-----------|
@@ -19,12 +19,13 @@ widget:
19
  | duration_br_hr | Duration btwn rounds in hours |
20
  | duration_wt_sd | workout duration in seconds |
21
  | duration_wt_min | workout duration in minutes |
22
  ---
23
  The dataset was created manually (and could be improved). Example sentences:
24
  ```
25
  19 séries de 3 minutes 21 minutes entre chaque série
26
  préparer 7 sets de 32 secondes
27
- start 13 séries de 26 secondes
28
  initie 8 series de 3 minutes
29
  2 séries de 30 secondes 35 minutes entre chaque série
30
  ...
9
 
10
  ---
11
 
12
+ 7-class NER French model using [Flair TransformerWordEmbeddings - camembert-base](https://github.com/flairNLP/flair/).
13
 
14
  | **tag** | **meaning** |
15
  |---------------------------------|-----------|
19
  | duration_br_hr | Duration btwn rounds in hours |
20
  | duration_wt_sd | workout duration in seconds |
21
  | duration_wt_min | workout duration in minutes |
22
+ | duration_wt_hr | workout duration in hours |
23
  ---
24
  The dataset was created manually (and could be improved). Example sentences:
25
  ```
26
  19 séries de 3 minutes 21 minutes entre chaque série
27
  préparer 7 sets de 32 secondes
28
+ lance 13 séries de 26 secondes
29
  initie 8 series de 3 minutes
30
  2 séries de 30 secondes 35 minutes entre chaque série
31
  ...
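
(For context: the README above describes a Flair SequenceTagger built on camembert-base embeddings. A minimal sketch of how such a tagger is typically loaded and queried is shown below; the local model path and the "ner" label type are assumptions for illustration, not values taken from this commit.)

```python
# Sketch only: assumes flair is installed and the committed model file is available locally.
from flair.data import Sentence
from flair.models import SequenceTagger

# The path/identifier is an assumption; Flair models are usually loaded from a
# local checkpoint file or a hub identifier.
tagger = SequenceTagger.load("pytorch_model.bin")

# One of the example sentences listed in the README.
sentence = Sentence("19 séries de 3 minutes 21 minutes entre chaque série")
tagger.predict(sentence)

# Print the predicted spans (nb_rounds, duration_* tags).
for span in sentence.get_spans("ner"):
    print(span)
```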
dev.tsv CHANGED
The diff for this file is too large to render. See raw diff
loss.tsv CHANGED
@@ -1,7 +1,9 @@
1
  EPOCH TIMESTAMP BAD_EPOCHS LEARNING_RATE TRAIN_LOSS DEV_LOSS DEV_PRECISION DEV_RECALL DEV_F1 DEV_ACCURACY
2
- 1 12:37:54 0 0.0001 0.19262203057455343 0.003528536530211568 0.997 0.997 0.997 0.997
3
- 2 12:40:59 0 0.0001 0.12883353277153947 0.005357138346880674 0.998 0.998 0.998 0.998
4
- 3 12:44:14 1 0.0001 0.12444597448765583 0.003483039792627096 0.9975 0.9975 0.9975 0.9975
5
- 4 12:47:29 0 0.0001 0.10980423253004212 0.00215003895573318 0.9986 0.9986 0.9986 0.9986
6
- 5 12:50:45 1 0.0001 0.10795608051487815 0.003141788998618722 0.9986 0.9986 0.9986 0.9986
7
- 6 12:53:51 2 0.0001 0.1212372548750031 0.0021491716615855694 0.9985 0.9985 0.9985 0.9985
1
  EPOCH TIMESTAMP BAD_EPOCHS LEARNING_RATE TRAIN_LOSS DEV_LOSS DEV_PRECISION DEV_RECALL DEV_F1 DEV_ACCURACY
2
+ 1 01:16:14 0 0.0001 0.22980373936831228 0.0016565551050007343 0.9988 0.9988 0.9988 0.9988
3
+ 2 01:19:41 1 0.0001 0.11098705174088415 0.0011662252945825458 0.9987 0.9987 0.9987 0.9987
4
+ 3 01:23:10 2 0.0001 0.11012807391442034 0.0018373305210843682 0.9972 0.9983 0.9977 0.9962
5
+ 4 01:26:56 0 0.0001 0.10986683981523941 0.0014131164643913507 0.999 0.999 0.999 0.999
6
+ 5 01:30:32 0 0.0001 0.10877995152001577 0.0017454695189371705 0.9993 0.9993 0.9993 0.9993
7
+ 6 01:34:08 1 0.0001 0.10935465438798125 0.0012574659194797277 0.9991 0.9991 0.9991 0.9991
8
+ 7 01:37:46 0 0.0001 0.10931714524032 0.0008941686828620732 0.9994 0.9994 0.9994 0.9994
9
+ 8 01:41:22 1 0.0001 0.1083667058135449 0.0013162429677322507 0.9994 0.9994 0.9994 0.9994
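
(loss.tsv is the tab-separated per-epoch log written by Flair's trainer: one row per epoch with train loss, dev loss, and dev precision/recall/F1. A quick way to inspect it, assuming pandas is available, is sketched below.)

```python
# Sketch only: reads the tab-separated epoch log shown above.
import pandas as pd

log = pd.read_csv("loss.tsv", sep="\t")

# First epoch reaching the best dev F1 (e.g. 0.9994 at epoch 7 in the new run above).
best = log.loc[log["DEV_F1"].idxmax()]
print(int(best["EPOCH"]), float(best["TRAIN_LOSS"]), float(best["DEV_F1"]))
```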
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:cf038965ad0f2bfe495b97a38e652e287d4ac66ef698e6b6cdafccd28f3e4ab2
3
- size 452127401
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f6d3c3e0ce425c09806df1ac004273dfb66918e4df065538bf3df5c4569eb12
3
+ size 452139753
test.tsv CHANGED
The diff for this file is too large to render. See raw diff
training.log CHANGED
@@ -1,5 +1,5 @@
1
- 2021-11-13 12:34:53,886 ----------------------------------------------------------------------------------------------------
2
- 2021-11-13 12:34:53,887 Model: "SequenceTagger(
3
  (embeddings): TransformerWordEmbeddings(
4
  (model): CamembertModel(
5
  (embeddings): RobertaEmbeddings(
@@ -298,144 +298,176 @@
298
  (word_dropout): WordDropout(p=0.05)
299
  (locked_dropout): LockedDropout(p=0.5)
300
  (embedding2nn): Linear(in_features=1536, out_features=1536, bias=True)
301
- (linear): Linear(in_features=1536, out_features=14, bias=True)
302
  (beta): 1.0
303
  (weights): None
304
  (weight_tensor) None
305
  )"
306
- 2021-11-13 12:34:53,888 ----------------------------------------------------------------------------------------------------
307
- 2021-11-13 12:34:53,888 Corpus: "Corpus: 44550 train + 4950 dev + 5500 test sentences"
308
- 2021-11-13 12:34:53,888 ----------------------------------------------------------------------------------------------------
309
- 2021-11-13 12:34:53,888 Parameters:
310
- 2021-11-13 12:34:53,889 - learning_rate: "5e-05"
311
- 2021-11-13 12:34:53,890 - mini_batch_size: "32"
312
- 2021-11-13 12:34:53,890 - patience: "5"
313
- 2021-11-13 12:34:53,890 - anneal_factor: "0.5"
314
- 2021-11-13 12:34:53,890 - max_epochs: "6"
315
- 2021-11-13 12:34:53,891 - shuffle: "True"
316
- 2021-11-13 12:34:53,891 - train_with_dev: "False"
317
- 2021-11-13 12:34:53,891 - batch_growth_annealing: "False"
318
- 2021-11-13 12:34:53,892 ----------------------------------------------------------------------------------------------------
319
- 2021-11-13 12:34:53,892 Model training base path: "training/flair_ner/13112021_123428"
320
- 2021-11-13 12:34:53,893 ----------------------------------------------------------------------------------------------------
321
- 2021-11-13 12:34:53,893 Device: cuda
322
- 2021-11-13 12:34:53,893 ----------------------------------------------------------------------------------------------------
323
- 2021-11-13 12:34:53,893 Embeddings storage mode: cpu
324
- 2021-11-13 12:34:53,895 ----------------------------------------------------------------------------------------------------
325
- 2021-11-13 12:35:10,911 epoch 1 - iter 139/1393 - loss 0.65157897 - samples/sec: 261.55 - lr: 0.000050
326
- 2021-11-13 12:35:27,639 epoch 1 - iter 278/1393 - loss 0.40598169 - samples/sec: 266.05 - lr: 0.000050
327
- 2021-11-13 12:35:44,436 epoch 1 - iter 417/1393 - loss 0.32348962 - samples/sec: 264.97 - lr: 0.000050
328
- 2021-11-13 12:36:01,048 epoch 1 - iter 556/1393 - loss 0.28092289 - samples/sec: 267.91 - lr: 0.000050
329
- 2021-11-13 12:36:17,751 epoch 1 - iter 695/1393 - loss 0.25197607 - samples/sec: 266.44 - lr: 0.000050
330
- 2021-11-13 12:36:34,456 epoch 1 - iter 834/1393 - loss 0.23159584 - samples/sec: 266.42 - lr: 0.000050
331
- 2021-11-13 12:36:51,592 epoch 1 - iter 973/1393 - loss 0.21740625 - samples/sec: 259.73 - lr: 0.000050
332
- 2021-11-13 12:37:09,354 epoch 1 - iter 1112/1393 - loss 0.20610324 - samples/sec: 250.57 - lr: 0.000050
333
- 2021-11-13 12:37:26,125 epoch 1 - iter 1251/1393 - loss 0.19941834 - samples/sec: 265.37 - lr: 0.000050
334
- 2021-11-13 12:37:42,895 epoch 1 - iter 1390/1393 - loss 0.19272028 - samples/sec: 265.38 - lr: 0.000050
335
- 2021-11-13 12:37:43,211 ----------------------------------------------------------------------------------------------------
336
- 2021-11-13 12:37:43,212 EPOCH 1 done: loss 0.1926 - lr 0.0000500
337
- 2021-11-13 12:37:54,914 DEV : loss 0.003528536530211568 - f1-score (micro avg) 0.997
338
- 2021-11-13 12:37:54,981 BAD EPOCHS (no improvement): 0
339
- 2021-11-13 12:37:54,982 saving best model
340
- 2021-11-13 12:37:55,415 ----------------------------------------------------------------------------------------------------
341
- 2021-11-13 12:38:12,342 epoch 2 - iter 139/1393 - loss 0.13407926 - samples/sec: 263.01 - lr: 0.000050
342
- 2021-11-13 12:38:29,414 epoch 2 - iter 278/1393 - loss 0.12966840 - samples/sec: 260.72 - lr: 0.000050
343
- 2021-11-13 12:38:46,548 epoch 2 - iter 417/1393 - loss 0.12828205 - samples/sec: 259.77 - lr: 0.000050
344
- 2021-11-13 12:39:03,446 epoch 2 - iter 556/1393 - loss 0.12918177 - samples/sec: 263.39 - lr: 0.000050
345
- 2021-11-13 12:39:20,360 epoch 2 - iter 695/1393 - loss 0.12917633 - samples/sec: 263.15 - lr: 0.000050
346
- 2021-11-13 12:39:37,204 epoch 2 - iter 834/1393 - loss 0.12951091 - samples/sec: 264.25 - lr: 0.000050
347
- 2021-11-13 12:39:54,138 epoch 2 - iter 973/1393 - loss 0.12963772 - samples/sec: 262.83 - lr: 0.000050
348
- 2021-11-13 12:40:11,002 epoch 2 - iter 1112/1393 - loss 0.12888188 - samples/sec: 263.93 - lr: 0.000050
349
- 2021-11-13 12:40:28,038 epoch 2 - iter 1251/1393 - loss 0.12859288 - samples/sec: 261.26 - lr: 0.000050
350
- 2021-11-13 12:40:46,137 epoch 2 - iter 1390/1393 - loss 0.12889509 - samples/sec: 245.91 - lr: 0.000050
351
- 2021-11-13 12:40:46,447 ----------------------------------------------------------------------------------------------------
352
- 2021-11-13 12:40:46,448 EPOCH 2 done: loss 0.1288 - lr 0.0000500
353
- 2021-11-13 12:40:59,621 DEV : loss 0.005357138346880674 - f1-score (micro avg) 0.998
354
- 2021-11-13 12:40:59,689 BAD EPOCHS (no improvement): 0
355
- 2021-11-13 12:40:59,690 saving best model
356
- 2021-11-13 12:41:00,430 ----------------------------------------------------------------------------------------------------
357
- 2021-11-13 12:41:17,870 epoch 3 - iter 139/1393 - loss 0.12735532 - samples/sec: 255.24 - lr: 0.000050
358
- 2021-11-13 12:41:35,278 epoch 3 - iter 278/1393 - loss 0.12676129 - samples/sec: 255.68 - lr: 0.000050
359
- 2021-11-13 12:41:52,670 epoch 3 - iter 417/1393 - loss 0.12660022 - samples/sec: 255.92 - lr: 0.000050
360
- 2021-11-13 12:42:10,374 epoch 3 - iter 556/1393 - loss 0.12659470 - samples/sec: 251.41 - lr: 0.000050
361
- 2021-11-13 12:42:28,596 epoch 3 - iter 695/1393 - loss 0.12774528 - samples/sec: 244.27 - lr: 0.000050
362
- 2021-11-13 12:42:46,645 epoch 3 - iter 834/1393 - loss 0.12840789 - samples/sec: 246.62 - lr: 0.000050
363
- 2021-11-13 12:43:04,782 epoch 3 - iter 973/1393 - loss 0.12765397 - samples/sec: 245.42 - lr: 0.000050
364
- 2021-11-13 12:43:23,039 epoch 3 - iter 1112/1393 - loss 0.12750207 - samples/sec: 243.81 - lr: 0.000050
365
- 2021-11-13 12:43:41,198 epoch 3 - iter 1251/1393 - loss 0.12621200 - samples/sec: 245.11 - lr: 0.000050
366
- 2021-11-13 12:43:59,398 epoch 3 - iter 1390/1393 - loss 0.12451633 - samples/sec: 244.57 - lr: 0.000050
367
- 2021-11-13 12:43:59,730 ----------------------------------------------------------------------------------------------------
368
- 2021-11-13 12:43:59,731 EPOCH 3 done: loss 0.1244 - lr 0.0000500
369
- 2021-11-13 12:44:14,278 DEV : loss 0.003483039792627096 - f1-score (micro avg) 0.9975
370
- 2021-11-13 12:44:14,348 BAD EPOCHS (no improvement): 1
371
- 2021-11-13 12:44:14,349 ----------------------------------------------------------------------------------------------------
372
- 2021-11-13 12:44:32,167 epoch 4 - iter 139/1393 - loss 0.10730463 - samples/sec: 249.83 - lr: 0.000050
373
- 2021-11-13 12:44:50,210 epoch 4 - iter 278/1393 - loss 0.10724947 - samples/sec: 246.70 - lr: 0.000050
374
- 2021-11-13 12:45:08,355 epoch 4 - iter 417/1393 - loss 0.11158603 - samples/sec: 245.30 - lr: 0.000050
375
- 2021-11-13 12:45:26,536 epoch 4 - iter 556/1393 - loss 0.10925373 - samples/sec: 244.83 - lr: 0.000050
376
- 2021-11-13 12:45:44,591 epoch 4 - iter 695/1393 - loss 0.10981859 - samples/sec: 246.53 - lr: 0.000050
377
- 2021-11-13 12:46:02,655 epoch 4 - iter 834/1393 - loss 0.11008840 - samples/sec: 246.40 - lr: 0.000050
378
- 2021-11-13 12:46:20,847 epoch 4 - iter 973/1393 - loss 0.10974020 - samples/sec: 244.68 - lr: 0.000050
379
- 2021-11-13 12:46:38,633 epoch 4 - iter 1112/1393 - loss 0.10983833 - samples/sec: 250.26 - lr: 0.000050
380
- 2021-11-13 12:46:56,657 epoch 4 - iter 1251/1393 - loss 0.10999075 - samples/sec: 246.95 - lr: 0.000050
381
- 2021-11-13 12:47:14,726 epoch 4 - iter 1390/1393 - loss 0.10984034 - samples/sec: 246.34 - lr: 0.000050
382
- 2021-11-13 12:47:15,050 ----------------------------------------------------------------------------------------------------
383
- 2021-11-13 12:47:15,050 EPOCH 4 done: loss 0.1098 - lr 0.0000500
384
- 2021-11-13 12:47:29,567 DEV : loss 0.00215003895573318 - f1-score (micro avg) 0.9986
385
- 2021-11-13 12:47:29,636 BAD EPOCHS (no improvement): 0
386
- 2021-11-13 12:47:29,637 saving best model
387
- 2021-11-13 12:47:30,485 ----------------------------------------------------------------------------------------------------
388
- 2021-11-13 12:47:48,545 epoch 5 - iter 139/1393 - loss 0.10592105 - samples/sec: 246.49 - lr: 0.000050
389
- 2021-11-13 12:48:06,646 epoch 5 - iter 278/1393 - loss 0.10592110 - samples/sec: 245.90 - lr: 0.000050
390
- 2021-11-13 12:48:25,104 epoch 5 - iter 417/1393 - loss 0.10660698 - samples/sec: 241.15 - lr: 0.000050
391
- 2021-11-13 12:48:43,661 epoch 5 - iter 556/1393 - loss 0.10779533 - samples/sec: 239.86 - lr: 0.000050
392
- 2021-11-13 12:49:01,710 epoch 5 - iter 695/1393 - loss 0.10754604 - samples/sec: 246.61 - lr: 0.000050
393
- 2021-11-13 12:49:19,761 epoch 5 - iter 834/1393 - loss 0.10845855 - samples/sec: 246.59 - lr: 0.000050
394
- 2021-11-13 12:49:37,446 epoch 5 - iter 973/1393 - loss 0.10964545 - samples/sec: 251.68 - lr: 0.000050
395
- 2021-11-13 12:49:55,379 epoch 5 - iter 1112/1393 - loss 0.10853572 - samples/sec: 248.21 - lr: 0.000050
396
- 2021-11-13 12:50:13,189 epoch 5 - iter 1251/1393 - loss 0.10788337 - samples/sec: 249.90 - lr: 0.000050
397
- 2021-11-13 12:50:30,835 epoch 5 - iter 1390/1393 - loss 0.10790697 - samples/sec: 252.24 - lr: 0.000050
398
- 2021-11-13 12:50:31,155 ----------------------------------------------------------------------------------------------------
399
- 2021-11-13 12:50:31,156 EPOCH 5 done: loss 0.1080 - lr 0.0000500
400
- 2021-11-13 12:50:45,626 DEV : loss 0.003141788998618722 - f1-score (micro avg) 0.9986
401
- 2021-11-13 12:50:45,699 BAD EPOCHS (no improvement): 1
402
- 2021-11-13 12:50:45,700 ----------------------------------------------------------------------------------------------------
403
- 2021-11-13 12:51:03,155 epoch 6 - iter 139/1393 - loss 0.10518350 - samples/sec: 255.01 - lr: 0.000050
404
- 2021-11-13 12:51:20,343 epoch 6 - iter 278/1393 - loss 0.11992836 - samples/sec: 258.95 - lr: 0.000050
405
- 2021-11-13 12:51:37,487 epoch 6 - iter 417/1393 - loss 0.12769529 - samples/sec: 259.61 - lr: 0.000050
406
- 2021-11-13 12:51:54,590 epoch 6 - iter 556/1393 - loss 0.12227301 - samples/sec: 260.23 - lr: 0.000050
407
- 2021-11-13 12:52:11,624 epoch 6 - iter 695/1393 - loss 0.12038149 - samples/sec: 261.28 - lr: 0.000050
408
- 2021-11-13 12:52:28,786 epoch 6 - iter 834/1393 - loss 0.12603808 - samples/sec: 259.34 - lr: 0.000050
409
- 2021-11-13 12:52:45,889 epoch 6 - iter 973/1393 - loss 0.12365736 - samples/sec: 260.23 - lr: 0.000050
410
- 2021-11-13 12:53:03,061 epoch 6 - iter 1112/1393 - loss 0.12297510 - samples/sec: 259.19 - lr: 0.000050
411
- 2021-11-13 12:53:20,275 epoch 6 - iter 1251/1393 - loss 0.12165199 - samples/sec: 258.55 - lr: 0.000050
412
- 2021-11-13 12:53:37,882 epoch 6 - iter 1390/1393 - loss 0.12125429 - samples/sec: 252.79 - lr: 0.000050
413
- 2021-11-13 12:53:38,197 ----------------------------------------------------------------------------------------------------
414
- 2021-11-13 12:53:38,198 EPOCH 6 done: loss 0.1212 - lr 0.0000500
415
- 2021-11-13 12:53:51,681 DEV : loss 0.0021491716615855694 - f1-score (micro avg) 0.9985
416
- 2021-11-13 12:53:51,752 BAD EPOCHS (no improvement): 2
417
- 2021-11-13 12:53:52,199 ----------------------------------------------------------------------------------------------------
418
- 2021-11-13 12:53:52,200 loading file training/flair_ner/13112021_123428/best-model.pt
419
- 2021-11-13 12:54:09,067 0.998 0.998 0.998 0.998
420
- 2021-11-13 12:54:09,068
421
  Results:
422
- - F-score (micro) 0.998
423
- - F-score (macro) 0.9978
424
- - Accuracy 0.998
425
 
426
  By class:
427
  precision recall f1-score support
428
 
429
- nb_rounds 0.9980 0.9965 0.9972 5374
430
- duration_wt_sd 1.0000 1.0000 1.0000 4641
431
- duration_br_min 0.9921 0.9977 0.9949 2148
432
- duration_br_sd 1.0000 0.9976 0.9988 2068
433
- duration_wt_min 1.0000 1.0000 1.0000 764
434
- duration_br_hr 0.9947 0.9973 0.9960 373
435
 
436
- micro avg 0.9980 0.9980 0.9980 15368
437
- macro avg 0.9975 0.9982 0.9978 15368
438
- weighted avg 0.9981 0.9980 0.9980 15368
439
- samples avg 0.9980 0.9980 0.9980 15368
440
 
441
- 2021-11-13 12:54:09,068 ----------------------------------------------------------------------------------------------------
1
+ 2021-11-14 01:12:49,647 ----------------------------------------------------------------------------------------------------
2
+ 2021-11-14 01:12:49,648 Model: "SequenceTagger(
3
  (embeddings): TransformerWordEmbeddings(
4
  (model): CamembertModel(
5
  (embeddings): RobertaEmbeddings(
298
  (word_dropout): WordDropout(p=0.05)
299
  (locked_dropout): LockedDropout(p=0.5)
300
  (embedding2nn): Linear(in_features=1536, out_features=1536, bias=True)
301
+ (linear): Linear(in_features=1536, out_features=16, bias=True)
302
  (beta): 1.0
303
  (weights): None
304
  (weight_tensor) None
305
  )"
306
+ 2021-11-14 01:12:49,649 ----------------------------------------------------------------------------------------------------
307
+ 2021-11-14 01:12:49,649 Corpus: "Corpus: 56700 train + 6300 dev + 7000 test sentences"
308
+ 2021-11-14 01:12:49,650 ----------------------------------------------------------------------------------------------------
309
+ 2021-11-14 01:12:49,650 Parameters:
310
+ 2021-11-14 01:12:49,650 - learning_rate: "5e-05"
311
+ 2021-11-14 01:12:49,651 - mini_batch_size: "64"
312
+ 2021-11-14 01:12:49,651 - patience: "3"
313
+ 2021-11-14 01:12:49,652 - anneal_factor: "0.5"
314
+ 2021-11-14 01:12:49,652 - max_epochs: "8"
315
+ 2021-11-14 01:12:49,653 - shuffle: "True"
316
+ 2021-11-14 01:12:49,653 - train_with_dev: "False"
317
+ 2021-11-14 01:12:49,654 - batch_growth_annealing: "False"
318
+ 2021-11-14 01:12:49,654 ----------------------------------------------------------------------------------------------------
319
+ 2021-11-14 01:12:49,655 Model training base path: "training/flair_ner/14112021_011130"
320
+ 2021-11-14 01:12:49,656 ----------------------------------------------------------------------------------------------------
321
+ 2021-11-14 01:12:49,656 Device: cuda
322
+ 2021-11-14 01:12:49,657 ----------------------------------------------------------------------------------------------------
323
+ 2021-11-14 01:12:49,657 Embeddings storage mode: cpu
324
+ 2021-11-14 01:12:49,659 ----------------------------------------------------------------------------------------------------
325
+ 2021-11-14 01:13:08,832 epoch 1 - iter 88/886 - loss 0.98596606 - samples/sec: 293.89 - lr: 0.000050
326
+ 2021-11-14 01:13:28,224 epoch 1 - iter 176/886 - loss 0.56674940 - samples/sec: 290.55 - lr: 0.000050
327
+ 2021-11-14 01:13:46,801 epoch 1 - iter 264/886 - loss 0.42609266 - samples/sec: 303.28 - lr: 0.000050
328
+ 2021-11-14 01:14:05,497 epoch 1 - iter 352/886 - loss 0.35537700 - samples/sec: 301.36 - lr: 0.000050
329
+ 2021-11-14 01:14:24,349 epoch 1 - iter 440/886 - loss 0.31377922 - samples/sec: 298.86 - lr: 0.000050
330
+ 2021-11-14 01:14:43,031 epoch 1 - iter 528/886 - loss 0.28429453 - samples/sec: 301.58 - lr: 0.000050
331
+ 2021-11-14 01:15:02,142 epoch 1 - iter 616/886 - loss 0.27880202 - samples/sec: 294.85 - lr: 0.000050
332
+ 2021-11-14 01:15:20,814 epoch 1 - iter 704/886 - loss 0.26046120 - samples/sec: 301.80 - lr: 0.000050
333
+ 2021-11-14 01:15:39,918 epoch 1 - iter 792/886 - loss 0.24399388 - samples/sec: 295.02 - lr: 0.000050
334
+ 2021-11-14 01:15:58,554 epoch 1 - iter 880/886 - loss 0.23065481 - samples/sec: 302.35 - lr: 0.000050
335
+ 2021-11-14 01:15:59,827 ----------------------------------------------------------------------------------------------------
336
+ 2021-11-14 01:15:59,828 EPOCH 1 done: loss 0.2298 - lr 0.0000500
337
+ 2021-11-14 01:16:14,731 DEV : loss 0.0016565551050007343 - f1-score (micro avg) 0.9988
338
+ 2021-11-14 01:16:14,821 BAD EPOCHS (no improvement): 0
339
+ 2021-11-14 01:16:14,821 saving best model
340
+ 2021-11-14 01:16:15,220 ----------------------------------------------------------------------------------------------------
341
+ 2021-11-14 01:16:34,035 epoch 2 - iter 88/886 - loss 0.11443562 - samples/sec: 299.51 - lr: 0.000050
342
+ 2021-11-14 01:16:52,711 epoch 2 - iter 176/886 - loss 0.11391112 - samples/sec: 301.72 - lr: 0.000050
343
+ 2021-11-14 01:17:11,410 epoch 2 - iter 264/886 - loss 0.11275449 - samples/sec: 301.34 - lr: 0.000050
344
+ 2021-11-14 01:17:30,059 epoch 2 - iter 352/886 - loss 0.11148830 - samples/sec: 302.14 - lr: 0.000050
345
+ 2021-11-14 01:17:48,869 epoch 2 - iter 440/886 - loss 0.11192871 - samples/sec: 299.56 - lr: 0.000050
346
+ 2021-11-14 01:18:07,635 epoch 2 - iter 528/886 - loss 0.11243003 - samples/sec: 300.27 - lr: 0.000050
347
+ 2021-11-14 01:18:27,756 epoch 2 - iter 616/886 - loss 0.11202302 - samples/sec: 280.03 - lr: 0.000050
348
+ 2021-11-14 01:18:46,477 epoch 2 - iter 704/886 - loss 0.11150461 - samples/sec: 301.00 - lr: 0.000050
349
+ 2021-11-14 01:19:05,152 epoch 2 - iter 792/886 - loss 0.11090826 - samples/sec: 301.81 - lr: 0.000050
350
+ 2021-11-14 01:19:23,958 epoch 2 - iter 880/886 - loss 0.11109339 - samples/sec: 299.71 - lr: 0.000050
351
+ 2021-11-14 01:19:25,234 ----------------------------------------------------------------------------------------------------
352
+ 2021-11-14 01:19:25,234 EPOCH 2 done: loss 0.1110 - lr 0.0000500
353
+ 2021-11-14 01:19:41,637 DEV : loss 0.0011662252945825458 - f1-score (micro avg) 0.9987
354
+ 2021-11-14 01:19:41,739 BAD EPOCHS (no improvement): 1
355
+ 2021-11-14 01:19:41,742 ----------------------------------------------------------------------------------------------------
356
+ 2021-11-14 01:20:00,648 epoch 3 - iter 88/886 - loss 0.11136958 - samples/sec: 298.07 - lr: 0.000050
357
+ 2021-11-14 01:20:19,564 epoch 3 - iter 176/886 - loss 0.11280468 - samples/sec: 297.97 - lr: 0.000050
358
+ 2021-11-14 01:20:38,568 epoch 3 - iter 264/886 - loss 0.11045104 - samples/sec: 296.60 - lr: 0.000050
359
+ 2021-11-14 01:20:57,435 epoch 3 - iter 352/886 - loss 0.10911278 - samples/sec: 298.75 - lr: 0.000050
360
+ 2021-11-14 01:21:16,245 epoch 3 - iter 440/886 - loss 0.10930290 - samples/sec: 299.56 - lr: 0.000050
361
+ 2021-11-14 01:21:35,246 epoch 3 - iter 528/886 - loss 0.10928782 - samples/sec: 296.54 - lr: 0.000050
362
+ 2021-11-14 01:21:54,644 epoch 3 - iter 616/886 - loss 0.10980571 - samples/sec: 290.50 - lr: 0.000050
363
+ 2021-11-14 01:22:13,526 epoch 3 - iter 704/886 - loss 0.10986299 - samples/sec: 298.42 - lr: 0.000050
364
+ 2021-11-14 01:22:32,408 epoch 3 - iter 792/886 - loss 0.11021279 - samples/sec: 298.42 - lr: 0.000050
365
+ 2021-11-14 01:22:51,317 epoch 3 - iter 880/886 - loss 0.11010333 - samples/sec: 297.99 - lr: 0.000050
366
+ 2021-11-14 01:22:52,607 ----------------------------------------------------------------------------------------------------
367
+ 2021-11-14 01:22:52,608 EPOCH 3 done: loss 0.1101 - lr 0.0000500
368
+ 2021-11-14 01:23:10,750 DEV : loss 0.0018373305210843682 - f1-score (micro avg) 0.9977
369
+ 2021-11-14 01:23:10,838 BAD EPOCHS (no improvement): 2
370
+ 2021-11-14 01:23:10,839 ----------------------------------------------------------------------------------------------------
371
+ 2021-11-14 01:23:30,566 epoch 4 - iter 88/886 - loss 0.10992709 - samples/sec: 285.68 - lr: 0.000050
372
+ 2021-11-14 01:23:50,362 epoch 4 - iter 176/886 - loss 0.10809355 - samples/sec: 284.67 - lr: 0.000050
373
+ 2021-11-14 01:24:10,080 epoch 4 - iter 264/886 - loss 0.10844173 - samples/sec: 285.87 - lr: 0.000050
374
+ 2021-11-14 01:24:30,946 epoch 4 - iter 352/886 - loss 0.10836201 - samples/sec: 270.06 - lr: 0.000050
375
+ 2021-11-14 01:24:51,474 epoch 4 - iter 440/886 - loss 0.10794139 - samples/sec: 274.51 - lr: 0.000050
376
+ 2021-11-14 01:25:12,388 epoch 4 - iter 528/886 - loss 0.10878776 - samples/sec: 269.43 - lr: 0.000050
377
+ 2021-11-14 01:25:33,189 epoch 4 - iter 616/886 - loss 0.10894668 - samples/sec: 270.92 - lr: 0.000050
378
+ 2021-11-14 01:25:54,237 epoch 4 - iter 704/886 - loss 0.10934898 - samples/sec: 267.79 - lr: 0.000050
379
+ 2021-11-14 01:26:15,172 epoch 4 - iter 792/886 - loss 0.10987029 - samples/sec: 269.18 - lr: 0.000050
380
+ 2021-11-14 01:26:35,568 epoch 4 - iter 880/886 - loss 0.10994285 - samples/sec: 276.35 - lr: 0.000050
381
+ 2021-11-14 01:26:36,958 ----------------------------------------------------------------------------------------------------
382
+ 2021-11-14 01:26:36,959 EPOCH 4 done: loss 0.1099 - lr 0.0000500
383
+ 2021-11-14 01:26:56,814 DEV : loss 0.0014131164643913507 - f1-score (micro avg) 0.999
384
+ 2021-11-14 01:26:56,904 BAD EPOCHS (no improvement): 0
385
+ 2021-11-14 01:26:56,907 saving best model
386
+ 2021-11-14 01:26:57,746 ----------------------------------------------------------------------------------------------------
387
+ 2021-11-14 01:27:17,983 epoch 5 - iter 88/886 - loss 0.10864585 - samples/sec: 278.47 - lr: 0.000050
388
+ 2021-11-14 01:27:37,584 epoch 5 - iter 176/886 - loss 0.10902201 - samples/sec: 287.48 - lr: 0.000050
389
+ 2021-11-14 01:27:57,285 epoch 5 - iter 264/886 - loss 0.10824347 - samples/sec: 286.02 - lr: 0.000050
390
+ 2021-11-14 01:28:16,752 epoch 5 - iter 352/886 - loss 0.10819784 - samples/sec: 289.50 - lr: 0.000050
391
+ 2021-11-14 01:28:35,991 epoch 5 - iter 440/886 - loss 0.10806523 - samples/sec: 292.89 - lr: 0.000050
392
+ 2021-11-14 01:28:55,004 epoch 5 - iter 528/886 - loss 0.10874710 - samples/sec: 296.35 - lr: 0.000050
393
+ 2021-11-14 01:29:14,287 epoch 5 - iter 616/886 - loss 0.10819233 - samples/sec: 292.22 - lr: 0.000050
394
+ 2021-11-14 01:29:33,882 epoch 5 - iter 704/886 - loss 0.10856081 - samples/sec: 287.57 - lr: 0.000050
395
+ 2021-11-14 01:29:53,701 epoch 5 - iter 792/886 - loss 0.10878005 - samples/sec: 284.31 - lr: 0.000050
396
+ 2021-11-14 01:30:13,249 epoch 5 - iter 880/886 - loss 0.10877142 - samples/sec: 288.26 - lr: 0.000050
397
+ 2021-11-14 01:30:14,542 ----------------------------------------------------------------------------------------------------
398
+ 2021-11-14 01:30:14,543 EPOCH 5 done: loss 0.1088 - lr 0.0000500
399
+ 2021-11-14 01:30:32,668 DEV : loss 0.0017454695189371705 - f1-score (micro avg) 0.9993
400
+ 2021-11-14 01:30:32,754 BAD EPOCHS (no improvement): 0
401
+ 2021-11-14 01:30:32,757 saving best model
402
+ 2021-11-14 01:30:33,509 ----------------------------------------------------------------------------------------------------
403
+ 2021-11-14 01:30:52,836 epoch 6 - iter 88/886 - loss 0.10524382 - samples/sec: 291.60 - lr: 0.000050
404
+ 2021-11-14 01:31:12,126 epoch 6 - iter 176/886 - loss 0.10690102 - samples/sec: 292.11 - lr: 0.000050
405
+ 2021-11-14 01:31:31,803 epoch 6 - iter 264/886 - loss 0.10714116 - samples/sec: 286.38 - lr: 0.000050
406
+ 2021-11-14 01:31:51,724 epoch 6 - iter 352/886 - loss 0.10771656 - samples/sec: 282.86 - lr: 0.000050
407
+ 2021-11-14 01:32:11,047 epoch 6 - iter 440/886 - loss 0.10879216 - samples/sec: 291.61 - lr: 0.000050
408
+ 2021-11-14 01:32:30,353 epoch 6 - iter 528/886 - loss 0.10867079 - samples/sec: 291.88 - lr: 0.000050
409
+ 2021-11-14 01:32:49,795 epoch 6 - iter 616/886 - loss 0.10904316 - samples/sec: 289.82 - lr: 0.000050
410
+ 2021-11-14 01:33:09,113 epoch 6 - iter 704/886 - loss 0.10898605 - samples/sec: 291.70 - lr: 0.000050
411
+ 2021-11-14 01:33:28,312 epoch 6 - iter 792/886 - loss 0.10895071 - samples/sec: 293.49 - lr: 0.000050
412
+ 2021-11-14 01:33:48,207 epoch 6 - iter 880/886 - loss 0.10936169 - samples/sec: 283.23 - lr: 0.000050
413
+ 2021-11-14 01:33:49,618 ----------------------------------------------------------------------------------------------------
414
+ 2021-11-14 01:33:49,619 EPOCH 6 done: loss 0.1094 - lr 0.0000500
415
+ 2021-11-14 01:34:08,307 DEV : loss 0.0012574659194797277 - f1-score (micro avg) 0.9991
416
+ 2021-11-14 01:34:08,393 BAD EPOCHS (no improvement): 1
417
+ 2021-11-14 01:34:08,396 ----------------------------------------------------------------------------------------------------
418
+ 2021-11-14 01:34:28,456 epoch 7 - iter 88/886 - loss 0.10772567 - samples/sec: 280.95 - lr: 0.000050
419
+ 2021-11-14 01:34:48,077 epoch 7 - iter 176/886 - loss 0.10831423 - samples/sec: 287.18 - lr: 0.000050
420
+ 2021-11-14 01:35:07,762 epoch 7 - iter 264/886 - loss 0.10889045 - samples/sec: 286.25 - lr: 0.000050
421
+ 2021-11-14 01:35:27,543 epoch 7 - iter 352/886 - loss 0.10923627 - samples/sec: 284.87 - lr: 0.000050
422
+ 2021-11-14 01:35:47,152 epoch 7 - iter 440/886 - loss 0.10891691 - samples/sec: 287.36 - lr: 0.000050
423
+ 2021-11-14 01:36:06,760 epoch 7 - iter 528/886 - loss 0.10886164 - samples/sec: 287.38 - lr: 0.000050
424
+ 2021-11-14 01:36:26,264 epoch 7 - iter 616/886 - loss 0.10925453 - samples/sec: 288.92 - lr: 0.000050
425
+ 2021-11-14 01:36:45,846 epoch 7 - iter 704/886 - loss 0.10944528 - samples/sec: 287.78 - lr: 0.000050
426
+ 2021-11-14 01:37:05,161 epoch 7 - iter 792/886 - loss 0.10963480 - samples/sec: 291.83 - lr: 0.000050
427
+ 2021-11-14 01:37:25,344 epoch 7 - iter 880/886 - loss 0.10941620 - samples/sec: 279.19 - lr: 0.000050
428
+ 2021-11-14 01:37:26,675 ----------------------------------------------------------------------------------------------------
429
+ 2021-11-14 01:37:26,676 EPOCH 7 done: loss 0.1093 - lr 0.0000500
430
+ 2021-11-14 01:37:46,332 DEV : loss 0.0008941686828620732 - f1-score (micro avg) 0.9994
431
+ 2021-11-14 01:37:46,425 BAD EPOCHS (no improvement): 0
432
+ 2021-11-14 01:37:46,428 saving best model
433
+ 2021-11-14 01:37:47,268 ----------------------------------------------------------------------------------------------------
434
+ 2021-11-14 01:38:06,968 epoch 8 - iter 88/886 - loss 0.10842313 - samples/sec: 286.09 - lr: 0.000050
435
+ 2021-11-14 01:38:26,508 epoch 8 - iter 176/886 - loss 0.10686590 - samples/sec: 288.47 - lr: 0.000050
436
+ 2021-11-14 01:38:45,880 epoch 8 - iter 264/886 - loss 0.10866318 - samples/sec: 290.87 - lr: 0.000050
437
+ 2021-11-14 01:39:05,447 epoch 8 - iter 352/886 - loss 0.10886654 - samples/sec: 287.98 - lr: 0.000050
438
+ 2021-11-14 01:39:25,039 epoch 8 - iter 440/886 - loss 0.10893653 - samples/sec: 287.62 - lr: 0.000050
439
+ 2021-11-14 01:39:44,508 epoch 8 - iter 528/886 - loss 0.10845487 - samples/sec: 289.43 - lr: 0.000050
440
+ 2021-11-14 01:40:04,009 epoch 8 - iter 616/886 - loss 0.10849658 - samples/sec: 288.96 - lr: 0.000050
441
+ 2021-11-14 01:40:23,270 epoch 8 - iter 704/886 - loss 0.10852857 - samples/sec: 292.55 - lr: 0.000050
442
+ 2021-11-14 01:40:42,423 epoch 8 - iter 792/886 - loss 0.10825218 - samples/sec: 294.21 - lr: 0.000050
443
+ 2021-11-14 01:41:01,605 epoch 8 - iter 880/886 - loss 0.10839605 - samples/sec: 293.76 - lr: 0.000050
444
+ 2021-11-14 01:41:02,928 ----------------------------------------------------------------------------------------------------
445
+ 2021-11-14 01:41:02,929 EPOCH 8 done: loss 0.1084 - lr 0.0000500
446
+ 2021-11-14 01:41:22,401 DEV : loss 0.0013162429677322507 - f1-score (micro avg) 0.9994
447
+ 2021-11-14 01:41:22,539 BAD EPOCHS (no improvement): 1
448
+ 2021-11-14 01:41:23,014 ----------------------------------------------------------------------------------------------------
449
+ 2021-11-14 01:41:23,015 loading file training/flair_ner/14112021_011130/best-model.pt
450
+ 2021-11-14 01:41:42,464 0.9996 0.9996 0.9996 0.9996
451
+ 2021-11-14 01:41:42,465
452
  Results:
453
+ - F-score (micro) 0.9996
454
+ - F-score (macro) 0.9994
455
+ - Accuracy 0.9996
456
 
457
  By class:
458
  precision recall f1-score support
459
 
460
+ nb_rounds 1.0000 0.9988 0.9994 6894
461
+ duration_wt_sd 1.0000 1.0000 1.0000 3288
462
+ duration_br_min 0.9982 1.0000 0.9991 3251
463
+ duration_wt_min 1.0000 1.0000 1.0000 2677
464
+ duration_br_sd 0.9995 1.0000 0.9998 2080
465
+ duration_wt_hr 1.0000 1.0000 1.0000 1050
466
+ duration_br_hr 0.9957 1.0000 0.9978 230
467
 
468
+ micro avg 0.9996 0.9996 0.9996 19470
469
+ macro avg 0.9990 0.9998 0.9994 19470
470
+ weighted avg 0.9996 0.9996 0.9996 19470
471
+ samples avg 0.9996 0.9996 0.9996 19470
472
 
473
+ 2021-11-14 01:41:42,466 ----------------------------------------------------------------------------------------------------
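
(For reference, a training setup consistent with the parameters logged above might look like the following sketch. It uses the flair 0.9-era API; the corpus folder, column layout, train file name, and hidden size are assumptions for illustration and are not read from this repository.)

```python
# Sketch only: reconstructs a plausible Flair training call from the logged parameters.
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Assumed CoNLL-style column layout; file names assumed to line up with the
# dev.tsv / test.tsv committed here.
columns = {0: "text", 1: "ner"}
corpus = ColumnCorpus("data/", columns,
                      train_file="train.tsv", dev_file="dev.tsv", test_file="test.tsv")

embeddings = TransformerWordEmbeddings("camembert-base")   # embedding named in the README/log
tag_dictionary = corpus.make_tag_dictionary(tag_type="ner")

tagger = SequenceTagger(hidden_size=256,                   # assumed, not in the log
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type="ner")

trainer = ModelTrainer(tagger, corpus)
trainer.train("training/flair_ner/14112021_011130",        # base path from the log
              learning_rate=5e-05,
              mini_batch_size=64,
              max_epochs=8,
              patience=3,
              anneal_factor=0.5,
              embeddings_storage_mode="cpu")
```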