***** Test results ***** Thu Sep 22 08:47:08 2022

Task: ner
Model path: bert-base-uncased
Data path: ./data/ud/
Tokenizer: bert-base-uncased
Batch size: 32
Epoch: 8
Learning rate: 2e-05
LR Decay End Factor: 0.3
LR Decay End Epoch: 5
Sequence length: 96
Training: True
Num Threads: 24
Num Sentences: 0
Max Grad Norm: 0.0
Use GNN: False
Syntax graph style: dep
Use label weights: False
Clip value: 50

              precision    recall  f1-score   support

    CARDINAL     0.7269    0.6307    0.6754       612
        DATE     0.6856    0.7053    0.6953      1045
       EVENT     0.4286    0.4500    0.4390        80
         FAC     0.3454    0.4437    0.3884       151
         GPE     0.8709    0.8574    0.8641      1936
    LANGUAGE     0.5758    0.2468    0.3455        77
         LAW     0.4314    0.3860    0.4074        57
         LOC     0.5829    0.4700    0.5204       217
       MONEY     0.5085    0.4918    0.5000        61
        NORP     0.7023    0.7156    0.7089       422
     ORDINAL     0.8258    0.8596    0.8424       171
         ORG     0.5300    0.5776    0.5528       857
     PERCENT     0.4255    0.5556    0.4819        36
      PERSON     0.7398    0.7841    0.7613      1371
     PRODUCT     0.2975    0.3673    0.3288        98
    QUANTITY     0.3284    0.4151    0.3667        53
        SEP]     0.0000    0.0000    0.0000         0
        TIME     0.5586    0.6682    0.6085       214
 WORK_OF_ART     0.3010    0.2385    0.2661       130

   micro avg     0.6607    0.7024    0.6809      7588
   macro avg     0.5192    0.5191    0.5133      7588
weighted avg     0.6968    0.7024    0.6977      7588

Special token predictions: 0
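
Note: the settings "Learning rate: 2e-05", "LR Decay End Factor: 0.3", and "LR Decay End Epoch: 5" imply a linear decay from 2e-05 down to 0.3 * 2e-05 over the first 5 epochs, then a constant LR for the remaining 3 of the 8 epochs. A minimal sketch of that schedule using PyTorch's LinearLR; the AdamW optimizer and the stand-in parameter are assumptions, not the run's actual training code:

    import torch

    # Stand-in parameter; the real run would pass the BERT model's parameters.
    params = [torch.nn.Parameter(torch.zeros(1))]
    optimizer = torch.optim.AdamW(params, lr=2e-05)  # "Learning rate: 2e-05"

    # Linear decay from 1.0x to 0.3x of the base LR over 5 scheduler steps,
    # matching "LR Decay End Factor: 0.3" / "LR Decay End Epoch: 5".
    scheduler = torch.optim.lr_scheduler.LinearLR(
        optimizer, start_factor=1.0, end_factor=0.3, total_iters=5)

    for epoch in range(8):  # "Epoch: 8"
        # ... one training pass over the data would go here ...
        scheduler.step()
        print(epoch, scheduler.get_last_lr())  # 2e-05 decaying to 6e-06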
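The per-entity table and the micro/macro/weighted average rows follow the layout of seqeval's classification_report; that this log was produced with seqeval is an assumption. A minimal sketch with toy BIO-tagged sequences (illustrative only):

    from seqeval.metrics import classification_report

    # Toy gold/predicted tag sequences; real input is one tag list per sentence.
    y_true = [["B-PERSON", "I-PERSON", "O", "B-GPE", "O"]]
    y_pred = [["B-PERSON", "I-PERSON", "O", "B-DATE", "O"]]

    # digits=4 reproduces the four-decimal columns above. Scoring is done on
    # entity spans, so "support" counts gold mentions, not tokens.
    print(classification_report(y_true, y_pred, digits=4))

Under this span-level scoring, the per-class supports sum to the 7588 shown in the average rows, i.e. the test set contains 7588 gold entity mentions.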