
google-bert-large-cased-finetuned-ner-vlsp2021-3090-15June-1

This model is a fine-tuned version of google-bert/bert-large-cased for Vietnamese named-entity recognition, apparently trained on the VLSP 2021 NER dataset (per the model name; the dataset is not otherwise documented in this card). It achieves the following results on the evaluation set:

  • Loss: 0.1555
  • Datetime: precision 0.7904, recall 0.8643, F1 0.8257 (995 entities)
  • Address: precision 0.8077, recall 0.7241, F1 0.7636 (29 entities)
  • Person: precision 0.8804, recall 0.8739, F1 0.8771 (1887 entities)
  • Persontype: precision 0.5577, recall 0.5967, F1 0.5766 (672 entities)
  • Phonenumber: precision 0.3750, recall 0.6667, F1 0.4800 (9 entities)
  • Miscellaneous: precision 0.4173, recall 0.3648, F1 0.3893 (159 entities)
  • Email: precision 0.9846, recall 1.0000, F1 0.9922 (64 entities)
  • Location: precision 0.6956, recall 0.8237, F1 0.7542 (1293 entities)
  • Ip: precision 0.6364, recall 0.6364, F1 0.6364 (11 entities)
  • Url: precision 0.8750, recall 0.9333, F1 0.9032 (15 entities)
  • Product: precision 0.4964, recall 0.5523, F1 0.5229 (621 entities)
  • Overall Precision: 0.7268
  • Overall Recall: 0.7798
  • Overall F1: 0.7524
  • Overall Accuracy: 0.9680
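As a quick sanity check, the overall F1 above is the harmonic mean of the overall precision and recall (a minimal sketch in plain Python):

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Using the overall figures reported above:
overall_f1 = f1_score(0.7268, 0.7798)
print(round(overall_f1, 4))  # 0.7524
```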

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
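The step counts in the training-results table below are consistent with these hyperparameters. A small sketch, assuming single-device training with no gradient accumulation (neither is stated in the card):

```python
# Relate the logged optimizer steps to the hyperparameters above.
train_batch_size = 4
steps_per_epoch = 3263   # from the training-results log
num_epochs = 5

total_steps = steps_per_epoch * num_epochs
print(total_steps)       # 16315, matching the final logged step

# Implied training-set size (an estimate, not stated in the card):
max_examples = steps_per_epoch * train_batch_size          # 13052
min_examples = (steps_per_epoch - 1) * train_batch_size + 1
print(min_examples, max_examples)                          # 13049 13052
```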

Training results

| Training Loss | Epoch | Step  | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---------------|-------|-------|-----------------|-------------------|----------------|------------|------------------|
| 0.1537        | 1.0   | 3263  | 0.1310          | 0.6520            | 0.7301         | 0.6889     | 0.9625           |
| 0.0882        | 2.0   | 6526  | 0.1432          | 0.7060            | 0.7291         | 0.7174     | 0.9651           |
| 0.0618        | 3.0   | 9789  | 0.1293          | 0.7169            | 0.7493         | 0.7327     | 0.9658           |
| 0.0413        | 4.0   | 13052 | 0.1444          | 0.7179            | 0.7736         | 0.7447     | 0.9668           |
| 0.0245        | 5.0   | 16315 | 0.1555          | 0.7268            | 0.7798         | 0.7524     | 0.9680           |

Per-entity F1 by epoch (per-entity precision and recall for the final epoch are listed under the evaluation results above):

| Entity        | Epoch 1 | Epoch 2 | Epoch 3 | Epoch 4 | Epoch 5 |
|---------------|---------|---------|---------|---------|---------|
| Datetime      | 0.7676  | 0.7868  | 0.7952  | 0.8235  | 0.8257  |
| Address       | 0.4789  | 0.5217  | 0.4348  | 0.6316  | 0.7636  |
| Person        | 0.8393  | 0.8502  | 0.8626  | 0.8742  | 0.8771  |
| Persontype    | 0.4624  | 0.5430  | 0.5209  | 0.5568  | 0.5766  |
| Phonenumber   | 0.1538  | 0.5000  | 0.5000  | 0.3871  | 0.4800  |
| Miscellaneous | 0.2839  | 0.2968  | 0.3560  | 0.3776  | 0.3893  |
| Email         | 0.9697  | 0.9922  | 0.9922  | 0.9697  | 0.9922  |
| Location      | 0.6827  | 0.6951  | 0.7529  | 0.7469  | 0.7542  |
| Ip            | 0.5263  | 0.7368  | 0.6000  | 0.6364  | 0.6364  |
| Url           | 0.6667  | 0.9032  | 0.9032  | 0.8750  | 0.9032  |
| Product       | 0.4604  | 0.4873  | 0.5136  | 0.5162  | 0.5229  |
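The per-entity metrics above are entity-level scores: predictions count as correct only when the entity type and span both match exactly, in the style of seqeval. A minimal sketch of this evaluation on hypothetical BIO tags (toy data, not from the actual dataset):

```python
# Extract (type, start, end) entity spans from a BIO tag sequence.
# Simplified sketch: assumes well-formed tags (every I- continues a span).
def extract_spans(tags):
    spans, start = set(), None
    for i, tag in enumerate(tags + ["O"]):  # sentinel to close a trailing span
        if start is not None and not tag.startswith("I-"):
            spans.add((tags[start][2:], start, i))
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

gold = ["B-PERSON", "I-PERSON", "O", "B-LOCATION"]
pred = ["B-PERSON", "I-PERSON", "O", "B-DATETIME"]

g, p = extract_spans(gold), extract_spans(pred)
tp = len(g & p)                     # exact span-and-type matches
precision = tp / len(p)
recall = tp / len(g)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)        # 0.5 0.5 0.5
```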

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.3.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
Model size
333M params (F32, stored as Safetensors)