camembert-ner-finetuned-jul
This model is a fine-tuned version of Jean-Baptiste/camembert-ner on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1177
- Loc: precision 0.7309, recall 0.7546, F1 0.7426 (support = 216)
- Misc: precision 0.5862, recall 0.4250, F1 0.4928 (support = 40)
- Org: precision 0.8333, recall 0.8250, F1 0.8291 (support = 200)
- Per: precision 0.7824, recall 0.7704, F1 0.7763 (support = 196)
- Overall Precision: 0.7714
- Overall Recall: 0.7607
- Overall F1: 0.7660
- Overall Accuracy: 0.9812
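A minimal usage sketch, assuming the model has been pushed to the Hugging Face Hub (the repo id `your-username/camembert-ner-finetuned-jul` is a placeholder, not the actual path):

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub path of this model.
ner = pipeline(
    "token-classification",
    model="your-username/camembert-ner-finetuned-jul",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

print(ner("Emmanuel Macron a visité le siège de Renault à Boulogne-Billancourt."))
# Returns a list of dicts with entity_group (PER/ORG/LOC/MISC),
# score, word, and start/end character offsets.
```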
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
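These settings match the `transformers` `Trainer` defaults with the overrides listed above. A hedged reconstruction of the training call follows; the dataset, tokenizer, and preprocessing are not documented in this card, so the objects they produce are assumed:

```python
from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

# Assumes train_dataset, eval_dataset, tokenizer, data_collator, and
# compute_metrics are already prepared -- none of them are documented
# in this card, so this is only an illustrative sketch.
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner")

args = TrainingArguments(
    output_dir="camembert-ner-finetuned-jul",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # matches the per-epoch results below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)
trainer.train()
```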
Training results
| Training Loss | Epoch | Step | Validation Loss | Loc (P / R / F1, n=206) | Misc (P / R / F1, n=37) | Org (P / R / F1, n=182) | Per (P / R / F1, n=210) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 408 | 0.0602 | 0.6895 / 0.7330 / 0.7106 | 0.8462 / 0.2973 / 0.4400 | 0.7473 / 0.7473 / 0.7473 | 0.8196 / 0.7571 / 0.7871 | 0.7516 | 0.7197 | 0.7353 | 0.9830 |
| 0.0903 | 2.0 | 816 | 0.0568 | 0.7763 / 0.8252 / 0.8000 | 0.5217 / 0.3243 / 0.4000 | 0.7732 / 0.8242 / 0.7979 | 0.8224 / 0.8381 / 0.8302 | 0.7815 | 0.8000 | 0.7907 | 0.9845 |
| 0.0357 | 3.0 | 1224 | 0.0631 | 0.7339 / 0.8301 / 0.7790 | 0.6364 / 0.3784 / 0.4746 | 0.7970 / 0.8626 / 0.8285 | 0.8318 / 0.8476 / 0.8396 | 0.7808 | 0.8189 | 0.7994 | 0.9851 |
| 0.0201 | 4.0 | 1632 | 0.0761 | 0.7721 / 0.8058 / 0.7886 | 0.6000 / 0.3243 / 0.4211 | 0.8263 / 0.8626 / 0.8441 | 0.8294 / 0.8333 / 0.8314 | 0.8019 | 0.8031 | 0.8025 | 0.9846 |
| 0.0113 | 5.0 | 2040 | 0.0745 | 0.7478 / 0.8204 / 0.7824 | 0.4667 / 0.3784 / 0.4179 | 0.8229 / 0.8681 / 0.8449 | 0.8389 / 0.8429 / 0.8409 | 0.7860 | 0.8157 | 0.8006 | 0.9849 |
| 0.0113 | 6.0 | 2448 | 0.0815 | 0.7655 / 0.8398 / 0.8009 | 0.4333 / 0.3514 / 0.3881 | 0.8254 / 0.8571 / 0.8410 | 0.8673 / 0.8095 / 0.8374 | 0.7988 | 0.8063 | 0.8025 | 0.9844 |
| 0.0085 | 7.0 | 2856 | 0.0850 | 0.7580 / 0.8058 / 0.7812 | 0.5417 / 0.3514 / 0.4262 | 0.8281 / 0.8736 / 0.8503 | 0.8806 / 0.8429 / 0.8613 | 0.8097 | 0.8110 | 0.8104 | 0.9853 |
| 0.0045 | 8.0 | 3264 | 0.0846 | 0.7321 / 0.7961 / 0.7628 | 0.4643 / 0.3514 / 0.4000 | 0.8172 / 0.8352 / 0.8261 | 0.8756 / 0.8381 / 0.8564 | 0.7903 | 0.7953 | 0.7928 | 0.9847 |
| 0.0044 | 9.0 | 3672 | 0.0845 | 0.7615 / 0.8058 / 0.7830 | 0.4815 / 0.3514 / 0.4063 | 0.8298 / 0.8571 / 0.8432 | 0.8812 / 0.8476 / 0.8641 | 0.8079 | 0.8079 | 0.8079 | 0.9854 |
| 0.0031 | 10.0 | 4080 | 0.0855 | 0.7569 / 0.8010 / 0.7783 | 0.4483 / 0.3514 / 0.3939 | 0.8298 / 0.8571 / 0.8432 | 0.8856 / 0.8476 / 0.8662 | 0.8050 | 0.8063 | 0.8057 | 0.9852 |
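The per-entity metrics above follow the output format of the seqeval metric. A sketch of a `compute_metrics` function that would produce the overall numbers, assuming the `evaluate` library and a label list inferred from the reported entity types (neither is confirmed by this card):

```python
import numpy as np
import evaluate

metric = evaluate.load("seqeval")

# Assumed label list, inferred from the entity types reported above.
label_list = ["O", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
              "B-ORG", "I-ORG", "B-PER", "I-PER"]

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens, which the tokenizer labels with -100.
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```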
Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3