legal-NER
This model is a fine-tuned version of nlpaueb/legal-bert-base-uncased for named-entity recognition. The fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.0922
- Accuracy: 0.9840
- Precision: 0.9221
- Recall: 0.9260
- F1: 0.9240
Classification report on the evaluation set:

| | precision | recall | f1-score | support |
|---|---|---|---|---|
| LOC | 0.94 | 0.96 | 0.95 | 1837 |
| MISC | 0.88 | 0.87 | 0.87 | 922 |
| ORG | 0.88 | 0.87 | 0.88 | 1341 |
| PER | 0.96 | 0.96 | 0.96 | 1842 |
| micro avg | 0.92 | 0.93 | 0.92 | 5942 |
| macro avg | 0.91 | 0.91 | 0.91 | 5942 |
| weighted avg | 0.92 | 0.93 | 0.92 | 5942 |
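For reference, a minimal inference sketch using the transformers pipeline API. This usage is assumed rather than taken from the card; the model id follows the repository name, and the example sentence and aggregation strategy are illustrative.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint for token classification.
# "dandoune/legal-NER" follows the repository name; the input text is illustrative.
ner = pipeline(
    "token-classification",
    model="dandoune/legal-NER",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "The agreement was signed in Brussels between Acme Corp. and John Smith."
for entity in ner(text):
    # entity_group is one of LOC, MISC, ORG, PER
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```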
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (an illustrative TrainingArguments sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
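As a rough guide, the listed settings correspond to the following Hugging Face TrainingArguments, assuming the standard Trainer API was used. The output directory and the 500-step evaluation cadence are assumptions (the cadence is inferred from the results table below); the remaining values mirror the list above.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = TrainingArguments(
    output_dir="legal-NER",      # assumption: not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",         # AdamW with betas=(0.9, 0.999), eps=1e-08 (library defaults)
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                   # "Native AMP" mixed-precision training
    eval_strategy="steps",       # assumption: evaluation every 500 steps, as in the results table
    eval_steps=500,
    logging_steps=500,
)
```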
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | LOC F1 | MISC F1 | ORG F1 | PER F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.1354 | 0.2668 | 500 | 0.1185 | 0.9686 | 0.8405 | 0.8304 | 0.8354 | 0.88 | 0.74 | 0.70 | 0.92 |
| 0.0971 | 0.5336 | 1000 | 0.1045 | 0.9744 | 0.8578 | 0.8721 | 0.8649 | 0.91 | 0.79 | 0.76 | 0.93 |
| 0.097 | 0.8004 | 1500 | 0.0849 | 0.9776 | 0.8884 | 0.8812 | 0.8848 | 0.92 | 0.79 | 0.82 | 0.94 |
| 0.0522 | 1.0672 | 2000 | 0.0838 | 0.9791 | 0.9014 | 0.8955 | 0.8984 | 0.94 | 0.82 | 0.83 | 0.95 |
| 0.0491 | 1.3340 | 2500 | 0.0734 | 0.9814 | 0.9021 | 0.9088 | 0.9054 | 0.93 | 0.84 | 0.84 | 0.96 |
| 0.0435 | 1.6009 | 3000 | 0.0891 | 0.9776 | 0.8685 | 0.8972 | 0.8826 | 0.94 | 0.80 | 0.81 | 0.93 |
| 0.0341 | 1.8677 | 3500 | 0.0777 | 0.9813 | 0.9072 | 0.9111 | 0.9092 | 0.94 | 0.85 | 0.85 | 0.95 |
| 0.0246 | 2.1345 | 4000 | 0.0838 | 0.9813 | 0.8991 | 0.9174 | 0.9081 | 0.94 | 0.84 | 0.86 | 0.95 |
| 0.0205 | 2.4013 | 4500 | 0.0764 | 0.9830 | 0.9104 | 0.9204 | 0.9154 | 0.95 | 0.85 | 0.85 | 0.96 |
| 0.022 | 2.6681 | 5000 | 0.0856 | 0.9819 | 0.9051 | 0.9192 | 0.9121 | 0.94 | 0.85 | 0.85 | 0.96 |
| 0.0244 | 2.9349 | 5500 | 0.0850 | 0.9829 | 0.9142 | 0.9194 | 0.9168 | 0.95 | 0.86 | 0.86 | 0.95 |
| 0.0166 | 3.2017 | 6000 | 0.0861 | 0.9834 | 0.9187 | 0.9191 | 0.9189 | 0.95 | 0.87 | 0.87 | 0.95 |
| 0.0094 | 3.4685 | 6500 | 0.0905 | 0.9840 | 0.9202 | 0.9236 | 0.9219 | 0.95 | 0.88 | 0.86 | 0.96 |
| 0.0123 | 3.7353 | 7000 | 0.0927 | 0.9837 | 0.9239 | 0.9219 | 0.9229 | 0.95 | 0.86 | 0.88 | 0.96 |
| 0.0097 | 4.0021 | 7500 | 0.0947 | 0.9839 | 0.9279 | 0.9221 | 0.9250 | 0.95 | 0.87 | 0.88 | 0.96 |
| 0.0049 | 4.2689 | 8000 | 0.0903 | 0.9840 | 0.9248 | 0.9251 | 0.9250 | 0.95 | 0.87 | 0.88 | 0.96 |
| 0.0037 | 4.5358 | 8500 | 0.0903 | 0.9843 | 0.9235 | 0.9283 | 0.9259 | 0.95 | 0.88 | 0.88 | 0.96 |
| 0.0038 | 4.8026 | 9000 | 0.0922 | 0.9840 | 0.9221 | 0.9260 | 0.9240 | 0.95 | 0.87 | 0.88 | 0.96 |

The LOC/MISC/ORG/PER columns give the entity-level F1 score per class at each evaluation step, taken from the per-checkpoint classification reports. The full per-class precision/recall/support breakdown for the final checkpoint (step 9000) is identical to the evaluation classification report shown above.
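The per-class reports logged at each checkpoint (and the evaluation report above) follow the entity-level format produced by the seqeval library; seqeval as the metric backend is an assumption here, but it is the standard choice for this report layout in token-classification evaluation. A minimal sketch of how such a report is computed from BIO-tagged sequences:

```python
from seqeval.metrics import classification_report, f1_score

# Toy BIO-tagged sequences, for illustration only (not from the training data).
y_true = [["B-PER", "I-PER", "O", "B-ORG"], ["B-LOC", "O", "B-MISC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"], ["B-LOC", "O", "O"]]

# Per-class precision/recall/F1/support plus micro, macro, and weighted averages.
print(classification_report(y_true, y_pred, digits=2))
# Overall entity-level F1 (micro-averaged by default).
print("micro F1:", f1_score(y_true, y_pred))
```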
Framework versions
- Transformers 4.47.1
- Pytorch 2.3.1+cpu
- Datasets 3.2.0
- Tokenizers 0.21.0
Model tree for dandoune/legal-NER
- Base model: nlpaueb/legal-bert-base-uncased