legal-bert-small-NER
This model is a fine-tuned version of nlpaueb/legal-bert-small-uncased for named-entity recognition (NER). The fine-tuning dataset is not specified in this card. The model achieves the following results on the evaluation set:
- Loss: 0.2334
- Accuracy: 0.9558
- Precision: 0.7587
- Recall: 0.7950
- F1: 0.7764
Classification report:

| | precision | recall | f1-score | support |
|---|---|---|---|---|
| LOC | 0.85 | 0.86 | 0.86 | 1668 |
| MISC | 0.56 | 0.67 | 0.61 | 702 |
| ORG | 0.68 | 0.67 | 0.68 | 1661 |
| PER | 0.83 | 0.91 | 0.87 | 1617 |
| micro avg | 0.76 | 0.79 | 0.78 | 5648 |
| macro avg | 0.73 | 0.78 | 0.75 | 5648 |
| weighted avg | 0.76 | 0.79 | 0.78 | 5648 |
Model description
More information needed
Intended uses & limitations
More information needed
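Pending further details, here is a minimal inference sketch using the transformers pipeline API. The repo id is a placeholder for this model's actual Hub path, and the sample sentence and expected entities are illustrative assumptions, not verified outputs.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path for this model.
ner = pipeline(
    "token-classification",
    model="legal-bert-small-NER",
    aggregation_strategy="simple",  # merge sub-word predictions into whole entities
)

# Illustrative sentence; the entity groups should come out as PER, ORG, and LOC.
print(ner("The agreement was signed by John Smith of Acme Corp. in New York."))
```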
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
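For context, a hedged sketch of how these hyperparameters map onto a TrainingArguments/Trainer setup. The dataset, label set, and output directory are assumptions: the card does not name them.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "nlpaueb/legal-bert-small-uncased"
num_labels = 9  # assumption: BIO tags for LOC, MISC, ORG, PER plus O

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=num_labels)

args = TrainingArguments(
    output_dir="legal-bert-small-NER",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the Trainer's
    # default optimizer settings, so no explicit optimizer argument is needed.
)

train_dataset = eval_dataset = None  # placeholders: the card does not name the dataset

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
# trainer.train()  # runs the two-epoch schedule once real datasets are wired in
```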
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|---|---|---|
| 0.0289 | 1.0 | 434 | 0.2151 | 0.9555 | 0.7592 | 0.7890 | 0.7738 |
| 0.0193 | 2.0 | 868 | 0.2334 | 0.9558 | 0.7587 | 0.7950 | 0.7764 |

Per-class classification report after epoch 1:

| | precision | recall | f1-score | support |
|---|---|---|---|---|
| LOC | 0.86 | 0.85 | 0.86 | 1668 |
| MISC | 0.57 | 0.67 | 0.62 | 702 |
| ORG | 0.69 | 0.64 | 0.67 | 1661 |
| PER | 0.81 | 0.92 | 0.86 | 1617 |
| micro avg | 0.76 | 0.79 | 0.77 | 5648 |
| macro avg | 0.73 | 0.77 | 0.75 | 5648 |
| weighted avg | 0.76 | 0.79 | 0.77 | 5648 |

Per-class classification report after epoch 2 (final):

| | precision | recall | f1-score | support |
|---|---|---|---|---|
| LOC | 0.85 | 0.86 | 0.86 | 1668 |
| MISC | 0.56 | 0.67 | 0.61 | 702 |
| ORG | 0.68 | 0.67 | 0.68 | 1661 |
| PER | 0.83 | 0.91 | 0.87 | 1617 |
| micro avg | 0.76 | 0.79 | 0.78 | 5648 |
| macro avg | 0.73 | 0.78 | 0.75 | 5648 |
| weighted avg | 0.76 | 0.79 | 0.78 | 5648 |
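The overall metrics and per-class reports above are the standard seqeval outputs for token classification. Below is a hedged sketch of the usual compute_metrics wiring, assuming a BIO label scheme and the -100 padding convention for special tokens (neither is stated in the card):

```python
import evaluate
import numpy as np

seqeval = evaluate.load("seqeval")
# Assumed BIO label set; the card only names the four entity types.
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special/padded tokens, conventionally labelled -100.
    true_preds = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```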
Framework versions
- Transformers 4.30.2
- PyTorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3