---
license: mit
tags:
- generated_from_trainer
datasets:
- lg-ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: luganda-ner-v4
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: lg-ner
      type: lg-ner
      config: lug
      split: test
      args: lug
    metrics:
    - name: Precision
      type: precision
      value: 0.7849185946872322
    - name: Recall
      type: recall
      value: 0.7862660944206008
    - name: F1
      type: f1
      value: 0.7855917667238421
    - name: Accuracy
      type: accuracy
      value: 0.9542220362038296
---

# luganda-ner-v4

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the lg-ner dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2222
- Precision: 0.7849
- Recall: 0.7863
- F1: 0.7856
- Accuracy: 0.9542

## Model description

luganda-ner-v4 is a token-classification (named entity recognition) model for Luganda, obtained by fine-tuning microsoft/deberta-v3-base on the `lug` configuration of the lg-ner dataset.

## Intended uses & limitations

More information needed. A minimal inference sketch is provided under "How to use" below.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch at the end of this card):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 261  | 0.3533          | 0.6141    | 0.4644 | 0.5288 | 0.9208   |
| 0.5126        | 2.0   | 522  | 0.2765          | 0.6658    | 0.6567 | 0.6612 | 0.9326   |
| 0.5126        | 3.0   | 783  | 0.2336          | 0.6834    | 0.7133 | 0.6980 | 0.9433   |
| 0.2374        | 4.0   | 1044 | 0.2207          | 0.7358    | 0.7433 | 0.7395 | 0.9489   |
| 0.2374        | 5.0   | 1305 | 0.2134          | 0.7796    | 0.7528 | 0.7659 | 0.9525   |
| 0.1646        | 6.0   | 1566 | 0.2359          | 0.7423    | 0.7665 | 0.7542 | 0.9484   |
| 0.1646        | 7.0   | 1827 | 0.2223          | 0.7807    | 0.7854 | 0.7831 | 0.9541   |
| 0.1219        | 8.0   | 2088 | 0.2300          | 0.8140    | 0.7665 | 0.7896 | 0.9557   |
| 0.1219        | 9.0   | 2349 | 0.2223          | 0.7733    | 0.7966 | 0.7848 | 0.9547   |
| 0.1016        | 10.0  | 2610 | 0.2222          | 0.7849    | 0.7863 | 0.7856 | 0.9542   |

### Framework versions

- Transformers 4.26.1
- PyTorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
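
## How to use

A minimal inference sketch using the `transformers` pipeline. The model id `luganda-ner-v4` is a placeholder for the actual Hub repository id of this checkpoint, and the input sentence is purely illustrative.

```python
from transformers import pipeline

# Placeholder model id; replace with the Hub repository id of this checkpoint.
ner = pipeline(
    "token-classification",
    model="luganda-ner-v4",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Illustrative Luganda input; the pipeline returns a list of detected entities
# with their labels, character offsets, and confidence scores.
print(ner("Museveni yagenda e Kampala."))
```

`aggregation_strategy="simple"` groups the sub-word tokens produced by the DeBERTa-v3 tokenizer back into whole words, which is usually what you want when reading off entity spans.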
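
## Training configuration sketch

For reference, a sketch of how the hyperparameters listed above map onto `TrainingArguments`. `output_dir` and `evaluation_strategy="epoch"` are assumptions (per-epoch validation is consistent with the results table); the Adam betas and epsilon listed above match the library defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="luganda-ner-v4",  # assumption: any local checkpoint directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: validate after each epoch
)
```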