---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: hmBERT-CoNLL-cp1
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.8690143162744776
    - name: Recall
      type: recall
      value: 0.8887579939414338
    - name: F1
      type: f1
      value: 0.8787752724852317
    - name: Accuracy
      type: accuracy
      value: 0.9810170943499085
---

# hmBERT-CoNLL-cp1

This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0710
- Precision: 0.8690
- Recall: 0.8888
- F1: 0.8788
- Accuracy: 0.9810

## Model description

The base model, hmBERT, is a BERT model pretrained on historical multilingual text. This checkpoint fine-tunes it for named entity recognition on the English CoNLL-2003 benchmark, which annotates four entity types: PER, ORG, LOC, and MISC.

## Intended uses & limitations

The model is intended for token classification (NER) on English text; a minimal inference sketch appears at the end of this card. Since fine-tuning used modern newswire data for a single epoch, performance on other domains, including the historical text the base model targets, is not evaluated here.

## Training and evaluation data

The model was fine-tuned on the `conll2003` dataset as distributed on the Hugging Face Hub (English newswire annotated with named entities). The results above are reported on its evaluation set.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them appears at the end of this card):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.06  | 25   | 0.4115          | 0.3593    | 0.3708 | 0.3649 | 0.9002   |
| No log        | 0.11  | 50   | 0.2263          | 0.6360    | 0.6898 | 0.6618 | 0.9456   |
| No log        | 0.17  | 75   | 0.1660          | 0.7250    | 0.7582 | 0.7412 | 0.9564   |
| No log        | 0.23  | 100  | 0.1520          | 0.7432    | 0.7775 | 0.7600 | 0.9597   |
| No log        | 0.28  | 125  | 0.1343          | 0.7683    | 0.8103 | 0.7888 | 0.9645   |
| No log        | 0.34  | 150  | 0.1252          | 0.7973    | 0.8230 | 0.8099 | 0.9691   |
| No log        | 0.4   | 175  | 0.1021          | 0.8118    | 0.8398 | 0.8255 | 0.9724   |
| No log        | 0.46  | 200  | 0.1056          | 0.8153    | 0.8411 | 0.8280 | 0.9727   |
| No log        | 0.51  | 225  | 0.0872          | 0.8331    | 0.8612 | 0.8469 | 0.9755   |
| No log        | 0.57  | 250  | 0.1055          | 0.8226    | 0.8418 | 0.8321 | 0.9725   |
| No log        | 0.63  | 275  | 0.0921          | 0.8605    | 0.8640 | 0.8623 | 0.9767   |
| No log        | 0.68  | 300  | 0.0824          | 0.8600    | 0.8787 | 0.8692 | 0.9788   |
| No log        | 0.74  | 325  | 0.0834          | 0.8530    | 0.8771 | 0.8649 | 0.9787   |
| No log        | 0.8   | 350  | 0.0758          | 0.8646    | 0.8876 | 0.8759 | 0.9800   |
| No log        | 0.85  | 375  | 0.0727          | 0.8705    | 0.8866 | 0.8784 | 0.9810   |
| No log        | 0.91  | 400  | 0.0734          | 0.8717    | 0.8899 | 0.8807 | 0.9811   |
| No log        | 0.97  | 425  | 0.0713          | 0.8683    | 0.8889 | 0.8785 | 0.9810   |

The "No log" entries mean that no training loss was recorded at those evaluation steps, most likely because the single epoch comprises roughly 440 optimizer steps, fewer than the Trainer's default logging interval of 500 steps.

### Framework versions

- Transformers 4.20.1
- PyTorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
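
## How to use

The card does not include inference code, so the following is a minimal sketch using the `transformers` pipeline API. The repo id `your-username/hmBERT-CoNLL-cp1` is a placeholder, since the account hosting this checkpoint is not stated above; substitute the real Hub path or a local model directory.

```python
from transformers import pipeline

# "your-username/hmBERT-CoNLL-cp1" is a placeholder: substitute the actual
# Hub repo id or a local path to the saved checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/hmBERT-CoNLL-cp1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Wolfgang lives in Berlin and works for the United Nations."))
# Expected output shape (scores illustrative only):
# [{'entity_group': 'PER', 'word': 'Wolfgang', 'score': ...},
#  {'entity_group': 'LOC', 'word': 'Berlin', ...},
#  {'entity_group': 'ORG', 'word': 'United Nations', ...}]
```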
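
## Reproducing the training configuration

The original training script is not part of this card, but the `generated_from_trainer` tag suggests the standard `Trainer` API was used. Below is a minimal sketch of `TrainingArguments` matching the hyperparameters listed above; `output_dir` is a placeholder, and the Adam betas/epsilon shown in the hyperparameter list are the library defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hmBERT-CoNLL-cp1",   # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    evaluation_strategy="steps",     # the results table evaluates every 25 steps
    eval_steps=25,
)
```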
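
## Computing the reported metrics

The evaluation code is likewise not included in this card. For token classification with this framework stack (Transformers 4.20 / Datasets 2.4), precision, recall, F1, and accuracy are commonly computed at the entity level with the `seqeval` metric; the sketch below assumes that setup and uses the label names of the Hub `conll2003` dataset.

```python
import numpy as np
from datasets import load_dataset, load_metric

# Label names for the Hub conll2003 dataset:
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
label_list = load_dataset("conll2003")["train"].features["ner_tags"].feature.names
metric = load_metric("seqeval")

def compute_metrics(eval_pred):
    """Turn Trainer predictions into entity-level precision/recall/F1 and token accuracy."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Positions labelled -100 are special tokens / padding and are skipped.
    true_labels = [
        [label_list[l] for l in row if l != -100] for row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, row) if l != -100]
        for pred_row, row in zip(predictions, labels)
    ]
    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```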