---
license: mit
tags:
- generated_from_trainer
datasets:
- ncbi_disease
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-ncbi_disease-en
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: ncbi_disease
      type: ncbi_disease
      config: ncbi_disease
      split: validation
      args: ncbi_disease
    metrics:
    - name: Precision
      type: precision
      value: 0.8562421185372006
    - name: Recall
      type: recall
      value: 0.8627700127064803
    - name: F1
      type: f1
      value: 0.859493670886076
    - name: Accuracy
      type: accuracy
      value: 0.9868991989319092
---

# xlm-roberta-base-ncbi_disease-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [ncbi_disease](https://huggingface.co/datasets/ncbi_disease) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0496
- Precision: 0.8562
- Recall: 0.8628
- F1: 0.8595
- Accuracy: 0.9869

## Model description

More information needed

## Intended uses & limitations

More information needed (a usage sketch appears at the end of this card).

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15

A training sketch that mirrors these settings appears at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 170  | 0.0555          | 0.7949    | 0.7980 | 0.7964 | 0.9833   |
| No log        | 2.0   | 340  | 0.0524          | 0.7404    | 0.8551 | 0.7936 | 0.9836   |
| 0.0803        | 3.0   | 510  | 0.0484          | 0.7932    | 0.8869 | 0.8374 | 0.9849   |
| 0.0803        | 4.0   | 680  | 0.0496          | 0.8562    | 0.8628 | 0.8595 | 0.9869   |
| 0.0803        | 5.0   | 850  | 0.0562          | 0.7976    | 0.8615 | 0.8283 | 0.9848   |
| 0.0152        | 6.0   | 1020 | 0.0606          | 0.8086    | 0.8856 | 0.8454 | 0.9846   |
| 0.0152        | 7.0   | 1190 | 0.0709          | 0.8412    | 0.8412 | 0.8412 | 0.9866   |
| 0.0152        | 8.0   | 1360 | 0.0735          | 0.8257    | 0.8666 | 0.8456 | 0.9843   |
| 0.0059        | 9.0   | 1530 | 0.0730          | 0.8343    | 0.8767 | 0.8550 | 0.9866   |
| 0.0059        | 10.0  | 1700 | 0.0855          | 0.8130    | 0.8895 | 0.8495 | 0.9843   |
| 0.0059        | 11.0  | 1870 | 0.0868          | 0.8263    | 0.8767 | 0.8508 | 0.9860   |
| 0.0026        | 12.0  | 2040 | 0.0862          | 0.8273    | 0.8767 | 0.8513 | 0.9858   |
| 0.0026        | 13.0  | 2210 | 0.0875          | 0.8329    | 0.8806 | 0.8561 | 0.9859   |
| 0.0026        | 14.0  | 2380 | 0.0889          | 0.8287    | 0.8793 | 0.8533 | 0.9859   |
| 0.0013        | 15.0  | 2550 | 0.0884          | 0.8321    | 0.8755 | 0.8533 | 0.9861   |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2

## Citation

If you use the datasets or models in this repository, please cite the following paper.

```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
  doi       = {10.48550/ARXIV.2302.09611},
  url       = {https://arxiv.org/abs/2302.09611},
  author    = {Sartipi, Amir and Fatemi, Afsaneh},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
  publisher = {arXiv},
  year      = {2023},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
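## Usage sketch

The card does not yet include an inference snippet, so here is a minimal sketch using the `transformers` token-classification pipeline. The Hub path below is a placeholder, since this card does not state the hosting namespace; substitute the actual repository id. The example sentence is illustrative only.

```python
from transformers import pipeline

# Placeholder Hub path: replace <user> with the namespace that hosts this checkpoint.
model_id = "<user>/xlm-roberta-base-ncbi_disease-en"

# aggregation_strategy="simple" merges sub-word pieces back into whole entity
# spans; ncbi_disease uses O / B-Disease / I-Disease tags.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("The patient was diagnosed with cystic fibrosis."))
```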
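## Training sketch

The hyperparameters above are listed without the accompanying script, so the following is a minimal sketch of a `Trainer` setup consistent with them, assuming the standard token-classification recipe (label only the first sub-token of each word, score with `seqeval`). The preprocessing and `output_dir` are assumptions, not the author's confirmed code.

```python
import evaluate
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

dataset = load_dataset("ncbi_disease")
label_names = dataset["train"].features["ner_tags"].feature.names  # O, B-Disease, I-Disease

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(label_names)
)

def tokenize_and_align_labels(examples):
    # Label the first sub-token of each word; mask the rest with -100
    # so they are ignored by the loss.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous, row = None, []
        for word_id in word_ids:
            row.append(-100 if word_id is None or word_id == previous else tags[word_id])
            previous = word_id
        labels.append(row)
    tokenized["labels"] = labels
    return tokenized

tokenized = dataset.map(tokenize_and_align_labels, batched=True)
seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    # Convert label ids back to tag strings, dropping masked positions,
    # then compute the entity-level metrics reported in this card.
    logits, references = eval_pred
    predictions = np.argmax(logits, axis=-1)
    true_preds = [[label_names[p] for p, r in zip(pred, ref) if r != -100]
                  for pred, ref in zip(predictions, references)]
    true_refs = [[label_names[r] for p, r in zip(pred, ref) if r != -100]
                 for pred, ref in zip(predictions, references)]
    scores = seqeval.compute(predictions=true_preds, references=true_refs)
    return {k: scores[f"overall_{k}"] for k in ("precision", "recall", "f1", "accuracy")}

args = TrainingArguments(
    output_dir="xlm-roberta-base-ncbi_disease-en",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

The `Trainer` defaults already match the card's optimizer line (Adam with betas=(0.9,0.999) and epsilon=1e-08), so no explicit optimizer configuration is needed.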