---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-parsbert-uncased-ncbi_disease
results: []
---
# bert-base-parsbert-uncased-ncbi_disease
This model is a fine-tuned version of [HooshvareLab/bert-base-parsbert-uncased](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased) on the [ncbi-persian](https://huggingface.co/datasets/Amir13/ncbi-persian) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1018
- Precision: 0.8192
- Recall: 0.8645
- F1: 0.8412
- Accuracy: 0.9862
## Model description
This model is [ParsBERT](https://huggingface.co/HooshvareLab/bert-base-parsbert-uncased), a Persian BERT-base model, with a token-classification head fine-tuned to tag disease mentions in Persian text.
## Intended uses & limitations
The model is intended for named entity recognition of disease mentions in Persian biomedical text. Because the training data was produced by machine translation (see the citation below), translation artifacts may affect entity boundaries and recall on naturally written Persian. A minimal inference sketch follows.
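The snippet below is a sketch of loading the model through the standard `transformers` token-classification pipeline; the hub id and the example sentence are illustrative assumptions, not details taken from this card:

```python
from transformers import pipeline

# Hypothetical hub id -- adjust to the actual repository path of this model.
model_id = "Amir13/bert-base-parsbert-uncased-ncbi_disease"

# "simple" aggregation merges word pieces back into whole entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Illustrative Persian sentence: "The patient has type 2 diabetes."
print(ner("بیمار به دیابت نوع دو مبتلا است."))
```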
## Training and evaluation data
The model was trained and evaluated on [ncbi-persian](https://huggingface.co/datasets/Amir13/ncbi-persian), a Persian version of the NCBI Disease corpus generated via machine translation (Sartipi and Fatemi, 2023; see the citation below).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
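For reference, here is a sketch of how these settings map onto `transformers.TrainingArguments`. The original training script is not part of this card, and `output_dir` and `evaluation_strategy` are assumptions (per-epoch evaluation is consistent with the results table below):

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-base-parsbert-uncased-ncbi_disease",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed: one eval per epoch
)
```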
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 169 | 0.0648 | 0.7154 | 0.8237 | 0.7657 | 0.9813 |
| No log | 2.0 | 338 | 0.0573 | 0.7870 | 0.8263 | 0.8062 | 0.9853 |
| 0.0596 | 3.0 | 507 | 0.0639 | 0.7893 | 0.8776 | 0.8312 | 0.9858 |
| 0.0596 | 4.0 | 676 | 0.0678 | 0.8150 | 0.8461 | 0.8302 | 0.9860 |
| 0.0596 | 5.0 | 845 | 0.0737 | 0.8070 | 0.8474 | 0.8267 | 0.9862 |
| 0.0065 | 6.0 | 1014 | 0.0834 | 0.8052 | 0.8592 | 0.8313 | 0.9856 |
| 0.0065 | 7.0 | 1183 | 0.0918 | 0.8099 | 0.8355 | 0.8225 | 0.9859 |
| 0.0065 | 8.0 | 1352 | 0.0882 | 0.8061 | 0.8697 | 0.8367 | 0.9857 |
| 0.0021 | 9.0 | 1521 | 0.0903 | 0.8045 | 0.8500 | 0.8266 | 0.9860 |
| 0.0021 | 10.0 | 1690 | 0.0965 | 0.8303 | 0.8500 | 0.8401 | 0.9866 |
| 0.0021 | 11.0 | 1859 | 0.0954 | 0.8182 | 0.8645 | 0.8407 | 0.9860 |
| 0.0008 | 12.0 | 2028 | 0.0998 | 0.8206 | 0.8605 | 0.8401 | 0.9862 |
| 0.0008 | 13.0 | 2197 | 0.0995 | 0.8200 | 0.8632 | 0.8410 | 0.9862 |
| 0.0008 | 14.0 | 2366 | 0.1015 | 0.8214 | 0.8592 | 0.8399 | 0.9861 |
| 0.0004 | 15.0 | 2535 | 0.1018 | 0.8192 | 0.8645 | 0.8412 | 0.9862 |
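The precision, recall, and F1 values above are entity-level scores. A minimal sketch of how such scores are typically computed with `seqeval` (an assumption; this card does not state the metric implementation), assuming IOB2 tags such as `B-Disease`/`I-Disease`:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy IOB2 sequences; the real evaluation runs over the ncbi-persian validation set.
y_true = [["O", "B-Disease", "I-Disease", "O"]]
y_pred = [["O", "B-Disease", "I-Disease", "O"]]

print(precision_score(y_true, y_pred))  # entity-level precision: 1.0
print(recall_score(y_true, y_pred))     # entity-level recall: 1.0
print(f1_score(y_true, y_pred))         # entity-level F1: 1.0
print(accuracy_score(y_true, y_pred))   # token-level accuracy: 1.0
```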
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
## Citation
If you use the datasets or models in this repository, please cite the following paper:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2302.09611,
doi = {10.48550/ARXIV.2302.09611},
url = {https://arxiv.org/abs/2302.09611},
author = {Sartipi, Amir and Fatemi, Afsaneh},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
title = {Exploring the Potential of Machine Translation for Generating Named Entity Datasets: A Case Study between Persian and English},
publisher = {arXiv},
year = {2023},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```