---
base_model: klue/roberta-base
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: klue_ner_roberta_model
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: klue
      type: klue
      config: ner
      split: validation
      args: ner
    metrics:
    - name: Precision
      type: precision
      value: 0.9545986426398315
    - name: Recall
      type: recall
      value: 0.9557169634489222
    - name: F1
      type: f1
      value: 0.955157475705421
    - name: Accuracy
      type: accuracy
      value: 0.9883703228112445
---

# klue_ner_roberta_model

This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0487
- Precision: 0.9546
- Recall: 0.9557
- F1: 0.9552
- Accuracy: 0.9884

## Model description

The base model is a RoBERTa model pretrained on Korean. See the [GitHub repository](https://github.com/KLUE-benchmark/KLUE) and the [paper](https://arxiv.org/abs/2105.09680) for more details.

## Intended uses & limitations

## How to use

_NOTE:_ Use `BertTokenizer` instead of `RobertaTokenizer` (`AutoTokenizer` will load `BertTokenizer`).

```python
from transformers import AutoModel, AutoTokenizer

# AutoTokenizer resolves to BertTokenizer for this checkpoint.
model = AutoModel.from_pretrained("klue/roberta-base")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
```

A sketch of running NER inference with the fine-tuned checkpoint is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

A `TrainingArguments` sketch mirroring these values follows the framework versions below.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0449        | 1.0   | 2626 | 0.0601          | 0.9361    | 0.9176 | 0.9267 | 0.9830   |
| 0.0262        | 2.0   | 5252 | 0.0469          | 0.9484    | 0.9510 | 0.9497 | 0.9874   |
| 0.0144        | 3.0   | 7878 | 0.0487          | 0.9546    | 0.9557 | 0.9552 | 0.9884   |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
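### Reproducing the hyperparameters

A minimal sketch of how the hyperparameters listed above map onto `TrainingArguments`. The `output_dir` is a placeholder, and `evaluation_strategy="epoch"` is an assumption inferred from the per-epoch validation results; dataset loading and preprocessing are omitted.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in this card; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="klue_ner_roberta_model",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: validation was run once per epoch
)
```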
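## Inference example

A minimal sketch of token-classification inference with this fine-tuned model, using the `pipeline` API. The repository id `your-namespace/klue_ner_roberta_model` is a placeholder; substitute the actual path where this checkpoint is hosted.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Placeholder repository id: replace with the actual location of this checkpoint.
model_id = "your-namespace/klue_ner_roberta_model"

# AutoTokenizer resolves to BertTokenizer for KLUE RoBERTa checkpoints.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# aggregation_strategy="simple" merges sub-word predictions into entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

print(ner("이순신은 조선 중기의 무신이다."))
# Each result dict contains entity_group, score, word, start, and end.
```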