---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- seanghay/khPOS
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: គាត់ផឹកទឹកនៅភ្នំពេញ
- text: តើលោកស្រីបានសាកសួរទៅគាត់ទេ?
- text: នេត្រា មិនដឹងសោះថាអ្នកជាមនុស្ស!
model-index:
- name: khmer-pos-roberta-10
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: kh_pos
type: kh_pos
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.9511876225757245
- name: Recall
type: recall
value: 0.9526407682234832
- name: F1
type: f1
value: 0.9519136408243376
- name: Accuracy
type: accuracy
value: 0.9735370853522176
language:
- km
library_name: transformers
pipeline_tag: token-classification
---
# Khmer Part-of-Speech Tagging with XLM-RoBERTa
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [khPOS](https://huggingface.co/datasets/seanghay/khPOS) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1063
- Precision: 0.9512
- Recall: 0.9526
- F1: 0.9519
- Accuracy: 0.9735
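
For inference, the model can be loaded with the `token-classification` pipeline. A minimal sketch, assuming the checkpoint is published under the repo id `seanghay/khmer-pos-roberta` (adjust to the actual id if it differs):

```python
from transformers import pipeline

# Minimal inference sketch; the repo id is an assumption based on this
# card's name and may need to be adjusted.
tagger = pipeline(
    "token-classification",
    model="seanghay/khmer-pos-roberta",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level tags
)

# One of the widget examples above: "He drinks water in Phnom Penh."
print(tagger("គាត់ផឹកទឹកនៅភ្នំពេញ"))
```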
## Model description
The [original paper](https://arxiv.org/pdf/2103.16801.pdf) reports 98.15% accuracy, while this model reaches 97.35%, which is close. Note, however, that this is a multilingual model, so its vocabulary is much larger than that of the model in the original paper.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
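
For reference, these settings correspond roughly to the `TrainingArguments` sketch below; `output_dir` and the evaluation strategy are assumptions not stated in this card (Adam with the listed betas and epsilon is the `Trainer` default, so no explicit optimizer argument is needed):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="khmer-pos-roberta",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",     # assumed: the results table reports metrics per epoch
)
```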
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 450 | 0.1347 | 0.9314 | 0.9333 | 0.9324 | 0.9603 |
| 0.4834 | 2.0 | 900 | 0.1183 | 0.9407 | 0.9377 | 0.9392 | 0.9653 |
| 0.1323 | 3.0 | 1350 | 0.1026 | 0.9484 | 0.9482 | 0.9483 | 0.9699 |
| 0.095 | 4.0 | 1800 | 0.0986 | 0.9502 | 0.9490 | 0.9496 | 0.9712 |
| 0.0774 | 5.0 | 2250 | 0.0978 | 0.9494 | 0.9491 | 0.9493 | 0.9712 |
| 0.0616 | 6.0 | 2700 | 0.0991 | 0.9493 | 0.9507 | 0.9500 | 0.9715 |
| 0.0494 | 7.0 | 3150 | 0.0989 | 0.9529 | 0.9540 | 0.9534 | 0.9731 |
| 0.0414 | 8.0 | 3600 | 0.1037 | 0.9499 | 0.9501 | 0.9500 | 0.9722 |
| 0.0339 | 9.0 | 4050 | 0.1056 | 0.9516 | 0.9517 | 0.9516 | 0.9734 |
| 0.029 | 10.0 | 4500 | 0.1063 | 0.9512 | 0.9526 | 0.9519 | 0.9735 |
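
The precision, recall, F1, and accuracy columns are the standard token-classification metrics, typically computed with seqeval. A sketch of such a `compute_metrics` function, assuming the `evaluate` library and that the khPOS dataset exposes its tag names via a `pos_tags` feature (both assumptions, not confirmed by this card):

```python
import numpy as np
import evaluate
from datasets import load_dataset

# Assumption: the tag vocabulary can be read off the dataset features;
# the column name "pos_tags" is a guess and may differ in khPOS.
label_list = load_dataset("seanghay/khPOS", split="train").features["pos_tags"].feature.names
seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)
    # Ignore special tokens, which the collator labels with -100.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```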
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3