---
license: mit
base_model: kavg/LiLT-SER-ES
tags:
- generated_from_trainer
datasets:
- xfun
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: LiLT-SER-ES-SIN
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xfun
type: xfun
config: xfun.sin
split: validation
args: xfun.sin
metrics:
- name: Precision
type: precision
value: 0.7538829151732378
- name: Recall
type: recall
value: 0.7770935960591133
- name: F1
type: f1
value: 0.7653123104912068
- name: Accuracy
type: accuracy
value: 0.8560807967456866
---
# LiLT-SER-ES-SIN
This model is a fine-tuned version of [kavg/LiLT-SER-ES](https://huggingface.co/kavg/LiLT-SER-ES) on the xfun dataset (config `xfun.sin`).
It achieves the following results on the evaluation set:
- Loss: 1.4009
- Precision: 0.7539
- Recall: 0.7771
- F1: 0.7653
- Accuracy: 0.8561
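
Below is a minimal inference sketch (not part of the original card). It assumes the checkpoint loads with `AutoTokenizer`/`AutoModelForTokenClassification`, that the tokenizer is a fast tokenizer, and that word bounding boxes are normalized to the 0-1000 range LiLT expects; the words and boxes are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "kavg/LiLT-SER-ES-SIN"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Placeholder OCR output: one word and one 0-1000-normalized box per word.
words = ["Invoice", "No.", "12345"]
boxes = [[70, 60, 180, 85], [190, 60, 240, 85], [250, 60, 340, 85]]

# Tokenize the pre-split words, then repeat each word's box for all of its sub-tokens
# (special tokens get an all-zero box).
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
bbox = [[0, 0, 0, 0] if idx is None else boxes[idx] for idx in encoding.word_ids(0)]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits

predicted = [model.config.id2label[i] for i in logits.argmax(-1).squeeze(0).tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]), predicted)))
```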
## Model description
LiLT-SER-ES-SIN is a token-classification model obtained by fine-tuning [kavg/LiLT-SER-ES](https://huggingface.co/kavg/LiLT-SER-ES) on the `xfun.sin` configuration of the xfun dataset.
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained on the xfun dataset (config `xfun.sin`); the results above are reported on its validation split.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch corresponding to these values follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
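
For reference, a sketch of `TrainingArguments` matching the values above; the output directory and the 500-step evaluation cadence (implied by the results table) are assumptions, not values taken from the original training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="LiLT-SER-ES-SIN",   # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=2,
    seed=42,
    max_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",    # the table below reports metrics every 500 steps
    eval_steps=500,
    save_steps=500,
)
```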
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0045 | 21.74 | 500 | 0.8773 | 0.7107 | 0.7352 | 0.7228 | 0.8582 |
| 0.0044 | 43.48 | 1000 | 1.1262 | 0.7030 | 0.7463 | 0.7240 | 0.8495 |
| 0.0021 | 65.22 | 1500 | 1.1512 | 0.6938 | 0.7254 | 0.7092 | 0.8419 |
| 0.0 | 86.96 | 2000 | 1.2416 | 0.7043 | 0.7537 | 0.7281 | 0.8390 |
| 0.0002 | 108.7 | 2500 | 1.2400 | 0.7036 | 0.7426 | 0.7226 | 0.8492 |
| 0.0001 | 130.43 | 3000 | 1.2076 | 0.7095 | 0.7488 | 0.7286 | 0.8432 |
| 0.0001 | 152.17 | 3500 | 1.1215 | 0.7174 | 0.7315 | 0.7244 | 0.8552 |
| 0.0008 | 173.91 | 4000 | 1.1580 | 0.7188 | 0.7303 | 0.7245 | 0.8534 |
| 0.0 | 195.65 | 4500 | 1.2805 | 0.7256 | 0.7328 | 0.7292 | 0.8596 |
| 0.0001 | 217.39 | 5000 | 1.1563 | 0.7110 | 0.7635 | 0.7363 | 0.8526 |
| 0.0 | 239.13 | 5500 | 1.1503 | 0.7585 | 0.7734 | 0.7659 | 0.8645 |
| 0.0 | 260.87 | 6000 | 1.3623 | 0.7419 | 0.7648 | 0.7532 | 0.8557 |
| 0.001 | 282.61 | 6500 | 1.1415 | 0.7405 | 0.7660 | 0.7530 | 0.8707 |
| 0.0 | 304.35 | 7000 | 1.2738 | 0.7390 | 0.7635 | 0.7511 | 0.8644 |
| 0.0 | 326.09 | 7500 | 1.3134 | 0.7682 | 0.7672 | 0.7677 | 0.8683 |
| 0.0 | 347.83 | 8000 | 1.4709 | 0.7608 | 0.7599 | 0.7603 | 0.8475 |
| 0.0 | 369.57 | 8500 | 1.4720 | 0.7509 | 0.7500 | 0.7505 | 0.8499 |
| 0.0 | 391.3 | 9000 | 1.4492 | 0.7617 | 0.7635 | 0.7626 | 0.8530 |
| 0.0 | 413.04 | 9500 | 1.4251 | 0.7458 | 0.7734 | 0.7594 | 0.8550 |
| 0.0 | 434.78 | 10000 | 1.4009 | 0.7539 | 0.7771 | 0.7653 | 0.8561 |
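
For token classification, precision, recall, and F1 of this kind are usually entity-level scores computed with `seqeval`, while accuracy is token-level. A small sketch of computing them (the label names are illustrative, not this checkpoint's actual label set):

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Illustrative IOB2-tagged sequences; a real evaluation runs over the xfun.sin validation split.
references = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER"]]
predictions = [["B-QUESTION", "I-QUESTION", "O", "O"]]

scores = seqeval.compute(predictions=predictions, references=references)
print({k: scores[k] for k in ("overall_precision", "overall_recall", "overall_f1", "overall_accuracy")})
```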
### Framework versions
- Transformers 4.39.1
- PyTorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1