---
base_model: Anwaarma/Merged-Server-praj
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: S02-PC
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# S02-PC

This model is a fine-tuned version of [Anwaarma/Merged-Server-praj](https://huggingface.co/Anwaarma/Merged-Server-praj) on an unknown dataset.
It achieves the following results on the evaluation set (an example of loading the model for inference follows the metrics):
- Loss: 0.5986
- Accuracy: 0.78
- F1: 0.8764
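
The card does not yet document the task, but the accuracy/F1 metrics indicate a classification head. Below is a minimal inference sketch, assuming the checkpoint is published under a repo id like `Anwaarma/S02-PC` (hypothetical; substitute the actual repo) and loads with the standard `transformers` sequence-classification classes:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repo id; replace with wherever this checkpoint is actually hosted.
model_id = "Anwaarma/S02-PC"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example input to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# id2label falls back to generic LABEL_0/LABEL_1 names if label names were never set.
print(pred, model.config.id2label[pred])
```
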
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
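
As a rough sketch only, the listed values map onto `TrainingArguments` roughly as follows. The output directory, evaluation cadence, and logging cadence are assumptions inferred from the results table below, not documented values:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above; the dataset,
# model, and Trainer wiring are omitted because the card does not document them.
training_args = TrainingArguments(
    output_dir="S02-PC",            # assumption: named after the model
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="steps",    # assumption: evaluation every 50 steps, as in the table below
    eval_steps=50,
    logging_steps=500,              # assumption: training loss appears to be logged every 500 steps
)
```
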
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.0   | 50   | 0.5686          | 0.6      | 0.5981 |
| No log        | 0.01  | 100  | 0.5715          | 0.62     | 0.6205 |
| No log        | 0.01  | 150  | 0.5591          | 0.64     | 0.6400 |
| No log        | 0.01  | 200  | 0.5670          | 0.63     | 0.6288 |
| No log        | 0.02  | 250  | 0.5568          | 0.61     | 0.6106 |
| No log        | 0.02  | 300  | 0.5761          | 0.64     | 0.6383 |
| No log        | 0.02  | 350  | 0.5515          | 0.61     | 0.6106 |
| No log        | 0.03  | 400  | 0.5567          | 0.61     | 0.6087 |
| No log        | 0.03  | 450  | 0.5590          | 0.62     | 0.6192 |
| 0.606         | 0.03  | 500  | 0.5454          | 0.64     | 0.6404 |
| 0.606         | 0.04  | 550  | 0.5509          | 0.63     | 0.6303 |
| 0.606         | 0.04  | 600  | 0.5451          | 0.64     | 0.6393 |
| 0.606         | 0.04  | 650  | 0.5461          | 0.65     | 0.6488 |
| 0.606         | 0.05  | 700  | 0.5443          | 0.62     | 0.6192 |
| 0.606         | 0.05  | 750  | 0.5461          | 0.66     | 0.6593 |
| 0.606         | 0.05  | 800  | 0.5420          | 0.66     | 0.6604 |
| 0.606         | 0.06  | 850  | 0.5414          | 0.65     | 0.6502 |
| 0.606         | 0.06  | 900  | 0.5411          | 0.65     | 0.6505 |
| 0.606         | 0.06  | 950  | 0.5413          | 0.69     | 0.6834 |
| 0.584         | 0.07  | 1000 | 0.5432          | 0.64     | 0.6353 |
| 0.584         | 0.07  | 1050 | 0.5335          | 0.64     | 0.6383 |
| 0.584         | 0.07  | 1100 | 0.5483          | 0.67     | 0.6702 |
| 0.584         | 0.08  | 1150 | 0.5548          | 0.66     | 0.6605 |
| 0.584         | 0.08  | 1200 | 0.5590          | 0.63     | 0.6306 |
| 0.584         | 0.09  | 1250 | 0.5580          | 0.67     | 0.6697 |
| 0.584         | 0.09  | 1300 | 0.5616          | 0.65     | 0.6502 |
| 0.584         | 0.09  | 1350 | 0.5620          | 0.62     | 0.6131 |
| 0.584         | 0.1   | 1400 | 0.5509          | 0.61     | 0.6059 |
| 0.584         | 0.1   | 1450 | 0.5473          | 0.66     | 0.6605 |
| 0.573         | 0.1   | 1500 | 0.5497          | 0.66     | 0.6593 |
| 0.573         | 0.11  | 1550 | 0.5450          | 0.65     | 0.6502 |
| 0.573         | 0.11  | 1600 | 0.5484          | 0.67     | 0.6689 |
| 0.573         | 0.11  | 1650 | 0.5398          | 0.66     | 0.6584 |
| 0.573         | 0.12  | 1700 | 0.5350          | 0.65     | 0.6477 |
| 0.573         | 0.12  | 1750 | 0.5333          | 0.64     | 0.6370 |
| 0.573         | 0.12  | 1800 | 0.5635          | 0.64     | 0.6400 |
| 0.573         | 0.13  | 1850 | 0.5742          | 0.63     | 0.6297 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0