# cls_finred_llama3_v1
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 0.4061
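Since PEFT is listed among the framework versions below, the checkpoint is presumably a LoRA-style adapter on top of the base model rather than full model weights. A minimal loading sketch follows; it assumes the adapter and tokenizer are published together as `Sorour/cls_finred_llama3_v1`, that you have access to the gated base model, and that `accelerate` is installed for `device_map="auto"`. The prompt is a hypothetical placeholder, since the card does not document the task format.

```python
# Hedged sketch: assumes Sorour/cls_finred_llama3_v1 holds a PEFT adapter
# trained on top of meta-llama/Meta-Llama-3-8B-Instruct.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model_id = "Sorour/cls_finred_llama3_v1"

# AutoPeftModelForCausalLM reads the adapter config, downloads the base
# model it points to, and attaches the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # dtype choice is an assumption
    device_map="auto",           # requires the accelerate package
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder prompt; the actual prompt template used in training is not
# documented on this card.
prompt = "Extract the financial relation from: Apple acquired Beats in 2014."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```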
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
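For reference, a hedged reconstruction of these settings as `transformers.TrainingArguments` might look like the sketch below. The actual training script, dataset pipeline, and any SFT wrapper used are not documented on this card, so the field values simply mirror the logged hyperparameters above.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the logged hyperparameters; not the author's
# actual training script.
training_args = TrainingArguments(
    output_dir="cls_finred_llama3_v1",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size: 2 * 4 = 8
    seed=42,
    lr_scheduler_type="constant",    # note: a plain constant schedule ignores warmup_ratio
    warmup_ratio=0.03,
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" as logged; fp16 vs. bf16 is not recorded
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the defaults
)
```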
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7071        | 0.1116 | 20   | 0.6759          |
| 0.6162        | 0.2232 | 40   | 0.6174          |
| 0.6143        | 0.3347 | 60   | 0.5845          |
| 0.5753        | 0.4463 | 80   | 0.5507          |
| 0.5712        | 0.5579 | 100  | 0.5225          |
| 0.5216        | 0.6695 | 120  | 0.5105          |
| 0.4931        | 0.7810 | 140  | 0.4920          |
| 0.482         | 0.8926 | 160  | 0.4733          |
| 0.4562        | 1.0042 | 180  | 0.4624          |
| 0.3635        | 1.1158 | 200  | 0.4631          |
| 0.3619        | 1.2273 | 220  | 0.4538          |
| 0.351         | 1.3389 | 240  | 0.4452          |
| 0.3458        | 1.4505 | 260  | 0.4392          |
| 0.3397        | 1.5621 | 280  | 0.4290          |
| 0.298         | 1.6736 | 300  | 0.4278          |
| 0.2902        | 1.7852 | 320  | 0.4196          |
| 0.3425        | 1.8968 | 340  | 0.4061          |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
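To approximate this environment, the listed versions can be pinned, e.g. `pip install peft==0.11.1 transformers==4.41.1 datasets==2.19.1 tokenizers==0.19.1`, plus the matching `torch==2.3.0` cu121 build from the PyTorch package index; exact install commands depend on your platform and CUDA setup.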