# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset. It achieves the following results on the evaluation set (the sketch after this list shows how the reward and log-odds quantities relate):
- Loss: 0.6367
- Rewards/chosen: -0.0965
- Rewards/rejected: -0.4524
- Rewards/accuracies: 1.0
- Rewards/margins: 0.3560
- Logps/rejected: -2.2621
- Logps/chosen: -0.4823
- Logits/rejected: -1.8871
- Logits/chosen: -1.6815
- Nll Loss: 0.5673
- Log Odds Ratio: -0.0697
- Log Odds Chosen: 2.8182
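The reward and log-odds metric names above match those logged by TRL's `ORPOTrainer`. Assuming an ORPO-style objective (an assumption; the card does not state the training method), the sketch below shows how these quantities are derived from the per-completion log-probabilities. The `beta` value is likewise an assumption, though `beta = 0.2` is consistent with the final evaluation row: `0.2 * (-0.4823) ≈ -0.0965` (Rewards/chosen) and `0.2 * (-2.2621) ≈ -0.4524` (Rewards/rejected).

```python
import torch
import torch.nn.functional as F

def orpo_metrics(chosen_logps: torch.Tensor,
                 rejected_logps: torch.Tensor,
                 beta: float = 0.2):
    """Relate the logged quantities, following TRL's ORPOTrainer
    formulation (assumed, not confirmed by this card).

    chosen_logps / rejected_logps: mean per-token log-probabilities of
    the chosen and rejected completions, so each lies in (-inf, 0).
    """
    # Log odds of the chosen completion over the rejected one:
    # log[(p_c / (1 - p_c)) / (p_r / (1 - p_r))]
    log_odds = (chosen_logps - rejected_logps) - (
        torch.log1p(-torch.exp(chosen_logps))
        - torch.log1p(-torch.exp(rejected_logps))
    )
    log_odds_ratio = F.logsigmoid(log_odds)      # "Log Odds Ratio"
    chosen_rewards = beta * chosen_logps         # "Rewards/chosen"
    rejected_rewards = beta * rejected_logps     # "Rewards/rejected"
    margins = chosen_rewards - rejected_rewards  # "Rewards/margins"
    return log_odds, log_odds_ratio, margins
```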
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
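As a rough illustration only (the actual training script is not published), these settings could be expressed with TRL's `ORPOConfig`; the trainer choice is an assumption inferred from the metric names above, and `output_dir` is hypothetical.

```python
from trl import ORPOConfig

# Hypothetical reconstruction of the configuration listed above;
# ORPOConfig inherits these fields from transformers.TrainingArguments.
config = ORPOConfig(
    output_dir="results",           # hypothetical
    learning_rate=1e-3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=1,
    optim="adamw_torch",            # Adam with betas=(0.9, 0.999), eps=1e-8
)
```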
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 4.5921 | 0.2126 | 11 | 3.5043 | -0.6803 | -0.8039 | 1.0 | 0.1236 | -4.0195 | -3.4015 | -3.2932 | -3.2264 | 3.3934 | -0.4430 | 0.6340 |
| 1.3234 | 0.4251 | 22 | 0.8683 | -0.1360 | -0.2818 | 1.0 | 0.1458 | -1.4090 | -0.6802 | -2.6124 | -2.3854 | 0.7738 | -0.2790 | 1.1626 |
| 0.7333 | 0.6377 | 33 | 0.8013 | -0.1269 | -0.3679 | 1.0 | 0.2410 | -1.8394 | -0.6346 | -2.3638 | -2.1864 | 0.7403 | -0.1581 | 1.8427 |
| 0.5916 | 0.8502 | 44 | 0.6367 | -0.0965 | -0.4524 | 1.0 | 0.3560 | -2.2621 | -0.4823 | -1.8871 | -1.6815 | 0.5673 | -0.0697 | 2.8182 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
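Since this repository is a PEFT adapter on top of `meta-llama/Meta-Llama-3-8B`, loading it for inference could look like the following sketch. The repo ids come from this card; the rest is the standard Transformers + PEFT pattern, and the prompt is just a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "mlrun/Meta-Llama-3-8B-fashion-v7.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the fine-tuned PEFT adapter on top of the base weights.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer(
    "Describe this season's color palette:", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```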