Llama-3.1-8B-Magpie-Mix-RC-UltraDPO-08
This model is a fine-tuned version of Magpie-Align/Llama-3-1-8B-Magpie-Mix-300KMT-150KR-200KC on the flydust/llama3-ultrafeedback-armorm-2 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3828
- Rewards/chosen: -4.2186
- Rewards/rejected: -5.8751
- Rewards/accuracies: 0.8476
- Rewards/margins: 1.6565
- Logps/rejected: -837.7465
- Logps/chosen: -675.1885
- Logits/rejected: -0.4098
- Logits/chosen: -0.3798
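To try the model with `transformers`, here is a minimal inference sketch. The repository id below is inferred from the card title and may differ, and the chat-template call assumes the tokenizer ships a Llama 3.1 chat template; treat both as assumptions rather than guarantees.

```python
# Minimal inference sketch. The repo id is assumed from the card title and may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3.1-8B-Magpie-Mix-RC-UltraDPO-08"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what DPO training does in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```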
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 8e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
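The metric names on this card match those logged by trl's DPOTrainer, so the hyperparameters above can be mapped onto a DPO run roughly as sketched below. The card does not state the training framework, the DPO beta, the precision, or the dataset column/split layout, so those parts of the sketch are assumptions.

```python
# Sketch of how this run could be reproduced with trl's DPOTrainer.
# Optimizer/scheduler settings come from the list above; beta, precision,
# and the dataset column/split layout are assumptions, not from the card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "Magpie-Align/Llama-3-1-8B-Magpie-Mix-300KMT-150KR-200KC"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="bfloat16")

# Assumed to provide prompt/chosen/rejected columns in a train/test split.
dataset = load_dataset("flydust/llama3-ultrafeedback-armorm-2")

args = DPOConfig(
    output_dir="Llama-3.1-8B-Magpie-Mix-RC-UltraDPO-08",
    learning_rate=8e-7,
    per_device_train_batch_size=2,   # x 4 GPUs x 16 accumulation steps = 128 effective
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                       # assumed; mixed precision is not stated on the card
    beta=0.1,                        # assumed; the DPO beta is not stated on the card
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                  # trl builds the frozen reference copy when None
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],    # assumed split name
    tokenizer=tokenizer,             # `processing_class=` in newer trl releases
)
trainer.train()
```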
Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.5697 | 0.2138 | 100 | 0.5157 | -2.7203 | -3.5480 | 0.7317 | 0.8277 | -605.0303 | -525.3530 | -0.3811 | -0.3571 |
| 0.517 | 0.4275 | 200 | 0.4364 | -3.3924 | -4.6257 | 0.8110 | 1.2333 | -712.8077 | -592.5715 | -0.3439 | -0.3156 |
| 0.3774 | 0.6413 | 300 | 0.3968 | -3.9322 | -5.4972 | 0.8455 | 1.5650 | -799.9586 | -646.5525 | -0.4006 | -0.3705 |
| 0.399 | 0.8550 | 400 | 0.3845 | -4.0703 | -5.6702 | 0.8476 | 1.5999 | -817.2530 | -660.3575 | -0.4061 | -0.3760 |
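Assuming these metrics come from trl's DPOTrainer (the column names match its logging convention), the Rewards columns are the implicit DPO rewards: with beta the DPO temperature, pi_ref the frozen reference model, and y_w / y_l the chosen and rejected responses,

$$
r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\text{Rewards/margins} = r_\theta(x, y_w) - r_\theta(x, y_l).
$$

Under this convention, Rewards/accuracies is the fraction of evaluation pairs where the chosen response receives the higher implicit reward, and Logps/chosen and Logps/rejected are the policy's summed log-probabilities of the chosen and rejected responses.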
Framework versions
- Transformers 4.43.2
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1