# qwen2.5-0.5b-expo-DPO-L2EXPO-noES-0.1
This model is a fine-tuned version of hZzy/qwen2.5-0.5b-sft-news-IFT on the hZzy/train_pairwise_weighted dataset. It achieves the following results on the evaluation set:
- Loss: 1.3212
- Logps: -78.9331
- Logits: -0.6046
- Objective: 1.2922
- Dpo Loss: 0.7096
- Regularize: 0.5826
- Ranking Simple: 0.5357
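
To try the checkpoint, the snippet below is a minimal sketch, assuming the model loads as a standard Transformers causal LM; the prompt and generation settings are illustrative and not part of the training setup.

```python
# Minimal usage sketch (assumes standard causal-LM loading; prompt is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-0.5b-expo-DPO-L2EXPO-noES-0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer(
    "Write a one-sentence news summary about renewable energy.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```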
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 144
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
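
The total train batch size of 144 is the product of the per-device batch size, the number of devices, and the gradient accumulation steps; a quick sanity check:

```python
# Effective batch size: per-device batch x num GPUs x gradient accumulation steps.
per_device_train_batch_size = 4
num_devices = 3
gradient_accumulation_steps = 12

total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
assert total_train_batch_size == 144  # matches the value reported above
```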
### Training results
| Training Loss | Epoch | Step | Validation Loss | Logps | Logits | Objective | Dpo Loss | Regularize | Ranking Simple |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|:---------:|:--------:|:----------:|:--------------:|
| 1.0668 | 0.1417 | 50 | 1.1061 | -90.1942 | -1.4829 | 1.1093 | 0.6857 | 0.4236 | 0.5300 |
| 1.0269 | 0.2834 | 100 | 1.1677 | -92.1258 | -1.4409 | 1.1407 | 0.6807 | 0.4600 | 0.5326 |
| 1.0823 | 0.4251 | 150 | 1.2316 | -81.0954 | -1.3424 | 1.2139 | 0.6936 | 0.5204 | 0.5285 |
| 1.0577 | 0.5668 | 200 | 1.2470 | -82.5828 | -1.0627 | 1.2263 | 0.6939 | 0.5324 | 0.5331 |
| 0.9955 | 0.7085 | 250 | 1.2858 | -81.0205 | -1.0947 | 1.2580 | 0.7011 | 0.5569 | 0.5347 |
| 0.954 | 0.8503 | 300 | 1.2727 | -82.4789 | -0.9186 | 1.2483 | 0.6948 | 0.5535 | 0.5409 |
| 0.9014 | 0.9920 | 350 | 1.2973 | -80.3169 | -0.8042 | 1.2672 | 0.7021 | 0.5651 | 0.5362 |
| 0.8458 | 1.1337 | 400 | 1.3161 | -78.6994 | -0.5803 | 1.2922 | 0.7089 | 0.5833 | 0.5383 |
| 0.8325 | 1.2754 | 450 | 1.3054 | -79.2087 | -0.6878 | 1.2837 | 0.7065 | 0.5772 | 0.5378 |
| 0.796 | 1.4171 | 500 | 1.3290 | -79.5455 | -0.6465 | 1.3067 | 0.7132 | 0.5934 | 0.5383 |
| 0.7784 | 1.5588 | 550 | 1.3215 | -78.1244 | -0.6049 | 1.2954 | 0.7083 | 0.5871 | 0.5414 |
| 0.753 | 1.7005 | 600 | 1.3166 | -78.2126 | -0.5817 | 1.2870 | 0.7062 | 0.5808 | 0.5373 |
| 0.738 | 1.8422 | 650 | 1.3141 | -78.5070 | -0.6067 | 1.2850 | 0.7055 | 0.5794 | 0.5378 |
| 0.7128 | 1.9839 | 700 | 1.3177 | -78.7581 | -0.6380 | 1.2901 | 0.7085 | 0.5816 | 0.5404 |
| 0.6518 | 2.1256 | 750 | 1.3227 | -79.7230 | -0.6694 | 1.2915 | 0.7083 | 0.5832 | 0.5393 |
| 0.6576 | 2.2674 | 800 | 1.3182 | -79.6166 | -0.6259 | 1.2881 | 0.7079 | 0.5801 | 0.5367 |
| 0.6373 | 2.4091 | 850 | 1.3180 | -79.0937 | -0.5955 | 1.2881 | 0.7083 | 0.5798 | 0.5367 |
| 0.6338 | 2.5508 | 900 | 1.3195 | -78.9960 | -0.6047 | 1.2905 | 0.7091 | 0.5814 | 0.5347 |
| 0.631 | 2.6925 | 950 | 1.3215 | -78.8580 | -0.6057 | 1.2924 | 0.7096 | 0.5828 | 0.5362 |
| 0.6254 | 2.8342 | 1000 | 1.3215 | -78.9196 | -0.6057 | 1.2924 | 0.7096 | 0.5828 | 0.5373 |
| 0.6187 | 2.9759 | 1050 | 1.3212 | -78.9331 | -0.6046 | 1.2922 | 0.7096 | 0.5826 | 0.5357 |
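
In the rows above, the reported Objective consistently equals Dpo Loss plus Regularize, which suggests the training objective is the DPO loss with an additive regularization term; a spot-check against the final evaluation row:

```python
# Spot-check: Objective == Dpo Loss + Regularize (final eval row above).
dpo_loss = 0.7096
regularize = 0.5826
objective = 1.2922
assert abs(dpo_loss + regularize - objective) < 1e-4
```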
### Framework versions
- Transformers 4.42.0
- Pytorch 2.3.0+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1