---
license: mit
---

# 0428

This model is a fine-tuned version of `../../models/Qwen1.5-7B-sft-0425` on the `alpaca_formatted_review_new_data_greater_7` dataset. It achieves the following results on the evaluation set:
- Loss: 1.0733
## Model description

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:
- 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated;
- Significant performance improvement in Chat models;
- Multilingual support of both base and chat models;
- Stable support of 32K context length for models of all sizes;
- No need of `trust_remote_code`.
For more details, please refer to the blog post and GitHub repo.
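Since the framework versions below list PEFT, the released weights are presumably a LoRA-style adapter rather than full model weights. The following is a minimal inference sketch under that assumption; the public `Qwen/Qwen1.5-7B` checkpoint stands in for the card's local base path, and the adapter id `your-org/0428` is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Stand-in for the local SFT base named in this card
# (../../models/Qwen1.5-7B-sft-0425).
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

# Attach the fine-tuned adapter on top of the base weights.
# "your-org/0428" is a placeholder repo id, not the actual release path.
model = PeftModel.from_pretrained(base_model, "your-org/0428")

messages = [{"role": "user", "content": "Summarize this review: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```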
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 5.0
- mixed_precision_training: Native AMP
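As a rough guide to reproducing this setup, the settings above map onto `transformers.TrainingArguments` as sketched below. This is an illustration, not the author's exact training script; the output directory is a placeholder, and the total batch size of 8 arises from 2 per device × 2 GPUs × 2 accumulation steps:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen1.5-7b-0428",   # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=2,  # x 2 GPUs x 2 accumulation = 8 total
    per_device_eval_batch_size=1,   # x 2 GPUs = 2 total
    gradient_accumulation_steps=2,
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    seed=42,
    fp16=True,                      # native AMP mixed precision
)
```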
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8554        | 0.25  | 10   | 1.1541          |
| 0.6139        | 0.5   | 20   | 1.1258          |
| 0.629         | 0.75  | 30   | 1.1057          |
| 0.7943        | 1.0   | 40   | 1.0993          |
| 0.6658        | 1.25  | 50   | 1.0964          |
| 0.778         | 1.5   | 60   | 1.0892          |
| 0.593         | 1.75  | 70   | 1.0868          |
| 0.8847        | 2.0   | 80   | 1.0816          |
| 0.5067        | 2.25  | 90   | 1.0806          |
| 0.9706        | 2.5   | 100  | 1.0789          |
| 0.7302        | 2.75  | 110  | 1.0763          |
| 0.6855        | 3.0   | 120  | 1.0768          |
| 0.4358        | 3.25  | 130  | 1.0754          |
| 0.5777        | 3.5   | 140  | 1.0740          |
| 0.5687        | 3.75  | 150  | 1.0732          |
| 0.6462        | 4.0   | 160  | 1.0732          |
| 0.5465        | 4.25  | 170  | 1.0733          |
| 0.7926        | 4.5   | 180  | 1.0737          |
| 0.4968        | 4.75  | 190  | 1.0735          |
| 0.6406        | 5.0   | 200  | 1.0733          |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1