---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-5_lora2
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa
      type: tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6657583697234353
library_name: peft
---

# lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-5_lora2

This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8696
- Accuracy: 0.6658

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4742        | 1.0   | 250  | 1.5313          | 0.6748   |
| 1.45          | 2.0   | 500  | 1.5196          | 0.6757   |
| 1.4269        | 3.0   | 750  | 1.5134          | 0.6761   |
| 1.3999        | 4.0   | 1000 | 1.5120          | 0.6762   |
| 1.3614        | 5.0   | 1250 | 1.5192          | 0.6760   |
| 1.3303        | 6.0   | 1500 | 1.5266          | 0.6755   |
| 1.2946        | 7.0   | 1750 | 1.5446          | 0.6747   |
| 1.2518        | 8.0   | 2000 | 1.5590          | 0.6745   |
| 1.2082        | 9.0   | 2250 | 1.5717          | 0.6740   |
| 1.19          | 10.0  | 2500 | 1.6022          | 0.6727   |
| 1.1523        | 11.0  | 2750 | 1.6098          | 0.6726   |
| 1.1193        | 12.0  | 3000 | 1.6345          | 0.6716   |
| 1.0736        | 13.0  | 3250 | 1.6748          | 0.6707   |
| 1.0414        | 14.0  | 3500 | 1.6880          | 0.6701   |
| 1.0069        | 15.0  | 3750 | 1.7182          | 0.6694   |
| 0.9654        | 16.0  | 4000 | 1.7522          | 0.6685   |
| 0.9337        | 17.0  | 4250 | 1.7826          | 0.6677   |
| 0.9           | 18.0  | 4500 | 1.8080          | 0.6672   |
| 0.8704        | 19.0  | 4750 | 1.8350          | 0.6663   |
| 0.8407        | 20.0  | 5000 | 1.8696          | 0.6658   |

### Framework versions

- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
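
## How to use

This card does not include a usage snippet, so the sketch below shows one way to load the LoRA adapter on top of the Qwen/Qwen1.5-4B base model with PEFT. The adapter repo id is assumed to follow the model name above (under the `tyzhu` namespace, which is an assumption based on the dataset id), and the question/answer prompt format is purely illustrative; the actual prompt template used for the reciteonly_qa data may differ.

```python
# Minimal inference sketch, NOT the author's confirmed usage.
# adapter_id is an assumption derived from the model name on this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-4B"
adapter_id = "tyzhu/lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-5_lora2"  # assumption

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

# Illustrative HotpotQA-style prompt; the training prompt format is not documented here.
prompt = "Question: Which magazine was started first, Arthur's Magazine or First for Women?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```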
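
## Example training configuration (sketch)

For reference, the hyperparameters listed under Training procedure map roughly onto the `transformers.TrainingArguments` sketch below. This is a reconstruction for illustration only, not the author's actual training script; `output_dir`, the evaluation/save strategies, and anything else not listed on this card are assumptions.

```python
# Reconstruction of the listed hyperparameters; options not stated on the card are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lmind_hotpot_train8000_eval7405_v1_reciteonly_qa_Qwen_Qwen1.5-4B_3e-5_lora2",
    learning_rate=3e-5,
    per_device_train_batch_size=1,   # 1 per device x 4 GPUs x 8 accumulation steps = total batch size 32
    per_device_eval_batch_size=2,    # 2 per device x 4 GPUs = total eval batch size 8
    gradient_accumulation_steps=8,
    num_train_epochs=20.0,
    lr_scheduler_type="constant",
    warmup_ratio=0.05,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: the results table shows one evaluation per epoch
    save_strategy="epoch",           # assumption
)
```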