
lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_lora2

This model is a fine-tuned version of Qwen/Qwen1.5-4B on the tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa dataset. It achieves the following results on the evaluation set:

  • Loss: 3.6298
  • Accuracy: 0.5108
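This checkpoint is a LoRA adapter (PEFT), not a full set of model weights, so it must be attached to the base model at load time. A minimal loading sketch, assuming the `transformers` and `peft` packages and the repository ids shown on this card:

```python
BASE_ID = "Qwen/Qwen1.5-4B"
ADAPTER_ID = "tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_lora2"


def load_adapter():
    """Load the base model, then attach the LoRA adapter weights on top."""
    # Deferred imports: only needed when the model is actually loaded.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_ID)
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return tokenizer, model
```

Downloading the weights requires network access; the function above is a sketch of the standard PEFT loading pattern, not a tested invocation against this repository.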

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 20.0
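The two "total" batch sizes above are derived, not independent, settings: the effective training batch size is the per-device batch size times the number of devices times the gradient accumulation steps, and the effective evaluation batch size is the per-device evaluation batch size times the number of devices. A quick check of the figures listed:

```python
# Per-device settings from the hyperparameter list above.
train_batch_size = 1
eval_batch_size = 2
num_devices = 4
gradient_accumulation_steps = 8

# Effective batch sizes as reported on the card.
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 32
print(total_eval_batch_size)   # 8
```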

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 1.7619        | 0.9998  | 1089  | 2.2994          | 0.5172   |
| 1.648         | 1.9995  | 2178  | 2.2683          | 0.5210   |
| 1.4941        | 2.9993  | 3267  | 2.3185          | 0.5214   |
| 1.3627        | 4.0     | 4357  | 2.4249          | 0.5190   |
| 1.2234        | 4.9998  | 5446  | 2.5963          | 0.5152   |
| 1.1107        | 5.9995  | 6535  | 2.7933          | 0.5130   |
| 0.9891        | 6.9993  | 7624  | 2.9422          | 0.5119   |
| 0.919         | 8.0     | 8714  | 3.1141          | 0.5077   |
| 0.833         | 8.9998  | 9803  | 3.1755          | 0.5084   |
| 0.7635        | 9.9977  | 10890 | 3.3117          | 0.5085   |
| 0.6899        | 10.9998 | 11979 | 3.3147          | 0.5072   |
| 0.6427        | 11.9995 | 13068 | 3.4025          | 0.5101   |
| 0.604         | 12.9993 | 14157 | 3.3905          | 0.5103   |
| 0.5507        | 14.0    | 15247 | 3.4740          | 0.5088   |
| 0.5099        | 14.9998 | 16336 | 3.4772          | 0.5085   |
| 0.478         | 15.9995 | 17425 | 3.5259          | 0.5088   |
| 0.4545        | 16.9993 | 18514 | 3.5391          | 0.5094   |
| 0.427         | 18.0    | 19604 | 3.5887          | 0.5095   |
| 0.4083        | 18.9998 | 20693 | 3.5945          | 0.5097   |
| 0.3818        | 19.9977 | 21780 | 3.6298          | 0.5108   |
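The table shows training loss falling monotonically while validation loss bottoms out around epoch 2 and rises thereafter, a typical overfitting pattern under the constant learning-rate schedule. If one were selecting a checkpoint by validation loss rather than taking the final epoch, the choice falls on step 2178. A sketch over a few rows transcribed from the table (a subset; the omitted rows all have higher validation loss):

```python
# (epoch, step, validation_loss, accuracy) -- selected rows from the table above.
rows = [
    (0.9998, 1089, 2.2994, 0.5172),
    (1.9995, 2178, 2.2683, 0.5210),
    (2.9993, 3267, 2.3185, 0.5214),
    (9.9977, 10890, 3.3117, 0.5085),
    (19.9977, 21780, 3.6298, 0.5108),
]

# Pick the checkpoint with the lowest validation loss.
best = min(rows, key=lambda r: r[2])
print(best)  # (1.9995, 2178, 2.2683, 0.521) -- epoch 2 minimizes validation loss
```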

Framework versions

  • PEFT 0.5.0
  • Transformers 4.40.2
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1