
Trained with LLaMA-Factory: https://github.com/hiyouga/LLaMA-Factory

Colab: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing

CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train True \
    --model_name_or_path Qwen/Qwen1.5-0.5B-Chat \
    --finetuning_type lora \
    --template qwen \
    --flash_attn auto \
    --dataset_dir data \
    --dataset identity,alpaca_gpt4_zh \
    --cutoff_len 1024 \
    --learning_rate 0.0002 \
    --num_train_epochs 5.0 \
    --max_samples 1000 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --max_grad_norm 1.0 \
    --logging_steps 5 \
    --save_steps 100 \
    --warmup_steps 0 \
    --optim adamw_torch \
    --report_to none \
    --output_dir saves/Qwen1.5-0.5B-Chat/lora/train_2024-04-29-14-46-34 \
    --fp16 True \
    --lora_rank 8 \
    --lora_alpha 16 \
    --lora_dropout 0.1 \
    --use_dora True \
    --lora_target all \
    --plot_loss True
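
The command above produces a LoRA/DoRA adapter rather than a full model. Below is a minimal sketch (not part of the original training setup) of how such an adapter could be loaded for inference with `transformers` and `peft`. It assumes the adapter is available at the `--output_dir` path shown above (or wherever this repo's files are placed locally) and that the installed `peft` version supports DoRA, since `--use_dora True` was set.

```python
# Minimal sketch: attach the trained LoRA/DoRA adapter to the base Qwen1.5-0.5B-Chat model.
# `adapter_dir` is assumed to be the --output_dir from the training command; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-0.5B-Chat"
adapter_dir = "saves/Qwen1.5-0.5B-Chat/lora/train_2024-04-29-14-46-34"  # assumed local path

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_dir)  # loads the adapter weights on top

# Build a chat prompt using the tokenizer's built-in chat template (matches --template qwen).
messages = [{"role": "user", "content": "Please introduce yourself."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

To export a standalone checkpoint instead of keeping the adapter separate, `model.merge_and_unload()` can be called after loading, assuming the installed `peft` version supports merging DoRA weights.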

Model size: 464M params · Tensor type: BF16 (Safetensors)