---
license: mit
datasets:
- yahma/alpaca-cleaned
- teknium/GPT4-LLM-Cleaned
- databricks/databricks-dolly-15k
---

This repo contains a low-rank adapter for LLaMA-13B fine-tuned on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

This version of the weights was trained with the following hyperparameters:

- Epochs: 10 (load from best epoch)
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 2e-5
- Lora _r_: 16
- Lora target modules: q_proj, k_proj, v_proj, o_proj

Training was done on 8× RTX 3090 GPUs and took around 10 hours:

```bash
WORLD_SIZE=8 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 nohup torchrun --nproc_per_node=8 --master_port=1234 finetune.py \
    --base_model 'decapoda-research/llama-13b-hf' \
    --data_path './alpaca_data_gpt4_dolly15k.json' \
    --output_dir './lora-alpaca-13B-gpt4-dolly15k' \
    --batch_size 128 \
    --micro_batch_size 4 \
    --num_epochs 10 \
    --learning_rate 2e-5 \
    --cutoff_len 1024 \
    --val_set_size 2000 \
    --lora_r 4 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --train_on_inputs \
    --group_by_length \
    &
```

Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
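As a quick sanity check after training, the adapter can be loaded with the `generate.py` script from that same repository. The sketch below is illustrative rather than part of this card's original instructions: it assumes the base model named above and the adapter directory written by the training run (swap in a published Hugging Face repo id for `--lora_weights` if the adapter has been uploaded).

```bash
# Illustrative only: load the base model plus the LoRA adapter produced by the
# training run above, and start the alpaca-lora Gradio demo for interactive testing.
python generate.py \
    --load_8bit \
    --base_model 'decapoda-research/llama-13b-hf' \
    --lora_weights './lora-alpaca-13B-gpt4-dolly15k'
```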