wyeh/alpaca13B-lora-gpt4-dolly15k

This repo contains a low-rank adapter (LoRA) for LLaMA-13B, fine-tuned on Stanford Alpaca-style instruction data (the GPT-4-generated Alpaca data combined with Dolly 15k, per the data file used in the training command below).

This version of the weights was trained with the following hyperparameters:

  • Epochs: 10 (load from best epoch)
  • Batch size: 128
  • Cutoff length: 1024
  • Learning rate: 2e-5
  • Lora r: 16
  • Lora target modules: q_proj, k_proj, v_proj, o_proj
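For reference, a minimal sketch (not part of this repo) of the equivalent peft LoraConfig, filled in from the values listed above and from the training command below; the adapter already ships with its own adapter_config.json, so this is purely illustrative. Note that the list above says r = 16 while the command below passes --lora_r 4, so treat that value as uncertain.

```python
# Illustrative LoRA configuration, assuming the hyperparameters stated in this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                      # "Lora r" from the list above (the command below uses --lora_r 4)
    lora_alpha=16,             # --lora_alpha 16 in the training command
    lora_dropout=0.05,         # --lora_dropout 0.05 in the training command
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```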

Training was run on 8× RTX 3090 GPUs and took around 10 hours:

WORLD_SIZE=8 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 nohup torchrun --nproc_per_node=8 --master_port=1234 finetune.py \
    --base_model 'decapoda-research/llama-13b-hf' \
    --data_path './alpaca_data_gpt4_dolly15k.json' \
    --output_dir './lora-alpaca-13B-gpt4-dolly15k' \
    --batch_size 128 \
    --micro_batch_size 4 \
    --num_epochs 10 \
    --learning_rate 2e-5 \
    --cutoff_len 1024 \
    --val_set_size 2000 \
    --lora_r 4 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --train_on_inputs \
    --group_by_length \
    &
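If the script follows the standard alpaca-lora finetune.py logic, the global batch size of 128 with --micro_batch_size 4 across 8 DDP processes implies 128 / 4 = 32 gradient-accumulation steps, divided by the world size of 8 to give 4 accumulation steps per GPU, which preserves the effective batch size of 128.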

Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
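As a quick reference, here is a minimal inference sketch using transformers and peft, assuming the base weights are available as 'decapoda-research/llama-13b-hf' and a GPU with enough memory for LLaMA-13B in fp16; the prompt follows the standard Alpaca template used by alpaca-lora.

```python
# Minimal loading/inference sketch (assumes transformers, peft, and accelerate are installed).
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "decapoda-research/llama-13b-hf"
ADAPTER = "wyeh/alpaca13B-lora-gpt4-dolly15k"

tokenizer = LlamaTokenizer.from_pretrained(BASE)
model = LlamaForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER, torch_dtype=torch.float16)
model.eval()

# Standard Alpaca prompt template (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a low-rank adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")  # assumes a CUDA device
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```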
