---
license: mit
---

# llama-adapter-7b

Training hyperparameters:

```
--batch_size 64 --micro_batch_size 8 --num_epochs 5 --learning_rate 9e-3 --cutoff_len 2048 --val_set_size 0.05 --train_on_inputs 0
```
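
These flags match the interface of `finetune.py` in tloen/alpaca-lora, so the full training command likely looked something like the sketch below. This is an assumption, not a command from this card: the script name, base model path, and output directory are placeholders.

```bash
# Hypothetical invocation, assuming a training script with the same flag
# names as tloen/alpaca-lora's finetune.py. base_model, data_path, and
# output_dir are placeholders, not taken from this card.
python finetune.py \
    --base_model 'path/to/llama-7b-hf' \
    --data_path 'alpaca_data_gpt4.json' \
    --output_dir './llama-adapter-7b' \
    --batch_size 64 \
    --micro_batch_size 8 \
    --num_epochs 5 \
    --learning_rate 9e-3 \
    --cutoff_len 2048 \
    --val_set_size 0.05 \
    --train_on_inputs 0
```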

Training dataset: https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_gpt4.json