Training hyperparameters:

--batch_size 64 --micro_batch_size 8 --num_epochs 5 --learning_rate 9e-3 --cutoff_len 2048 --val_set_size 0.05 --train_on_inputs 0
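Judging by the flag names, these match the alpaca-lora-style training scripts: the effective batch size of 64 is reached by accumulating gradients over micro-batches of 8, `--cutoff_len 2048` caps the tokenized sequence length, `--val_set_size 0.05` holds out 5% of the data for validation, and `--train_on_inputs 0` masks the prompt tokens out of the loss. Below is a minimal sketch of how the core flags might map onto Hugging Face `TrainingArguments`; the mapping and the output directory are assumptions, since the card does not state which training script was used:

```python
# Hypothetical mapping of the flags above onto TrainingArguments;
# the model card does not state which training script was used.
from transformers import TrainingArguments

batch_size = 64        # --batch_size: effective (global) batch size
micro_batch_size = 8   # --micro_batch_size: per-device batch size

args = TrainingArguments(
    output_dir="./llama-adapter-7b",  # assumed output path
    per_device_train_batch_size=micro_batch_size,
    # Accumulate 64 / 8 = 8 micro-batches per optimizer step to reach
    # the effective batch size of 64.
    gradient_accumulation_steps=batch_size // micro_batch_size,
    num_train_epochs=5,               # --num_epochs
    learning_rate=9e-3,               # --learning_rate
)
# --cutoff_len, --val_set_size, and --train_on_inputs are handled at the
# data/tokenization level rather than by TrainingArguments.
```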

Training dataset: https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_gpt4.json
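`alpaca_data_gpt4.json` follows the standard Alpaca schema: a JSON list of records with `instruction`, `input`, and `output` fields. A minimal sketch of loading it and rendering one record with the usual Alpaca prompt template (the template is the one used in the linked alpaca-lora repo; the local file path is an assumption):

```python
# Load the dataset and render one record with the Alpaca prompt template.
import json

with open("alpaca_data_gpt4.json") as f:  # assumed local download path
    records = json.load(f)

ex = records[0]
if ex["input"]:
    # Records with a non-empty "input" field use the longer template.
    prompt = (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{ex['instruction']}\n\n"
        f"### Input:\n{ex['input']}\n\n"
        "### Response:\n"
    )
else:
    prompt = (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{ex['instruction']}\n\n"
        "### Response:\n"
    )
print(prompt + ex["output"])
```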
