
This repo contains my fine-tuned LoRA of LLaMA, trained on the first four volumes of The Eminence in Shadow and KonoSuba to test the model's ability to retain new information. Training used alpaca-lora on a single RTX 3090 for 10 hours with:

- Micro batch size: 2
- Batch size: 64
- Epochs: 35
- Learning rate: 3e-4
- LoRA rank: 256
- LoRA alpha: 512
- LoRA dropout: 0.05
- Cutoff length: 352
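
Since this adapter was produced with alpaca-lora, it can be loaded on top of the base LLaMA weights with the `peft` library. Below is a minimal sketch of how that might look; the base model name, the adapter repo id, and the prompt are placeholders and assumptions, since the card does not specify them.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Assumption: a 7B LLaMA base; the card does not state which base model was used.
base_model = "decapoda-research/llama-7b-hf"
lora_repo = "your-username/this-lora-repo"  # placeholder: replace with this repo's id

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(model, lora_repo)
model.eval()

# Example prompt drawn from the training material (The Eminence in Shadow).
prompt = "Who is Cid Kagenou?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```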