---
license: llama2
---

LargeWorldModel 7B (1,000,000-token context) fine-tuned on the AEZAKMI v3.1 dataset for epochs at a max_seq_len of 4000, using QLoRA with lora_r 32 and a cosine learning-rate schedule decaying from 0.00015. I will upload exl2 quants and the base model in safetensors format soon.
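The cosine decay mentioned above can be sketched as a simple schedule function (a minimal illustration; `total_steps`, the zero floor, and the absence of warmup are assumptions, not values from this training run):

```python
import math

def cosine_lr(step, total_steps, peak_lr=1.5e-4, min_lr=0.0):
    """Cosine decay from peak_lr (0.00015, as used here) down to min_lr."""
    progress = step / total_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

# LR starts at the peak, passes half the peak at the midpoint,
# and reaches min_lr at the final step.
for step in (0, 500, 1000):
    print(step, cosine_lr(step, 1000))
```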

Fine-tuned with unsloth and FlashAttention 2 (FA2) on a local RTX 3090 Ti; training took around 6 hours. I think most of the long-context capabilities remain.
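For reference, a hypothetical sketch of the unsloth QLoRA setup described in this card. Only max_seq_length 4000, 4-bit QLoRA, and lora_r 32 come from the text above; the base model id, lora_alpha, dropout, and target_modules are assumptions for illustration (requires a CUDA GPU and the `unsloth` package):

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model (QLoRA) at the training sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LargeWorldModel/LWM-Text-Chat-1M",  # placeholder base model id
    max_seq_length=4000,   # max_seq_len from the card
    load_in_4bit=True,     # QLoRA: 4-bit quantized base weights
)

# Attach LoRA adapters; r=32 matches lora_r from the card.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,       # assumption
    lora_dropout=0,      # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
    use_gradient_checkpointing=True,
)
```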