Adamo's full models (collection)
Full fp16/bf16 versions of my models, with the adapter files merged in: an easy way to download a ready-to-use model without manual merging.
LargeWorldModel 7B (1,000,000-token context) fine-tuned on the AEZAKMI v3.1 dataset for epochs at a max_seq_len of 4000, using QLoRA with lora_r 32 and a cosine learning rate decaying from 0.00015. I will upload exl2 quants and the base model in safetensors format soon.
Fine-tuned with unsloth and FlashAttention 2 on a local RTX 3090 Ti; training took around 6 hours. I think most of the long-context capabilities remain.
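For reference, the cosine learning-rate decay from 0.00015 mentioned above can be sketched as a small schedule function. This is a minimal illustration, not the exact trainer code; the function name and the min_lr floor of 0.0 are assumptions:

```python
import math

def cosine_lr(step, total_steps, peak_lr=0.00015, min_lr=0.0):
    # Cosine schedule: starts at peak_lr (step 0) and decays
    # smoothly to min_lr by the final step.
    progress = step / total_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# The LR starts at the peak, hits half the peak midway, and ends at min_lr.
print(cosine_lr(0, 1000))    # 0.00015
print(cosine_lr(500, 1000))  # 7.5e-05
print(cosine_lr(1000, 1000)) # ~0.0
```

Trainers such as the Hugging Face `Trainer` apply an equivalent schedule when `lr_scheduler_type="cosine"` is set, usually after a short warmup.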