This is a LoRA merge of https://huggingface.co/152334H/miqu-1-70b-hermes2.5-qlora; it was really tricky to get it to work.
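For anyone reproducing the merge, the arithmetic it performs per target layer is simple: the adapter's low-rank update `B @ A`, scaled by `alpha / r`, is folded into the frozen base weight (this is what `peft`'s `merge_and_unload()` does under the hood). A minimal NumPy sketch with toy dimensions (the real miqu-70b layers are of course far larger, and the names `W`, `A`, `B` here are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; hypothetical stand-ins for a single linear layer's LoRA pair.
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA down-projection
B = rng.standard_normal((d_out, r))     # LoRA up-projection (after training)

scaling = alpha / r
W_merged = W + scaling * (B @ A)        # fold the adapter into the base weight

# Sanity check: a forward pass through the merged weight equals the
# base path plus the scaled adapter path.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + scaling * (B @ (A @ x)))
```

Once every targeted layer is merged this way, the adapter can be discarded and the model saved as a plain dense checkpoint.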

Base model: Miqu 70B (the Mistral AI leak), dequantized by 152334H
Finetune also by 152334H

Outputs seem good, but the prompting is still a bit buggy; I'm not sure whether that's an error on my part.

For me it wouldn't generate text until I enabled FlashAttention 2 in Oobabooga. You need around 130 GB of VRAM: two A100 80GB or H100 cards work, as do six 3090s or 4090s.