LoRA merge of https://huggingface.co/152334H/miqu-1-70b-hermes2.5-qlora, uploaded because it was really tricky to get the merge to work.

Base model: Miqu 70B (the leaked Mistral AI model), dequantized by 152334H. The Hermes 2.5 QLoRA finetune is also by 152334H.
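
For reference, a merge like this can roughly be done with peft's `merge_and_unload`. This is only a sketch, not the exact procedure used here; the dequantized base repo name below is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "152334H/miqu-1-70b-sf"                  # assumed name of the dequantized FP16 base
adapter_id = "152334H/miqu-1-70b-hermes2.5-qlora"  # Hermes 2.5 QLoRA adapter from this card

# Load the dequantized base in FP16 and attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Fold the LoRA weights into the base and save the result as safetensors.
merged = model.merge_and_unload()
merged.save_pretrained("miqu-openhermes-full", safe_serialization=True)
AutoTokenizer.from_pretrained(base_id).save_pretrained("miqu-openhermes-full")
```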

Outputs seem good, but prompting is still a bit buggy; I'm not sure if that's an error on my part.
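
If prompting misbehaves, it may be worth trying ChatML, which the Hermes 2.5 lineage is usually prompted with; that template is an assumption for this merge, not something confirmed here:

```python
# Build a ChatML-style prompt by hand (assumed format, inherited from the Hermes 2.5 adapter).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful assistant.", "What is Miqu 70B?")
```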

For me it wouldn't generate text until I enabled Flash Attention 2 in oobabooga's text-generation-webui. You need around 130 GB of VRAM: 2x A100 80 GB or 2x H100 work, as do 6x 3090s or 4090s.
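
Outside of text-generation-webui, a plain transformers load would look roughly like the sketch below. It assumes a recent transformers release, the flash-attn package installed, and enough total VRAM to shard the weights across your GPUs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alicecomfy/miqu-openhermes-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                       # shard the ~69B FP16 params across available GPUs
    attn_implementation="flash_attention_2", # generation only worked for me with FA2 enabled
)

# ChatML-style prompt (assumed format, see above).
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```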

Model size: 69B params · Tensor type: FP16 · Format: Safetensors