QuantFactory/Blue-Orchid-2x7b-GGUF

This is a quantized version of nakodanei/Blue-Orchid-2x7b, created using llama.cpp.
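For readers who want to try the files locally, the sketch below loads one of the GGUF quants through llama-cpp-python (the Python bindings for llama.cpp). The filename, context size, and prompt are illustrative assumptions, not part of this card; substitute whichever quant you actually download.

```python
# Minimal sketch: load a local GGUF quant with llama-cpp-python and generate.
# The model_path below is a hypothetical filename -- use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Blue-Orchid-2x7b.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers if built with GPU support; 0 for CPU-only
)

out = llm("Write one sentence about a blue orchid.", max_tokens=64)
print(out["choices"][0]["text"])
```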

Model Description

Roleplaying-focused MoE Mistral model.

One expert is a merge of mostly roleplay (RP) models and the other is a merge of mostly storywriting models, so it should handle both well. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.

  • Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
  • Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.

Prompt template (LimaRP):

### Instruction:
{system prompt}

### Input:
User: {prompt}

### Response:
Character: 

The Alpaca prompt template should work fine too.
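To make the template concrete, here is a small sketch that assembles the LimaRP-style prompt as a plain string. The helper name build_limarp_prompt, the system prompt, and the message are placeholders of my own, not part of the card.

```python
# Sketch: fill in the LimaRP prompt template shown above.
# build_limarp_prompt is a hypothetical helper name, not an official API.
def build_limarp_prompt(system_prompt: str, user_message: str, character: str = "Character") -> str:
    return (
        "### Instruction:\n"
        f"{system_prompt}\n\n"
        "### Input:\n"
        f"User: {user_message}\n\n"
        "### Response:\n"
        f"{character}:"
    )

prompt = build_limarp_prompt(
    "You are a creative roleplay partner.",
    "We meet at a quiet tavern on a rainy night.",
)
print(prompt)
# Pass `prompt` to your llama.cpp / llama-cpp-python call; stopping on
# "### Input:" or "User:" keeps the model from writing the user's turn.
```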

Model details

  • Format: GGUF
  • Model size: 12.9B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit
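If you prefer to fetch a specific quant programmatically, huggingface_hub can download a single file from this repo. The filename below is an assumption; check the repository's file listing for the exact name of the quant level you want.

```python
# Sketch: download one quant file from the repo with huggingface_hub.
# The filename is assumed -- pick the actual name of the quant you want
# from the repository's file listing.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="QuantFactory/Blue-Orchid-2x7b-GGUF",
    filename="Blue-Orchid-2x7b.Q4_K_M.gguf",  # hypothetical filename
)
print(local_path)  # cached path, ready to pass to llama.cpp
```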
