---
base_model: mistralai/Mistral-Nemo-Base-2407
license: apache-2.0
datasets:
  - BeaverAI/Nemo-Inst-Tune-ds
language:
  - en
library_name: transformers
---

mistral-doryV2-12b - EXL2 8bpw max

This is an 8bpw EXL2 quant of BeaverAI/mistral-doryV2-12b.

This quant was made using exllamav2-0.1.8 with the default calibration dataset. I used a slightly modified quantization script to force the use of the highest-bpw method for every layer in the model (usually "1:8b_128g s4") to ensure maximum quality.
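For illustration only, here is a minimal Python sketch of the idea behind that script change. The per-layer candidate table and `pick_max_bpw` helper are hypothetical stand-ins; the actual exllamav2-0.1.8 conversion internals are organized differently.

```python
# Hypothetical sketch: exllamav2's converter normally picks a per-layer
# strategy that trades quality against the target bpw. Forcing the
# highest-bpw candidate for every layer looks conceptually like this.

# Illustrative stand-in for the per-layer candidates the optimizer sees;
# the real measurement data in exllamav2 has a different structure.
candidates = {
    "model.layers.0.self_attn": [
        {"strategy": "0.1:3b/0.9:4b_32g", "bpw": 4.03},
        {"strategy": "1:8b_128g s4", "bpw": 8.04},
    ],
    "model.layers.0.mlp": [
        {"strategy": "0.25:5b/0.75:6b_32g", "bpw": 5.88},
        {"strategy": "1:8b_128g s4", "bpw": 8.04},
    ],
}

def pick_max_bpw(options):
    """Return the candidate with the highest bits per weight."""
    return max(options, key=lambda o: o["bpw"])

# Instead of optimizing toward a size budget, always keep the best option.
forced = {layer: pick_max_bpw(opts) for layer, opts in candidates.items()}
for layer, choice in forced.items():
    print(f"{layer}: {choice['strategy']} ({choice['bpw']} bpw)")
```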

I also added a small fix in the config file to set the default max context to 128k, as the original Mistral-Nemo should have.
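As a sketch, that change could look like the following, assuming the fix targets the standard Hugging Face `max_position_embeddings` field in `config.json` (128k = 131072 tokens); the exact field the author edited is not stated.

```python
import json

# Raise the default max context in the quant's config.json to 128k
# (131072 tokens), matching the original Mistral-Nemo spec.
# Assumes the standard Hugging Face "max_position_embeddings" field.
path = "config.json"
with open(path) as f:
    cfg = json.load(f)

cfg["max_position_embeddings"] = 131072

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```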

I briefly tested this quant in some random RPs (including ones over 8k context) and it seems to work fine.

Prompt Templates

Uses an Alpaca-like format, as shown below.

Original readme below


Dory 12b (v2)

(redone) instruct finetune of mistral nemo 12b's base. not (E)RP-focused, leave that to drummer.


thanks to twisted again for the compute :3

Prompting

alpaca-like:

```
### System:
[Optional system prompt]

### Instruction:
[Query]

### Response:
[Response]</s>

### Instruction:
[...]
```
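For convenience, here is a minimal Python sketch that assembles this template; `build_prompt` and its arguments are illustrative names, not part of the model.

```python
def build_prompt(instruction, system=None, history=()):
    """Assemble the Alpaca-like template shown above.

    `history` is a sequence of (query, response) pairs from earlier turns.
    """
    parts = []
    if system:
        parts.append(f"### System:\n{system}\n")
    for query, response in history:
        parts.append(f"### Instruction:\n{query}\n")
        parts.append(f"### Response:\n{response}</s>\n")
    parts.append(f"### Instruction:\n{instruction}\n")
    parts.append("### Response:\n")
    return "\n".join(parts)

# Example: single turn with a system prompt.
print(build_prompt("Write a haiku about the sea.",
                   system="You are a helpful assistant."))
```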

Training details

Rank 64 QDoRA, trained on the following data mix: