4.25bpw h6 exl2 quant of https://huggingface.co/BeaverAI/mistral-doryV2-12b
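As a hedged sketch (not part of the original card), a local copy of the quant can be loaded with the exllamav2 Python API roughly as below; the directory path, sampler values, and token count are placeholders, and constructor details can differ between exllamav2 releases:

```python
# Sketch only: loading a local copy of this exl2 quant with exllamav2.
# The directory path and sampler settings below are assumptions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "/models/mistral-doryV2-12b-4.25bpw-exl2"  # local download of this repo

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # cache is allocated as layers load
model.load_autosplit(cache)                # auto-split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.95

# Prompt uses the card's alpaca-like format (see Prompting below).
prompt = "### Instruction:\nWrite two sentences about rivers.\n\n### Response:\n"
print(generator.generate_simple(prompt, settings, 200))
```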
Dory 12b (v2)
A redone instruct finetune of Mistral Nemo 12b's base model. Not (E)RP-focused; leave that to Drummer.
thanks to twisted again for the compute :3
Prompting
alpaca-like (a small prompt-building sketch follows the template):
### System:
[Optional system prompt]
### Instruction:
[Query]
### Response:
[Response]</s>
### Instruction:
[...]
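To make the template concrete, here is a small illustrative Python helper that assembles a prompt in this format; the function name, the blank-line spacing between blocks, and the example strings are assumptions rather than anything specified by the card:

```python
def build_prompt(instruction: str,
                 system: str | None = None,
                 history: list[tuple[str, str]] | None = None) -> str:
    """Assemble an alpaca-like Dory v2 prompt. Names and spacing are illustrative."""
    parts = []
    if system:
        parts.append(f"### System:\n{system}")
    for query, response in (history or []):
        parts.append(f"### Instruction:\n{query}")
        parts.append(f"### Response:\n{response}</s>")  # </s> closes each finished turn
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")  # generation continues from here
    return "\n\n".join(parts)

print(build_prompt("Summarize Moby-Dick in two sentences.",
                   system="You are a concise assistant."))
```

The card does not say whether blank lines belong between blocks, so adjust the joiner if generations look off.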
Training details
Rank 64 QDoRA, trained on the following data mix (a hedged config sketch follows the list):
- All of kalomaze/Opus_Instruct_3k
- All conversations with a reward model rating above 5 in Magpie-Align/Magpie-Gemma2-Pro-Preview-Filtered
- 50k of Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- All stories above 4.7 rating and published before 2020 in Fizzarolli/FallingThroughTheSkies-592k-Filtered-Filtered
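The card does not include the training code, but for orientation, a rank-64 QDoRA setup (DoRA adapters over a 4-bit-quantized base) can be sketched with peft roughly as below; everything other than the rank is an assumed value, not the author's actual configuration:

```python
# Hedged sketch of a rank-64 QDoRA setup (4-bit base + DoRA adapters via peft).
# Only the rank (64) comes from the card; every other value is an assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Nemo-Base-2407",
    quantization_config=bnb_config,
    device_map="auto",
)

dora_config = LoraConfig(
    r=64,                       # rank 64, as stated in the card
    lora_alpha=64,              # assumed value
    use_dora=True,              # DoRA decomposition on top of the LoRA update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, dora_config)
model.print_trainable_parameters()
```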
Model tree for Natkituwu/mistral-doryV2-12b-4.25bpw-exl2
Base model: mistralai/Mistral-Nemo-Base-2407