StarDust-12b-v2
Quants
- GGUF: mradermacher/StarDust-12b-v2-GGUF
- weighted/imatrix GGUF: mradermacher/StarDust-12b-v2-i1-GGUF
- exl2: lucyknada/Luni_StarDust-12b-v2-exl2
Description | Use Case
- In my opinion, this merge produces more vibrant, less generic, Sonnet-inspired prose; it can be gentle or harsh where asked.
- v2 uses the non-KTO magnum, which tends to show fewer "claude-isms" (phrasings that make the story feel repetitive).
- Note on non-KTO: opinions on the KTO variant are sharply divided. If you prefer it, Luni/StarDust-12b-v1 still uses the KTO version.
- In early testing, users have reported a much better experience in longer roleplays and an ability to add a creative touch to an otherwise stable experience.
Just like with v1:
- This model is intended to be used as a role-playing model.
- Its direct conversational output is weak; it simply isn't made for that.
- To extend on that: the model is designed for roleplay, so direct instructing or general-purpose use is NOT recommended.
Initial Feedback
- Initial feedback suggests the model is a solid "go-to" choice for creative story writing.
- The prose has been described as "amazing", with many users making it their default model.
Prompting
ChatML has proven to be the BEST choice.
Both Mistral and ChatML formats should work, though I had better results with ChatML. ChatML example:
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
Merge Details
Merge Method
This model was merged using the DARE TIES merge method, with Sao10K/MN-12B-Lyra-v3 as the base.
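For intuition, here is a toy sketch of the two ingredients of DARE TIES: DARE's drop-and-rescale of each model's task delta (fine-tune minus base), followed by TIES-style sign election. This is an illustration under my own simplifying assumptions, not mergekit's actual implementation:

```python
import numpy as np

def dare_sparsify(delta, drop_rate, rng):
    """DARE: randomly drop a fraction of the delta parameters and rescale
    the survivors by 1/(1 - drop_rate) so the expected delta is preserved."""
    if drop_rate == 0.0:
        return delta
    mask = rng.random(delta.shape) >= drop_rate
    return delta * mask / (1.0 - drop_rate)

def ties_sign_elect(deltas):
    """TIES sign election: per parameter, keep only contributions whose sign
    agrees with the dominant (magnitude-weighted) sign, then average them."""
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))      # dominant sign per parameter
    agree = np.sign(stacked) == elected         # which models agree with it
    counts = np.maximum(agree.sum(axis=0), 1)   # avoid division by zero
    return (stacked * agree).sum(axis=0) / counts

rng = np.random.default_rng(0)
base = np.zeros(4)                              # stand-in for base weights
deltas = [rng.normal(size=4) for _ in range(3)] # one delta per merged model
sparse = [dare_sparsify(d, drop_rate=0.5, rng=rng) for d in deltas]
merged = base + ties_sign_elect(sparse)
print(merged.shape)  # (4,)
```

In the real merge this runs tensor-by-tensor over the four listed models' weights, with per-model density and weight parameters set in the mergekit config.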
Models Merged
The following models were included in the merge:
- nbeerbower/mistral-nemo-bophades-12B
- anthracite-org/magnum-v2-12b
- Gryphe/Pantheon-RP-1.6-12b-Nemo
- Sao10K/MN-12B-Lyra-v3
Special Thanks
Special thanks to the SillyTilly, and to myself, for helping me find the energy to finish this.
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 24.06 |
| IFEval (0-shot) | 56.29 |
| BBH (3-shot) | 34.95 |
| MATH Lvl 5 (4-shot) | 5.97 |
| GPQA (0-shot) | 5.82 |
| MuSR (0-shot) | 14.26 |
| MMLU-PRO (5-shot) | 27.10 |
Model tree for Luni/StarDust-12b-v2
- Base model: mistralai/Mistral-Nemo-Base-2407
- Finetuned: Gryphe/Pantheon-RP-1.6-12b-Nemo
Evaluation results (Open LLM Leaderboard)
- IFEval (0-shot), strict accuracy: 56.29
- BBH (3-shot), normalized accuracy: 34.95
- MATH Lvl 5 (4-shot), exact match: 5.97
- GPQA (0-shot), acc_norm: 5.82
- MuSR (0-shot), acc_norm: 14.26
- MMLU-PRO (5-shot, test set), accuracy: 27.10