---
license: llama2
language:
- en
---

The sister model of [Stheno-L2-13B](https://huggingface.co/Sao10K/Stheno-L2-13B).

Stheno Inverted: a gradient merge of Stheno-P2 & Stheno-P1, with the models in inverted positions.

Quants courtesy of TheBloke!
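The gradient merge idea above can be sketched as a per-layer linear blend: early layers come mostly from one model, late layers mostly from the other, and "inverted" simply swaps which model sits at which end. This is a minimal illustrative sketch with toy scalar "layers" — the function name and signature are assumptions for illustration, not the actual BlockMerge_Gradient API, which supports arbitrary gradient schedules.

```python
def gradient_merge(layers_a, layers_b, invert=False):
    """Blend two equal-length stacks of layer weights with a
    linearly changing ratio: layer 0 is all A, the last layer is
    all B. invert=True swaps the two models' positions.
    Illustrative only - not the real BlockMerge_Gradient logic.
    """
    if invert:
        layers_a, layers_b = layers_b, layers_a
    n = len(layers_a)
    merged = []
    for i, (a, b) in enumerate(zip(layers_a, layers_b)):
        t = i / (n - 1) if n > 1 else 0.0  # blend ratio for this layer
        merged.append((1.0 - t) * a + t * b)
    return merged

# Toy example: model A's "weights" are all 0.0, model B's are all 1.0.
print(gradient_merge([0.0] * 5, [1.0] * 5))   # ramps from 0.0 up to 1.0
print(gradient_merge([0.0] * 5, [1.0] * 5, invert=True))  # ramps down
```

In a real merge each "layer" would be a tensor of transformer weights rather than a scalar, but the blending arithmetic is the same.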
[GPTQ](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GPTQ)
[GGUF](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF)
[GGML](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGML)

Test Checklist:
Censorship - Fairly Uncensored
Writing - Good Prose, Fairly Descriptive
NSFW - Yes
IQ Level - Pretty Smart
Formatting - Proper Formatting with Examples

*Noticeable difference with Stheno-L2. From personal tests: a bit more verbose, a little less smart, and a little more forward with NSFW compared to regular Stheno.*

Stheno-P1 [Ties-Merge]
-----[elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
-----[NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)+[nRuaif/Kimiko-v2 **LORA**](https://huggingface.co/nRuaif/Kimiko-v2-13B)

Stheno-P2 [Ties-Merge]
-----[CalderaAI/13B-Legerdemain-L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)+[lemonilia/limarp-llama2-v2 **LORA**](https://huggingface.co/lemonilia/limarp-llama2-v2)
-----[ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
-----[Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon)

Most formats could work, but my tests have all been done in the Alpaca format, and it works well.

```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write {{char}}'s next reply in a chat between {{user}} and {{char}}. Write a single reply only.

### Response:
```

Below is the illustration for the final merge:

![ILLUSTRATION](https://cdn-uploads.huggingface.co/production/uploads/64be6a5376a6e2efccc638c1/4JaMhVMiLCFkeeYDPtU1D.png)

Once again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [ties-merge](https://github.com/cg123/ties-merge) script, and to [Gryphe](https://huggingface.co/Gryphe) for their great [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script.

Thank you to the original model creators too!

```
Art by wada_kazu / わだかず (pixiv page private?)
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Stheno-Inverted-L2-13B)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 49.57 |
| ARC (25-shot)         | 59.3  |
| HellaSwag (10-shot)   | 82.9  |
| MMLU (5-shot)         | 56.45 |
| TruthfulQA (0-shot)   | 52.04 |
| Winogrande (5-shot)   | 74.74 |
| GSM8K (5-shot)        | 13.19 |
| DROP (3-shot)         | 8.33  |
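The recommended Alpaca format can be built with a small helper. This is a sketch only — the `alpaca_prompt` function is hypothetical (not part of any library), and the `{{char}}`/`{{user}}` placeholders follow the common frontend convention rather than anything this card mandates.

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the Alpaca prompt format:
    an '### Instruction:' block followed by an open '### Response:' block
    for the model to complete. Hypothetical helper for illustration.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Roleplay-style instruction; {{char}}/{{user}} are assumed frontend
# placeholders that would be substituted before sending to the model.
prompt = alpaca_prompt(
    "Write {{char}}'s next reply in a chat between {{user}} and {{char}}. "
    "Write a single reply only."
)
print(prompt)
```

The generated text would then be whatever the model produces after the trailing `### Response:` marker.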