---
language:
- en
license: other
---

![ZephyrusV1](https://huggingface.co/Sao10K/Zephyrus-L1-33B/resolve/main/IMG/zephyr1.png)

Some quants, I guess: https://huggingface.co/Sao10K/Zephyrus-L1-33B-GGUF

Zephyrus v1 - a multi-merge of several Llama 1 30B models using newer merging methods that appeared after the release of Llama 2, with a sprinkling of an experimental LoRA on top.

The goal is to bring writing quality up to the level seen in the top L2 13B models, while keeping the improved logic and spatial awareness of a 30B model. As usual, this is a roleplay-focused model, and I cannot test or verify its effectiveness as an assistant or tool.

Did I succeed? Partially. It's not the best partner for chatting, but I love it for storywriting and as my writing companion. It's fairly smart and spatially aware, and I've noticed no glaring issues so far.

While it may appear censored with zero input prompts, in actual roleplay with a character in SillyTavern there are no issues, even with NSFL topics, as long as minimal context is present. If you run into impersonation or the occasional dumb moment, a simple swipe or two fixes things.

It has its issues at times, yes, but this is my... first successful attempt. I'll try to work on more in the future.

SillyTavern Formats: simple-proxy-for-tavern in ST for the Instruct Prompt; change to Default for the Context Template.
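If you pick up one of the GGUF quants linked above, a minimal sketch of running one with llama-cpp-python might look like this. The quant filename is hypothetical (check the GGUF repo for the actual files), and the 2048-token context cap reflects the Llama 1 limit called out below.

```python
# Minimal sketch: running a GGUF quant of Zephyrus with llama-cpp-python.
# The filename is hypothetical -- see the GGUF repo for the real quant names.
from llama_cpp import Llama

llm = Llama(
    model_path="Zephyrus-L1-33B.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=2048,  # Llama 1 native context; setting this higher will break the model
)

prompt = (
    "### Instruction:\n"
    "Write a short scene set in a rain-soaked city.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=1.22, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```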
Ooba presets I'd recommend, from my testing: Kobold Godlike, NovelAI Best Guess, simple-proxy-for-tavern, or Shortwave with 1.22 Temperature. Test them out; different RPs work best with different presets.

**REMEMBER TO SET CONTEXT AT 2048 OR IT WILL BREAK. THIS IS A LLAMA1 MODEL AFTER ALL.**

Most instruct formats could work, but Alpaca works best of the standard ones; simple-proxy-for-tavern works better still. (A sketch of using this format programmatically follows the leaderboard table below.)

```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write {{char}}'s next reply in a chat between {{char}} and {{user}}. Write a single reply only.
### Response:
```

Support me [here](https://ko-fi.com/sao10k) :)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Zephyrus-L1-33B)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 55.79 |
| ARC (25-shot)         | 64.51 |
| HellaSwag (10-shot)   | 84.15 |
| MMLU (5-shot)         | 57.37 |
| TruthfulQA (0-shot)   | 53.87 |
| Winogrande (5-shot)   | 80.19 |
| GSM8K (5-shot)        | 23.58 |
| DROP (3-shot)         | 26.89 |
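For the full-precision weights, here is a hedged sketch of running the Alpaca-style roleplay format above with Hugging Face transformers. The character names stand in for the `{{char}}`/`{{user}}` placeholders, and the temperature mirrors the 1.22 preset suggestion; this is an illustration, not canonical inference code for this model.

```python
# Sketch: loading Zephyrus with transformers and sampling with the
# Alpaca-style roleplay prompt shown above. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/Zephyrus-L1-33B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# "Alice" and "Bob" are hypothetical stand-ins for {{char}} and {{user}}.
prompt = (
    "### Instruction:\n"
    "Write Alice's next reply in a chat between Alice and Bob. "
    "Write a single reply only.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=1.22,  # from the preset suggestions above
)
# Strip the prompt tokens and print only the generated reply.
reply = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(reply, skip_special_tokens=True))
```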