---
language:
- en
license: apache-2.0
model-index:
- name: HamSter-0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 46.93
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 68.08
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 43.03
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 51.24
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 61.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.1
      name: Open LLM Leaderboard
---
HamSter v0.2

## Meet HamSter-0.1 🐹

👋 An uncensored, roleplay-focused fine-tune of "mistralai/Mistral-7B-v0.2", and the first model of the HamSter series. Made with the help of my team, ConvexAI.

🚀 For optimal performance, I recommend using a detailed character card! Check out Chub.ai for character cards (note: Chub.ai hosts NSFW content).

🤩 Uses the Llama 2 prompt template with chat instructions.
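For reference, a minimal sketch of that prompt layout, assuming the standard Llama 2 chat format with `[INST]` instruction markers and a `<<SYS>>` system block (the function name and example strings are illustrative, not part of this model card):

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the standard Llama 2 chat layout."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are HamSter, a playful roleplay character.",
    "Hello! Who are you?",
)
```

The model's reply is expected to follow the closing `[/INST]` tag; your character card text would typically go in the system block.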

🔥 Produces spicy content.

😄 Check out HamSter 0.2, the latest model of the HamSter series!

Links: HamSter 0.1 Quants · Discord Server
## Roleplay Test

I had good results with these parameters:

- temperature: 0.8
- top_p: 0.75
- min_p: 0
- top_k: 0
- repetition_penalty: 1.05
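To make the effect of these parameters concrete, here is a minimal, self-contained sketch of how they act on a raw logits vector during sampling (a simplified illustration, not the sampler your backend actually uses; with `top_k` and `min_p` set to 0, those filters are disabled):

```python
import math
import random

def sample_next_token(logits, prev_tokens, temperature=0.8, top_p=0.75,
                      min_p=0.0, top_k=0, repetition_penalty=1.05, rng=None):
    """Pick the next token id from raw logits using the parameters above."""
    logits = list(logits)
    # Repetition penalty: discourage tokens that were already generated.
    for t in set(prev_tokens):
        if logits[t] > 0:
            logits[t] /= repetition_penalty
        else:
            logits[t] *= repetition_penalty
    # Temperature scaling, then softmax.
    logits = [x / temperature for x in logits]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    order = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)
    # top_k = 0 disables top-k filtering.
    if top_k > 0:
        order = order[:top_k]
    # min_p = 0 disables the min-p cutoff (a fraction of the best token's prob).
    if min_p > 0:
        cutoff = min_p * probs[order[0]]
        order = [i for i in order if probs[i] >= cutoff]
    # Nucleus (top-p): keep the smallest prefix whose mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise over the kept tokens and sample one of them.
    total = sum(probs[i] for i in kept)
    r = (rng or random.Random(0)).random() * total
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a sharply peaked distribution, top_p = 0.75 collapses the nucleus to the single best token; the mild repetition_penalty of 1.05 only nudges already-seen tokens down rather than banning them.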

## Benchmarks on the Open LLM Leaderboard


More details: HamSter-0.1 Open LLM benchmarks

## Benchmarks on Ayumi's LLM Role Play & ERP Ranking


More details: Ayumi's LLM Role Play & ERP Ranking (HamSter-0.1, GGUF version, Q6_K)

Have Fun

💖

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PotatoOff__HamSter-0.1)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 45.19 |
| AI2 Reasoning Challenge (25-Shot) | 46.93 |
| HellaSwag (10-Shot)               | 68.08 |
| MMLU (5-Shot)                     | 43.03 |
| TruthfulQA (0-shot)               | 51.24 |
| Winogrande (5-shot)               | 61.88 |
| GSM8k (5-shot)                    |  0.00 |
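As a sanity check, the leaderboard average is simply the mean of the six task scores:

```python
# Scores as reported on the Open LLM Leaderboard for HamSter-0.1.
scores = {
    "ARC (25-shot)": 46.93,
    "HellaSwag (10-shot)": 68.08,
    "MMLU (5-shot)": 43.03,
    "TruthfulQA (0-shot)": 51.24,
    "Winogrande (5-shot)": 61.88,
    "GSM8k (5-shot)": 0.00,
}
avg = round(sum(scores.values()) / len(scores), 2)  # 45.19
```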