---
language:
  - en
license: apache-2.0
model-index:
  - name: HamSter-0.2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 50.09
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 73.65
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 50.39
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 49.63
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 69.69
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 0
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PotatoOff/HamSter-0.2
          name: Open LLM Leaderboard
---

# HamSter 0.2

πŸ‘‹ An uncensored, roleplay-focused fine-tune of "mistralai/Mistral-7B-v0.2", made with the help of my team ConvexAI.

πŸš€ For optimal performance, I recommend using a detailed character card! Check out Chub.ai for character cards (NSFW included).

🀩 Uses the Llama 2 prompt template with chat instructions.

πŸ”₯ Fine-tuned with a newer dataset for even better results.

πŸ˜„ Next one will be more interesting!
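Since the card says the model uses the Llama 2 prompt template, a single-turn prompt can be assembled as sketched below. This is a minimal illustration of the standard Llama 2 chat format; the `format_llama2_prompt` helper and the example messages are illustrative, not part of the model's own tooling.

```python
def format_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Build a single-turn prompt in the Llama 2 chat format."""
    if system_prompt:
        # The system prompt sits inside <<SYS>> markers at the start of the turn.
        system_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    else:
        system_block = ""
    return f"[INST] {system_block}{user_message} [/INST]"


# Example: wrap a character-card instruction as the system prompt.
prompt = format_llama2_prompt(
    "Introduce yourself.",
    system_prompt="You are roleplaying as the character described in the card.",
)
print(prompt)
```

Many front-ends (SillyTavern, text-generation-webui) apply this template for you when you select the "Llama 2 Chat" preset, so manual formatting is only needed for raw API calls.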

## Roleplay Test

I had good results with these parameters:

- temperature: 0.8
- top_p: 0.75
- min_p: 0
- top_k: 0
- repetition_penalty: 1.05
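The recommended values above can be collected into a settings dictionary like this. Parameter names follow the common llama.cpp / text-generation-webui convention; they are an assumption here, since names and defaults vary by backend, so check your backend's documentation before passing them through.

```python
# Recommended sampling settings from the model card, as a reusable dict.
sampling_params = {
    "temperature": 0.8,        # moderate creativity
    "top_p": 0.75,             # nucleus sampling cutoff
    "min_p": 0.0,              # min-p filtering disabled
    "top_k": 0,                # 0 disables top-k filtering in most backends
    "repetition_penalty": 1.05,  # mild penalty against loops
}

print(sampling_params)
```

These can then be splatted into a generation call, e.g. `llm(prompt, **sampling_params)` with llama-cpp-python, assuming your backend accepts these keyword names.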

## Benchmarks on the Open LLM Leaderboard


More details: HamSter-0.2 Open LLM Benchmarks

## Benchmarks on Ayumi's LLM Role Play & ERP Ranking


More details: Ayumi's LLM Role Play & ERP Ranking, HamSter-0.2 GGUF version (Q6_K)

## Have Fun

πŸ’–

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PotatoOff__HamSter-0.2).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 48.91 |
| AI2 Reasoning Challenge (25-Shot) | 50.09 |
| HellaSwag (10-Shot)               | 73.65 |
| MMLU (5-Shot)                     | 50.39 |
| TruthfulQA (0-shot)               | 49.63 |
| Winogrande (5-shot)               | 69.69 |
| GSM8k (5-shot)                    |  0.00 |