---
language:
- en
license: apache-2.0
tags:
- safetensors
- mixtral
- not-for-all-audiences
- nsfw
model-index:
- name: InfinityKuno-2x7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.62
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.44
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 63.28
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.34
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=R136a1/InfinityKuno-2x7B
      name: Open LLM Leaderboard
---
## InfinityKuno-2x7B

![InfinityKuno-2x7B](https://cdn.discordapp.com/attachments/843160171676565508/1219033838454313091/00069-4195457282.jpeg?ex=6609d4bb&is=65f75fbb&hm=4ea1892b3bf2b08040fd84b569ad9f6d4497f6d3d9626d427cb72f229b0218fa&)

An experimental model built from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B), merged into a Mixture-of-Experts (MoE) model with 2x7B parameters (a hypothetical merge configuration is sketched at the end of this card).

### Prompt format:
Alpaca, Extended Alpaca, Roleplay-Alpaca. Any Alpaca-based prompt format should work fine; a worked example appears in the usage sketch at the end of this card.

Switch: [FP16](https://huggingface.co/R136a1/InfinityKuno-2x7B) - [GGUF](https://huggingface.co/R136a1/InfinityKuno-2x7B-GGUF)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_R136a1__InfinityKuno-2x7B).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |72.32|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot)              |87.44|
|MMLU (5-Shot)                    |64.49|
|TruthfulQA (0-shot)              |63.28|
|Winogrande (5-shot)              |82.72|
|GSM8k (5-shot)                   |66.34|
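
### Merge configuration (illustrative)

The actual merge recipe for InfinityKuno-2x7B has not been published. For readers who want to reproduce a similar 2x7B MoE merge, a minimal sketch of a `mergekit-moe` configuration is shown below; the choice of `base_model`, `gate_mode`, and all `positive_prompts` are assumptions for illustration only, not the settings used for this model.

```yaml
# Hypothetical mergekit-moe config; every setting here is a guess, not the
# recipe actually used to build InfinityKuno-2x7B.
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
gate_mode: hidden          # route tokens by hidden-state similarity to the prompts below
dtype: float16
experts:
  - source_model: Endevor/InfinityRP-v1-7B
    positive_prompts:
      - "roleplay"
      - "creative writing"
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts:
      - "reasoning"
      - "instruction following"
```

A config like this is typically passed to mergekit's MoE entry point to produce a Mixtral-architecture checkpoint containing the two source models as experts.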
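
### Usage sketch (FP16, transformers)

Since the repository ships Mixtral-architecture safetensors, the FP16 weights should load with the standard `transformers` auto classes. The snippet below is a minimal sketch, not an official example: the instruction text and the sampling settings are placeholders, and the prompt simply follows the plain Alpaca template recommended above.

```python
# Minimal sketch: load the FP16 weights and prompt in Alpaca format.
# Sampling settings are illustrative, not values recommended by the author.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "R136a1/InfinityKuno-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Standard Alpaca-style prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting in character as a grumpy wizard.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```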
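
### Usage sketch (GGUF, llama-cpp-python)

For the GGUF quantizations linked above, one common route is `llama-cpp-python`. The sketch below assumes you have already downloaded a quantized file from the GGUF repo; the filename shown is a placeholder, so substitute the actual file you downloaded.

```python
# Minimal sketch for running a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./InfinityKuno-2x7B.Q4_K_M.gguf",  # hypothetical filename; use the file you downloaded
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

prompt = (
    "### Instruction:\nDescribe the tavern the party just walked into.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=200, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```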