---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: OpenCM-14
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.28
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.89
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.01
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.07
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.29
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.93
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
---

Finetune of **cookinai/CM-14** with the **teknium/openhermes** dataset.

This is my first finetune; it may have some bugs or overfit, and I may reupload it.

The previous model had stop-token errors that caused issues with the final token in the ChatML preset. This finetuning run should fix those prompt-template errors; please let me know if you still run into any (a minimal ChatML usage sketch is included at the bottom of this card). This error is reportedly common among heavily merged "macaroni" models, so I might stray away from them in the future or dilute them with other models.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cookinai__OpenCM-14)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |72.75|
|AI2 Reasoning Challenge (25-Shot)|69.28|
|HellaSwag (10-Shot)              |86.89|
|MMLU (5-Shot)                    |65.01|
|TruthfulQA (0-shot)              |61.07|
|Winogrande (5-shot)              |81.29|
|GSM8k (5-shot)                   |72.93|
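
# Usage (ChatML)

To sanity-check the stop-token fix, here is a minimal sketch using 🤗 Transformers. It assumes the repo's tokenizer ships a ChatML `chat_template` (the hand-built prompt shown in the comment is the fallback if it doesn't); the example messages and generation settings are illustrative, not prescriptive.

```python
# Minimal sketch: load the model and prompt it in the ChatML format,
# then check that generation stops cleanly at <|im_end|>.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cookinai/OpenCM-14"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Name three uses for a paperclip."},
]
# Assumes the tokenizer defines a ChatML chat template. Equivalent
# hand-built prompt:
#   <|im_start|>system\nYou are a helpful assistant.<|im_end|>\n
#   <|im_start|>user\nName three uses for a paperclip.<|im_end|>\n
#   <|im_start|>assistant\n
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```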