---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: OpenCM-14
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.28
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.89
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.01
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 61.07
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.29
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.93
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cookinai/OpenCM-14
      name: Open LLM Leaderboard
---
Fine-tune of **cookinai/CM-14** on the **teknium/openhermes** dataset. This is my first fine-tune, so it may have some bugs or overfitting; I might re-upload it.

The previous model had stop-token errors that broke the final token in the ChatML preset. This fine-tuning run should fix those prompt-template errors; please let me know if you still hit any.

I've heard this error is common among heavily merged "macaroni" models. I might stray away from them in the future, or dilute them with other models.
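For context, ChatML wraps every turn in `<|im_start|>` / `<|im_end|>` markers, and `<|im_end|>` is the stop token the model must emit cleanly at the end of its reply. A minimal sketch of building a prompt in that format (generic ChatML, not anything specific to this checkpoint; the helper name is made up for illustration):

```python
# Generic ChatML prompt builder. The stop/EOS token is <|im_end|>;
# the broken final token mentioned above is exactly this marker.
def build_chatml_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model generates the reply
    # and (if the stop token is trained correctly) ends it with <|im_end|>.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

If generation runs past the end of the reply, check that your frontend treats `<|im_end|>` as a stop sequence.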
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cookinai__OpenCM-14)

| Metric                           |Value|
|----------------------------------|----:|
| Avg.                             |72.75|
| AI2 Reasoning Challenge (25-Shot)|69.28|
| HellaSwag (10-Shot)              |86.89|
| MMLU (5-Shot)                    |65.01|
| TruthfulQA (0-shot)              |61.07|
| Winogrande (5-shot)              |81.29|
| GSM8k (5-shot)                   |72.93|
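The Avg. row is just the unweighted mean of the six benchmark scores:

```python
# Reproduce the Avg. row from the six per-benchmark scores above.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 69.28,
    "HellaSwag (10-Shot)": 86.89,
    "MMLU (5-Shot)": 65.01,
    "TruthfulQA (0-shot)": 61.07,
    "Winogrande (5-shot)": 81.29,
    "GSM8k (5-shot)": 72.93,
}
avg = sum(scores.values()) / len(scores)  # 72.745, shown as 72.75 in the table
```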