README.md CHANGED
@@ -131,9 +131,20 @@ This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve th
 
 All GGUF models are available here: [MaziyarPanahi/calme-2.2-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.2-qwen2-7b-GGUF)
 
-# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-qwen2-7b)
+
+| Metric             |Value|
+|--------------------|----:|
+| Avg.               |23.23|
+| IFEval (0-Shot)    |35.97|
+| BBH (3-Shot)       |33.11|
+| MATH Lvl 5 (4-Shot)|19.34|
+| GPQA (0-shot)      | 5.48|
+| MuSR (0-shot)      |13.28|
+| MMLU-PRO (5-shot)  |32.21|
+
 
-coming soon!
 
 
 # Prompt Template
@@ -174,16 +185,3 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 
 tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-7b")
 model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-qwen2-7b")
 ```
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__calme-2.2-qwen2-7b)
-
-| Metric             |Value|
-|--------------------|----:|
-| Avg.               |23.23|
-| IFEval (0-Shot)    |35.97|
-| BBH (3-Shot)       |33.11|
-| MATH Lvl 5 (4-Shot)|19.34|
-| GPQA (0-shot)      | 5.48|
-| MuSR (0-shot)      |13.28|
-| MMLU-PRO (5-shot)  |32.21|
-
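The README's usage snippet stops after loading the tokenizer and model. The card's "Prompt Template" section is not shown in this diff, but Qwen2-family models generally use the ChatML format; as a minimal sketch under that assumption, a prompt could be assembled by hand (`build_chatml_prompt` is a hypothetical helper, not part of the model card — in practice, `tokenizer.apply_chat_template` should be preferred):

```python
# Sketch only: assumes the model uses the ChatML prompt format,
# as Qwen2-family models typically do. Prefer
# tokenizer.apply_chat_template(...) with the real tokenizer.
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts as a ChatML string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Trailing assistant header cues the model to respond as the assistant.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The resulting string can then be tokenized and passed to `model.generate`.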