Adding Evaluation Results
#1
by gagan3012 - opened
README.md CHANGED
@@ -49,4 +49,16 @@ messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in
 prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gagan3012__MetaModel_moe)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 74.42 |
+| ARC (25-shot) | 71.25 |
+| HellaSwag (10-shot) | 88.4 |
+| MMLU (5-shot) | 66.26 |
+| TruthfulQA (0-shot) | 71.86 |
+| Winogrande (5-shot) | 83.35 |
+| GSM8K (5-shot) | 65.43 |
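For reviewers: the Avg. row being added is consistent with an unweighted mean of the six benchmark scores. A minimal sanity-check sketch, with the values copied from the table above (not part of the model-card diff itself):

```python
from decimal import Decimal

# Per-benchmark scores, copied from the table in this PR.
scores = {
    "ARC (25-shot)": Decimal("71.25"),
    "HellaSwag (10-shot)": Decimal("88.4"),
    "MMLU (5-shot)": Decimal("66.26"),
    "TruthfulQA (0-shot)": Decimal("71.86"),
    "Winogrande (5-shot)": Decimal("83.35"),
    "GSM8K (5-shot)": Decimal("65.43"),
}

# The leaderboard "Avg." is the unweighted mean of the six benchmarks.
avg = sum(scores.values()) / len(scores)
print(avg)  # 74.425 -> shown as 74.42 in the table
```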
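Separately, the diff context above only shows the tail of the README's usage snippet. For anyone trying it end to end, here is a self-contained sketch: the model id (`gagan3012/MetaModel_moe`, inferred from the linked results dataset), the `pipeline(...)` setup, and the example prompt are assumptions, while the generation arguments are copied from the README.

```python
import torch
from transformers import pipeline

# Assumed model id, inferred from the results dataset name; the README's
# actual pipeline construction is outside the diff window shown above.
pipe = pipeline(
    "text-generation",
    model="gagan3012/MetaModel_moe",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Placeholder prompt; the original message in the README is truncated in the hunk header.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is."}]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```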