Update README.md
README.md CHANGED
@@ -114,6 +114,7 @@ mera-mix-4x7B achieves 76.37 on the openLLM eval v/s 72.7 by Mixtral-8x7B (as sh
 
 You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces/meraGPT/mera-mixture-chat).
 
+<!--
 ## OpenLLM Eval
 
 | Model | ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|Average|
@@ -121,6 +122,7 @@ You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces
 |[mera-mix-4x7B](https://huggingface.co/meraGPT/mera-mix-4x7B)|72.01| 88.82|63.67| 77.45| 84.61|71.65| 76.37|
 
 Raw eval results are available at this [gist](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820)
+-->
 
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meraGPT__mera-mix-4x7B)
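
For the "try the model" pointer kept in the diff above: besides the hosted chat Space, the checkpoint can presumably be loaded locally with the Hugging Face `transformers` library. The snippet below is a minimal sketch, assuming a standard `AutoModelForCausalLM` checkpoint; the dtype, device settings, and prompt are illustrative and not taken from the README.

```python
# Minimal sketch (assumption: standard causal-LM checkpoint on the Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; use what your hardware supports
    device_map="auto",           # requires the accelerate package
)

prompt = "Explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```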
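The commented-out table and the leaderboard link report the same OpenLLM eval suite. As a rough guide to reproducing a single number, here is a hedged sketch using EleutherAI's lm-evaluation-harness Python API; the task name and 25-shot setting follow the Open LLM Leaderboard v1 conventions for ARC, and the exact API surface and scores depend on the installed harness version and hardware.

```python
# Hedged sketch: score mera-mix-4x7B on ARC-Challenge (25-shot, per the
# Open LLM Leaderboard v1 setup) with lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meraGPT/mera-mix-4x7B,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,  # illustrative; tune to available memory
)
print(results["results"]["arc_challenge"])  # accuracy metrics for the task
```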