plaguss committed
Commit f8ef65e
1 Parent(s): d488715

Add nous benchmark

Files changed (1): README.md (+14 -0)
README.md CHANGED
@@ -18,3 +18,17 @@ tags:
  <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
  </a>
  </p>
+
+
+
+ ## Benchmark results
+ For benchmarking we used the well-known "Nous" (or "Teknium") benchmark. Below is an overview, including our first experiment with a less ambitious dataset filtering (removing ties and `score>5`).
+
+ For running the benchmark we used another awesome contribution from Maxime Labonne: [LLM AutoEval](https://github.com/mlabonne/llm-autoeval), check it out!
+
+ | Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
+ |-------------------------|------:|------:|---------:|-------:|------:|
+ |[argilla/distilabeled-Marcoro14-7B-slerp](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp)| **45.4**| **76.47**| **65.46**| **47.19**| **58.63**|
+ |[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)| 44.66| 76.24| 64.15| 45.64| 57.67|
+ |[argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B)| 44.64| 73.35| 55.96| 42.21| 54.04|
+
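As a sanity check on the table added above: the Average column is simply the arithmetic mean of the four task scores (AGIEval, GPT4All, TruthfulQA, Bigbench). A minimal sketch, with the scores copied from the table:

```python
# Recompute the "Average" column from the four per-task scores.
# Model names and scores are taken verbatim from the benchmark table above.
scores = {
    "argilla/distilabeled-Marcoro14-7B-slerp": (45.4, 76.47, 65.46, 47.19),
    "mlabonne/Marcoro14-7B-slerp": (44.66, 76.24, 64.15, 45.64),
    "argilla/distilabeled-Hermes-2.5-Mistral-7B": (44.64, 73.35, 55.96, 42.21),
}

# Plain mean over the four tasks, rounded to two decimals as in the table.
averages = {model: round(sum(s) / len(s), 2) for model, s in scores.items()}

for model, avg in averages.items():
    print(f"{model}: {avg}")
```

Running this reproduces the reported averages (58.63, 57.67, and 54.04), confirming the table is internally consistent.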