Adding Evaluation Results

#2
Files changed (1)
  1. README.md +18 -5
README.md CHANGED
@@ -1,5 +1,7 @@
 ---
-base_model: teknium/OpenHermes-2.5-Mistral-7B
+language:
+- en
+license: mit
 tags:
 - mistral
 - instruct
@@ -8,12 +10,10 @@ tags:
 - gpt4
 - synthetic data
 - distillation
+base_model: teknium/OpenHermes-2.5-Mistral-7B
 model-index:
 - name: MistralHermes-CodePro-7B-v1
   results: []
-license: mit
-language:
-- en
 ---
 
 # MistralHermes-CodePro-7B-v1
@@ -38,4 +38,17 @@ You should use [LM Studio](https://lmstudio.ai/) for chatting with the model.
 
 # Quantized Models:
 
-GGUF: [beowolx/MistralHermes-CodePro-7B-v1-GGUF](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1-GGUF)
+GGUF: [beowolx/MistralHermes-CodePro-7B-v1-GGUF](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1-GGUF)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beowolx__MistralHermes-CodePro-7B-v1)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |66.17|
+|AI2 Reasoning Challenge (25-Shot)|62.46|
+|HellaSwag (10-Shot)              |82.68|
+|MMLU (5-Shot)                    |63.44|
+|TruthfulQA (0-shot)              |49.67|
+|Winogrande (5-shot)              |77.90|
+|GSM8k (5-shot)                   |60.88|
+
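The "Avg." row in the added table is the unweighted mean of the six benchmark scores. A quick sanity check in Python (illustrative only, not part of the diff):

```python
# "Avg." on the Open LLM Leaderboard is the plain mean of the six
# benchmark scores added to the README above.
scores = [62.46, 82.68, 63.44, 49.67, 77.90, 60.88]
print(f"{sum(scores) / len(scores):.2f}")  # -> 66.17, matching the table
```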
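The "Detailed results" link points at a standard open-llm-leaderboard details dataset, which can be pulled with the `datasets` library. A minimal sketch; the config name and split follow the usual layout of these repos and are assumptions here, so check the dataset card for the exact names:

```python
from datasets import load_dataset

# Per-sample evaluation records behind the summary table. The config
# ("harness_gsm8k_5") and split ("latest") are assumed from the usual
# layout of open-llm-leaderboard details datasets; verify on the card.
details = load_dataset(
    "open-llm-leaderboard/details_beowolx__MistralHermes-CodePro-7B-v1",
    "harness_gsm8k_5",
    split="latest",
)
print(details)
```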
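The README lists a GGUF repo for the quantized weights; here is a hedged sketch of loading one of those files with llama-cpp-python, as an alternative to the LM Studio route the README recommends. The filename pattern is hypothetical, so substitute a file that actually exists in the repo:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Fetch a quantized weight file from the GGUF repo linked above.
# "*Q4_K_M.gguf" is a hypothetical pattern; list the repo's files and
# pick one that is really there.
llm = Llama.from_pretrained(
    repo_id="beowolx/MistralHermes-CodePro-7B-v1-GGUF",
    filename="*Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Reverse a string in Python."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```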