Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -82,4 +82,17 @@ This model, WizardCoder-Guanaco-15B-V1.1, is simply building on the efforts of t
 
  A sincere appreciation goes out to the developers and the community involved in the creation and refinement of these models. Their commitment to providing open source tools and datasets has been instrumental in making this project a reality.
 
- Moreover, a special note of thanks to the [Hugging Face](https://huggingface.co/) team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
+ Moreover, a special note of thanks to the [Hugging Face](https://huggingface.co/) team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LoupGarou__WizardCoder-Guanaco-15B-V1.1).
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 31.73 |
+ | ARC (25-shot)       | 32.59 |
+ | HellaSwag (10-shot) | 45.42 |
+ | MMLU (5-shot)       | 25.88 |
+ | TruthfulQA (0-shot) | 42.33 |
+ | Winogrande (5-shot) | 56.04 |
+ | GSM8K (5-shot)      | 2.88  |
+ | DROP (3-shot)       | 16.98 |
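
The table above reports the leaderboard's aggregate scores; the linked details dataset holds the per-task, per-example records behind them. Below is a minimal sketch of how one might browse those records with the `datasets` library; the config and split names are assumptions about the leaderboard's usual detail-file layout, not something this PR specifies.

```python
# Minimal sketch: browsing the detail files behind the leaderboard scores above.
# The repository id comes from the "Detailed results" link; the exact config and
# split names are assumptions and may differ from what the leaderboard publishes.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_LoupGarou__WizardCoder-Guanaco-15B-V1.1"

# Each benchmark/shot setting is typically exposed as its own config.
configs = get_dataset_config_names(repo)
print(configs)

# Load one benchmark's per-example records (the "latest" split is an assumption).
details = load_dataset(repo, configs[0], split="latest")
print(details[0])
```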