Adding Evaluation Results

#4
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -78,3 +78,17 @@ This model, WizardCoder-Guanaco-15B-V1.0, is simply building on the efforts of t
  A sincere appreciation goes out to the developers and the community involved in the creation and refinement of these models. Their commitment to providing open source tools and datasets has been instrumental in making this project a reality.
 
  Moreover, a special note of thanks to the [Hugging Face](https://huggingface.co/) team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized the access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LoupGarou__WizardCoder-Guanaco-15B-V1.0).
+
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 30.36 |
+ | ARC (25-shot)        | 30.46 |
+ | HellaSwag (10-shot)  | 45.59 |
+ | MMLU (5-shot)        | 26.79 |
+ | TruthfulQA (0-shot)  | 46.39 |
+ | Winogrande (5-shot)  | 53.12 |
+ | GSM8K (5-shot)       | 1.44  |
+ | DROP (3-shot)        | 8.71  |
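
For context, the Avg. row appears to be the unweighted mean of the seven benchmark scores reported above. A minimal sketch of that calculation, using only the values from the table (not an official leaderboard script):

```python
# Assumption: "Avg." is the simple mean of the seven per-benchmark scores above.
scores = {
    "ARC (25-shot)": 30.46,
    "HellaSwag (10-shot)": 45.59,
    "MMLU (5-shot)": 26.79,
    "TruthfulQA (0-shot)": 46.39,
    "Winogrande (5-shot)": 53.12,
    "GSM8K (5-shot)": 1.44,
    "DROP (3-shot)": 8.71,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 30.36, matching the table
```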