Adding Evaluation Results

#3
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -103,4 +103,17 @@ state of the art, but rather further show that chat-like behaviors in LLMs can b
  | dlite-v2-774m | 0.269625 | 0.52904 | 0.613761 | 0.395937 | 0.256 | 0.691513 | 0.566693 |
  | gpt2-xl | 0.25 | 0.582912 | 0.617737 | 0.400418 | 0.224 | 0.708379 | 0.583268 |
  | dlite-v1-1_5b | 0.268771 | 0.588384 | 0.624159 | 0.401414 | 0.226 | 0.708379 | 0.584846 |
- | dlite-v2-1_5b | 0.289249 | 0.565657 | 0.601223 | 0.434077 | 0.272 | 0.703482 | 0.588003 |
+ | dlite-v2-1_5b | 0.289249 | 0.565657 | 0.601223 | 0.434077 | 0.272 | 0.703482 | 0.588003 |
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aisquared__dlite-v1-774m)
+
+ | Metric | Value |
+ |-----------------------|---------------------------|
+ | Avg. | 27.95 |
+ | ARC (25-shot) | 28.07 |
+ | HellaSwag (10-shot) | 44.35 |
+ | MMLU (5-shot) | 25.91 |
+ | TruthfulQA (0-shot) | 36.11 |
+ | Winogrande (5-shot) | 54.62 |
+ | GSM8K (5-shot) | 0.0 |
+ | DROP (3-shot) | 6.62 |
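
For context on the added table: the "Avg." row is the arithmetic mean of the seven task scores, and the linked details repo holds the per-example results. Below is a minimal sketch, assuming the details dataset is public and loadable with the standard `datasets` API; config and split names are discovered at runtime rather than assumed.

```python
# Minimal sketch: verify the Avg. row and peek at the linked details dataset.
# Assumptions: the `datasets` library is installed and the details repo is
# publicly readable; config/split names are looked up, not hard-coded.
from datasets import get_dataset_config_names, load_dataset

scores = {
    "ARC (25-shot)": 28.07,
    "HellaSwag (10-shot)": 44.35,
    "MMLU (5-shot)": 25.91,
    "TruthfulQA (0-shot)": 36.11,
    "Winogrande (5-shot)": 54.62,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 6.62,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 27.95, matching the table

repo = "open-llm-leaderboard/details_aisquared__dlite-v1-774m"
configs = get_dataset_config_names(repo)  # typically one config per task run
print(configs)

# Load the first listed config; printing the DatasetDict shows its splits.
details = load_dataset(repo, configs[0])
print(details)
```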