pankajmathur leaderboard-pr-bot committed on
Commit
678bb0b
1 Parent(s): fd2754e

Adding Evaluation Results (#8)


- Adding Evaluation Results (cf57d55cb3ca61b38a151634e7457525d6b24a39)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -200,4 +200,17 @@ If you found wizardlm_alpaca_dolly_orca_open_llama_3b useful in your research or
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
  }
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__orca_mini_3b)
+
+ | Metric                | Value |
+ |-----------------------|-------|
+ | Avg.                  | 35.5  |
+ | ARC (25-shot)         | 41.55 |
+ | HellaSwag (10-shot)   | 61.52 |
+ | MMLU (5-shot)         | 26.79 |
+ | TruthfulQA (0-shot)   | 42.42 |
+ | Winogrande (5-shot)   | 61.8  |
+ | GSM8K (5-shot)        | 0.08  |
+ | DROP (3-shot)         | 14.33 |
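
The "Avg." row added in this diff appears to be the plain arithmetic mean of the seven benchmark scores, rounded to one decimal place; a quick sanity check (the dictionary below just restates the table values, nothing else is assumed):

```python
# Benchmark scores copied from the table added in this commit
scores = {
    "ARC (25-shot)": 41.55,
    "HellaSwag (10-shot)": 61.52,
    "MMLU (5-shot)": 26.79,
    "TruthfulQA (0-shot)": 42.42,
    "Winogrande (5-shot)": 61.8,
    "GSM8K (5-shot)": 0.08,
    "DROP (3-shot)": 14.33,
}

# Unweighted mean over the seven benchmarks
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 35.5, matching the "Avg." row
```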