leaderboard-pr-bot committed
Commit 6adfb89
1 Parent(s): 41bd193

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -27,4 +27,17 @@ from transformers import pipeline
 pipe = pipeline(model='togethercomputer/GPT-JT-6B-v0')
 
 pipe("Where is Zurich? Ans:")
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-6B-v0)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 38.37 |
+| ARC (25-shot) | 42.06 |
+| HellaSwag (10-shot) | 67.96 |
+| MMLU (5-shot) | 49.34 |
+| TruthfulQA (0-shot) | 38.89 |
+| Winogrande (5-shot) | 64.8 |
+| GSM8K (5-shot) | 1.21 |
+| DROP (3-shot) | 4.31 |
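
For reference, the usage example touched by this hunk is self-contained once `transformers` is installed; here is a minimal runnable version (the `print` call is added only for illustration):

```python
# Minimal, runnable version of the model-card usage example from the diff.
# Assumes `transformers` is installed; the ~6B-parameter model is fetched
# from the Hugging Face Hub on first use, so expect a large download.
from transformers import pipeline

pipe = pipeline(model='togethercomputer/GPT-JT-6B-v0')
print(pipe("Where is Zurich? Ans:"))
```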
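The Avg. row appears to be the unweighted mean of the seven benchmark scores; a quick sketch to verify, with the dictionary simply mirroring the table values:

```python
# Recompute the leaderboard average from the table above.
scores = {
    "ARC (25-shot)": 42.06,
    "HellaSwag (10-shot)": 67.96,
    "MMLU (5-shot)": 49.34,
    "TruthfulQA (0-shot)": 38.89,
    "Winogrande (5-shot)": 64.8,
    "GSM8K (5-shot)": 1.21,
    "DROP (3-shot)": 4.31,
}
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 38.37, matching the table
```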