Adding Evaluation Results

#5
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -131,4 +131,17 @@ while True:
  print(output_text)
  ```
  ## Deploying and training the model
- The model has been fine-tuned on a specific input format that goes like this ```"<|USER|> {user prompt} <|ASSISTANT|> {model prediction} <|End|>".``` For the best performance from the model the input text should be as follows ```<|USER|> {user prompt} <|ASSISTANT|> ``` and the target/label should be as follows ```<|USER|> {user prompt} <|ASSISTANT|> {dataset output} <|End|>```
+ The model has been fine-tuned on a specific input format: `<|USER|> {user prompt} <|ASSISTANT|> {model prediction} <|End|>`. For the best performance, the input text at inference should be `<|USER|> {user prompt} <|ASSISTANT|> ` and the training target/label should be `<|USER|> {user prompt} <|ASSISTANT|> {dataset output} <|End|>`.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__gpt2-conversational-or-qa)
+ 
+ | Metric               | Value |
+ |----------------------|-------|
+ | Avg.                 | 25.09 |
+ | ARC (25-shot)        | 21.42 |
+ | HellaSwag (10-shot)  | 27.61 |
+ | MMLU (5-shot)        | 26.51 |
+ | TruthfulQA (0-shot)  | 47.31 |
+ | Winogrande (5-shot)  | 51.14 |
+ | GSM8K (5-shot)       | 0.08  |
+ | DROP (3-shot)        | 1.55  |
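
As a quick illustration of the prompt format described in the diff above, here is a minimal Python sketch. The model id is taken from the results link in this PR; the sample prompt, sample output, and generation settings are hypothetical, and the `transformers` loading code mirrors typical GPT-2 usage rather than anything specified in this change.

```python
# Minimal sketch of the fine-tuning prompt format described above.
# The model id is inferred from the results link; the sample text and
# generation settings are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/gpt2-conversational-or-qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

user_prompt = "What is the capital of France?"  # hypothetical example prompt

# Inference input: stop after the assistant tag so the model fills in the reply.
input_text = f"<|USER|> {user_prompt} <|ASSISTANT|> "

# Training target/label: the same prefix plus the dataset output and <|End|>.
dataset_output = "The capital of France is Paris."  # hypothetical dataset output
label_text = f"<|USER|> {user_prompt} <|ASSISTANT|> {dataset_output} <|End|>"

input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    max_new_tokens=64,                    # assumed limit, not from this PR
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```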