Files changed (1)
  1. README.md +15 -2
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
 license: mit
-base_model: microsoft/phi-2
 tags:
 - generated_from_trainer
 datasets:
 - teknium/OpenHermes-2.5
+base_model: microsoft/phi-2
 model-index:
 - name: phi-2-OpenHermes-2.5
   results: []
@@ -73,4 +73,17 @@ your_instruction = <your_instruction>
 infer_prompt = f"### USER: {your_instruction} <|endoftext|>\n### ASSISTANT:"
 output = pipe(infer_prompt, do_sample=True, max_new_tokens=256)[0]["generated_text"]
 print(output)
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_minghaowu__phi-2-OpenHermes-2.5).
+
+| Metric                            | Value |
+|-----------------------------------|------:|
+| Avg.                              | 51.05 |
+| AI2 Reasoning Challenge (25-Shot) | 56.48 |
+| HellaSwag (10-Shot)               | 73.88 |
+| MMLU (5-Shot)                     | 54.80 |
+| TruthfulQA (0-shot)               | 48.10 |
+| Winogrande (5-shot)               | 73.01 |
+| GSM8k (5-shot)                    |  0.00 |
+
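As a quick sanity check on the added table, the leaderboard's Avg. row is simply the unweighted mean of the six benchmark scores:

```python
# Sanity check: "Avg." is the unweighted mean of the six benchmark scores.
scores = [56.48, 73.88, 54.80, 48.10, 73.01, 0.00]
avg = sum(scores) / len(scores)
print(avg)  # ~51.045, which the leaderboard reports rounded to 51.05
```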
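The inference snippet touched by the second hunk references a `pipe` object defined earlier in the README, outside this diff. For context, here is a minimal, self-contained sketch of that inference flow. The prompt format and generation call are taken verbatim from the diff; the repo id (inferred from the details URL above) and the loading arguments are assumptions, not confirmed by the diff:

```python
# Minimal sketch of the full inference flow the diff excerpt assumes.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="minghaowu/phi-2-OpenHermes-2.5",  # assumed repo id, inferred from the details URL
    torch_dtype=torch.float16,               # assumption: half precision to reduce GPU memory
    device_map="auto",                       # assumption: place the model automatically
)

your_instruction = "Summarize the OpenHermes-2.5 dataset in one sentence."
infer_prompt = f"### USER: {your_instruction} <|endoftext|>\n### ASSISTANT:"
output = pipe(infer_prompt, do_sample=True, max_new_tokens=256)[0]["generated_text"]
print(output)
```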