Commit
f9082f8
Parent: c03dfcd

Adding Evaluation Results (#1)


- Adding Evaluation Results (229dbf091210ebc5d54434352854a439dbcc74ed)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +120 -3
README.md CHANGED
@@ -1,14 +1,131 @@
  ---
- datasets:
- - argilla/distilabel-intel-orca-dpo-pairs
  language:
  - en
  license: cc-by-nc-4.0
+ datasets:
+ - argilla/distilabel-intel-orca-dpo-pairs
  base_model:
- - upstage/SOLAR-10.7B-Instruct-v1.0
+ - upstage/SOLAR-10.7B-Instruct-v1.0
+ model-index:
+ - name: BrokenKeyboard
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 71.25
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboard
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 88.34
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboard
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 66.04
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboard
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 71.36
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboard
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 83.19
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboard
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 64.29
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dhanushreddy29/BrokenKeyboard
+       name: Open LLM Leaderboard
  ---
  # Model Card for Model ID
  
  <!-- Provide a quick summary of what the model is/does. -->
  Just testing out LLM Finetuning. Finetuned on [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) using [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs).
  Followed the Google Colab mentioned in this article: [https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac)
+ 
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhanushreddy29__BrokenKeyboard)
+ 
+ | Metric |Value|
+ |---------------------------------|----:|
+ |Avg. |74.08|
+ |AI2 Reasoning Challenge (25-Shot)|71.25|
+ |HellaSwag (10-Shot) |88.34|
+ |MMLU (5-Shot) |66.04|
+ |TruthfulQA (0-shot) |71.36|
+ |Winogrande (5-shot) |83.19|
+ |GSM8k (5-shot) |64.29|
+ 
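
The model card in this diff says the checkpoint was produced by DPO fine-tuning upstage/SOLAR-10.7B-Instruct-v1.0 on argilla/distilabel-intel-orca-dpo-pairs, following the Colab linked in the article. The sketch below is a minimal, hypothetical reconstruction of that workflow with trl's `DPOTrainer`, not the exact notebook: the hyperparameters are placeholders, the `input`/`chosen`/`rejected` column mapping is assumed from the dataset card, and argument names vary between trl versions (newer releases move most of them into `DPOConfig`).

```python
# Sketch of the DPO fine-tuning workflow described in the model card.
# Assumptions: a trl 0.7-style DPOTrainer API, LoRA via peft, and the column
# names of argilla/distilabel-intel-orca-dpo-pairs; the actual run followed
# the linked Colab, so treat every value here as a placeholder.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)

# Preference pairs: each row has a prompt plus a preferred ("chosen") and a
# dispreferred ("rejected") answer. The linked article also applies a chat
# template to the prompt; that step is omitted here for brevity.
raw = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = raw.map(
    lambda row: {
        "prompt": row["input"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    },
    remove_columns=raw.column_names,
)

# LoRA adapter so only a small set of weights is trained.
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, trl derives the reference model from the frozen base
    args=TrainingArguments(
        output_dir="solar-dpo",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-5,
        max_steps=200,
        logging_steps=10,
        remove_unused_columns=False,
    ),
    beta=0.1,  # strength of the implicit KL penalty in the DPO loss
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
trainer.save_model("solar-dpo")
```

After a run like this, the adapter would be merged into the base model and pushed to the Hub; that merged checkpoint is what the Open LLM Leaderboard scores in the table above, with the few-shot settings listed per task (ARC 25-shot, HellaSwag 10-shot, MMLU 5-shot, TruthfulQA 0-shot, Winogrande 5-shot, GSM8k 5-shot).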