Commit 658f68a
Parent: ca373c5

Adding Evaluation Results (#3)

- Adding Evaluation Results (b1c28318ad5e4eb5b3714f3821e6c50546ed50bf)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +118 -1
README.md CHANGED
@@ -1,8 +1,111 @@
 ---
-library_name: transformers
 license: apache-2.0
+library_name: transformers
 datasets:
 - jondurbin/truthy-dpo-v0.1
+model-index:
+- name: WestLake-7B-v2-laser-truthy-dpo
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 73.89
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 88.85
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 64.84
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 69.81
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 86.66
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 68.16
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/WestLake-7B-v2-laser-truthy-dpo
+      name: Open LLM Leaderboard
 ---
 
 # WestLake-7B-v2-laser-truthy-dpo
@@ -121,3 +224,17 @@ If a question does not make any sense, or is not factually coherent, explain why
 
 Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST] While discussing the concept of chat templating, I understand your intent highlights exemplifying its nature. Kindly provide contextual phrases or scenarios to let me demonstrate how it adapts to various inputs while maintaining a consistent flow of information exchange. This way, you'll witness how templates shape responses in a structured manner within chat dialogues. [[INST]]I apologize if my earlier comment seemed off topic. Let's shift back to the original subject of discussing helpful AI assistants. [INST] Not a problem at all! Our primary objective remains ensuring useful and polite interactions. Let's delve into more aspects of beneficial AI assistance. Feel free to ask specific questions or areas of interest you may have in mind.
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__WestLake-7B-v2-laser-truthy-dpo)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |75.37|
+|AI2 Reasoning Challenge (25-Shot)|73.89|
+|HellaSwag (10-Shot)              |88.85|
+|MMLU (5-Shot)                    |64.84|
+|TruthfulQA (0-shot)              |69.81|
+|Winogrande (5-shot)              |86.66|
+|GSM8k (5-shot)                   |68.16|
+
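
For reference, the `Avg.` row in the new table is the arithmetic mean of the six benchmark scores, and those same numbers become machine-readable through the `model-index` block this commit adds. Below is a minimal sketch (not part of the commit) of reading them back with the `huggingface_hub` `ModelCard` helper; the repo id is taken from the leaderboard query URLs in the diff above.

```python
# Minimal sketch: load the card metadata added in this commit and recompute
# the leaderboard average shown in the README table.
from statistics import mean

from huggingface_hub import ModelCard

# Repo id assumed from the Open LLM Leaderboard query URLs in the diff above.
card = ModelCard.load("macadeliccc/WestLake-7B-v2-laser-truthy-dpo")

scores = []
# card.data parses the YAML front matter; eval_results holds the model-index entries.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name:<35} {result.metric_type:<10} {result.metric_value}")
    scores.append(result.metric_value)

# (73.89 + 88.85 + 64.84 + 69.81 + 86.66 + 68.16) / 6 ≈ 75.37, the Avg. row above.
print(f"Average: {mean(scores):.2f}")
```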