Files changed (1)
  1. README.md +117 -1
README.md CHANGED
@@ -9,6 +9,109 @@ tags:
 base_model:
 - liminerity/Blurstral-7b-slerp
 - diffnamehard/Mistral-CatMacaroni-slerp-uncensored-7B
+ model-index:
+ - name: Blured-Ties-7B
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 63.99
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blured-Ties-7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 83.56
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blured-Ties-7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 63.19
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blured-Ties-7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 58.12
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blured-Ties-7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 79.72
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blured-Ties-7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 46.93
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blured-Ties-7B
+       name: Open LLM Leaderboard
 ---
 
 # Blured-Ties-7B
@@ -62,4 +165,17 @@ pipeline = transformers.pipeline(
 
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
- ```
+ ```
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blured-Ties-7B)
+ 
+ | Metric                          |Value|
+ |---------------------------------|----:|
+ |Avg.                             |65.92|
+ |AI2 Reasoning Challenge (25-Shot)|63.99|
+ |HellaSwag (10-Shot)              |83.56|
+ |MMLU (5-Shot)                    |63.19|
+ |TruthfulQA (0-shot)              |58.12|
+ |Winogrande (5-shot)              |79.72|
+ |GSM8k (5-shot)                   |46.93|
+
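The Avg. row in the added table is the unweighted mean of the six benchmark scores introduced by this change. A minimal sketch to double-check it (plain Python, no dependencies; the dictionary simply restates the values from the table):

```python
# Sanity-check the Avg. row: it should be the plain mean of the six scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 63.99,
    "HellaSwag (10-Shot)": 83.56,
    "MMLU (5-Shot)": 63.19,
    "TruthfulQA (0-shot)": 58.12,
    "Winogrande (5-shot)": 79.72,
    "GSM8k (5-shot)": 46.93,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 65.92, matching the Avg. row in the table
```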
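The model-index block added to the YAML front matter is the machine-readable form of these same results; the Hub reads it to display evaluation metadata on the model page. As a rough sketch of reading it back programmatically (assuming the card is published at liminerity/Blured-Ties-7B and that the installed huggingface_hub version exposes ModelCard.load and ModelCardData.eval_results):

```python
# Rough sketch: pull the eval results encoded in the model-index front matter.
# Assumes a recent huggingface_hub release with ModelCard.load and
# ModelCardData.eval_results; attribute names may differ in older versions.
from huggingface_hub import ModelCard

card = ModelCard.load("liminerity/Blured-Ties-7B")

for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_value} ({result.metric_type})")
```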