DavidAU committed
Commit f726f19
1 Parent(s): aa03c70

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +186 -0
README.md ADDED
@@ -0,0 +1,186 @@
---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- Mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama-cpp
- gguf-my-repo
base_model: mistralai/Mistral-7B-v0.1
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
model-index:
- name: Einstein-v4-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.68
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.75
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.31
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 55.15
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.24
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 57.62
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-7B
      name: Open LLM Leaderboard
---

# DavidAU/Einstein-v4-7B-Q6_K-GGUF
This model was converted to GGUF format from [`Weyaxi/Einstein-v4-7B`](https://huggingface.co/Weyaxi/Einstein-v4-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v4-7B) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Einstein-v4-7B-Q6_K-GGUF --model einstein-v4-7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
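
The metadata above tags this model as ChatML-tuned, so instruct-style prompts passed via `-p` work best in that template. A small helper for building a single-turn ChatML prompt string; this assumes the standard ChatML markers (`<|im_start|>` / `<|im_end|>`), so verify the exact template against the original model card:

```python
def chatml_prompt(user_msg: str, system_msg: str = "You are a helpful assistant.") -> str:
    """Format a single-turn conversation using the standard ChatML template."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

The resulting string can be passed directly to `llama-cli -p "..."`, leaving the model to generate the assistant turn.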

Server:

```bash
llama-server --hf-repo DavidAU/Einstein-v4-7B-Q6_K-GGUF --model einstein-v4-7b.Q6_K.gguf -c 2048
```
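
Once running, `llama-server` exposes an HTTP API (on port 8080 by default). A minimal sketch of calling its `/completion` endpoint from Python; the endpoint path and field names follow llama.cpp's server documentation, and the host/port are assumptions to adjust for your setup:

```python
import json
import urllib.request


def build_completion_request(prompt: str, n_predict: int = 64,
                             host: str = "http://localhost:8080") -> urllib.request.Request:
    """Build an HTTP POST request for llama-server's /completion endpoint."""
    payload = {"prompt": prompt, "n_predict": n_predict}
    return urllib.request.Request(
        f"{host}/completion",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def complete(prompt: str, **kwargs) -> str:
    """Send the request and return the generated text (requires a running server)."""
    with urllib.request.urlopen(build_completion_request(prompt, **kwargs)) as resp:
        return json.loads(resp.read())["content"]
```

`complete("The meaning to life and the universe is")` then mirrors the CLI example above over HTTP.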

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v4-7b.Q6_K.gguf -n 128
```