hus960 committed on
Commit: f2694d2
Parent: 134f62e

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +188 -0
README.md ADDED
---
language:
- en
license: other
tags:
- axolotl
- generated_from_trainer
- phi
- phi2
- einstein
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- llama-cpp
- gguf-my-repo
base_model: microsoft/phi-2
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- glaiveai/glaive-code-assistant
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
model-index:
- name: Einstein-v4-phi2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 59.98
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 74.07
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 56.89
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 45.8
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.88
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Einstein-v4-phi2
      name: Open LLM Leaderboard
---

# hus960/Einstein-v4-phi2-Q8_0-GGUF

This model was converted to GGUF format from [`Weyaxi/Einstein-v4-phi2`](https://huggingface.co/Weyaxi/Einstein-v4-phi2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v4-phi2) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo hus960/Einstein-v4-phi2-Q8_0-GGUF --model einstein-v4-phi2.Q8_0.gguf -p "The meaning to life and the universe is"
```
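
The upstream Einstein-v4-phi2 card tags the model as ChatML-finetuned, so wrapping the prompt in the ChatML template may give better instruction-following. The sketch below is only illustrative (the prompt text is made up); `-e` expands the `\n` escapes and `-n` caps the number of generated tokens:

```bash
# Illustrative sketch, not from the original card: ChatML-style prompt for the same model.
# -e expands the \n escape sequences in the prompt; -n limits how many tokens are generated.
llama-cli --hf-repo hus960/Einstein-v4-phi2-Q8_0-GGUF --model einstein-v4-phi2.Q8_0.gguf \
  -e -n 256 \
  -p "<|im_start|>user\nExplain the photoelectric effect in two sentences.<|im_end|>\n<|im_start|>assistant\n"
```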

Server:

```bash
llama-server --hf-repo hus960/Einstein-v4-phi2-Q8_0-GGUF --model einstein-v4-phi2.Q8_0.gguf -c 2048
```
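
Once the server is running you can query it over HTTP. A minimal sketch against the server's `/completion` endpoint, assuming the default listen address of `127.0.0.1:8080` (change it with `--host`/`--port`):

```bash
# Minimal sketch: POST a prompt to the running llama-server instance.
# Assumes the default 127.0.0.1:8080; adjust if you started the server with --host/--port.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```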

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo, building from source instead of installing via brew:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m einstein-v4-phi2.Q8_0.gguf -n 128
```
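
The `./main` invocation above expects `einstein-v4-phi2.Q8_0.gguf` to already be in the working directory. One way to fetch it, as a sketch assuming the `huggingface_hub` CLI is installed, is:

```bash
# Sketch: download the quantized file into the current directory.
# Assumes huggingface_hub's CLI is installed (pip install -U "huggingface_hub[cli]").
huggingface-cli download hus960/Einstein-v4-phi2-Q8_0-GGUF einstein-v4-phi2.Q8_0.gguf --local-dir .
```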