---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mistral
- mixtral
- solar
- model-fusion
- fusechat
- llama-cpp
- gguf-my-repo
base_model: openchat/openchat_3.5
datasets:
- FuseAI/FuseChat-Mixture
pipeline_tag: text-generation
model-index:
- name: FuseChat-7B-VaRM
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MT-Bench
      type: unknown
    metrics:
    - type: unknown
      value: 8.22
      name: score
    source:
      url: https://huggingface.co/spaces/lmsys/mt-bench
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.88
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.25
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.71
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 45.67
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.16
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.46
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FuseAI/FuseChat-7B-VaRM
      name: Open LLM Leaderboard
---

# DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF
This model was converted to GGUF format from [`FuseAI/FuseChat-7B-VaRM`](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FuseAI/FuseChat-7B-VaRM) for more details on the model.
## Use with llama.cpp

Install llama.cpp through Homebrew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF --model fusechat-7b-varm.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/FuseChat-7B-VaRM-Q6_K-GGUF --model fusechat-7b-varm.Q6_K.gguf -c 2048
```
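
Once the server is running, it exposes an HTTP API, by default on `127.0.0.1:8080`. A minimal request sketch, assuming llama.cpp's `/completion` endpoint and the default host/port (check the server README for your build if the API differs):

```shell
# JSON body for llama.cpp's /completion endpoint: the prompt plus
# the number of tokens to generate (n_predict).
REQUEST='{"prompt": "The meaning to life and the universe is", "n_predict": 64}'

# POST it to the locally running server; the response is a JSON
# object whose "content" field holds the generated text.
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d "$REQUEST" || echo "server not reachable on 127.0.0.1:8080"
```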

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m fusechat-7b-varm.Q6_K.gguf -n 128
```