hus960 committed on commit be09a95 (parent: 70d940b)

Upload README.md with huggingface_hub

Files changed (1): README.md (+150, -0)
README.md ADDED
@@ -0,0 +1,150 @@
---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
- llama-cpp
- gguf-my-repo
base_model:
- s3nh/SeverusWestLake-7B-DPO
- icefog72/IceLemonTeaRP-32k-7b
- amazingvince/Not-WizardLM-2-7B
model-index:
- name: WestIceLemonTeaRP-32k-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.77
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.89
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.28
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 62.47
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.98
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.22
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/WestIceLemonTeaRP-32k-7b
      name: Open LLM Leaderboard
---

# hus960/WestIceLemonTeaRP-32k-7b-Q4_K_M-GGUF

This model was converted to GGUF format from [`icefog72/WestIceLemonTeaRP-32k-7b`](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo hus960/WestIceLemonTeaRP-32k-7b-Q4_K_M-GGUF --model westicelemontearp-32k-7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo hus960/WestIceLemonTeaRP-32k-7b-Q4_K_M-GGUF --model westicelemontearp-32k-7b.Q4_K_M.gguf -c 2048
```
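
Once `llama-server` is running, it exposes an HTTP API (port 8080 by default). Below is a minimal sketch of how a client could build and send a request to llama.cpp's `/completion` endpoint; the `build_completion_request` helper and the prompt are illustrative, not part of llama.cpp, and the sketch assumes the server command above with its default host and port.

```python
import json

# Minimal client sketch for llama.cpp's /completion HTTP endpoint.
# Assumes the `llama-server` command above is running on its default
# port (8080). The helper name below is illustrative.
def build_completion_request(prompt, n_predict=128):
    """Return the JSON body accepted by the /completion endpoint."""
    return {"prompt": prompt, "n_predict": n_predict}

body = json.dumps(build_completion_request(
    "The meaning to life and the universe is"))

# To actually query the server (requires it to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/completion",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["content"])
```

The same endpoint accepts additional sampling parameters (e.g. `temperature`) in the JSON body; see the llama.cpp server documentation for the full list.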

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m westicelemontearp-32k-7b.Q4_K_M.gguf -n 128
```