---
license: apache-2.0
---

To use the GGUF models locally, first download the GGUF files to your machine.

One option is to use `huggingface-cli`. To install `huggingface-cli`, please follow the tutorial at https://huggingface.co/docs/huggingface_hub/main/en/guides/cli.
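
For reference, the CLI ships with the `huggingface_hub` Python package; at the time of writing the guide's install command looks like the following (see the linked tutorial for the authoritative, up-to-date instructions):

```bash
pip install -U "huggingface_hub[cli]"
```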

Then run the following command, replacing `{QUANTIZATION_METHOD}` with your chosen quantization method:

```bash
huggingface-cli download gorilla-llm/gorilla-openfunctions-v0-gguf gorilla-openfunctions-v0-{QUANTIZATION_METHOD}.gguf --local-dir gorilla-openfunctions-v0-GGUF
```

This stores the chosen GGUF file in your local directory `gorilla-openfunctions-v0-GGUF`.

We support QUANTIZATION_METHOD = {`q2_K`, `q3K_S`, `q3K_M`, `q3K_L`, `q4K_S`, `q4K_M`, `q5K_S`, `q5K_M`, `q6K`}.
Please let us know what other quantization methods you would like us to include!
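
For example, to fetch the `q4K_M` variant (any other method from the list above substitutes the same way, assuming the file naming follows the pattern shown):

```bash
huggingface-cli download gorilla-llm/gorilla-openfunctions-v0-gguf gorilla-openfunctions-v0-q4K_M.gguf --local-dir gorilla-openfunctions-v0-GGUF
```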

You can then run the following example script for local inference. Fill in `YOUR_DIRECTORY` in the code snippet. The script is adapted from https://github.com/abetlen/llama-cpp-python and https://github.com/ShishirPatil/gorilla/tree/main/openfunctions
21
+
22
+ ```python
23
+ from llama_cpp import Llama
24
+ import json
25
+
26
+ llm = Llama(model_path="YOUR_DIRECTORY/gorilla-openfunctions-v0-GGUF/gorilla-openfunctions-v0-q2_K.gguf", n_threads=8, n_gpu_layers=35)
27
+
28
+ def get_prompt(user_query: str, functions: list = []) -> str:
29
+ """
30
+ Generates a conversation prompt based on the user's query and a list of functions.
31
+
32
+ Parameters:
33
+ - user_query (str): The user's query.
34
+ - functions (list): A list of functions to include in the prompt.
35
+
36
+ Returns:
37
+ - str: The formatted conversation prompt.
38
+ """
39
+ system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
40
+ if len(functions) == 0:
41
+ return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
42
+ functions_string = json.dumps(functions)
43
+ return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
44
+
45
+ query = "What's the weather like in the two cities of Boston and San Francisco?"
46
+ functions = [
47
+ {
48
+ "name": "get_current_weather",
49
+ "description": "Get the current weather in a given location",
50
+ "parameters": {
51
+ "type": "object",
52
+ "properties": {
53
+ "location": {
54
+ "type": "string",
55
+ "description": "The city and state, e.g. San Francisco, CA",
56
+ },
57
+ "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
58
+ },
59
+ "required": ["location"],
60
+ },
61
+ }
62
+ ]
63
+
64
+ user_prompt = get_prompt(query, functions)
65
+
66
+ output = llm(user_prompt,
67
+ max_tokens=512, # Generate up to 512 tokens
68
+ stop=["<|EOT|>"],
69
+ echo=True # Whether to echo the prompt
70
+ )
71
+
72
+ print("Output: ", output)
73
+ ```

The expected output of successfully running this script is the following (tested on March 3, 2024):
```
❯ python quantized_inference.py
llama_model_loader: loaded meta data with 22 key-value pairs and 273 tensors from /Users/charliecheng-jieji/Downloads/codebase/quantized_eval/gorilla-openfunctions-v0-GGUF/gorilla-openfunctions-v0-q2_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v0
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 30
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 10
llama_model_loader: - kv 12: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,102400] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,102400] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 100000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 100015
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 100001
llama_model_loader: - kv 20: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 21: general.quantization_version u32 = 2
llama_model_loader: - type f32: 61 tensors
llama_model_loader: - type q2_K: 121 tensors
llama_model_loader: - type q3_K: 90 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: mismatch in special tokens definition ( 2387/102400 vs 2400/102400 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 102400
llm_load_print_meta: n_merges = 99757
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 30
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q2_K - Medium
llm_load_print_meta: model params = 6.91 B
llm_load_print_meta: model size = 2.53 GiB (3.14 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 100015 '<|EOT|>'
llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 126 'Ä'
llm_load_tensors: ggml ctx size = 0.21 MiB
ggml_backend_metal_buffer_from_ptr: allocated buffer, size = 2457.45 MiB, ( 2457.52 / 10922.67)
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 31/31 layers to GPU
llm_load_tensors: CPU buffer size = 131.25 MiB
llm_load_tensors: Metal buffer size = 2457.45 MiB
.....................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1
ggml_metal_init: picking default device: Apple M1
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: loading '/Users/charliecheng-jieji/miniconda3/envs/public-api/lib/python3.12/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3 (5001)
ggml_metal_init: simdgroup reduction support = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 240.00 MiB, ( 2699.33 / 10922.67)
llama_kv_cache_init: Metal KV buffer size = 240.00 MiB
llama_new_context_with_model: KV self size = 240.00 MiB, K (f16): 120.00 MiB, V (f16): 120.00 MiB
llama_new_context_with_model: CPU input buffer size = 10.01 MiB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 208.00 MiB, ( 2907.33 / 10922.67)
llama_new_context_with_model: Metal compute buffer size = 208.00 MiB
llama_new_context_with_model: CPU compute buffer size = 8.00 MiB
llama_new_context_with_model: graph splits (measure): 2
AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 |
Model metadata: {'general.quantization_version': '2', 'tokenizer.chat_template': "{% if not add_generation_prompt is defined %}\n{% set add_generation_prompt = false %}\n{% endif %}\n{%- set ns = namespace(found=false) -%}\n{%- for message in messages -%}\n {%- if message['role'] == 'system' -%}\n {%- set ns.found = true -%}\n {%- endif -%}\n{%- endfor -%}\n{{bos_token}}{%- if not ns.found -%}\n{{'You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\\n'}}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'system' %}\n{{ message['content'] }}\n {%- else %}\n {%- if message['role'] == 'user' %}\n{{'### Instruction:\\n' + message['content'] + '\\n'}}\n {%- else %}\n{{'### Response:\\n' + message['content'] + '\\n<|EOT|>\\n'}}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{% if add_generation_prompt %}\n{{'### Response:'}}\n{% endif %}", 'tokenizer.ggml.padding_token_id': '100001', 'tokenizer.ggml.eos_token_id': '100015', 'tokenizer.ggml.bos_token_id': '100000', 'tokenizer.ggml.model': 'gpt2', 'llama.attention.head_count_kv': '32', 'llama.context_length': '4096', 'llama.attention.head_count': '32', 'llama.rope.freq_base': '10000.000000', 'llama.rope.dimension_count': '128', 'general.file_type': '10', 'llama.feed_forward_length': '11008', 'llama.embedding_length': '4096', 'llama.block_count': '30', 'general.architecture': 'llama', 'llama.attention.layer_norm_rms_epsilon': '0.000001', 'general.name': 'LLaMA v2'}
Using gguf chat template: {% if not add_generation_prompt is defined %}
{% set add_generation_prompt = false %}
{% endif %}
{%- set ns = namespace(found=false) -%}
{%- for message in messages -%}
{%- if message['role'] == 'system' -%}
{%- set ns.found = true -%}
{%- endif -%}
{%- endfor -%}
{{bos_token}}{%- if not ns.found -%}
{{'You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer\n'}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' %}
{{ message['content'] }}
{%- else %}
{%- if message['role'] == 'user' %}
{{'### Instruction:\n' + message['content'] + '\n'}}
{%- else %}
{{'### Response:\n' + message['content'] + '\n<|EOT|>\n'}}
{%- endif %}
{%- endif %}
{%- endfor %}
{% if add_generation_prompt %}
{{'### Response:'}}
{% endif %}
Using chat eos_token: <|EOT|>
Using chat bos_token: <|begin▁of▁sentence|>

llama_print_timings: load time = 1890.11 ms
llama_print_timings: sample time = 23.48 ms / 40 runs ( 0.59 ms per token, 1703.94 tokens per second)
llama_print_timings: prompt eval time = 1889.91 ms / 181 tokens ( 10.44 ms per token, 95.77 tokens per second)
llama_print_timings: eval time = 2728.54 ms / 39 runs ( 69.96 ms per token, 14.29 tokens per second)
llama_print_timings: total time = 5162.12 ms / 220 tokens
```
The final printed completion object is:
```
Output: {'id': 'cmpl-0679223d-578f-42be-bbce-0e307faddd28', 'object': 'text_completion', 'created': 1709525244, 'model': '/Users/charliecheng-jieji/Downloads/codebase/quantized_eval/gorilla-openfunctions-v0-GGUF/gorilla-openfunctions-v0-q2_K.gguf', 'choices': [{'text': 'You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\n### Instruction: <<function>>[{"name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location"]}}]\n<<question>>What\'s the weather like in the two cities of Boston and San Francisco?\n### Response: <<function>>get_current_weather(location=\'Boston\', unit=\'fahrenheit\')<<function>>get_current_weather(location=\'San Francisco\', unit=\'fahrenheit\')', 'index': 0, 'logprobs': None, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 181, 'completion_tokens': 39, 'total_tokens': 220}}
```
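
The generated function calls live in `output['choices'][0]['text']` after the echoed prompt, separated by the literal `<<function>>` marker (as in the output above). Below is a minimal post-processing sketch under those assumptions; `extract_function_calls` is a hypothetical helper for illustration, not part of `llama-cpp-python` or the Gorilla release.

```python
def extract_function_calls(output: dict, prompt: str) -> list:
    """Sketch: pull the generated call strings out of a llama-cpp-python completion.

    Assumes echo=True (so the echoed prompt is stripped first) and that calls are
    separated by the literal '<<function>>' marker, as in the example output above.
    Adjust if your prompt format or model output differs.
    """
    text = output["choices"][0]["text"]
    if text.startswith(prompt):
        text = text[len(prompt):]  # drop the echoed prompt
    # The response may itself start with '<<function>>'; split and drop empty pieces.
    return [call.strip() for call in text.split("<<function>>") if call.strip()]

calls = extract_function_calls(output, user_prompt)
print(calls)
# Expected for the example above:
# ["get_current_weather(location='Boston', unit='fahrenheit')",
#  "get_current_weather(location='San Francisco', unit='fahrenheit')"]
```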