Update README.md
## Introduction

Zefiro functioncalling extends the Large Language Model (LLM) Chat Completion feature to formulate executable API calls from Italian natural language instructions and API context. With OpenFunctions v2,
we now support:
1. Relevance detection - when chatting, chat; when asked for a function, return a function
2. REST - native REST support

## Training

Zefiro functioncalling alpha is a 7B parameter model. It is a fine-tuned version of [gorilla-llm](https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2), which is built on top of the [deepseek coder](https://huggingface.co/deepseek-ai/deepseek-coder-7b-instruct-v1.5) LLM.

## Example Usage (Local)

1. OpenFunctions is compatible with OpenAI Functions

```bash
pip install openai==0.28.1 transformers
```

2. Load the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giux78/zefiro-funcioncalling-v0.3-merged"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
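
If GPU memory is tight, the same checkpoint can be loaded in half precision. This is a minimal variant, not part of the original card; it assumes a CUDA device and that `accelerate` is installed so `device_map="auto"` can place the weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giux78/zefiro-funcioncalling-v0.3-merged"

# bfloat16 roughly halves the memory footprint of the default fp32 load;
# device_map="auto" lets accelerate distribute layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```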

3. Prepare your data: a system prompt plus an array of OpenAPI-compatible JSON function definitions. Only the `description` values should be in Italian; everything else in the JSON stays in English.

```python
import json

json_arr = [{"name": "order_dinner", "description": "Ordina una cena al ristorante", "parameters": {"type": "object", "properties": {"restaurant_name": {"type": "string", "description": "il nome del ristorante", "enum" : ['Bufalo Bill','Pazzas']}}, "required": ["restaurant_name"]}},
            {"name": "get_weather", "description": "Ottieni le previsioni del tempo meteorologica", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "Il nome del luogo "}}, "required": ["location"]}},
            {"name": "create_product", "description": "Crea un prodotto da vendere", "parameters": {"type": "object", "properties": {"product_name": {"type": "string", "description": "Il nome del prodotto "}, "size": {"type": "string", "description": "la taglia del prodotto"}, "price": {"type": "integer", "description": "Il prezzo del prodotto "}}, "required": ["product_name", "size", "price"]}},
            {"name": "get_news", "description": "Dammi le ultime notizie", "parameters": {"type": "object", "properties": {"argument": {"type": "string", "description": "L'argomento su cui fare la ricerca"}}, "required": ["argument"]}},
           ]
json_string = ' '.join([json.dumps(json_obj) for json_obj in json_arr])
system_prompt = 'Tu sei un assistente utile che ha accesso alle seguenti funzioni. Usa le funzioni solo se necessario - \n ' + json_string + ' \n '
```
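
With the system prompt in place, build the chat turns to send to the model. The user request below is only an illustrative example, chosen to be consistent with the `create_product` output shown in the later steps; any Italian instruction works:

```python
# Hypothetical user request, consistent with the example output below.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Crea un prodotto chiamato AIR, taglia L, prezzo 100"},
]
```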

4. Call the model

```python
def generate_text():
    # `messages` is the system + user chat built in the previous step
    prompt = tokenizer.apply_chat_template(messages, tokenize=False)
    model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

text_response = generate_text()
```
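
When the request should trigger a function, the decoded string ends with the `<<functioncall>>` delimiter followed by the call payload. The tail shown below is illustrative, reconstructed from the parsed output in the next steps, not verbatim model output:

```python
print(text_response[-200:])
# Expected tail (illustrative):
# <<functioncall>> {"name": "create_product", "arguments": '{"product_name": "AIR", "size": "L", "price": 100}'}
```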

5. Parse the response

```python
FN_CALL_DELIMITER = "<<functioncall>>"

def strip_function_calls(content: str) -> list[str]:
    """
    Split the content by the function call delimiter and remove empty strings
    """
    return [element.replace('\n', '') for element in content.split(FN_CALL_DELIMITER)[1:] if element]

functions_string = strip_function_calls(text_response)

# Output: [' {"name": "create_product", "arguments": \'{"product_name": "AIR", "size": "L", "price": 100}\'}']
```

6. Create an object representation of the string

```python
# If functions_string contains a function string, clean it up and parse it as JSON.
# Stripping the single quotes turns "arguments": '{...}' into valid JSON;
# multiple function calls are not supported yet.
if functions_string:
    obj_to_call = json.loads(functions_string[0].replace('\'', ''))
else:
    print('nothing to do or return a normal chat response')

# Output: {'name': 'create_product',
#          'arguments': {'product_name': 'AIR', 'size': 'L', 'price': 100}}
```
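
The quote-stripping above removes every single quote in the string, which also mangles argument values containing an apostrophe (common in Italian). A slightly more defensive sketch, assuming the model keeps the `"arguments": '{...}'` shape, unwraps only the quotes around the arguments object:

```python
import re

def parse_function_string(fn_str: str) -> dict:
    # Turn "arguments": '{...}' into "arguments": {...} without touching
    # apostrophes inside the argument values themselves.
    cleaned = re.sub(r"'(\{.*\})'", r"\1", fn_str)
    return json.loads(cleaned)

obj_to_call = parse_function_string(functions_string[0])
```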

7. Wrap the call in an OpenAI-compatible response

```python
def obj_to_func(obj):
    # Render {"name": ..., "arguments": {...}} as a call string,
    # e.g. create_product(product_name="AIR",size="L",price="100")
    arguments_keys = obj['arguments'].keys()
    params = []
    for key in arguments_keys:
        param = f'{key}="{obj["arguments"][key]}"'
        params.append(param)
    func_params = ','.join(params)
    print(f'{obj["name"]}({func_params})')
    return f'{obj["name"]}({func_params})'

func_str = obj_to_func(obj_to_call)

openai_response = {
    "index": 0,
    "message": {
        "role": "assistant",
        "content": func_str,
        "function_call": [
            obj_to_call
        ]
    },
    "finish_reason": "stop"
}

'''
Output, OpenAI compatible:
{
  "index": 0,
  "message": {
    ...
  },
  "finish_reason": "stop"
}
'''
```

This yields both a readily accessible call string (`func_str`) and OpenAI-compatible JSON.
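
To actually execute the parsed call, map the function name onto a local implementation. This dispatch step is not part of the model card; the registry and the `create_product` stub below are hypothetical:

```python
# Hypothetical local implementations, keyed by the function names
# declared in json_arr above.
def create_product(product_name: str, size: str, price: int) -> dict:
    return {"created": product_name, "size": size, "price": price}

FUNCTION_REGISTRY = {"create_product": create_product}

def dispatch(obj: dict):
    fn = FUNCTION_REGISTRY.get(obj["name"])
    if fn is None:
        raise ValueError(f'unknown function: {obj["name"]}')
    return fn(**obj["arguments"])

result = dispatch(obj_to_call)
# result: {'created': 'AIR', 'size': 'L', 'price': 100}
```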
## License

Zefiro-functioncalling is distributed under the Apache 2.0 license, like its base model Gorilla-LLM v0.2. This software incorporates elements from the Deepseek model; consequently, the licensing of Gorilla OpenFunctions v2 adheres to the Apache 2.0 license, with additional terms as outlined in [Appendix A](https://github.com/deepseek-ai/DeepSeek-LLM/blob/6712a86bfb7dd25c73383c5ad2eb7a8db540258b/LICENSE-MODEL) of the Deepseek license.

## Contributing

Please email us your comments, criticism, and questions. More information about the project can be found at [https://zefiro.ai](https://zefiro.ai).

## Citation

This work is based on Gorilla, an open source effort from UC Berkeley, and we welcome contributors.