---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- gorilla-openfunctions-v2
---
Quantizations of https://huggingface.co/gorilla-llm/gorilla-openfunctions-v2


### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [GPT4All](https://github.com/nomic-ai/gpt4all)
* [jan](https://github.com/janhq/jan)
---

# From original readme

## Introduction
Gorilla OpenFunctions extends the Large Language Model (LLM) chat completion feature to formulate
executable API calls given natural language instructions and API context. With OpenFunctions v2,
we now support:
1. Multiple functions - choose between functions (see the sketch after this list)
2. Parallel functions - call the same function `N` times with different parameter values
3. Multiple & parallel - both of the above in a single chat completion call (one generation)
4. Relevance detection - when chatting, chat; when asked for a function, return a function call
5. Python - supports `string, number, boolean, list, tuple, dict` parameter datatypes and `Any` for those not natively supported.
6. Java - supports `byte, short, int, float, double, long, boolean, char, Array, ArrayList, Set, HashMap, Hashtable, Queue, Stack, and Any` datatypes.
7. JavaScript - supports `String, Number, Bigint, Boolean, dict (object), Array, Date, and Any` datatypes.
8. REST - native REST support
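
To make item 1 concrete, here is a sketch of passing multiple function schemas at once; `get_stock_price` is an invented schema for illustration only:

```python
# Two schemas passed together; the model picks whichever matches the query.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
    {
        "name": "get_stock_price",  # invented for illustration only
        "description": "Get the latest price for a stock ticker",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
]
# "What's the weather in Boston?" should select get_current_weather, not get_stock_price.
```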

## Example Usage (Hosted)

Please refer to the `README.md` in https://github.com/ShishirPatil/gorilla/tree/main/openfunctions for file dependencies and utility functions.

1. OpenFunctions is compatible with OpenAI Functions

```bash
pip install openai==0.28.1
```

2. Point to Gorilla hosted servers

```python
import openai

def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v2", functions=[]):
    openai.api_key = "EMPTY"
    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
        completion = openai.ChatCompletion.create(
            model=model,
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
            functions=functions,
        )
        return completion.choices[0]
    except Exception as e:
        print(e, model, prompt)
```

3. Pass the user query and the set of functions; Gorilla OpenFunctions returns fully formatted JSON

```python
query = "What's the weather like in the two cities of Boston and San Francisco?"
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]
get_gorilla_response(query, functions=functions)
```

4. Expected output **NEW**

Gorilla returns a readily accessible string **AND** OpenAI-compatible JSON.

```python
{
    "index": 0,
    "message": {
        "role": "assistant",
        "content": "get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')",
        "function_call": [
            {
                "name": "get_current_weather",
                "arguments": {
                    "location": "Boston, MA"
                }
            },
            {
                "name": "get_current_weather",
                "arguments": {
                    "location": "San Francisco, CA"
                }
            }
        ]
    },
    "finish_reason": "stop"
}
```

We have retained the string functionality that our community loved from OpenFunctions v1, `get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')` above. Notice the `function_call` key in the JSON, which makes the output OpenAI compatible.

This is possible in OpenFunctions v2 because we ensure that the output includes the name of each argument and not just its value. This lets us parse the output into JSON. In scenarios where the output is not parsable into JSON, we always return the function call string.
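
To illustrate why named arguments make the output parsable, here is a minimal sketch using Python's `ast` module; the `parse_call` helper is hypothetical and is not the repo's own `parse_function_call` utility (shown further below):

```python
import ast

def parse_call(call_str: str) -> dict:
    """Hypothetical helper: parse "f(a='x')" into {"name": "f", "arguments": {"a": "x"}}."""
    node = ast.parse(call_str, mode="eval").body
    if not isinstance(node, ast.Call):
        raise ValueError("not a function call")
    return {
        "name": node.func.id,  # assumes a simple name, e.g. get_current_weather
        "arguments": {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords},
    }

print(parse_call("get_current_weather(location='Boston, MA')"))
# {'name': 'get_current_weather', 'arguments': {'location': 'Boston, MA'}}
```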

### End to End Example

Run the example code in [inference_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions) to see how the model works.

```bash
python inference_hosted.py
```

Expected Output:

```bash
(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python inference_hosted.py
--------------------
Function call strings(s): get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
--------------------
OpenAI compatible `function_call`: [<OpenAIObject at 0x1139ba890> JSON:
{
  "name": "get_current_weather",
  "arguments":
  {
    "location": "Boston, MA"
  }
}, <OpenAIObject at 0x1139ba930> JSON: {
  "name": "get_current_weather",
  "arguments":
  {
    "location": "San Francisco, CA"
  }
}]
```


## Running OpenFunctions Locally

If you want to run OpenFunctions locally, here is the prompt format that we used:

```python
import json

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
    if len(functions) == 0:
        return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
    functions_string = json.dumps(functions)
    return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
```
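
For example, reusing the `query` and `functions` variables from the hosted example above, the prompt is built like this:

```python
prompt = get_prompt(query, functions=functions)
print(prompt)
# -> system message, then "### Instruction: <<function>>[{...}]\n<<question>>...\n### Response: "
```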

Further, here is how we format the response.

Install the dependencies with:

```bash
pip3 install tree_sitter
git clone https://github.com/tree-sitter/tree-sitter-java.git
git clone https://github.com/tree-sitter/tree-sitter-javascript.git
```
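
These grammars feed the parsing utilities used below. For reference, with older `py-tree-sitter` releases (before 0.22) the cloned grammars could be compiled into a shared library roughly as follows; this is an assumption about the setup, not the repo's exact code:

```python
from tree_sitter import Language

# Compile both cloned grammars into one shared library (py-tree-sitter < 0.22 API)
Language.build_library(
    "build/languages.so",
    ["tree-sitter-java", "tree-sitter-javascript"],
)
JAVA = Language("build/languages.so", "java")
JAVASCRIPT = Language("build/languages.so", "javascript")
```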

You can then use the following code to format the response:

```python
from openfunctions_utils import strip_function_calls, parse_function_call

def format_response(response: str):
    """
    Formats the response from the OpenFunctions model.

    Parameters:
    - response (str): The response generated by the LLM.

    Returns:
    - str: The formatted response.
    - dict: The function call(s) extracted from the response.
    """
    function_call_dicts = None
    try:
        response = strip_function_calls(response)
        # Parallel function calls returned as a str, list[dict]
        if len(response) > 1:
            function_call_dicts = []
            for function_call in response:
                function_call_dicts.append(parse_function_call(function_call))
            response = ", ".join(response)
        # Single function call returned as a str, dict
        else:
            function_call_dicts = parse_function_call(response[0])
            response = response[0]
    except Exception as e:
        # Just faithfully return the generated response str to the user
        pass
    return response, function_call_dicts
```
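
Putting the local pieces together, a minimal end-to-end sketch with Hugging Face `transformers` might look like the following; the model ID comes from the upstream repo linked above, while the generation settings are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gorilla-llm/gorilla-openfunctions-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt with get_prompt() and generate a raw completion
prompt = get_prompt(query, functions=functions)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
raw = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Extract the call string(s) and the OpenAI-style dict(s)
response, function_calls = format_response(raw)
print(response)
print(function_calls)
```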