shishirpatil committed
Commit bb0fe27
1 Parent(s): 3fd971c

Update README with the local inference update

Files changed (1): README.md (+9 -3)
README.md CHANGED
@@ -141,16 +141,16 @@ This is possible in OpenFunctions v2, because we ensure that the output includes
 
 ### End to End Example
 
-Run the example code in `[ofv2_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions)` to see how the model works.
+Run the example code in `[inference_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions)` to see how the model works.
 
 ```bash
-python ofv2_hosted.py
+python inference_hosted.py
 ```
 
 Expected Output:
 
 ```bash
-(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python ofv2_hosted.py
+(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python inference_hosted.py
 --------------------
 Function call strings(s): get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
 --------------------
@@ -242,6 +242,12 @@ def format_response(response: str):
 
 ```
 
+In the current directory, run the example code in `inference_local.py` to see how the model works.
+
+```bash
+python inference_local.py
+```
+
 **Note:** Use the `get_prompt` and `format_response` only if you are hosting it Locally. If you are using the Berkeley hosted models through the Chat-completion API, we do this in the backend, so you don't have to do this. The model is supported in Hugging Face 🤗 Transformers and can be run up locally:
 
 
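For context on the note above, a minimal local-inference sketch with 🤗 Transformers might look like the following. This is not the contents of `inference_local.py`: the model id `gorilla-llm/gorilla-openfunctions-v2`, the prompt template inside `get_prompt`, and the post-processing in `format_response` are assumptions based on the README excerpts, so verify them against the Gorilla repository before relying on this.

```python
# Hypothetical local-inference sketch (NOT the actual inference_local.py).
# Assumes the gorilla-llm/gorilla-openfunctions-v2 checkpoint and a
# prompt/response format inferred from the README; verify both upstream.
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gorilla-llm/gorilla-openfunctions-v2"  # assumed model id


def get_prompt(user_query: str, functions: list) -> str:
    # Assumed prompt template; the real get_prompt ships with the Gorilla repo.
    return (
        "You are an AI programming assistant.\n"
        f"### Instruction: <<function>>{json.dumps(functions)}\n"
        f"<<question>>{user_query}\n### Response: "
    )


def format_response(response: str) -> str:
    # Assumed post-processing: keep only the text after the response marker.
    return response.split("### Response: ")[-1].strip()


tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
)

# Function schema matching the weather example in the expected output above.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. Boston, MA",
            }
        },
        "required": ["location"],
    },
}]

prompt = get_prompt("What's the weather like in Boston and San Francisco?", functions)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(format_response(tokenizer.decode(outputs[0], skip_special_tokens=True)))
```

When run against the hosted endpoint instead, the Chat-completion API applies the equivalent of `get_prompt` and `format_response` server-side, which is why the note says the helpers are only needed for local hosting.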