Commit 773739d: Duplicate from ysharma/ChatGPT-Plugins-in-Gradio
Co-authored-by: yuvraj sharma <ysharma@users.noreply.huggingface.co>
- .gitattributes +35 -0
- README.md +84 -0
- app.py +500 -0
- gpt_function_definitions.py +164 -0
- requirements.txt +2 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
```
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
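These rules route every matching file through Git LFS instead of plain Git storage. As a rough illustration (not part of the repo, and note that Git's pattern semantics differ slightly from Python's `fnmatch`, e.g. for `saved_model/**/*`), you can check which filenames a subset of the patterns would catch:

```python
from fnmatch import fnmatch

# A subset of the LFS rules above, patterns only
lfs_patterns = ["*.7z", "*.bin", "*.safetensors", "*tfevents*"]

def tracked_by_lfs(filename):
    """Return True if any LFS pattern matches the filename."""
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(tracked_by_lfs("model.safetensors"))       # True
print(tracked_by_lfs("app.py"))                  # False
print(tracked_by_lfs("events.out.tfevents.123")) # True
```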
README.md
ADDED
@@ -0,0 +1,84 @@
---
title: ChatGPT Plugins In Gradio
emoji: 💻
colorFrom: green
colorTo: gray
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: true
license: mit
duplicated_from: ysharma/ChatGPT-Plugins-in-Gradio
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

## Steps to add new Plugins to your Gradio ChatGPT Chatbot

1. **Acquire the API Endpoint**
   - You need an API that you can query. For this example, let's use a text-to-speech demo hosted on Hugging Face Spaces.
   - **API Endpoint**: [https://gradio-neon-tts-plugin-coqui.hf.space/](https://gradio-neon-tts-plugin-coqui.hf.space/)

2. **Create a Function to Query the API**
   - You can access any Gradio demo as an API via the Gradio Python Client.
   ```python
   from gradio_client import Client

   def texttospeech(input_text):
       client = Client("https://gradio-neon-tts-plugin-coqui.hf.space/")
       result = client.predict(
           input_text,  # str in 'Input' Textbox component
           "en",        # str in 'Language' Radio component
           api_name="/predict"
       )
       return result
   ```

3. **Describe the Function to GPT-3.5**
   - You need to describe your function to GPT-3.5/4. The function definition is passed to GPT with every request and consumes tokens from your budget; GPT may or may not call the function, depending on the user's input.
   - You can use the Gradio demo for converting any given function to the required JSON format for GPT-3.5.
     - Demo: [Function to JSON](https://huggingface.co/spaces/ysharma/function-to-JSON)
   - Or you can create the dictionary object yourself. Note that the correct format is very important here.
   - Make sure to name your JSON object description as `<function_name>_func`.
   ```python
   texttospeech_func = {
       "name": "texttospeech",
       "description": "generate speech from the given input text",
       "parameters": {
           "type": "object",
           "properties": {
               "input_text": {
                   "type": "string",
                   "description": "text that will be used to generate speech"
               }
           },
           "required": ["input_text"]
       }
   }
   ```

4. **Add Function and JSON Object Details**
   - Add the function definition and description to the `gpt_function_definitions.py` file (simply copy and paste).
   - `dict_plugin_functions` is a dictionary of all available plugins. Add your plugin information to this dictionary in the required format:
   ```python
   'texttospeech_func': {
       'dict': texttospeech_func,
       'func': texttospeech
   }
   ```

5. **Update the Chatbot Layout**
   - Go to the Blocks chatbot layout and add a new checkbox for your plugin:
   ```python
   texttospeech = gr.Checkbox(label="📝🗣️Text-To-Speech", value=False)
   ```
   - Add the new checkbox component to the submit and click events for your chatbot and to the `predict` function accordingly.
   - Also add it to the `plugins` list in `predict`:
   ```python
   plugins = [music_gen, stable_diff, image_cap, top_news, texttospeech]
   ```

**That's it! You have added your own brand new ChatGPT Plugin. Go PLAY!!**
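Step 3's JSON schema tells GPT how your function can be called; at runtime the model replies with a `function_call` object whose `arguments` field is a JSON *string*, which the app must parse and dispatch to the matching Python function. A minimal sketch of that dispatch step (the `texttospeech` stub below is a stand-in for the real Gradio-client call, and `msg` is a hand-built example shaped like a GPT-3.5 function-calling response):

```python
import json

# Hypothetical stand-in for the real Gradio-client plugin function
def texttospeech(input_text):
    return f"speech for: {input_text}"

# Registry mirroring dict_plugin_functions: plugin name -> callable
available_functions = {"texttospeech": texttospeech}

def dispatch(response_message):
    """Parse a GPT function_call message and invoke the named plugin."""
    call = response_message["function_call"]
    func = available_functions[call["name"]]
    args = json.loads(call["arguments"])  # arguments arrive as a JSON string
    return func(**args)

# Example message shaped like a function-calling response
msg = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "texttospeech",
        "arguments": '{"input_text": "hello world"}',
    },
}
print(dispatch(msg))  # speech for: hello world
```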
app.py
ADDED
@@ -0,0 +1,500 @@
1 |
+
import gradio as gr
|
2 |
+
|
3 |
+
import os
|
4 |
+
import openai
|
5 |
+
import time
|
6 |
+
import json
|
7 |
+
import requests
|
8 |
+
import shutil
|
9 |
+
|
10 |
+
import matplotlib.pyplot as plt
|
11 |
+
from gradio_client import Client
|
12 |
+
from newsapi import NewsApiClient
|
13 |
+
from PIL import Image
|
14 |
+
|
15 |
+
from gpt_function_definitions import generate_image, generate_music, generate_caption, generate_caption_func, generate_music_func, generate_image_func, dict_plugin_functions
|
16 |
+
|
17 |
+
#Streaming endpoint
|
18 |
+
API_URL = "https://api.openai.com/v1/chat/completions"
|
19 |
+
# Get the value of the openai_api_key from environment variable
|
20 |
+
openai_api_key = os.getenv("OPENAI_API_KEY")
|
21 |
+
openai.api_key = os.getenv("OPENAI_API_KEY")
|
22 |
+
|
23 |
+
dicts_list = [value['dict'] for value in dict_plugin_functions.values()]
|
24 |
+
|
25 |
+
available_function_defns = {
|
26 |
+
key.split('_func')[0]: value['func']
|
27 |
+
for key, value in dict_plugin_functions.items()
|
28 |
+
}
|
29 |
+
|
30 |
+
add_plugin_steps = """## Steps to add new Plugins to your Gradio ChatGPT Chatbot
|
31 |
+
Do you want to open this information in a separate tab instead? - <a href="https://huggingface.co/spaces/ysharma/ChatGPT-Plugins-in-Gradio/blob/main/README.md" target="_blank">Click here</a>.
|
32 |
+
|
33 |
+
1. **Acquire the API Endpoint**
|
34 |
+
- You need an API which you can query, and for this example let's consider using a text-to-speech demo hosted on Huggingface Spaces.
|
35 |
+
- **API Endpoint**: [https://gradio-neon-tts-plugin-coqui.hf.space/](https://gradio-neon-tts-plugin-coqui.hf.space/)
|
36 |
+
|
37 |
+
2. **Create a Function to Query the API**
|
38 |
+
- You can access any Gradio demo as an API via the Gradio Python Client.
|
39 |
+
```python
|
40 |
+
from gradio.client import Client
|
41 |
+
|
42 |
+
def texttospeech(input_text):
|
43 |
+
client = Client("https://gradio-neon-tts-plugin-coqui.hf.space/")
|
44 |
+
result = client.predict(
|
45 |
+
input_text, # str in 'Input' Textbox component
|
46 |
+
"en", # str in 'Language' Radio component
|
47 |
+
api_name="/predict"
|
48 |
+
)
|
49 |
+
return result
|
50 |
+
```
|
51 |
+
|
52 |
+
3. **Describe the Function to GPT-3.5**
|
53 |
+
- You need to describe your function to GPT3.5/4. This function definition will get passed to gpt and will suck up your token. GPT may or may not use this function based on user inputs later on.
|
54 |
+
- You can either use the Gradio demo for converting any given function to the required JSON format for GPT-3.5.
|
55 |
+
- Demo: [Function to JSON](https://huggingface.co/spaces/ysharma/function-to-JSON)
|
56 |
+
- Or, you can create the dictionary object on your own. Note that, the correct format is super important here.
|
57 |
+
- MAke sure to name your JSON object description as `<function_name>_func`.
|
58 |
+
```python
|
59 |
+
texttospeech_func = {
|
60 |
+
"name": "texttospeech",
|
61 |
+
"description": "generate speech from the given input text",
|
62 |
+
"parameters": {
|
63 |
+
"type": "object",
|
64 |
+
"properties": {
|
65 |
+
"input_text": {
|
66 |
+
"type": "string",
|
67 |
+
"description": "text that will be used to generate speech"
|
68 |
+
}
|
69 |
+
},
|
70 |
+
"required": [
|
71 |
+
"input_text"
|
72 |
+
]
|
73 |
+
}
|
74 |
+
}
|
75 |
+
```
|
76 |
+
|
77 |
+
4. **Add Function and JSON Object Details**
|
78 |
+
- Add the function definition and description to the `gpt_function_definitions.py` file (simply copy and paste).
|
79 |
+
- `dict_plugin_functions` is a dictionary of all available plugins. Add your plugin information to this dictionary in the required format.
|
80 |
+
```python
|
81 |
+
'texttospeech_func': {
|
82 |
+
'dict': texttospeech_func,
|
83 |
+
'func': texttospeech
|
84 |
+
}
|
85 |
+
```
|
86 |
+
|
87 |
+
5. **Update the Chatbot Layout**
|
88 |
+
- Go to the Blocks Chatbot layout and add a new checkbox for your plugin as:
|
89 |
+
```python
|
90 |
+
texttospeech = gr.Checkbox(label="📝🗣️Text-To-Speech", value=False)
|
91 |
+
```
|
92 |
+
- Add the new checkbox component to your submit and click events for your chatbot and to the predict function accordingly.
|
93 |
+
- And also to the `plugins` list in `predict`
|
94 |
+
```python
|
95 |
+
plugins = [music_gen, stable_diff, image_cap, top_news, texttospeech]
|
96 |
+
```
|
97 |
+
|
98 |
+
Thats it! you are have added your own brand new CHATGPT Plugin for yourself. Go PLAY!!
|
99 |
+
"""
|
100 |
+
|
101 |
+
|
102 |
+
# managing conversation with Plugins
|
103 |
+
def run_conversation(user_input, function_call_decision):
|
104 |
+
FLAG_MUSIC, FLAG_IMAGE, FLAG_GEN, FLAG_FUN = False, False, False, False
|
105 |
+
# Step 1: send the conversation and available functions to GPT
|
106 |
+
messages = [{"role": "user", "content": user_input}]
|
107 |
+
functions = dicts_list # example values - [ generate_music_func, generate_image_func]
|
108 |
+
|
109 |
+
# Attempt to make a request to GPT3.5/4 with retries
|
110 |
+
max_retries = 3
|
111 |
+
retry_delay = 5 # seconds
|
112 |
+
|
113 |
+
for attempt in range(max_retries):
|
114 |
+
try:
|
115 |
+
response = openai.ChatCompletion.create(
|
116 |
+
model="gpt-3.5-turbo-0613",
|
117 |
+
messages=messages,
|
118 |
+
functions=functions,
|
119 |
+
function_call=function_call_decision,
|
120 |
+
)
|
121 |
+
response_message = response["choices"][0]["message"]
|
122 |
+
print(f"response message ^^ -{response_message}")
|
123 |
+
break # If successful, exit the loop
|
124 |
+
|
125 |
+
except openai.error.ServiceUnavailableError as e:
|
126 |
+
print(f"OpenAI Server is not available. Error: {e}")
|
127 |
+
if attempt < max_retries - 1:
|
128 |
+
print(f"Retrying in {retry_delay} seconds...")
|
129 |
+
time.sleep(retry_delay)
|
130 |
+
else:
|
131 |
+
print("Max retries reached. Exiting.")
|
132 |
+
return None, None, None, False, False, False, False
|
133 |
+
|
134 |
+
except openai.error.APIError as e:
|
135 |
+
# This will catch API errors from OpenAI
|
136 |
+
print(f"An API error occurred: {e}")
|
137 |
+
if attempt < max_retries - 1:
|
138 |
+
print(f"Retrying in {retry_delay} seconds...")
|
139 |
+
time.sleep(retry_delay)
|
140 |
+
else:
|
141 |
+
print("Max retries reached. Exiting.")
|
142 |
+
return None, None, None, False, False, False, False
|
143 |
+
|
144 |
+
except Exception as e:
|
145 |
+
# This will catch any other exceptions that are raised.
|
146 |
+
print(f"An unexpected error occurred: {e}")
|
147 |
+
return None, None, None, False, False, False, False
|
148 |
+
|
149 |
+
# Step 2: check if GPT wanted to call a function
|
150 |
+
if response_message.get("function_call"):
|
151 |
+
FLAG_FUN = True
|
152 |
+
# Step 3: call the function
|
153 |
+
# Note: the JSON response may not always be valid; be sure to handle errors
|
154 |
+
available_functions = available_function_defns
|
155 |
+
# only one function in this example, but you can have multiple
|
156 |
+
function_name = response_message["function_call"]["name"]
|
157 |
+
print(f"function_name - {function_name}")
|
158 |
+
|
159 |
+
try:
|
160 |
+
function_to_call = available_functions[function_name]
|
161 |
+
function_args = json.loads(response_message["function_call"]["arguments"])
|
162 |
+
print(f"Logging: fuction_name is - {function_name}")
|
163 |
+
print(f"Logging: fuction_to_call is - {function_to_call}")
|
164 |
+
print(f"Logging: function_args is - {function_args}")
|
165 |
+
function_response = function_to_call(**function_args)
|
166 |
+
print(f"Logging: function_response ^^ is -{function_response}")
|
167 |
+
|
168 |
+
except KeyError as e:
|
169 |
+
print(f"Function not found: {e}")
|
170 |
+
return response_message, None, None, False, False, False, False
|
171 |
+
|
172 |
+
except Exception as e:
|
173 |
+
print(f"An error occurred while calling the function: {e}")
|
174 |
+
return response_message, None, None, False, False, False, False
|
175 |
+
|
176 |
+
if isinstance(function_response, str):
|
177 |
+
if function_response.split('.')[-1] == 'png':
|
178 |
+
FLAG_IMAGE = True
|
179 |
+
elif function_response.split('.')[-1] in ['mp4', "wav", "mp3"]:
|
180 |
+
FLAG_MUSIC = True
|
181 |
+
else:
|
182 |
+
FLAG_GEN = True
|
183 |
+
else:
|
184 |
+
print("PLUGIN FUNCTION RETURNS A NON-STRING OUTPUT: FIX IT TO A STRING OUTPUT TO GET A RESPONSE FROM GPT")
|
185 |
+
|
186 |
+
# Step 4: send the info on the function call and function response to GPT
|
187 |
+
messages.append(response_message) # extend conversation with assistant's reply
|
188 |
+
messages.append(
|
189 |
+
{
|
190 |
+
"role": "function",
|
191 |
+
"name": function_name,
|
192 |
+
"content": function_response,
|
193 |
+
}
|
194 |
+
)
|
195 |
+
print(f"Logging: messages is - {messages}")
|
196 |
+
# extend conversation with function response
|
197 |
+
second_response = openai.ChatCompletion.create(
|
198 |
+
model="gpt-3.5-turbo-0613",
|
199 |
+
messages=messages,
|
200 |
+
) # get a new response from GPT where it can see the function response
|
201 |
+
|
202 |
+
print(f"Logging: second_response is - {second_response}")
|
203 |
+
print(f"Logging: values of Music, Image, and General flags are respectively - {FLAG_MUSIC}, {FLAG_IMAGE}, {FLAG_GEN}")
|
204 |
+
return response_message, second_response, function_response, FLAG_MUSIC, FLAG_IMAGE, FLAG_GEN, FLAG_FUN
|
205 |
+
|
206 |
+
else:
|
207 |
+
return response_message, None, None, False, False, False, False #second_response, function_response, FLAG_MUSIC, FLAG_IMAGE, FLAG_GEN, FALG_FUN
|
208 |
+
|
209 |
+
|
210 |
+
# driver
|
211 |
+
def predict(inputs, top_p, temperature, chat_counter, music_gen, stable_diff, image_cap, top_news, file_output, plugin_message, chatbot=[], history=[]): #repetition_penalty, top_k
|
212 |
+
|
213 |
+
#openai.api_key = os.getenv("OPENAI_API_KEY")
|
214 |
+
|
215 |
+
payload = {
|
216 |
+
"model": "gpt-3.5-turbo-0613",
|
217 |
+
"messages": [{"role": "user", "content": f"{inputs}"}],
|
218 |
+
"temperature" : 1.0,
|
219 |
+
"top_p":1.0,
|
220 |
+
"n" : 1,
|
221 |
+
"stream": True,
|
222 |
+
"presence_penalty":0,
|
223 |
+
"frequency_penalty":0,
|
224 |
+
}
|
225 |
+
|
226 |
+
headers = {
|
227 |
+
"Content-Type": "application/json",
|
228 |
+
"Authorization": f"Bearer {openai_api_key}"
|
229 |
+
}
|
230 |
+
|
231 |
+
print(f"chat_counter - {chat_counter}")
|
232 |
+
print(f"music_gen is {music_gen}, stable_diff is {stable_diff}")
|
233 |
+
|
234 |
+
# file handling
|
235 |
+
print(f"Logging: file_output is - {file_output}")
|
236 |
+
if file_output is not None:
|
237 |
+
files_avail = [f.name for f in file_output ]
|
238 |
+
print(f"Logging: files_available are - {files_avail} ")
|
239 |
+
else:
|
240 |
+
print("Logging: No files available at the moment!")
|
241 |
+
|
242 |
+
if chat_counter != 0 :
|
243 |
+
messages=[]
|
244 |
+
for data in chatbot:
|
245 |
+
temp1 = {}
|
246 |
+
temp1["role"] = "user"
|
247 |
+
temp1["content"] = data[0]
|
248 |
+
temp2 = {}
|
249 |
+
temp2["role"] = "assistant"
|
250 |
+
temp2["content"] = data[1]
|
251 |
+
messages.append(temp1)
|
252 |
+
messages.append(temp2)
|
253 |
+
temp3 = {}
|
254 |
+
temp3["role"] = "user"
|
255 |
+
temp3["content"] = inputs
|
256 |
+
messages.append(temp3)
|
257 |
+
#messages
|
258 |
+
payload = {
|
259 |
+
"model": "gpt-3.5-turbo",
|
260 |
+
"messages": messages, #[{"role": "user", "content": f"{inputs}"}],
|
261 |
+
"temperature" : temperature, #1.0,
|
262 |
+
"top_p": top_p, #1.0,
|
263 |
+
"n" : 1,
|
264 |
+
"stream": True,
|
265 |
+
"presence_penalty":0,
|
266 |
+
"frequency_penalty":0,
|
267 |
+
}
|
268 |
+
|
269 |
+
chat_counter+=1
|
270 |
+
history.append(inputs)
|
271 |
+
print(f"Logging: payload is - {payload}")
|
272 |
+
|
273 |
+
plugins = [music_gen, stable_diff, image_cap, top_news, ]
|
274 |
+
function_call_decision = "auto" if any(plugins) else "none"
|
275 |
+
#function_call_decision = "none" if not (music_gen or stable_diff) else "auto"
|
276 |
+
#function_call_decision = "auto" if (music_gen or stable_diff or image_cap) else "none"
|
277 |
+
print(f"Logging: function_call_decision flag (auto/none) is - {function_call_decision}")
|
278 |
+
IS_FUN = False
|
279 |
+
first_response = None
|
280 |
+
|
281 |
+
if function_call_decision == "auto":
|
282 |
+
first_response, second_response, function_response, IS_MUSIC, IS_IMAGE, IS_GEN, IS_FUN = run_conversation(inputs, function_call_decision)
|
283 |
+
print(f"Logging: first_response return value - {first_response}")
|
284 |
+
print(f"Logging: second_response return value - {second_response}")
|
285 |
+
print(f"Logging: function_response return value - {function_response}")
|
286 |
+
print(f"Logging: IS_MUSIC, IS_IMAGE, IS_GEN, IS_FUN, respectively return value - {IS_MUSIC}, {IS_IMAGE}, {IS_GEN}, {IS_FUN}")
|
287 |
+
|
288 |
+
if (second_response is None) and (first_response is None):
|
289 |
+
bot_response_using_plugins_error = 'Something went wrong! It was either your query or the OpenAI server. I would suggest you can either try again from the start or just reword your last message for more appropriate response.'
|
290 |
+
|
291 |
+
history.append(bot_response_using_plugins_error)
|
292 |
+
print(f"Logging: history with plugins is - {history}")
|
293 |
+
chat = [(history[i], history[i+1]) for i in range(0, len(history)-1, 2)] + ([(history[-1],)] if len(history) % 2 != 0 else [])
|
294 |
+
print(f"Logging: chat with plugins is - {chat}")
|
295 |
+
|
296 |
+
yield chat, history, chat_counter, gr.update(visible=True), gr.update(visible=True), gr.update(visible=False)
|
297 |
+
#yield {chatbot: chat, state:history, chat_counter:chat_counter, plugin_message: gr.update(visible=False) }
|
298 |
+
|
299 |
+
if (second_response is not None): # and (first_response is not None):
|
300 |
+
bot_response_using_plugins = second_response['choices'][0]['message']['content']
|
301 |
+
print(f"Logging: bot_response_using_plugins using plugins is - {bot_response_using_plugins}")
|
302 |
+
bot_response_using_plugins = bot_response_using_plugins.replace("sandbox:", "")
|
303 |
+
|
304 |
+
history.append(bot_response_using_plugins)
|
305 |
+
print(f"Logging: history with plugins is - {history}")
|
306 |
+
chat = [(history[i], history[i+1]) for i in range(0, len(history)-1, 2)] + ([(history[-1],)] if len(history) % 2 != 0 else [])
|
307 |
+
print(f"Logging: chat with plugins is - {chat}")
|
308 |
+
|
309 |
+
if IS_MUSIC:
|
310 |
+
yield chat, history, chat_counter, gr.update(value=function_response), gr.update(visible=True), gr.update(value="<big><b>⏳ Using MusicGen Plugin</big></b>")
|
311 |
+
#yield {chatbot: chat, state:history, chat_counter:chat_counter, gen_music:gr.update(value=function_response), plugin_message: gr.update(value="**## ⏳ Using MusicGen Plugin**") }
|
312 |
+
elif IS_IMAGE:
|
313 |
+
yield chat, history, chat_counter, gr.update(visible=True), gr.update(value=function_response), gr.update(value="<big><b>⏳ Using Diffusers Plugin</big></b>")
|
314 |
+
#yield {chatbot: chat, state:history, chat_counter:chat_counter, gen_image:gr.update(value=function_response), plugin_message: gr.update(value="**## ⏳ Using Diffusers Plugin**") }
|
315 |
+
elif IS_GEN:
|
316 |
+
yield chat, history, chat_counter, gr.update(visible=True), gr.update(visible=True), gr.update(value="<big><b>⏳ Using ImageCaption/News Plugin</big></b>")
|
317 |
+
#yield {chatbot: chat, state:history, chat_counter:chat_counter, plugin_message: gr.update(value="**## ⏳ Using ImageCaption/News Plugin**") }
|
318 |
+
|
319 |
+
|
320 |
+
# When no plugins are chosen; or when plugins are chosen but none was used
|
321 |
+
if (function_call_decision == "none") or (first_response is not None and IS_FUN == False):
|
322 |
+
# make a POST request to the API endpoint using the requests.post method, passing in stream=True
|
323 |
+
response = requests.post(API_URL, headers=headers, json=payload, stream=True)
|
324 |
+
#response = requests.post(API_URL, headers=headers, json=payload, stream=True)
|
325 |
+
token_counter = 0
|
326 |
+
partial_words = ""
|
327 |
+
|
328 |
+
counter=0
|
329 |
+
for chunk in response.iter_lines():
|
330 |
+
#Skipping first chunk
|
331 |
+
if counter == 0:
|
332 |
+
counter+=1
|
333 |
+
continue
|
334 |
+
#counter+=1
|
335 |
+
# check whether each line is non-empty
|
336 |
+
if chunk.decode() :
|
337 |
+
chunk = chunk.decode()
|
338 |
+
# decode each line as response data is in bytes
|
339 |
+
if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
|
340 |
+
#if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0:
|
341 |
+
# break
|
342 |
+
partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
|
343 |
+
if token_counter == 0:
|
344 |
+
history.append(" " + partial_words)
|
345 |
+
else:
|
346 |
+
history[-1] = partial_words
|
347 |
+
chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
|
348 |
+
token_counter+=1
|
349 |
+
yield chat, history, chat_counter, gr.update(visible=True), gr.update(visible=True), gr.update(visible=False)
|
350 |
+
#yield {chatbot: chat, state:history, chat_counter:chat_counter, plugin_message: gr.update(visible=False) }
|
351 |
+
|
352 |
+
|
353 |
+
def reset_textbox():
|
354 |
+
return gr.update(value='')
|
355 |
+
|
356 |
+
def add_image(file_to_save, file_output):
|
357 |
+
print(f"Logging: image file_to_save is - {file_to_save}")
|
358 |
+
print(f"Logging: files available in directory are -{file_output}")
|
359 |
+
|
360 |
+
if file_output is not None:
|
361 |
+
file_output = [f.name for f in file_output]
|
362 |
+
if file_to_save is None:
|
363 |
+
return file_output
|
364 |
+
file_output = [file_to_save] if file_output is None else file_output + [file_to_save]
|
365 |
+
print(f"Logging: Updated file directory - {file_output}")
|
366 |
+
return file_output #gr.update(value="dog1.jpg")
|
367 |
+
|
368 |
+
def add_audio(file_to_save, file_output):
|
369 |
+
print(f"Logging: audio file_to_save is - {file_to_save}")
|
370 |
+
print(f"Logging: files available in directory are -{file_output}")
|
371 |
+
|
372 |
+
if file_output is not None:
|
373 |
+
file_output = [f.name for f in file_output]
|
374 |
+
if file_to_save is None:
|
375 |
+
return file_output
|
376 |
+
file_output = [file_to_save] if file_output is None else file_output + [file_to_save]
|
377 |
+
print(f"Logging: Updated file directory - {file_output}")
|
378 |
+
return file_output #gr.update(value="dog1.jpg")
|
379 |
+
|
380 |
+
def upload_file(file, file_output):
|
381 |
+
print(f"Logging: all files available - {file_output}")
|
382 |
+
print(f"Logging: file uploaded is - {file}")
|
383 |
+
|
384 |
+
img_orig_name = file.name.split('/')[-1]
|
385 |
+
shutil.copy2(file.name, img_orig_name)
|
386 |
+
|
387 |
+
file_output = [file] if file_output is None else file_output + [file]
|
388 |
+
file_output = [f.name for f in file_output]
|
389 |
+
print(f"Logging: Updated file list is - {file_output}")
|
390 |
+
return file_output
|
391 |
+
|
392 |
+
messaging = """
|
393 |
+
How does a Language Model like GPT makes discerning choices regarding which plugins to run? Well, this is done using the Language Model as a reasoning agent and allowing it to assess and process information intelligently:<br><br>
|
394 |
+
<b>Function Calling</b>: Interacting with external APIs via free-form text isn't optimal; instead, employing JSON format proves to be a more efficient method.<br>
|
395 |
+
<b>Gradio Chatbots</b>: Using Gradio and Function Calling you can create chatbots designed to respond to queries by communicating with external APIs. The API responses are fed back to the Language Model for processing and a new response is generated for the user.<br>
|
396 |
+
<b>Describe your functions to GPT</b>: When integrating with GPT-3.5, specific instructions on how to utilize a particular function or plugin are essential; this encompasses specifying the name, description, and required parameters or inputs. Look at gpt_function_definitions.py for more context.<br>
|
397 |
+
<b>Caution</b>: Such function definitions would be conveyed to GPT, so when duplicating to build your own Plugins, proceed with caution as functions consume tokens.<br>
|
398 |
+
<b>Gradio's Usefulness</b>: The versatility of this using Gradio to build LLM applications is immense; In this Gradio app, you can have an array of functions tailored for various purposes, enhancing the breadth and depth of interactions with your Language Model.
|
399 |
+
"""
|
400 |
+
howto = """
|
401 |
+
Welcome to the <b>ChatGPT-Plugins</b> demo, built using Gradio! This interactive chatbot employs the GPT3.5-turbo-0613 model from OpenAI and boasts custom plugins to enhance your chatting experience. Here’s a quick guide to get you started:<br><br>
|
402 |
+
<b>Getting Started</b>: Simply type your messages in the textbox to chat with ChatGPT just like you would in the original app.<br>
|
403 |
+
<b>Using Plugins</b>: Want to try out a plugin? Check the checkbox next to the plugin you want to use.<br><br>
|
404 |
+
|
405 |
+
<b>DIFFUSERS PLUGIN:</b><br>
|
406 |
+
<b>What it does:</b> Generates images based on your text descriptions.<br>
|
407 |
+
<b>How to use:</b> Type a text description of the image you want to generate, and the plugin will create it for you.<br>
|
408 |
+
<b>Example input:</b> "Generate an image of a sunset over the mountains."<br><br>
|
409 |
+
|
410 |
+
<b>MUSIC-GEN PLUGIN:</b><br>
|
411 |
+
<b>What it does:</b> Generates music based on your descriptions.<br>
|
412 |
+
<b>How to use:</b> Describe the type of music you want and select an input melody. Remember to upload a melody first!<br>
|
413 |
+
<b>Example input:</b> "Generate music for a parade using bach.mp3 as input melody."<br><br>
|
414 |
+
|
415 |
+
<b>IMAGE CAPTION PLUGIN:</b><br>
|
416 |
+
<b>What it does:</b> Describes images that you upload.<br>
|
417 |
+
<b>How to use:</b> Upload an image and ask ChatGPT to describe it by name.<br>
|
418 |
+
<b>Example input:</b> "Describe the image dog.jpg."<br><br>
|
419 |
+
|
420 |
+
<b>NEWS PLUGIN:</b><br>
|
421 |
+
<b>What it does:</b> Provides the top 3 news articles based on your search query.<br>
|
422 |
+
<b>How to use:</b> Simply type in a search query and the plugin will present the top 3 news articles matching your query based on relevance.<br>
|
423 |
+
<b>Example input:</b> "Show me the top news about space exploration."<br><br>
|
424 |
+
|
425 |
+
Access Generated Content: Find all generated images and audio in the Gradio Files component located below the input textbox.<br>
|
426 |
+
Have Fun!: Explore and enjoy the versatile features of this <b>Gradio-ChatGPT-PLUGIN demo</b>.<br>
|
427 |
+
Now you’re all set to make the most of this ChatGPT demo. Happy chatting!
|
428 |
+
"""
|
429 |
+
|
430 |
+
with gr.Blocks(css="""#col_container { margin-left: auto; margin-right: auto;}
                      #chatbot {height: 520px; overflow: auto;}""") as demo:
    gr.HTML('<h1 align="center">Build Your Own 🧩Plugins For ChatGPT using 🚀Gradio</h1>')

    with gr.Accordion("Create Plugins for ChatGPT using Gradio in less than 5 minutes!", open=False):
        gr.Markdown(add_plugin_steps)

    with gr.Accordion("How to use the demo and other useful stuff:", open=False):
        with gr.Accordion("How to use the demo?", open=False):
            gr.HTML(howto)
        with gr.Accordion("What is happening?", open=False):
            gr.HTML(messaging)

    gr.HTML('''<center><a href="https://huggingface.co/spaces/ysharma/ChatGPT-Plugins-in-Gradio?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>Duplicate the Space and run securely with your OpenAI API Key</center>''')

    #with gr.Column(elem_id = "col_container"):
    with gr.Row():
        with gr.Column():
            with gr.Accordion("OpenAI API KEY🔑"):
                openai_api_key_tb = gr.Textbox(label="Enter your OpenAI API key here", value="🎁GPT3.5 keys are provided by HuggingFace for Free🥳 Don't need to enter yours!😉🙌")
                plugin_message = gr.Markdown()
        with gr.Column():
            with gr.Accordion("Plug-ins🛠️: Check the box against the plugins you want to use (can select all or few or none)"):
                music_gen = gr.Checkbox(label="🎵MusicGen", value=False)
                stable_diff = gr.Checkbox(label="🖼️Diffusers", value=False)
                image_cap = gr.Checkbox(label="🎨Describe Image", value=False)
                top_news = gr.Checkbox(label="📰News", value=False)

    with gr.Row():
        with gr.Column(scale=0.7):
            chatbot = gr.Chatbot(elem_id='chatbot')
        with gr.Column(scale=0.3):
            #with gr.Group():
            gen_audio = gr.Audio(label="generated audio")
            gen_image = gr.Image(label="generated image", type="filepath")

    with gr.Row():
        with gr.Column(scale=0.85):
            inputs = gr.Textbox(placeholder="Hi there!", label="Type an input and press Enter")
        with gr.Column(scale=0.15, min_width=0):
            btn = gr.UploadButton("📁Upload", file_types=["image", "audio"], file_count="single")

    state = gr.State([])
    b1 = gr.Button("🏃Run")

    with gr.Row():
        with gr.Accordion("Parameters", open=False):
            top_p = gr.Slider(minimum=0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)")
            temperature = gr.Slider(minimum=0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature")
            chat_counter = gr.Number(value=0, visible=False, precision=0)
        with gr.Accordion("Files", open=False):
            file_output = gr.File(file_count="multiple", file_types=["image", "audio"])

    inputs.submit(predict,
                  [inputs, top_p, temperature, chat_counter, music_gen, stable_diff, image_cap, top_news, file_output, plugin_message, chatbot, state],
                  [chatbot, state, chat_counter, gen_audio, gen_image, plugin_message])
    b1.click(predict,
             [inputs, top_p, temperature, chat_counter, music_gen, stable_diff, image_cap, top_news, file_output, plugin_message, chatbot, state],
             [chatbot, state, chat_counter, gen_audio, gen_image, plugin_message])

    b1.click(reset_textbox, [], [inputs])
    inputs.submit(reset_textbox, [], [inputs])

    btn.upload(upload_file, [btn, file_output], file_output)
    gen_image.change(add_image, [gen_image, file_output], file_output)
    gen_audio.change(add_audio, [gen_audio, file_output], file_output)

    gr.HTML("""Bonus! Follow these steps for adding your own Plugins to this chatbot: <a href="https://huggingface.co/spaces/ysharma/ChatGPT-Plugins-in-Gradio/blob/main/README.md" target="_blank">How to add new Plugins in ChatGPT in 5 mins!!</a> or open the accordion given on top.""")

demo.queue(concurrency_count=2, max_size=10).launch(debug=True, height='1000')
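The `predict` handler (defined earlier in app.py) receives each checkbox value plus the conversation state. As a rough, hypothetical sketch of the wiring involved — not the repo's actual implementation — the checked plugins' schemas can be mapped to the `functions` list sent to the OpenAI API like this (schema stand-ins shortened for illustration):

```python
# Stand-in schemas; the real ones live in gpt_function_definitions.py.
generate_music_func = {"name": "generate_music"}
generate_image_func = {"name": "generate_image"}
generate_caption_func = {"name": "generate_caption"}
get_news_func = {"name": "get_news"}

def selected_functions(music_gen, stable_diff, image_cap, top_news):
    """Map checkbox booleans to the list of function schemas to send to OpenAI."""
    flags = [(music_gen, generate_music_func),
             (stable_diff, generate_image_func),
             (image_cap, generate_caption_func),
             (top_news, get_news_func)]
    return [schema for checked, schema in flags if checked]

print([f["name"] for f in selected_functions(True, False, False, True)])
# → ['generate_music', 'get_news']
```

If no plugin is checked, the list is empty and the request can simply omit the `functions` argument, so the model behaves like plain ChatGPT.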
gpt_function_definitions.py
ADDED
@@ -0,0 +1,164 @@
import os
from newsapi import NewsApiClient
from gradio_client import Client

HF_TOKEN = os.getenv("HF_TOKEN")
NEWSAPI = os.getenv("NEWSAPI")


# example input: prompt = 'Beautiful Sky with "Gradio is love" written over it'
# defining a function to generate an image using the Gradio demo of TextDiffuser hosted on Spaces
def generate_image(prompt):
    """
    generate an image based on the prompt provided
    """
    client = Client("https://jingyechen22-textdiffuser.hf.space/")
    result = client.predict(
        prompt,                   # str: input prompt; enclose keywords in 'single quotes', English only
        20,                       # int | float (1 to 50): 'Sampling step' Slider
        7.5,                      # int | float (1 to 9): 'Scale of classifier-free guidance' Slider
        1,                        # int | float (1 to 4): 'Batch size' Slider
        "Stable Diffusion v2.1",  # str: 'Pre-trained Model' Radio
        fn_index=1)
    return result[0]


# example input: input_text = "A cheerful country song with acoustic guitars"
# example input melody: "/content/bolero_ravel.mp3"
# defining a function to generate music using the Gradio demo of MusicGen hosted on Spaces
def generate_music(input_text, input_melody):
    """
    generate music based on an input text and input melody
    """
    client = Client("https://ysharma-musicgendupe.hf.space/", hf_token=HF_TOKEN)
    result = client.predict(
        "melody",      # str: 'Model' Radio
        input_text,    # str: 'Input Text' Textbox
        input_melody,  # str (filepath or URL): 'Melody Condition (optional)' Audio
        5,             # int | float (1 to 120): 'Duration' Slider
        250,           # int | float: 'Top-k' Number
        0,             # int | float: 'Top-p' Number
        1,             # int | float: 'Temperature' Number
        3,             # int | float: 'Classifier Free Guidance' Number
        fn_index=1)
    return result


generate_music_func = {
    "name": "generate_music",
    "description": "generate music based on an input text and input melody",
    "parameters": {
        "type": "object",
        "properties": {
            "input_text": {
                "type": "string",
                "description": "input text for the music generation"
            },
            "input_melody": {
                "type": "string",
                "description": "file path of input melody for the music generation"
            }
        },
        "required": ["input_text", "input_melody"]
    }
}


# example input: input_image = "cat.jpg"
# defining a function to generate a caption using an image-captioning Gradio demo hosted on Spaces
def generate_caption(input_image):
    """
    generate caption for the input image
    """
    client = Client("https://nielsr-comparing-captioning-models.hf.space/")
    # keep only the file name if a full path was passed in
    input_image = input_image.split('/')[-1]
    result = client.predict(
        input_image,
        api_name="/predict")
    result = ("The image can have any one of the following captions, all captions are correct: "
              + ", or ".join([f"'{caption.replace('.', '')}'" for caption in result]))
    return result


generate_caption_func = {
    "name": "generate_caption",
    "description": "generate caption for the image present at the filepath provided",
    "parameters": {
        "type": "object",
        "properties": {
            "input_image": {
                "type": "string",
                "description": "filepath for the input image"
            }
        },
        "required": ["input_image"]
    }
}


generate_image_func = {
    "name": "generate_image",
    "description": "generate image based on the input text prompt",
    "parameters": {
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "input text prompt for the image generation"
            }
        },
        "required": ["prompt"]
    }
}


# defining a function to get the most relevant world news for a given query
# example query: Joe Biden presidency
def get_news(search_query):
    """
    get top three news items for your search query
    """
    newsapi = NewsApiClient(api_key=NEWSAPI)
    docs = newsapi.get_everything(q=search_query,
                                  language='en',
                                  sort_by='relevancy',
                                  page_size=3,
                                  page=1)['articles']
    res = [news['description'] for news in docs]
    res = [item.replace('<li>', '').replace('</li>', '').replace('<ol>', '') for item in res]
    res = "\n".join([f"{i}. {res[i-1]}" for i in range(1, len(res) + 1)])
    return "Following list has the top three news items for the given search query:\n" + res


get_news_func = {
    "name": "get_news",
    "description": "get top three english news items for a given query, sorted by relevancy",
    "parameters": {
        "type": "object",
        "properties": {
            "search_query": {
                "type": "string",
                "description": "input search string to search for relevant news"
            }
        },
        "required": ["search_query"]
    }
}


dict_plugin_functions = {
    'generate_music_func': {'dict': generate_music_func, 'func': generate_music},
    'generate_image_func': {'dict': generate_image_func, 'func': generate_image},
    'generate_caption_func': {'dict': generate_caption_func, 'func': generate_caption},
    'get_news_func': {'dict': get_news_func, 'func': get_news}
}
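Each entry in `dict_plugin_functions` pairs an OpenAI function-calling schema (`'dict'`) with the Python callable that implements it (`'func'`). A minimal, self-contained sketch of how a `function_call` returned by the model can be dispatched through this registry — the helper names here are hypothetical, and a trivial stand-in replaces the real network-calling plugin:

```python
import json

def get_news(search_query):
    # stand-in for the real plugin function defined above (no network call)
    return f"top news for: {search_query}"

# registry in the same shape as dict_plugin_functions
dict_plugin_functions = {
    'get_news_func': {'dict': {'name': 'get_news'}, 'func': get_news},
}

# suppose the model's response contained a function_call like this
function_call = {"name": "get_news",
                 "arguments": json.dumps({"search_query": "space exploration"})}

# look up the matching plugin by its schema name and invoke it with the parsed arguments
name_to_func = {v['dict']['name']: v['func'] for v in dict_plugin_functions.values()}
args = json.loads(function_call["arguments"])
result = name_to_func[function_call["name"]](**args)
print(result)  # → top news for: space exploration
```

The result string would then be sent back to the model as a `role: "function"` message so it can compose the final reply.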
requirements.txt
ADDED
@@ -0,0 +1,2 @@
openai
newsapi-python