haonanzhang and tanish2502 committed
Commit 53b4111 (0 parents)

Duplicate from tanish2502/ChatGPT-AI-Assistant-App


Co-authored-by: Tanish Gupta <tanish2502@users.noreply.huggingface.co>

.github/workflows/sync_to_huggingface_hub.yml ADDED
@@ -0,0 +1,20 @@
+ name: Sync to Hugging Face hub
+ on:
+   push:
+     branches: [main]
+
+   # to run this workflow manually from the Actions tab
+   workflow_dispatch:
+
+ jobs:
+   sync-to-hub:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+         with:
+           fetch-depth: 0
+           lfs: true
+       - name: Push to hub
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+         run: git push --force https://tanish2502:$HF_TOKEN@huggingface.co/spaces/tanish2502/ChatGPT-AI-Assistant-App main
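For reference, the push that the workflow's final step performs with git can also be done from Python with the official huggingface_hub client. This is a rough editorial sketch, not part of the commit: it assumes huggingface_hub is installed and that HF_TOKEN is exported in the environment, mirroring the repository secret above.

    import os
    from huggingface_hub import HfApi

    # Upload the current checkout to the Space, roughly equivalent in effect
    # to the workflow's forced git push. repo_id and repo_type name the
    # target Space on the Hub.
    api = HfApi(token=os.environ["HF_TOKEN"])
    api.upload_folder(
        folder_path=".",
        repo_id="tanish2502/ChatGPT-AI-Assistant-App",
        repo_type="space",
    )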
README.md ADDED
@@ -0,0 +1,33 @@
+ ---
+ title: ChatGPT AI Assistant App
+ emoji: 😉
+ colorFrom: green
+ colorTo: yellow
+ sdk: gradio
+ sdk_version: 3.23.0
+ app_file: app.py
+ pinned: false
+ duplicated_from: tanish2502/ChatGPT-AI-Assistant-App
+ ---
+
+ # ChatGPT-AI-Assistant-App
+ Demo of a 🤗 Spaces deployment of a Gradio Python AI assistant app.
+
+ Link to the app: https://huggingface.co/spaces/tanish2502/ChatGPT-AI-Assistant-App
+
+ # Description
+ This AI assistant is designed to make your life easier by responding to your queries in a conversational manner.
+ It understands both audio and text input, making it a versatile tool for all kinds of communication.
+
+ The backend is powered by two OpenAI APIs: Whisper and ChatGPT.
+ The Whisper API converts audio input to text, and the ChatGPT API processes that text and generates a response.
+ Together, these two tools allow the assistant to provide accurate and helpful answers to your queries.
+
+ One of the standout features of this AI assistant is its ability to remember previous conversation turns and respond accordingly.
+ This means it can give more personalized and relevant responses, making it a valuable tool for ongoing communication.
+
+ I hope you find this ChatGPT-based AI assistant useful and enjoyable to use.
+
+ If you have any questions or feedback, please don't hesitate to reach out to me.
+
+ Thank you for choosing this AI assistant!
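The conversation memory the README describes boils down to resending a growing list of role/content messages with every request. Below is a minimal sketch of that loop in the same legacy openai 0.x style that app.py uses; the history list and the ask helper are illustrative names, not part of the app.

    import openai

    history = [{"role": "system", "content": "You are an AI assistant expert."}]

    def ask(user_input):
        # Each turn is appended to the shared list, so the model sees the
        # whole conversation on every call; that accumulation is the "memory".
        history.append({"role": "user", "content": user_input})
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
        reply = response["choices"][0]["message"]
        history.append(reply)
        return reply["content"]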
app.py ADDED
@@ -0,0 +1,83 @@
+ import gradio as gr
+ import openai
+ import os
+ from dotenv import load_dotenv
+ from pydub import AudioSegment
+
+ load_dotenv()
+
+ # Accessing the OpenAI API key from the environment
+ openai.api_key = os.getenv("OPENAI_API_KEY")
+
+ # Separate chat histories for the audio and text tabs
+ audio_messages = [{"role": "system", "content": 'You are an AI assistant expert. Respond to all input in precise, crisp and easy to understand language.'}]
+ text_messages = [{"role": "system", "content": 'You are an AI assistant expert. Respond to all input in precise, crisp and easy to understand language.'}]
+
+ """
+ The gr.Audio source does not always produce a WAV file, which openai.Audio.transcribe()
+ expects, so the recording is first converted to WAV with pydub.
+ """
+
+ def audio_transcribe(audio):
+     # Alias (not a copy) of the shared audio history, so turns accumulate
+     audio_message = audio_messages
+
+     # Convert the recording to WAV and send it to the Whisper API
+     audio_file = AudioSegment.from_file(audio)
+     audio_file.export("temp.wav", format="wav")
+     with open("temp.wav", "rb") as final_audio_file:
+         transcript = openai.Audio.transcribe("whisper-1", final_audio_file)
+     os.remove("temp.wav")
+
+     # Transcribed input to the ChatGPT API for chat completion
+     audio_message.append({"role": "user", "content": transcript["text"]})
+     response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=audio_message)
+     system_message = response["choices"][0]["message"]
+     audio_message.append(system_message)
+
+     # Render every non-system turn of the conversation
+     chat_transcript = ""
+     for message in audio_message:
+         if message['role'] != 'system':
+             chat_transcript += message['role'] + ": " + message['content'] + "\n\n"
+
+     return chat_transcript
+
+ def text_transcribe(user_input):
+     # Alias (not a copy) of the shared text history, so turns accumulate
+     text_message = text_messages
+
+     # Typed input to the ChatGPT API for chat completion
+     text_message.append({"role": "user", "content": user_input})
+     response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=text_message)
+     system_message = response["choices"][0]["message"]
+     text_message.append(system_message)
+
+     # Render every non-system turn of the conversation
+     chat_transcript = ""
+     for message in text_message:
+         if message['role'] != 'system':
+             chat_transcript += message['role'] + ": " + message['content'] + "\n\n"
+     return chat_transcript
+
+ title = """<h1 align="center">Your Chat-GPT AI Assistant at your Service!! 😎 </h1>"""
+ with gr.Blocks(theme=gr.themes.Soft()) as demo:
+     gr.HTML(title)
+     with gr.Tab("Audio Input"):
+         with gr.Row():
+             user_audio_input = gr.Audio(source="microphone", type="filepath", label="Speak Here")
+             audio_output = gr.Textbox(label="AI Response", lines=20, placeholder="AI Response will be displayed here...")
+         with gr.Row():
+             audio_submit_button = gr.Button("Submit")
+     with gr.Tab("Text Input"):
+         with gr.Row():
+             user_text_input = gr.Textbox(label="Type Here", lines=20, placeholder="Type your message here...")
+             text_output = gr.Textbox(label="AI Response", lines=20, placeholder="AI Response will be displayed here...")
+         with gr.Row():
+             text_submit_button = gr.Button("Submit")
+     audio_submit_button.click(fn=audio_transcribe, inputs=user_audio_input, outputs=audio_output)
+     text_submit_button.click(fn=text_transcribe, inputs=user_text_input, outputs=text_output)
+
+     gr.Markdown("<center> Made with ❤️ by Tanish Gupta. Credits to 🤗 Spaces for Hosting this App </center>")
+
+ demo.launch()
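To try the app outside Spaces, one plausible setup (assuming an OpenAI API key, and ffmpeg available for pydub's audio conversion) is a .env file next to app.py containing OPENAI_API_KEY=<your key>, followed by pip install -r requirements.txt and python app.py; load_dotenv() picks the key up at startup.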
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ openai
+ gradio
+ python-dotenv
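Note that app.py also imports pydub, which is not listed here; gradio 3.x normally pulls pydub in as one of its own dependencies, so the Space still builds, but adding an explicit pydub line would make the requirement visible if that transitive dependency ever changed.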