sinatayebati committed on
Commit
62009ed
1 Parent(s): 276b8cf

#8 changed to mixtral 8x7b

Files changed (5)
  1. README.md +113 -8
  2. app.py +3 -4
  3. helper.txt +8 -10
  4. licence +21 -0
  5. requirements.txt +1 -4
README.md CHANGED
@@ -1,11 +1,116 @@
  ---
- title: Talking Duck
- emoji: 📉
- colorFrom: yellow
- colorTo: indigo
- sdk: docker
- pinned: false
- license: mit
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # TALKING DUCK 🦆
+
+ Welcome to the coolest corner of the coding pond - <a href="https://huggingface.co/spaces/sinatayebati/Talking-Duck">**TALKING DUCK**!</a> This isn't just any duck. It's your feathered friend that quacks code solutions and dives deep into programming puzzles. Ever wished for a coding buddy that could talk back, understand your code woes, and not just stare blankly at you? **TALKING DUCK** is here to change the game!
+
+ Dive in beak-first and see TALKING DUCK in action over at its very own <a href="https://huggingface.co/spaces/sinatayebati/Talking-Duck">Hugging Face Space</a>. It's like a pond party for coders, and you're invited! 🎉
+
+
+ ## What's the Quack?
+
+ **TALKING DUCK** is an LLM-powered chatbot that does more than just float around. It listens (literally) to your code problems, thanks to its audio-to-text superpower, and then it gets its webbed feet dirty by diving into the depths of code logic to bring you pearls of wisdom... or at least, the answers to your coding questions.
+
+ Whether you're tangled in the vines of Python, lost in the braces of JavaScript, or just need someone to tell you "It's gonna be okay, buddy," **TALKING DUCK** is your go-to aquatic ally.
+
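The flow behind that description is two Hugging Face Inference API calls, mirroring `app.py`: Whisper turns the recorded question into text, and the transcript goes to the LLM endpoint. A minimal sketch of that pipeline, assuming the standard Inference API response shapes and a hypothetical recording `question.wav`:

```python
import os

import requests

API_TOKEN = os.getenv("HF_API_TOKEN")
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}
TRANSCRIBE_API_URL = "https://api-inference.huggingface.co/models/openai/whisper-base.en"
LLM_API_URL = "https://api-inference.huggingface.co/models/mistralai/Mixtral-8x7B-Instruct-v0.1"


def transcribe(audio_path: str) -> str:
    """Send raw audio bytes to the Whisper endpoint and return the transcript."""
    with open(audio_path, "rb") as f:
        resp = requests.post(TRANSCRIBE_API_URL, headers=HEADERS, data=f.read())
    resp.raise_for_status()
    return resp.json()["text"]


def ask_duck(question: str) -> str:
    """Ask the LLM endpoint; text-generation responses arrive as [{"generated_text": ...}]."""
    resp = requests.post(LLM_API_URL, headers=HEADERS, json={"inputs": question})
    resp.raise_for_status()
    return resp.json()[0]["generated_text"]


if __name__ == "__main__":
    question = transcribe("question.wav")  # hypothetical audio file
    print(ask_duck(question))
```

The real `app.py` wires these same two calls into a Gradio interface and adds the generation parameters shown in the diff below.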
+ ## Features
+
+ - 🎤 **Audio Input**: Just quack your question into your mic.
+ - 🤔 **Smart Analysis**: Leveraging the brainy might of LLMs, including the one and only `mistralai/Mixtral-8x7B-Instruct-v0.1`.
+ - 💡 **Insightful Answers**: Get recommendations, solutions, and sometimes just a friendly quack.
+ - 🦆 **Fun Interface**: Because who said coding assistants need to be boring?
+
+
+ # Get Started
+ To run Talking Duck locally, you can either build it with Docker or set up a conda environment.
+
+
+ ## Docker Setup
+
+ If you prefer to keep your machine clean and run everything in containers, **TALKING DUCK** is Docker-ready! Just follow these simple steps to containerize the quack:
+
+ ### Build the Docker Image
+
+ Navigate to the root directory of the cloned project where the `Dockerfile` is located, and run:
+
+ ```bash
+ docker build -t talking-duck .
+ ```
+
+ This command builds a Docker image named `talking-duck` based on the instructions in your `Dockerfile`.
+
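The `helper.txt` in this commit also records a Buildx variant that forwards your Hugging Face token as a build secret; a sketch, assuming `HF_API_TOKEN` is already exported in your shell (note it tags the image `gradio-app` rather than `talking-duck`):

```bash
docker buildx create --use
docker buildx build --load --secret id=HF_API_TOKEN,env=HF_API_TOKEN -t gradio-app .
```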
+ ### Run the Container
+
+ Once the build is complete, you can run **TALKING DUCK** in a Docker container using:
+
+ ```bash
+ docker run -p 7860:7860 talking-duck
+ ```
+
+ This command starts a container from the `talking-duck` image, mapping port 7860 of the container to port 7860 on your host machine.
+
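`app.py` reads its token from the `HF_API_TOKEN` environment variable, so the Inference API calls only succeed if the container can see it. Following the pattern in `helper.txt`, a sketch with a placeholder token and the image name from the build step above:

```bash
export HF_API_TOKEN="your_huggingface_api_token_here"
docker run -it -p 7860:7860 -e HF_API_TOKEN=$HF_API_TOKEN talking-duck
```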
+ ### Visit Your Duck
+
+ Open your favorite web browser and navigate to `http://localhost:7860` to interact with **TALKING DUCK**. No installation mess, just pure, unadulterated duck fun.
+
+ ### Docker Compose (Optional)
+
+ For an even easier setup, if you have a `docker-compose.yml` file at the root of your project, you can start **TALKING DUCK** with just one command:
+
+ ```bash
+ docker-compose up --build
+ ```
+
+ This command uses Docker Compose to build the image and run the container as defined in your `docker-compose.yml` file. It's perfect for when you want to get up and running with minimal fuss.
+
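If the repository does not already ship one, a minimal `docker-compose.yml` along these lines would match the commands above; the service name and build context are assumptions, and the environment entry simply forwards `HF_API_TOKEN` from your shell:

```yaml
services:
  talking-duck:
    build: .                          # build from the local Dockerfile
    ports:
      - "7860:7860"                   # expose the Gradio app
    environment:
      - HF_API_TOKEN=${HF_API_TOKEN}  # forwarded from the host shell
```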
+ ---
+
+ **Note:** Be sure to adjust the `docker build` and `docker run` commands based on your specific Docker setup, including the correct image names and any additional options you might need. The instructions above assume a basic setup for demonstration purposes.
+
+ Don't forget to check the Docker and Docker Compose documentation for more detailed information on building and running containers.
+
+ ## Env Setup
+
+ Want your very own **TALKING DUCK**? Follow these steps to clone this repository and get quacking (see the API token note after step 3):
+
+ 1. **Clone the Repository**
+
+ ```bash
+ git clone https://github.com/your-username/talking-duck.git
+ cd talking-duck
+ ```
+
+ 2. **Set Up Your Pond**
+
+ Make sure you have Python installed, then dive in with:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ 3. **Wake the Duck**
+
+ Start your **TALKING DUCK** with:
+
+ ```bash
+ python app.py
+ ```
+
+ Visit `http://localhost:7860` in your web browser to see **TALKING DUCK** in all its glory!
+
+
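The API token note: `app.py` expects your Hugging Face API token in the `HF_API_TOKEN` environment variable (see `helper.txt`), so export it before waking the duck; the value below is a placeholder:

```bash
export HF_API_TOKEN="your_huggingface_api_token_here"
echo $HF_API_TOKEN   # sanity check
python app.py
```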
  ---
+
+ ## Contribute
+
+ Got ideas to make **TALKING DUCK** even cooler? Fork this repo, make your changes, and submit a pull request. New jokes, features, and improvements are always welcome. Let's make coding fun together!
+
+ ## License
+
+ **TALKING DUCK** is released under the MIT License. See `LICENSE` for more information.
+
  ---

+ Remember to replace `https://github.com/your-username/talking-duck.git` with the actual URL of your repository.
app.py CHANGED
@@ -5,7 +5,7 @@ import os

  API_TOKEN = os.getenv("HF_API_TOKEN")
  TRANSCRIBE_API_URL = "https://api-inference.huggingface.co/models/openai/whisper-base.en"
- LLM_API_URL = "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-v0.1"

  def transcribe_audio(audio_file):
      """Transcribe audio file to text."""
@@ -39,7 +39,6 @@ def get_answer(context, question):
  "parameters": {
  "temperature": 0.3, # More deterministic
  "top_p": 0.95, # Consider top 95% probable tokens at each step
- "max_new_tokens": 100, # Limit the response length
  "repetition_penalty": 1.2, # Discourage repetition
  "num_return_sequences": 1, # Number of responses to generate
  "return_full_text": False, # Return only generated text, not the full prompt
@@ -86,8 +85,8 @@ with gr.Blocks() as app:
  <div style="display: flex; justify-content: space-around; align-items: center;">
  <div>
  <img src="https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/mistral-logo.png" alt="Mistral Logo" style="width: 40px; margin-bottom: 10px;"/>
- <div style="font-size: 14px;">mistralai/Mistral-7B-v0.1</div>
- <a href="https://huggingface.co/mistralai/Mistral-7B-v0.1" target="_blank" style="color: white; text-decoration: none; font-size: 12px;">Model Page</a>
  </div>
  <div>
  <img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/1620805164087-5ec0135ded25d76864d553f1.png?w=200&h=200&f=face" alt="Second Model Logo" style="width: 40px; margin-bottom: 10px;"/>

  API_TOKEN = os.getenv("HF_API_TOKEN")
  TRANSCRIBE_API_URL = "https://api-inference.huggingface.co/models/openai/whisper-base.en"
+ LLM_API_URL = "https://api-inference.huggingface.co/models/mistralai/Mixtral-8x7B-Instruct-v0.1"

  def transcribe_audio(audio_file):
      """Transcribe audio file to text."""

  "parameters": {
  "temperature": 0.3, # More deterministic
  "top_p": 0.95, # Consider top 95% probable tokens at each step

  "repetition_penalty": 1.2, # Discourage repetition
  "num_return_sequences": 1, # Number of responses to generate
  "return_full_text": False, # Return only generated text, not the full prompt

  <div style="display: flex; justify-content: space-around; align-items: center;">
  <div>
  <img src="https://huggingface.co/datasets/huggingchat/models-logo/resolve/main/mistral-logo.png" alt="Mistral Logo" style="width: 40px; margin-bottom: 10px;"/>
+ <div style="font-size: 14px;">mistralai/Mixtral-8x7B-Instruct-v0.1</div>
+ <a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1" target="_blank" style="color: white; text-decoration: none; font-size: 12px;">Model Page</a>
  </div>
  <div>
  <img src="https://aeiljuispo.cloudimg.io/v7/https://cdn-uploads.huggingface.co/production/uploads/1620805164087-5ec0135ded25d76864d553f1.png?w=200&h=200&f=face" alt="Second Model Logo" style="width: 40px; margin-bottom: 10px;"/>
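To make the hunks above concrete: after this commit, `get_answer` targets the Mixtral instruct endpoint and no longer caps output with `max_new_tokens`. A sketch of the resulting request, with a stand-in prompt:

```python
import os

import requests

LLM_API_URL = "https://api-inference.huggingface.co/models/mistralai/Mixtral-8x7B-Instruct-v0.1"
headers = {"Authorization": f"Bearer {os.getenv('HF_API_TOKEN')}"}

payload = {
    "inputs": "Why does my Python loop never terminate?",  # stand-in prompt
    "parameters": {
        "temperature": 0.3,           # More deterministic
        "top_p": 0.95,                # Consider top 95% probable tokens at each step
        "repetition_penalty": 1.2,    # Discourage repetition
        "num_return_sequences": 1,    # Number of responses to generate
        "return_full_text": False,    # Return only generated text, not the full prompt
    },
}

response = requests.post(LLM_API_URL, headers=headers, json=payload)
print(response.json()[0]["generated_text"])
```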
helper.txt CHANGED
@@ -1,19 +1,17 @@
  # Ensure Docker Buildx is used and setup
  docker buildx create --use

- # Build your Docker image with Buildx (assuming you're now working locally and not pushing to a registry)
- docker buildx build --load --secret id=HF_API_TOKEN,env=HF_API_TOKEN -t gradio-app .

- # Now run your Docker container, passing the secret environment variable
- docker run -it -p 7860:7860 -e HF_API_TOKEN=$HF_API_TOKEN gradio-app


- docker build -t gradio-app .

  docker run -it -p 7860:7860 -e HF_API_TOKEN=$HF_API_TOKEN gradio-app

- docker-compose up --build
-
-
-
- "--host", "0.0.0.0", "--port", "7860"

  # Ensure Docker Buildx is used and setup
  docker buildx create --use

+ # Export your API token as a secret variable
+ export HF_API_TOKEN="your_huggingface_api_token_here"

+ echo $HF_API_TOKEN


+ # Build your Docker image with Buildx (assuming you're now working locally and not pushing to a registry)
+ docker buildx build --load --secret id=HF_API_TOKEN,env=HF_API_TOKEN -t gradio-app .

+ # Now run your Docker container, passing the secret environment variable
  docker run -it -p 7860:7860 -e HF_API_TOKEN=$HF_API_TOKEN gradio-app

+ # Use Docker Compose
+ docker-compose up --build
licence ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2023 Sina Tayebati
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
requirements.txt CHANGED
@@ -1,6 +1,3 @@
- fastapi==0.110.0
  gradio==4.21.0
  requests==2.31.0
- torch==1.11
- transformers==4.38.2
- uvicorn[standard]==0.28.0

  gradio==4.21.0
  requests==2.31.0
+ transformers==4.38.2