jfeng1115 committed
Commit 957b011 • 0 Parent(s)

init commit

Files changed (6):
  1. .env.sample +1 -0
  2. Dockerfile +11 -0
  3. README.md +208 -0
  4. app.py +76 -0
  5. chainlit.md +3 -0
  6. requirements.txt +3 -0
.env.sample ADDED
OPENAI_API_KEY=###
Dockerfile ADDED
# Base image with Python 3.9
FROM python:3.9

# Run as a non-root user, as recommended for Hugging Face Spaces
RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

WORKDIR $HOME/app

# Install dependencies first so this layer is cached between builds
COPY --chown=user ./requirements.txt $HOME/app/requirements.txt
RUN pip install -r requirements.txt

# Copy the application code
COPY --chown=user . $HOME/app

# Chainlit serves the app on port 7860, the port Hugging Face Spaces expects by default
CMD ["chainlit", "run", "app.py", "--port", "7860"]
README.md ADDED
---
title: BeyondChatGPT Demo
emoji: 📉
colorFrom: pink
colorTo: yellow
sdk: docker
pinned: false
---

<p align="center" draggable="false"><img src="https://github.com/AI-Maker-Space/LLM-Dev-101/assets/37101144/d1343317-fa2f-41e1-8af1-1dbb18399719"
   width="200px"
   height="auto"/>
</p>

## <h1 align="center" id="heading">:wave: Welcome to Beyond ChatGPT!!</h1>

## Agenda

### Build 🏗️
Build and containerize your App

### Ship 🚢
Deploy your App on Hugging Face

### Share 🚀
- Submit the link to your App for this assignment!
- Submit the link to a Loom video walkthrough (<5 min) of your Interactive Development Environment (IDE) for LLM Ops and your first LLM application
- Share 3 lessons learned
- Share 3 lessons not yet learned
- Make a social media post about your final application and tag @AIMakerspace

Here's a template to get your post started!

```
🚀🎉 Exciting News! 🎉🚀

🏗️ Today, I'm thrilled to announce that I've successfully built and shipped my first-ever LLM application using the powerful combination of Chainlit, Docker, and the OpenAI API! 🖥️

Here are my 3 main takeaways!
- Takeaway 1
- Takeaway 2
- Takeaway 3

Check it out 👇
[LINK TO APP]

A big shoutout to @**AI Makerspace** for making all of this possible. Couldn't have done it without the incredible community there. 🤗🙏

Looking forward to building with the community! 🙌✨ Here's to many more creations ahead! 🥂🎉

Who else is diving into the world of AI? Let's connect! 🌍💡

#FirstLLM #Chainlit #Docker #OpenAI #AIMakerspace
```

## 🤖 Your First LLM App

> If you need an introduction to `git`, or information on how to set up API keys for the tools we'll be using in this repository, check out our [Interactive Dev Environment for LLM Development](https://github.com/AI-Maker-Space/Interactive-Dev-Environment-for-LLM-Development/tree/main), which has everything you need to get started!

In this repository, we'll walk you through the steps to create a Large Language Model (LLM) application using Chainlit, containerize it with Docker, and deploy it to Hugging Face Spaces.

Are you ready? Let's get started!

<details>
<summary>🖥️ Accessing "gpt-3.5-turbo" (ChatGPT) like a developer</summary>

1. Head to [this notebook](https://colab.research.google.com/drive/1mOzbgf4a2SP5qQj33ZxTz2a01-5eXqk2?usp=sharing) and follow along with the instructions!

2. Complete the notebook and try out your own system/assistant messages!

That's it! Head to the next step and start building your application! (A minimal code version of the same API call is sketched just after this section.)

</details>
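If you'd like to see that call in plain code rather than in the notebook, here is a minimal sketch using the `openai==0.28.0` client pinned in `requirements.txt`; the messages are illustrative, and it assumes `OPENAI_API_KEY` is set in your `.env`:

``` python
import os

import openai  # openai==0.28.0, as pinned in requirements.txt
from dotenv import load_dotenv

load_dotenv()  # pull OPENAI_API_KEY from your .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

# A single, non-streaming chat completion against gpt-3.5-turbo
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant who always speaks in a pleasant tone!"},
        {"role": "user", "content": "Explain what an LLM is in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```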

<details>
<summary>🏗️ Building Your First LLM App</summary>

1. Clone [this](https://github.com/AI-Maker-Space/Beyond-ChatGPT/tree/main) repo.

``` bash
git clone https://github.com/AI-Maker-Space/Beyond-ChatGPT.git
```

2. Navigate inside the repo.
``` bash
cd Beyond-ChatGPT
```

3. Install the packages required for this Python environment from `requirements.txt`.
``` bash
pip install -r requirements.txt
```

4. Open the `.env.sample` file, replace the `###` with your OpenAI API key, save the file, and rename it to `.env`.
``` bash
OPENAI_API_KEY=sk-###
```
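As an optional sanity check (not part of the original instructions), you can confirm the key is picked up from `.env` before launching the app:

``` bash
python -c "from dotenv import load_dotenv; import os; load_dotenv(); print('OPENAI_API_KEY loaded:', bool(os.getenv('OPENAI_API_KEY')))"
```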

5. Let's try running the app locally. Make sure you're in the Python environment where you installed Chainlit and OpenAI, then run the app with Chainlit (the `-w` flag watches your files and reloads the app when they change). This may take a minute to start.
``` bash
chainlit run app.py -w
```

<p align="center" draggable="false">
  <img src="https://i.imgur.com/mlgrl1N.png">
</p>

Great work! Let's see if we can interact with our chatbot.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/854e4435-1dee-438a-9146-7174b39f7c61">
</p>

Awesome! Time to throw it into a Docker container and prepare it for shipping!
</details>

<details>
<summary>🐳 Containerizing our App</summary>

1. Let's build the Docker image. We'll tag the image as `llm-app` using the `-t` parameter. The `.` at the end tells Docker to use the current directory as the build context.

``` bash
docker build -t llm-app .
```
You'll see a series of build steps; each one corresponds to an instruction in our `Dockerfile`.

If you'd like to learn more, check out Docker's [build reference](https://docs.docker.com/engine/reference/commandline/build/).

2. Run and test the Docker image locally using the `run` command. The `-p` parameter maps the **host port** (to the left of the `:`) to the **container port** (to the right of it).

``` bash
docker run -p 7860:7860 llm-app
```
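Since there is no `.dockerignore` in this repo, the build above copies your local `.env` into the image, which is how the container finds your key. If you'd rather keep the key out of the image, you can pass it in at run time instead; this is an optional variation, not part of the original instructions:

``` bash
docker run --env-file .env -p 7860:7860 llm-app
```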

3. Visit http://localhost:7860 in your browser to see if the app runs correctly.

<p align="center" draggable="false">
  <img src="https://i.imgur.com/mlgrl1N.png">
</p>

Great! Time to ship!
</details>

<details>
<summary>🚀 Deploying Your First LLM App</summary>

1. Let's create a new Hugging Face Space. Navigate to [Hugging Face](https://huggingface.co), click on your profile picture in the top right, then click `New Space`.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/f0656408-28b8-4876-9887-8f0c4b882bae">
</p>

2. Set up your Space as shown below:

- Owner: Your username
- Space Name: `llm-app`
- License: `Openrail`
- Select the Space SDK: `Docker`
- Docker Template: `Blank`
- Space Hardware: `CPU basic - 2 vCPU - 16 GB - Free`
- Repo type: `Public`

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/8f16afd1-6b46-4d9f-b642-8fefe355c5c9">
</p>

3. You should see something like this. We're now ready to send our files to the Space: clone the Space repository, move your project files into it (this repo already contains the `Dockerfile`, so you do NOT need to create a new one), and push. Make sure NOT to push your `.env` file; it should be ignored automatically. A sketch of this flow follows the screenshot below.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/cbf366e2-7613-4223-932a-72c67a73f9c6">
</p>
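Here is a minimal sketch of that clone-copy-push flow, assuming a hypothetical username `your-username`, the `llm-app` Space name suggested above, and a sibling checkout of this repo (adjust all three to match your setup):

``` bash
# Clone the new Space repository; the exact URL is shown on your Space page, this one is illustrative
git clone https://huggingface.co/spaces/your-username/llm-app
cd llm-app

# Copy the project files in (everything except .env)
cp ../Beyond-ChatGPT/app.py ../Beyond-ChatGPT/Dockerfile ../Beyond-ChatGPT/requirements.txt ../Beyond-ChatGPT/chainlit.md .

# Commit and push to trigger the Space build
git add .
git commit -m "Add Chainlit LLM app"
git push
```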

4. After pushing all files, navigate to `Settings` in the top right to add your OpenAI API key.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/a1123a6f-abdd-4f76-bea4-39acf9928762">
</p>

5. Scroll down to `Variables and secrets` and click on `New secret` on the top right.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/a8a4a25d-752b-4036-b572-93381370c2db">
</p>

6. Set the name to `OPENAI_API_KEY` and add your OpenAI key under `Value`. Click `Save`.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/0a897538-1779-48ff-bcb4-486af30f7a14">
</p>

7. To ensure your key is being used, we recommend you `Restart this Space`.

<p align="center" draggable="false">
  <img src="https://github.com/AI-Maker-Space/LLMOps-Dev-101/assets/37101144/fb1d83af-6ebe-4676-8bf5-b6d88f07c583">
</p>

8. Congratulations! You just deployed your first LLM application! 🚀🚀🚀 Get on LinkedIn and post your results and experience using the provided template! Make sure to tag us with #AIMakerspace!

</details>

<p></p>

### That's it for now! And so it begins.... :)
app.py ADDED
# You can find this code for Chainlit python streaming here (https://docs.chainlit.io/concepts/streaming/python)

# OpenAI Chat completion
import os
import openai  # importing openai for API usage
import chainlit as cl  # importing chainlit for our app
from chainlit.prompt import Prompt, PromptMessage  # importing prompt tools
from chainlit.playground.providers import ChatOpenAI  # importing ChatOpenAI tools
from dotenv import load_dotenv

load_dotenv()

openai.api_key = os.environ["OPENAI_API_KEY"]

# ChatOpenAI Templates
system_template = """You are a helpful assistant who always speaks in a pleasant tone!
"""

user_template = """{input}
Think through your response step by step.
"""


@cl.on_chat_start  # marks a function that will be executed at the start of a user session
async def start_chat():
    settings = {
        "model": "gpt-3.5-turbo",
        "temperature": 0,
        "max_tokens": 500,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
    }

    cl.user_session.set("settings", settings)


@cl.on_message  # marks a function that should be run each time the chatbot receives a message from a user
async def main(message: str):
    settings = cl.user_session.get("settings")

    prompt = Prompt(
        provider=ChatOpenAI.id,
        messages=[
            PromptMessage(
                role="system",
                template=system_template,
                formatted=system_template,
            ),
            PromptMessage(
                role="user",
                template=user_template,
                formatted=user_template.format(input=message),
            ),
        ],
        inputs={"input": message},
        settings=settings,
    )

    print([m.to_openai() for m in prompt.messages])

    msg = cl.Message(content="")

    # Call OpenAI
    async for stream_resp in await openai.ChatCompletion.acreate(
        messages=[m.to_openai() for m in prompt.messages], stream=True, **settings
    ):
        token = stream_resp.choices[0]["delta"].get("content", "")
        await msg.stream_token(token)

    # Update the prompt object with the completion
    prompt.completion = msg.content
    msg.prompt = prompt

    # Send and close the message stream
    await msg.send()
chainlit.md ADDED
# Beyond ChatGPT

This Chainlit app was created following instructions from [this repository](https://github.com/AI-Maker-Space/Beyond-ChatGPT)!
requirements.txt ADDED
chainlit==0.7.0
openai==0.28.0
python-dotenv==1.0.0