Spaces (status: Runtime error)

sanjeevl10 committed · Commit 768e225
1 Parent(s): c8113b0

added mkdir for vectorstore

Files changed:
- Dockerfile +2 -1
- README (2).md +0 -150
Dockerfile
CHANGED

```diff
@@ -1,4 +1,4 @@
-FROM python:3.
+FROM python:3.11
 RUN useradd -m -u 1000 user
 USER user
 ENV HOME=/home/user \
@@ -7,5 +7,6 @@ WORKDIR $HOME/app
 COPY --chown=user . $HOME/app
 COPY ./requirements.txt ~/app/requirements.txt
 RUN pip install -r requirements.txt
+RUN mkdir -p $HOME/app/data/vectorstore && chown -R user:user $HOME/app/data
 COPY . .
 CMD ["chainlit", "run", "app.py", "--port", "7860"]
```
README (2).md
DELETED
@@ -1,150 +0,0 @@
# Week 4: Tuesday

In today's assignment, we'll be creating an Open Source LLM-powered LangChain RAG Application in Chainlit.

There are 2 main sections to this assignment:

## Build 🏗️

### Build Task 1: Deploy LLM and Embedding Model to SageMaker Endpoint Through Hugging Face Inference Endpoints

#### LLM Endpoint

Select "Inference Endpoint" from the "Solutions" button in Hugging Face:

![Screenshot 2024-06-25 at 3.54.34 PM](https://i.imgur.com/6KC9TCD.png)

Create a "+ New Endpoint" from the Inference Endpoints dashboard.

![Screenshot 2024-06-25 at 3.56.07 PM](https://i.imgur.com/G6Bq9KC.png)

Select the `NousResearch/Meta-Llama-3-8B-Instruct` model repository and name your endpoint. Select N. Virginia as your region (`us-east-1`). Give your endpoint an appropriate name. Make sure to select *at least* an L4 GPU.

![Screenshot 2024-06-25 at 4.01.31 PM](https://i.imgur.com/X3YlUbh.png)

Select the following settings for your `Advanced Configuration`.

![Screenshot 2024-06-25 at 4.02.00 PM](https://i.imgur.com/c0HQ7g1.png)

Create a `Protected` endpoint.

![Screenshot 2024-06-25 at 4.04.29 PM](https://i.imgur.com/Ak8kchZ.png)

If you were successful, you should see the following screen:

![Screenshot 2024-06-25 at 4.07.44 PM](https://i.imgur.com/IL2zKrM.png)

#### Embedding Model Endpoint

We'll be using `Snowflake/snowflake-arctic-embed-m` for our embedding model today.

The process is the same as the LLM - but we'll make a few specific tweaks:

Let's make sure our set-up reflects the following screenshots:

![Screenshot 2024-06-25 at 4.14.34 PM](https://i.imgur.com/VoD8AZk.png)

After which, make sure the advanced configuration is set like so:

![Screenshot 2024-06-25 at 4.14.29 PM](https://i.imgur.com/7w4nR4P.png)

> #### NOTE: PLEASE SHUT DOWN YOUR INSTANCES WHEN YOU HAVE COMPLETED THE ASSIGNMENT TO PREVENT UNNECESSARY CHARGES.
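Once deployed, the LLM endpoint speaks the Text Generation Inference (TGI) HTTP API: a JSON POST with an `inputs` string and a `parameters` object. A stdlib-only sketch of building such a request (the endpoint URL and token below are placeholders, and `build_generate_request` is an illustrative helper, not part of the assignment code):

```python
import json
import urllib.request

def build_generate_request(endpoint_url: str, token: str, prompt: str,
                           max_new_tokens: int = 256) -> urllib.request.Request:
    """Build a POST request for a TGI-style /generate route."""
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return urllib.request.Request(
        url=endpoint_url.rstrip("/") + "/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",  # Protected endpoints require a HF token
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder URL/token; sending would be urllib.request.urlopen(req),
# skipped here since there is no live endpoint.
req = build_generate_request(
    "https://my-endpoint.us-east-1.aws.endpoints.huggingface.cloud",
    "hf_xxx", "What is RAG?")
```

In the actual assignment, LangChain's Hugging Face endpoint integrations wrap this request/response cycle for you; the sketch just shows what travels over the wire.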
### Build Task 2: Create RAG Pipeline with LangChain

Follow the [notebook](https://colab.research.google.com/drive/1v1FYmvKH4gsqcdZwIT9wvbQe0GUjrc9d?usp=sharing) to create a LangChain pipeline powered by Hugging Face endpoints!

Once you're done - please move on to Build Task 3!
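At its core, the retrieval half of the pipeline embeds the query and returns the most similar document chunks. A dependency-free sketch of that idea, with toy bag-of-words vectors standing in for the real `snowflake-arctic-embed-m` embeddings (all names and the sample chunks are illustrative, not from the notebook):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words token counts.
    return Counter(w.strip(".,!?").lower() for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Paul Graham wrote essays about startups.",
    "The weather in London is often rainy.",
    "Y Combinator funds early-stage startups.",
]
print(retrieve("essays about startups", chunks, k=1))
# → ['Paul Graham wrote essays about startups.']
```

The LCEL chain in the notebook replaces `embed` with endpoint calls and `retrieve` with a vector-store retriever, then pipes the retrieved chunks into the LLM prompt, but the ranking logic is the same shape.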
### Build Task 3: Create a Chainlit Application

1. Create a new empty Docker space through Hugging Face - with the following settings:

![image](https://i.imgur.com/SDEEpFw.png)

> NOTE: You may notice the application builds slowly (~15min.) with the default free-tier hardware. The process will be faster using the `CPU upgrade` Space Hardware - though it is not required.

2. Clone the newly created space into a directory that is *NOT IN YOUR AI MAKERSPACE REPOSITORY* using the SSH option.

> NOTE: You may need to ensure you've added your SSH key to Hugging Face, as well as GitHub. This should already be done.

![image](https://i.imgur.com/5RK1CSw.png)

3. Copy and paste (`cp ...` or through the UI) the contents of `Week 4/Day 1` into the newly cloned repository.

> NOTE: Please keep the `README.md` that was cloned from your space and delete the class `README.md`.

4. Using the `ls` command or the `tree` command, verify that you have copied over:
   - `app.py`
   - `Dockerfile`
   - `data/paul_graham_essays.txt`
   - `chainlit.md`
   - `.gitignore`
   - `.env.sample`
   - `solution_app.py`
   - `requirements.txt`

Here is an example using the `ls -al` CLI command:

![image](https://i.imgur.com/vazGYeb.png)

5. Work through the `app.py` file to migrate your LCEL LangChain RAG Chain from the Notebook to Chainlit!

6. Be sure to modify your `README.md` and `chainlit.md` as you see fit!

> NOTE: If you get stuck, there is a working reference version in `solution_app.py`.

7. When you are done with local testing - push your changes to your space.

8. Make sure you add your `HF_LLM_ENDPOINT`, `HF_EMBED_ENDPOINT`, and `HF_TOKEN` as "Secrets" in your Hugging Face Space.
### Terminating Your Resources

Please head to the settings of each endpoint and select `Delete Endpoint`. You will need to type the name of the endpoint to delete the resources.

### Deliverables

- Completed Notebook
- Chainlit Application in a Hugging Face Space Powered by Hugging Face Endpoints
- Screenshot of endpoint usage

Example Screenshot:

![image](https://i.imgur.com/qfbcVTS.png)

## Ship 🚢

Create a Hugging Face Space powered by Hugging Face Endpoints!

### Deliverables

- A short Loom of the space, and a 1min. walkthrough of the application in full

## Share 🚀

Make a social media post about your final application!

### Deliverables

- Make a post on any social media platform about what you built!

Here's a template to get you started:

```
🚀 Exciting News! 🚀

I am thrilled to announce that I have just built and shipped an open-source LLM-powered Retrieval Augmented Generation Application with LangChain! 🚀🤖

🔑 Three Key Takeaways:
1️⃣
2️⃣
3️⃣

Let's continue pushing the boundaries of what's possible in the world of AI and question-answering. Here's to many more innovations! 🚀
Shout out to @AIMakerspace!

#LangChain #QuestionAnswering #RetrievalAugmented #Innovation #AI #TechMilestone

Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
```

> #### NOTE: PLEASE SHUT DOWN YOUR INSTANCES WHEN YOU HAVE COMPLETED THE ASSIGNMENT TO PREVENT UNNECESSARY CHARGES.