RajatChaudhari committed 579951f (1 parent: 7da4705): Update app.py
app.py CHANGED
@@ -82,7 +82,7 @@ if __name__ == "__main__":
 description = """
 <img src="https://superagi.com/wp-content/uploads/2023/10/Introduction-to-RAGA-Retrieval-Augmented-Generation-and-Actions-1200x600.png.webp" width=100%>
 <br>
-Demo using Vector store-backed retriever. This space demonstrate application of RAG on a small model and its effectiveness, I used small model because of the space constraint. The current space runs on
+Demo using a vector store-backed retriever. This space demonstrates the application of RAG on a small model and its effectiveness; I used a small model because of the space constraint. The current space runs on a mere <b>2 vCPUs and 16 GB of RAM</b>, hence there is some delay in generating output. Test this to your heart's content and let me know your thoughts; I will keep updating this space with small improvements to the architecture and design.
 <ul>
 <li>model: TinyLlama/TinyLlama-1.1B-Chat-v1.0</li>
 <li>update1: This space now does not create a faiss index on build, it uses a locally saved faiss index</li>
@@ -96,8 +96,8 @@ if __name__ == "__main__":
 <li>What are forms of memory implementation in langchain</li>
 <li>What is question answering from documents</li>
 </ul>
-Go through this paper here to find more about langchain and then test how this solution performs. <a href='https://www.researchgate.net/publication/372669736_Creating_Large_Language_Model_Applications_Utilizing_LangChain_A_Primer_on_Developing_LLM_Apps_Fast' target='_blank'>This paper is the data source for this solution
-Have you already used RAG? feel free to suggest improvements
+Go through this paper to find out more about LangChain and then test how this solution performs. <a href='https://www.researchgate.net/publication/372669736_Creating_Large_Language_Model_Applications_Utilizing_LangChain_A_Primer_on_Developing_LLM_Apps_Fast' target='_blank'>This paper is the data source for this solution.</a>
+Have you already used RAG? Feel free to suggest improvements.
 Feel excited about the implementation? You know where to find me!
 I would love to connect and have a chat.
 </p>"""
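For context on what the updated description refers to, here is a minimal sketch of a vector store-backed retriever that loads a locally saved FAISS index (the "update1" behaviour, instead of rebuilding the index on every build) and pairs it with TinyLlama/TinyLlama-1.1B-Chat-v1.0 through LangChain. This is not the Space's actual app.py: the index folder name, embedding model, and chain settings below are assumptions for illustration only.

# Minimal RAG sketch (assumptions noted inline): load a pre-built FAISS index
# from disk instead of rebuilding it at startup, then wire it to TinyLlama.
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA
from transformers import pipeline

# The embedding model must match whatever was used to build the saved index
# (assumed here; not specified in the description).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Load the locally saved index; "faiss_index" is an assumed folder name.
vectorstore = FAISS.load_local(
    "faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,  # required by recent langchain_community releases
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})  # vector store-backed retriever

# Small chat model named in the description; runs on CPU, hence the noted delay.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    max_new_tokens=256,
)
llm = HuggingFacePipeline(pipeline=generator)

# Stuff the retrieved chunks into the prompt and answer from them.
qa_chain = RetrievalQA.from_chain_type(llm=llm, retriever=retriever, chain_type="stuff")
print(qa_chain.invoke({"query": "What are forms of memory implementation in langchain?"}))

Loading the index from disk rather than re-embedding the source paper at build time is what keeps startup cheap on the small CPU-only hardware mentioned in the description.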