Update README.md
README.md
sdk: gradio
sdk_version: 4.31.5
app_file: app.py
pinned: false
short_description: RAG with GPT 3.5 Turbo LLM and MongoDB Atlas Vector Search
---
# Retrieval Augmented Generation with GPT 3.5 Turbo, MongoDB Atlas Vector Search, and LlamaIndex: Question Answering demo
### This demo uses the GPT 3.5 Turbo LLM and MongoDB Atlas Vector Search for fast and performant Retrieval Augmented Generation (RAG).
The context is the new Oppenheimer movie's entire Wikipedia page. The movie was released in July 2023, so the GPT 3.5 Turbo model is not aware of it.

Retrieval Augmented Generation (RAG) enables us to retrieve just the few small chunks of the document that are relevant to our query and inject them into our prompt. The model is then able to answer questions by incorporating knowledge from the newly provided document. RAG can be used with thousands of documents, but this demo is limited to just one txt file.
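As an illustration of this flow, here is a minimal LlamaIndex sketch that chunks a single text file, embeds the chunks, and answers a question from the retrieved context. It assumes llama-index v0.10+ import paths and an `OPENAI_API_KEY` in the environment; the file name, chunk size, and `similarity_top_k` value are illustrative rather than taken from the demo's code, and an in-memory index is used here instead of the Space's MongoDB Atlas store.

```python
# Minimal RAG sketch: split one txt file into chunks, embed them, retrieve the
# chunks most similar to the query, and let the LLM answer from that context.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")
Settings.node_parser = SentenceSplitter(chunk_size=512, chunk_overlap=50)  # illustrative chunking

# Load the single context document (hypothetical file name)
documents = SimpleDirectoryReader(input_files=["oppenheimer_wikipedia.txt"]).load_data()

# Embed the chunks and index them (in memory here; the Space stores vectors in MongoDB Atlas)
index = VectorStoreIndex.from_documents(documents)

# Retrieve the top few chunks, inject them into the prompt, and generate an answer
query_engine = index.as_query_engine(similarity_top_k=3)
print(query_engine.query("When was the Oppenheimer movie released?"))
```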
# RAG Components
- ### `LLM` : GPT 3.5 Turbo
- ### `Text Embedding Model` : OpenAI Embeddings (text-embedding-3-small)
- ### `Vector Database` : MongoDB Atlas Vector Search
- ### `Framework` : LlamaIndex
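Below is a rough sketch of how these components could be wired together, persisting the embeddings in MongoDB Atlas instead of in memory. The connection string variable, database, collection, and vector index names are placeholders, and the `MongoDBAtlasVectorSearch` keyword arguments have changed between llama-index releases, so check the version you have installed.

```python
# Sketch: the same RAG flow, but storing the embeddings in MongoDB Atlas Vector Search.
# All names below are placeholders; an Atlas vector search index must already
# exist on the target collection.
import os
import pymongo
from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

Settings.llm = OpenAI(model="gpt-3.5-turbo")                            # LLM
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small")  # text embedding model

# Vector database: a MongoDB Atlas collection with a vector search index
mongo_client = pymongo.MongoClient(os.environ["MONGODB_URI"])
vector_store = MongoDBAtlasVectorSearch(
    mongo_client,
    db_name="rag_demo",             # placeholder database name
    collection_name="oppenheimer",  # placeholder collection name
    index_name="vector_index",      # placeholder Atlas vector search index name
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Framework: LlamaIndex loads, chunks, embeds, and indexes the document
documents = SimpleDirectoryReader(input_files=["oppenheimer_wikipedia.txt"]).load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Ask a question that requires the injected context
print(index.as_query_engine(similarity_top_k=3).query("Who directed Oppenheimer?"))
```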
# Demo
The demo has been deployed to the following Hugging Face Space:

https://huggingface.co/spaces/rasyosef/RAG-with-GPT3.5-MongoDBAtlas-Llamaindex