rasyosef committed
Commit 2798c57 • Parent: f931159

Update README.md

Files changed (1): README.md (+31 −13)
---
title: RAG With Gemini Pinecone LlamaIndex
emoji: 🌍
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 4.31.5
app_file: app.py
pinned: false
license: mit
short_description: RAG using Gemini Pro LLM and Pinecone Vector Database
---

# Retrieval Augmented Generation with Gemini Pro, Pinecone and LlamaIndex: Question Answering demo

### This demo uses the Gemini Pro LLM and Pinecone Vector Search for fast and performant Retrieval Augmented Generation (RAG).

The context is the entire Wikipedia page of the new Oppenheimer movie. The movie was only released in July 2023, so the Gemini Pro model is not aware of it.

Retrieval Augmented Generation (RAG) enables us to retrieve just the few small chunks of the document that are relevant to our query and inject them into our prompt. The model is then able to answer questions by incorporating knowledge from the newly provided document. RAG can be used with thousands of documents, but this demo is limited to just one txt file.
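The retrieve-then-inject flow described above can be sketched without any external services. The toy example below (all function names are made up for illustration) uses word overlap in place of real vector embeddings, and a plain Python list in place of the vector database:

```python
# Toy sketch of the RAG retrieval step: split a document into chunks,
# score each chunk against the query, and inject the best chunks into the prompt.
# Word-overlap scoring is a stand-in for real embedding similarity.

def chunk_text(text: str, chunk_size: int = 25) -> list[str]:
    """Split text into chunks of roughly `chunk_size` words each."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def score(query: str, chunk: str) -> int:
    """Count query words that appear in the chunk (stand-in for cosine similarity)."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in query.lower().split() if w in chunk_words)

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Inject the retrieved chunks into the prompt ahead of the question."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In the real demo, the same three steps happen with Gemini embeddings and Pinecone: the document is chunked and embedded once, and at query time the nearest chunks are fetched and prepended to the question before it is sent to the LLM.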

# RAG Components
- ### `LLM` : Gemini Pro
- ### `Text Embedding Model` : Gemini Embeddings (embedding-001)
- ### `Vector Database` : Pinecone
- ### `Framework` : LlamaIndex
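One plausible way to wire these four components together with LlamaIndex is sketched below. This is an assumption, not the Space's actual code: the index name `"oppenheimer"`, the file path `oppenheimer.txt`, and the function name are made up, and the exact imports depend on the installed `llama-index` version (this sketch follows the 0.10+ package layout).

```python
# Hypothetical wiring of LLM + embeddings + vector store + framework.
# All names (index, file path, function) are illustrative assumptions.

def build_query_engine(google_api_key: str, pinecone_api_key: str):
    """Build a RAG query engine over a single text file (illustrative sketch)."""
    from pinecone import Pinecone
    from llama_index.core import (
        Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex,
    )
    from llama_index.embeddings.gemini import GeminiEmbedding
    from llama_index.llms.gemini import Gemini
    from llama_index.vector_stores.pinecone import PineconeVectorStore

    # LLM: Gemini Pro; Embeddings: embedding-001
    Settings.llm = Gemini(model_name="models/gemini-pro", api_key=google_api_key)
    Settings.embed_model = GeminiEmbedding(
        model_name="models/embedding-001", api_key=google_api_key
    )

    # Vector database: a Pinecone index ("oppenheimer" is a made-up name)
    pinecone_index = Pinecone(api_key=pinecone_api_key).Index("oppenheimer")
    vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
    storage_context = StorageContext.from_defaults(vector_store=vector_store)

    # Chunk and embed the document, store the vectors, and expose a query engine
    documents = SimpleDirectoryReader(input_files=["oppenheimer.txt"]).load_data()
    index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
    return index.as_query_engine()
```

The imports are kept inside the function so the module can be loaded without the API keys or packages present; calling the function requires valid Google and Pinecone credentials.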

# Demo
The demo has been deployed to the following HuggingFace Space:

https://huggingface.co/spaces/rasyosef/RAG-with-Gemini-Pinecone-LlamaIndex