---
title: RAG With Gemini Pinecone LlamaIndex
emoji: 🌍
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 4.31.5
app_file: app.py
pinned: false
license: mit
short_description: RAG using Gemini Pro LLM and Pinecone Vector Database
---
# Retrieval Augmented Generation with Gemini Pro, Pinecone and LlamaIndex: Question Answering demo
### This demo uses the Gemini Pro LLM and Pinecone Vector Search for fast and performant Retrieval Augmented Generation (RAG).
The context is the entire Wikipedia page of the new Oppenheimer movie. The movie was released in July 2023, so the Gemini Pro model is not aware of it.
Retrieval Augmented Generation (RAG) enables us to retrieve just the few small chunks of the document that are relevant to our query and inject them into our prompt. The model is then able to answer questions by incorporating knowledge from the newly provided document. RAG can be used with thousands of documents, but this demo is limited to just one txt file.
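The retrieve-and-inject step can be sketched in plain Python. This is a toy illustration only: word-overlap scoring stands in for the embedding-based similarity search that Pinecone performs in the actual demo, and the function names (`chunk_text`, `retrieve`, `build_prompt`) are hypothetical.

```python
# Toy RAG sketch: split a document into chunks, score each chunk against
# the query (here by simple word overlap, standing in for embedding
# similarity), and inject the top chunks into the LLM prompt.

def chunk_text(text: str, chunk_size: int = 50) -> list[str]:
    """Split text into chunks of roughly `chunk_size` words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(chunks: list[str], query: str, top_k: int = 2) -> list[str]:
    """Return the top_k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:top_k]

def build_prompt(context_chunks: list[str], query: str) -> str:
    """Inject the retrieved chunks into a question-answering prompt."""
    context = "\n---\n".join(context_chunks)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Placeholder text standing in for the Oppenheimer Wikipedia page.
doc = ("Oppenheimer is a 2023 epic biographical thriller directed by "
       "Christopher Nolan. It premiered in July 2023 and stars Cillian "
       "Murphy as J. Robert Oppenheimer. ") * 10

chunks = chunk_text(doc)
query = "Who directed Oppenheimer?"
prompt = build_prompt(retrieve(chunks, query), query)
```

In the real demo, the same pattern applies, but the chunks are embedded with Gemini's embedding model, stored in Pinecone, and retrieved by vector similarity before being handed to Gemini Pro.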
# RAG Components
- ### `LLM` : Gemini Pro
- ### `Text Embedding Model` : Gemini Embeddings (embedding-001)
- ### `Vector Database` : Pinecone
- ### `Framework` : LlamaIndex
# Demo
The demo has been deployed to the following Hugging Face Space:
https://huggingface.co/spaces/rasyosef/RAG-with-Gemini-Pinecone-LlamaIndex