# 📚 GroqDoc: Chat with your Documents

Ever spent hours sifting through a lengthy document, searching for that one specific answer? GroqDoc eliminates this frustration by transforming your documents into interactive chat sessions.

GroqDoc is a chatbot application that lets you chat with your uploaded documents! It leverages the power of open-source Large Language Models (LLMs) and the rapid inference speed of Groq.

## Getting Started

**1. Generate your Groq API Key (One-Time Process):**

To use GroqDoc, you'll need a Groq API key. Here's how to generate one for free:

- Visit the Groq Cloud console: [https://console.groq.com/](https://console.groq.com/)
- Sign up for a free account (if you don't have one already).
- Navigate to the "API Keys" section (usually found in the left-side navigation panel).
- Click "Create API Key" and give it a descriptive name.
- Click "Submit" to generate your unique API key.
- **Important:** Copy and securely store your API key. Don't share it publicly.

**2. Upload your Documents:**

Once you have your Groq API key, launch GroqDoc and upload the `PDF` documents you want to chat with. GroqDoc supports up to 10 PDFs and up to 100 MB at a time. It may take a few seconds to process your documents.

**3. Ask your Questions:**

Once your documents are uploaded, ask GroqDoc any questions you have about their content. GroqDoc uses its conversational RAG architecture to retrieve relevant information and answer directly from the documents.

**4. Choose your LLM (Optional):**

GroqDoc offers a selection of open-source LLMs to choose from in the settings panel of the chatbox:

- llama3-8b-8192
- llama3-70b-8192
- mixtral-8x7b-32768
- gemma-7b-it

`llama3-70b-8192` is the default. Experimenting with different LLMs can give you varying perspectives and answer styles.
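The upload limits in step 2 (at most 10 PDFs, 100 MB per batch) can be checked before submitting. A minimal sketch of such a check — the function name, constants, and `(filename, size)` interface are illustrative, not GroqDoc's actual code:

```python
# Illustrative client-side check of GroqDoc's stated upload limits.
MAX_FILES = 10
MAX_TOTAL_BYTES = 100 * 1024 * 1024  # 100 MB

def validate_upload(files):
    """files: list of (filename, size_in_bytes) pairs.

    Returns (ok, reason) so the caller can show a helpful message.
    """
    if len(files) > MAX_FILES:
        return False, f"too many files: {len(files)} > {MAX_FILES}"
    if any(not name.lower().endswith(".pdf") for name, _ in files):
        return False, "only PDF files are supported"
    total = sum(size for _, size in files)
    if total > MAX_TOTAL_BYTES:
        return False, f"batch too large: {total} bytes > {MAX_TOTAL_BYTES}"
    return True, "ok"
```
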
-----

## Project Overview

GroqDoc is built on a conversational RAG architecture, which lets it retrieve relevant information from the uploaded documents and respond based on the user's query and chat history. The following tech stack is used to achieve this:

**1. Framework:** [🦜️🔗 LangChain](https://www.langchain.com/)

**2. Text Embedding Model:** [Snowflake's Arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m)

**3. Vector Store:** [Chroma DB](https://www.trychroma.com/)

**4. Open Source LLMs:**

- [LLaMA3 8b](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
  - Developer: Meta
  - Context Window: 8,192 tokens
- [LLaMA3 70b](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
  - Developer: Meta
  - Context Window: 8,192 tokens
- [Mixtral 8x7b](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
  - Developer: Mistral
  - Context Window: 32,768 tokens
- [Gemma 7b](https://huggingface.co/google/gemma-1.1-7b-it)
  - Developer: Google
  - Context Window: 8,192 tokens

**5. Inference Engine:** [🚀Groq](https://groq.com/)

**6. UI:** [Chainlit](https://docs.chainlit.io/get-started/overview)

**7. Deployment:** [🤗Hugging Face Space](https://huggingface.co/spaces)
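The RAG flow described above boils down to: embed the document chunks, embed the query, retrieve the most similar chunks, and pass them to the LLM together with the chat history. A toy, dependency-free sketch of the retrieval step only — GroqDoc itself uses Arctic-embed-m embeddings and Chroma as the vector store; the bag-of-words "embedding" below is purely illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (GroqDoc really uses Arctic-embed-m)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query (the 'R' in RAG)."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

In the real app, the retrieved chunks plus the chat history are packed into the LLM prompt; rewriting follow-up questions with that history in mind is what makes the RAG *conversational*.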