---
title: Chatbot For Files Langchain
emoji: ⚡
colorFrom: yellow
colorTo: pink
sdk: streamlit
sdk_version: 1.19.0
app_file: app.py
pinned: false
license: mit
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
This is a chatbot that uses LangChain's Conversational Retrieval Chain to answer user questions. It can ingest files and retrieve relevant documents from either a Pinecone vector store (Pinecone API key required) or a Chroma vector store (no API key required). An OpenAI API key is required in both cases. The UI is built with Streamlit.
## Fun fact
This README file was generated by this app after ingesting its own Python source file. See the screenshots below.
## Installation
To install the required packages, run:
```
pip install -r requirements.txt
```
## Usage
To run the chatbot, run:
```
streamlit run app.py
```
The chatbot will prompt the user for input and generate a response based on the user's question and the chat history.
## Ingesting Files
To ingest files, select "Yes" when prompted and upload the files. The chatbot will split the files into smaller documents and ingest them into the vector store.
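The splitting step can be illustrated with a minimal sketch. This is not the app's actual splitter (the app uses LangChain's text splitters, and the chunk size and overlap here are illustrative values, not the app's settings); it only shows the idea of breaking a long text into overlapping chunks before they are embedded and stored:

```python
def split_text(text, chunk_size=1000, overlap=200):
    """Split text into fixed-size chunks that overlap, so that
    sentences near a chunk boundary appear in both neighbors.
    A simplified stand-in for a character-based text splitter."""
    chunks = []
    start = 0
    step = chunk_size - overlap  # how far the window advances each time
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Each chunk is then embedded and written to the selected vector store, so retrieval can later return just the relevant pieces of a file rather than the whole document.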
## Using Pinecone
To use Pinecone, select "Yes" when prompted and enter the name of the Pinecone index. Make sure to set the `PINECONE_API_KEY` and `PINECONE_API_ENV` environment variables.
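A quick way to confirm the environment is set up before launching the app is a small check like the following (the helper function name is hypothetical; only the two variable names come from this README):

```python
import os

def pinecone_env_ready():
    """Return True only when both Pinecone environment variables
    required by the app are set to non-empty values."""
    required = ("PINECONE_API_KEY", "PINECONE_API_ENV")
    return all(os.environ.get(name) for name in required)
```

If this returns False, set the missing variable in your shell (or in the Space's secrets) before starting Streamlit.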
## Using Chroma
To use Chroma, enter the name of the Chroma collection when prompted. The chatbot will create a Chroma vector store in the `persist_directory` specified in the code.
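The point of the `persist_directory` is that the store survives restarts: data is written to disk and reloaded the next time the app runs. A stdlib-only sketch of that behavior (this is not Chroma's on-disk format, just an illustration of persisting and re-reading chunks; the function name is hypothetical):

```python
from pathlib import Path

def persist_chunks(chunks, persist_directory):
    """Write each chunk to its own file under persist_directory,
    a simplified stand-in for a persisted vector store, and
    return the sorted file names that now exist there."""
    path = Path(persist_directory)
    path.mkdir(parents=True, exist_ok=True)
    for i, chunk in enumerate(chunks):
        (path / f"chunk_{i}.txt").write_text(chunk, encoding="utf-8")
    return sorted(p.name for p in path.iterdir())
```

With Chroma proper, the same directory is passed on startup so previously ingested collections are available without re-ingesting the files.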
## Screenshot
Using Pinecone without ingesting files:
![Use Pinecone](./screenshot/Pinecone_noingest.png)
Using Chroma and ingesting files:
![Use Chroma](./screenshot/Chroma_ingest.png)