---
title: AI Chatbot with Conversational Memory
emoji: 🔥
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit
---
# Streamlit + LangChain + llama.cpp w/ Mistral + Conversational Memory
Run your own AI chatbot locally without a GPU. To make that possible, we use the Mistral 7B model; however, you can use any quantized model that is supported by llama.cpp. The chatbot lets you define its personality and it answers questions accordingly. It also remembers the chat history, allowing you to ask follow-up questions.
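To see how those pieces fit together, here is a minimal sketch of the pattern, not the exact contents of `main.py`: a `LlamaCpp` LLM wrapped in a LangChain `ConversationChain` with buffer memory, where the "personality" is simply a preamble in the prompt template. The model path, sampling settings, and personality text below are illustrative assumptions.

```python
# Minimal sketch, not the exact main.py: LlamaCpp + ConversationChain with
# buffer memory. Model path, temperature, and personality are assumptions.
from langchain.llms import LlamaCpp
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

llm = LlamaCpp(
    model_path="models/mistral-7b-instruct-v0.1.Q4_0.gguf",  # any GGUF supported by llama.cpp
    temperature=0.7,
)

# The "personality" is a preamble baked into the prompt template.
template = """You are a witty assistant who answers like a pirate.

Current conversation:
{history}
Human: {input}
AI:"""

chain = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory(),  # feeds earlier turns back in via {history}
    prompt=PromptTemplate(input_variables=["history", "input"], template=template),
)

print(chain.predict(input="What is llama.cpp?"))
print(chain.predict(input="What did I just ask you?"))  # answered from memory
```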
## TL;DR instructions
- Install `llama-cpp-python`
- Install `langchain`
- Install `streamlit`
- Run `streamlit`
## Step-by-step instructions
The setup assumes you have `python` already installed and the `venv` module available.
- Download the code or clone the repository.
- Inside the root folder of the repository, initialize a python virtual environment: `python -m venv venv`
- Activate the python environment: `source venv/bin/activate`
- Install the required packages (`langchain`, `llama-cpp-python`, and `streamlit`): `pip install -r requirements.txt`
- Start `streamlit` (a sketch of the app's chat loop appears after this list): `streamlit run main.py`
- The `models` directory will be created and the app will download the quantized Mistral 7B model (`mistral-7b-instruct-v0.1.Q4_0.gguf`) from Hugging Face; a sketch of this download step also follows the list.
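For reference, the first-run download step looks roughly like this. The `repo_id` below is an assumption about where that GGUF file is hosted; the app itself may fetch it differently:

```python
# Hedged sketch of the first-run model download. The repo_id is an
# assumption about where the GGUF file is hosted.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",  # assumed source repo
    filename="mistral-7b-instruct-v0.1.Q4_0.gguf",
    local_dir="models",  # matches the directory the app creates
)
print(f"Model saved to {path}")
```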
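And the Streamlit side of the app, sketched under the assumption that `main.py` uses the chat elements available since Streamlit 1.24 (`st.chat_message` / `st.chat_input`) and keeps history in `st.session_state` across reruns:

```python
# Sketch of a Streamlit chat loop with persistent history; main.py may differ.
import streamlit as st
from langchain.llms import LlamaCpp
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

@st.cache_resource  # build the model/chain once, not on every rerun
def get_chain():
    llm = LlamaCpp(model_path="models/mistral-7b-instruct-v0.1.Q4_0.gguf")
    return ConversationChain(llm=llm, memory=ConversationBufferMemory())

if "messages" not in st.session_state:
    st.session_state.messages = []  # survives reruns, so the UI keeps history

for msg in st.session_state.messages:  # replay previous turns
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask a question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    answer = get_chain().predict(input=prompt)
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.write(answer)
```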