---
title: Ai Chatbot with Conversational Memory
emoji: 🔥
colorFrom: indigo
colorTo: gray
sdk: streamlit
sdk_version: 1.28.0
app_file: main.py
pinned: false
license: mit
---

Streamlit + Langchain + llama.cpp w/ Mistral + Conversational Memory

Run your own AI Chatbot locally without a GPU.

To make that possible, this example uses the Mistral 7B model; however, any quantized model supported by llama.cpp will work.

This chatbot lets you define its personality, and it responds to questions accordingly.
It also remembers the chat history, so you can ask follow-up questions.
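The two ideas above can be sketched in plain Python. This is a minimal, illustrative stand-in for what LangChain's conversation memory does (the class and its methods are hypothetical, not the app's actual code): keep the transcript, and prepend it along with a personality instruction to every new prompt sent to the model.

```python
# Hypothetical sketch of personality + conversational memory.
# Not the app's actual implementation — just the underlying idea.
class ConversationMemory:
    def __init__(self, personality):
        self.personality = personality  # system-style instruction defining the persona
        self.turns = []                 # list of (speaker, text) pairs

    def add(self, speaker, text):
        """Record one turn of the conversation."""
        self.turns.append((speaker, text))

    def build_prompt(self, question):
        """Prepend the personality and the full history to the new question."""
        history = "\n".join(f"{s}: {t}" for s, t in self.turns)
        return f"{self.personality}\n{history}\nUser: {question}\nAssistant:"

memory = ConversationMemory("You are a pirate and answer in pirate speak.")
memory.add("User", "What is your name?")
memory.add("Assistant", "Arr, they call me Captain Bot!")
# Because the earlier turns are in the prompt, the model can resolve
# the follow-up question ("where do ye sail?") in context.
prompt = memory.build_prompt("And where do ye sail?")
```

Because each prompt carries the whole transcript, the context grows with every turn; real memory implementations usually truncate or summarize old turns to stay within the model's context window.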

TL;DR instructions

  1. Install llama-cpp-python
  2. Install langchain
  3. Install streamlit
  4. Run streamlit

Step by Step instructions

The setup assumes you have Python installed and the venv module available.

  1. Download the code or clone the repository.
  2. Inside the root folder of the repository, initialize a Python virtual environment:
python -m venv venv
  3. Activate the Python environment:
source venv/bin/activate
  4. Install the required packages (langchain, llama-cpp-python, and streamlit):
pip install -r requirements.txt
  5. Start streamlit:
streamlit run main.py
  6. On first run, the app creates a models directory and downloads the quantized Mistral 7B model from Hugging Face: mistral-7b-instruct-v0.1.Q4_0.gguf
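The download-on-first-run step can be sketched as below. This is a hedged illustration, not the app's actual code: the `ensure_model` helper and the URL argument are hypothetical, and the real model file is several GB, so the existence check matters.

```python
# Hypothetical sketch of "download the model only if it is missing".
import os
import urllib.request

def ensure_model(path, url):
    """Return the local model path, downloading the file first if absent."""
    if os.path.exists(path):
        return path  # already downloaded — skip the multi-GB fetch
    os.makedirs(os.path.dirname(path), exist_ok=True)
    urllib.request.urlretrieve(url, path)  # blocking download to disk
    return path
```

A real implementation would also want resumable downloads and a checksum of the finished file, since a partially downloaded GGUF will fail to load in llama.cpp.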

Screenshot

Screenshot from 2023-10-23 20-36-22