{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "view-in-github"
},
"source": [
"<a href=\"https://colab.research.google.com/github/R3gm/ConversaDocs/blob/main/ConversaDocs_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EnzlcRZycXnr"
},
"source": [
"# ConversaDocs\n",
"\n",
"`Chat with your documents using Llama 2, Falcon, or OpenAI`\n",
"\n",
"- You can upload multiple documents at once to a single database.\n",
"- Every time a new database is created, the previous one is deleted.\n",
"- For maximum privacy, click \"Load LLAMA GGUF Model\" to run a local Llama 2 model. By default, the llama-2_7B-Chat model is loaded.\n",
"\n",
"ConversaDocs lets you interact seamlessly with your documents by combining a vector database with Large Language Model (LLM) technology.\n",
"\n",
"| Description | Link |\n",
"| ----------- | ---- |\n",
"| 📙 Colab Notebook | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/R3gm/ConversaDocs/blob/main/ConversaDocs_Colab.ipynb) |\n",
"| 🎉 Repository | [![GitHub Repository](https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github)](https://github.com/R3gm/ConversaDocs/) |\n",
"| 🚀 Online Demo | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/r3gm/ConversaDocs) |\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "S5awiNy-A50W"
},
"outputs": [],
"source": [
"!git clone https://github.com/R3gm/ConversaDocs.git\n",
"%cd ConversaDocs\n",
"!pip install -r requirements.txt"
]
},
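{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check (a minimal sketch; assumes gradio and langchain\n",
"# are listed in requirements.txt): confirm the core dependencies import\n",
"# cleanly before launching the app, so failures surface here rather than\n",
"# inside app.py.\n",
"import gradio\n",
"import langchain\n",
"print('gradio', gradio.__version__)\n",
"print('langchain', langchain.__version__)"
]
},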
{
"cell_type": "markdown",
"metadata": {
"id": "_EShTkcgAOWa"
},
"source": [
"Install llama-cpp-python, built with cuBLAS support if a CUDA GPU is available, or as a CPU-only build otherwise."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "fyPLgbJW95ah"
},
"outputs": [],
"source": [
"import torch\n",
"import os\n",
"\n",
"# Build llama-cpp-python against cuBLAS when a GPU is available;\n",
"# otherwise install the prebuilt CPU-only wheel.\n",
"if torch.cuda.is_available():\n",
"    print(\"CUDA is available on this system.\")\n",
"    os.system('CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.78 --force-reinstall --upgrade --no-cache-dir --verbose')\n",
"else:\n",
"    print(\"CUDA is not available on this system.\")\n",
"    os.system('pip install llama-cpp-python==0.1.78')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jLfxiOyMEcGF"
},
"source": [
"`RESTART THE RUNTIME` before executing the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2F2VGAJtEbb3"
},
"outputs": [],
"source": [
"# RUN APP\n",
"%cd /content/ConversaDocs\n",
"!python app.py"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3aEEcmchZIlf"
},
"source": [
"Open the `public URL` shown in the cell output when it appears."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"include_colab_link": true,
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
}
},
"nbformat": 4,
"nbformat_minor": 1
}