{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "view-in-github"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/R3gm/ConversaDocs/blob/main/ConversaDocs_Colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EnzlcRZycXnr"
      },
      "source": [
        "# LLmDocumentChatBot\n",
        "\n",
        "`Chat with your documents using Llama 2, Falcon or OpenAI`\n",
        "\n",
        "- You can upload multiple documents at once to a single database.\n",
        "- Every time a new database is created, the previous one is deleted.\n",
        "- For maximum privacy, you can click \"Load LLAMA GGML Model\" to use a Llama 2 model. By default, the model llama-2_7B-Chat is loaded.\n",
        "\n",
        "Program that enables seamless interaction with your documents through an advanced vector database and the power of Large Language Model (LLM) technology.\n",
        "\n",
        "| Description | Link |\n",
        "| ----------- | ---- |\n",
        "| πŸ“™ Colab Notebook | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ndn1954/llmdocumentchatbot/blob/main/LLMdocumentchatbot_Colab.ipynb) |\n",
        "| πŸŽ‰ Repository | [![GitHub Repository](https://img.shields.io/badge/GitHub-Repository-black?style=flat-square&logo=github)](https://github.com/ndn1954/llmdocumentchatbot/) |\n",
        "| πŸš€ Online Demo | [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ndn1954/llmdocumentchatbot) |\n",
        "\n"
      ]
    },
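    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cell below is an illustrative sketch of the kind of pipeline this app implements: split the uploaded documents into chunks, embed them into a vector database, and answer questions with a local Llama 2 GGML model. It assumes a LangChain-style stack (Chroma, HuggingFace embeddings, llama-cpp-python); the model repository, file names, and parameters are examples only, and the actual `app.py` may differ.\n",
        "\n",
        "```python\n",
        "# Illustrative sketch only; names and parameters are assumptions, not app.py's exact code.\n",
        "from huggingface_hub import hf_hub_download\n",
        "from langchain.document_loaders import PyPDFLoader\n",
        "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
        "from langchain.embeddings import HuggingFaceEmbeddings\n",
        "from langchain.vectorstores import Chroma\n",
        "from langchain.llms import LlamaCpp\n",
        "from langchain.chains import RetrievalQA\n",
        "\n",
        "# 1. Load an uploaded document and split it into overlapping chunks.\n",
        "docs = PyPDFLoader(\"example.pdf\").load()\n",
        "splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n",
        "chunks = splitter.split_documents(docs)\n",
        "\n",
        "# 2. Embed the chunks and store them in a vector database.\n",
        "db = Chroma.from_documents(chunks, HuggingFaceEmbeddings())\n",
        "\n",
        "# 3. Download a Llama 2 GGML model (example repo/file) and load it with llama-cpp-python.\n",
        "model_path = hf_hub_download(repo_id=\"TheBloke/Llama-2-7B-Chat-GGML\",\n",
        "                             filename=\"llama-2-7b-chat.ggmlv3.q4_0.bin\")\n",
        "llm = LlamaCpp(model_path=model_path, n_ctx=2048)\n",
        "\n",
        "# 4. Answer questions using chunks retrieved from the database as context.\n",
        "qa = RetrievalQA.from_chain_type(llm=llm, retriever=db.as_retriever())\n",
        "print(qa.run(\"What is this document about?\"))\n",
        "```"
      ]
    },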
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "S5awiNy-A50W"
      },
      "outputs": [],
      "source": [
        "!git clone https://github.com/ndn1954/llmdocumentchatbot.git\n",
        "%cd llmdocumentchatbot\n",
        "!pip install -r requirements.txt\n",
        "\n",
        "import torch\n",
        "import os\n",
        "print(\"Wait until the cell finishes executing\")\n",
        "if torch.cuda.is_available():\n",
        "    print(\"CUDA is available on this system.\")\n",
        "    os.system('CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir --verbose')\n",
        "else:\n",
        "    print(\"CUDA is not available on this system.\")\n",
        "    os.system('pip install llama-cpp-python')"
      ]
    },
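    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note (illustrative, not part of `app.py`): the cuBLAS build above only helps if model layers are actually offloaded to the GPU, which llama-cpp-python controls through the `n_gpu_layers` parameter. A minimal sketch, assuming an already-downloaded GGML file whose name is just an example:\n",
        "\n",
        "```python\n",
        "# Sketch only: the file name is an example, not the model app.py downloads.\n",
        "from llama_cpp import Llama\n",
        "\n",
        "llm = Llama(model_path=\"llama-2-7b-chat.ggmlv3.q4_0.bin\",\n",
        "            n_ctx=2048,\n",
        "            n_gpu_layers=35)  # layer offload only takes effect with the cuBLAS build\n",
        "out = llm(\"Q: What is a vector database? A:\", max_tokens=64)\n",
        "print(out[\"choices\"][0][\"text\"])\n",
        "```"
      ]
    },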
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jLfxiOyMEcGF"
      },
      "source": [
        "`RESTART THE RUNTIME` before executing the next cell."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2F2VGAJtEbb3"
      },
      "outputs": [],
      "source": [
        "%cd /content/llmdocumentchatbot\n",
        "!python app.py"
      ]
    },
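    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "`app.py` presumably serves a web UI (e.g., Gradio) and prints a public link in the cell output. The snippet below is only an illustrative sketch of how such a public URL is typically produced from Colab; it is not the actual `app.py` code.\n",
        "\n",
        "```python\n",
        "# Illustrative sketch: a Gradio app launched with share=True prints a public *.gradio.live URL.\n",
        "import gradio as gr\n",
        "\n",
        "def answer(question):\n",
        "    return \"response from the document chatbot\"  # placeholder logic\n",
        "\n",
        "demo = gr.Interface(fn=answer, inputs=\"text\", outputs=\"text\")\n",
        "demo.launch(share=True)\n",
        "```"
      ]
    },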
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3aEEcmchZIlf"
      },
      "source": [
        "Open the `public URL` when it appears"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "authorship_tag": "ABX9TyMoq/QuUmy+xrGmEAesfDhp",
      "gpuType": "T4",
      "include_colab_link": true,
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}