{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "f96b815e"
   },
   "source": [
    "# Build a Generative AI application using Elasticsearch and OpenAI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "e0f537af"
   },
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elastic/elasticsearch-labs/blob/main/supporting-blog-content/openai-rag-streamlit/openai_rag_streamlit.ipynb)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "349e0e74"
   },
   "source": [
    "This notebook demonstrates how to:\n",
    "- Index the OpenAI Wikipedia vector dataset into Elasticsearch\n",
    "- Build a simple Gen AI application with Streamlit that retrieves context from Elasticsearch and formulates answers using OpenAI\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "aa9576ca"
   },
   "source": [
    "## Install packages and import modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8c304b93"
   },
   "outputs": [],
   "source": [
    "# install packages\n",
    "# Note: this notebook uses the legacy (pre-1.0) OpenAI Python SDK, so we pin openai below 1.0\n",
    "\n",
    "!python3 -m pip install -qU \"openai<1.0\" pandas==1.5.3 wget elasticsearch streamlit tqdm\n",
    "\n",
    "# import modules\n",
    "\n",
    "import os\n",
    "from getpass import getpass\n",
    "from elasticsearch import Elasticsearch, helpers\n",
    "import wget, zipfile, pandas as pd, json, openai\n",
    "import streamlit as st\n",
    "from tqdm.notebook import tqdm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "de32a789"
   },
   "source": [
    "## Connect to Elasticsearch\n",
    "\n",
    "ℹ️ We're using an Elastic Cloud deployment of Elasticsearch for this notebook.\n",
    "If you don't already have an Elastic deployment, you can sign up for a free [Elastic Cloud trial](https://cloud.elastic.co/registration?utm_source=github&utm_content=elasticsearch-labs-notebook).\n",
    "\n",
    "To connect to Elasticsearch, you need to create a client instance with the Cloud ID and password for your deployment.\n",
    "\n",
    "Find the Cloud ID for your deployment by going to https://cloud.elastic.co/deployments and selecting your deployment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3a57b6a8"
   },
   "outputs": [],
   "source": [
    "os.environ[\"es_cloud_id\"] = getpass(\"Elastic deployment Cloud ID\")\n",
    "os.environ[\"es_password\"] = getpass(\"Elastic deployment Password\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "cb2ng640_o2n"
   },
   "source": [
     "Test the connection to Elasticsearch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NJkflYkGHnGL"
   },
   "outputs": [],
   "source": [
    "es_cloud_id = os.environ[\"es_cloud_id\"]\n",
    "es_password = os.environ[\"es_password\"]\n",
    "\n",
    "client = Elasticsearch(\n",
    "    cloud_id=es_cloud_id,\n",
    "    basic_auth=(\n",
    "        \"elastic\",\n",
    "        es_password,\n",
    "    ),  # Alternatively use `api_key` instead of `basic_auth`\n",
    ")\n",
    "\n",
    "# Test connection to Elasticsearch\n",
    "print(client.info())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JiA-4-Kb-C3K"
   },
   "source": [
    "## Configure OpenAI connection\n",
    "\n",
     "Our example uses OpenAI to formulate an answer, so please provide a valid OpenAI API key here.\n",
     "\n",
     "You can follow [this guide](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) to retrieve your API key.\n",
     "\n",
     "Then test the connection to OpenAI and check that the model used in this notebook is available."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "imKyf8sm-caV"
   },
   "outputs": [],
   "source": [
     "os.environ[\"openai_api_key\"] = getpass(\"OpenAI API Key\")\n",
    "openai.api_key = os.environ[\"openai_api_key\"]\n",
    "openai.Model.retrieve(\"text-embedding-ada-002\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "80b55952"
   },
   "source": [
    "## Download the dataset\n",
    "\n",
     "In this step, we download the OpenAI Wikipedia embeddings dataset and extract the zip file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "c584f15c"
   },
   "outputs": [],
   "source": [
    "embeddings_url = \"https://cdn.openai.com/API/examples/data/vector_database_wikipedia_articles_embedded.zip\"\n",
    "wget.download(embeddings_url)\n",
    "\n",
    "with zipfile.ZipFile(\"vector_database_wikipedia_articles_embedded.zip\", \"r\") as zip_ref:\n",
    "    zip_ref.extractall(\"data\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9654ac08"
   },
   "source": [
    "##  Read CSV file into a Pandas DataFrame\n",
    "\n",
    "Next we use the Pandas library to read the unzipped CSV file into a DataFrame. This step makes it easier to index the data into Elasticsearch in bulk."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "76347d10"
   },
   "outputs": [],
   "source": [
    "wikipedia_dataframe = pd.read_csv(\n",
    "    \"data/vector_database_wikipedia_articles_embedded.csv\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6af9f5ad"
   },
   "source": [
    "## Create index with mapping\n",
    "\n",
    "Now we need to create an Elasticsearch index with the necessary mappings. This will enable us to index the data into Elasticsearch.\n",
    "\n",
     "We use the `dense_vector` field type for the `title_vector` and `content_vector` fields. This is a special field type that allows us to store dense vectors in Elasticsearch.\n",
     "\n",
     "Later, we'll target the `content_vector` field for kNN search.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "681989b3"
   },
   "outputs": [],
   "source": [
    "index_mapping = {\n",
    "    \"properties\": {\n",
    "        \"title_vector\": {\n",
    "            \"type\": \"dense_vector\",\n",
    "            \"dims\": 1536,\n",
     "            \"index\": True,\n",
    "            \"similarity\": \"cosine\",\n",
    "        },\n",
    "        \"content_vector\": {\n",
    "            \"type\": \"dense_vector\",\n",
    "            \"dims\": 1536,\n",
     "            \"index\": True,\n",
    "            \"similarity\": \"cosine\",\n",
    "        },\n",
    "        \"text\": {\"type\": \"text\"},\n",
    "        \"title\": {\"type\": \"text\"},\n",
    "        \"url\": {\"type\": \"keyword\"},\n",
    "        \"vector_id\": {\"type\": \"long\"},\n",
    "    }\n",
    "}\n",
    "client.indices.create(index=\"wikipedia_vector_index\", mappings=index_mapping)"
   ]
  },
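  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick aside (not part of the indexing pipeline), here is a minimal pure-Python sketch of what the `cosine` similarity declared in the mapping computes for a pair of vectors:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Dot product of the vectors divided by the product of their norms\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "\n",
    "# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0\n",
    "print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0\n",
    "print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0\n",
    "```\n",
    "\n",
    "Elasticsearch computes the same quantity across all 1536 dimensions of the embeddings (rescaling it into the `[0, 1]` range for scoring)."
   ]
  },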
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "c2fb582e"
   },
   "source": [
    "## Index data into Elasticsearch\n",
    "\n",
    "The following function generates the required bulk actions that can be passed to Elasticsearch's Bulk API, so we can index multiple documents efficiently in a single request.\n",
    "\n",
    "For each row in the DataFrame, the function yields a dictionary representing a single document to be indexed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "efee9b97"
   },
   "outputs": [],
   "source": [
    "def dataframe_to_bulk_actions(df):\n",
    "    for index, row in df.iterrows():\n",
    "        yield {\n",
    "            \"_index\": \"wikipedia_vector_index\",\n",
    "            \"_id\": row[\"id\"],\n",
    "            \"_source\": {\n",
    "                \"url\": row[\"url\"],\n",
    "                \"title\": row[\"title\"],\n",
    "                \"text\": row[\"text\"],\n",
    "                \"title_vector\": json.loads(row[\"title_vector\"]),\n",
    "                \"content_vector\": json.loads(row[\"content_vector\"]),\n",
    "                \"vector_id\": row[\"vector_id\"],\n",
    "            },\n",
    "        }"
   ]
  },
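  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the generator yields, here is a self-contained sketch that runs a copy of it over a tiny stand-in DataFrame with made-up two-dimensional vectors (the real embeddings have 1536 dimensions). Note that the vector columns arrive from the CSV as JSON-encoded strings, which is why the generator parses them with `json.loads`:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "\n",
    "# Copy of the generator defined above, repeated so this sketch is self-contained\n",
    "def dataframe_to_bulk_actions(df):\n",
    "    for index, row in df.iterrows():\n",
    "        yield {\n",
    "            '_index': 'wikipedia_vector_index',\n",
    "            '_id': row['id'],\n",
    "            '_source': {\n",
    "                'url': row['url'],\n",
    "                'title': row['title'],\n",
    "                'text': row['text'],\n",
    "                'title_vector': json.loads(row['title_vector']),\n",
    "                'content_vector': json.loads(row['content_vector']),\n",
    "                'vector_id': row['vector_id'],\n",
    "            },\n",
    "        }\n",
    "\n",
    "\n",
    "# Tiny stand-in for the Wikipedia DataFrame; vectors are JSON strings as in the CSV\n",
    "sample = pd.DataFrame(\n",
    "    [\n",
    "        {\n",
    "            'id': 1,\n",
    "            'url': 'https://en.wikipedia.org/wiki/Example',\n",
    "            'title': 'Example',\n",
    "            'text': 'Example text',\n",
    "            'title_vector': '[0.1, 0.2]',\n",
    "            'content_vector': '[0.3, 0.4]',\n",
    "            'vector_id': 0,\n",
    "        }\n",
    "    ]\n",
    ")\n",
    "\n",
    "actions = list(dataframe_to_bulk_actions(sample))\n",
    "print(actions[0]['_id'])                      # 1\n",
    "print(actions[0]['_source']['title_vector'])  # [0.1, 0.2]\n",
    "```"
   ]
  },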
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b8164b38"
   },
   "source": [
     "As the DataFrame is large, we index the data in batches of `100` documents, using the Python client's [helpers](https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/client-helpers.html#bulk-helpers) for the Bulk API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "aacb5e9c"
   },
   "outputs": [],
   "source": [
    "total_documents = len(wikipedia_dataframe)\n",
    "\n",
    "progress_bar = tqdm(total=total_documents, unit=\"documents\")\n",
    "success_count = 0\n",
    "\n",
    "for ok, info in helpers.streaming_bulk(\n",
    "    client,\n",
    "    actions=dataframe_to_bulk_actions(wikipedia_dataframe),\n",
    "    raise_on_error=False,\n",
    "    chunk_size=100,\n",
    "):\n",
    "    if ok:\n",
    "        success_count += 1\n",
    "    else:\n",
    "        print(f\"Unable to index {info['index']['_id']}: {info['index']['error']}\")\n",
    "    progress_bar.update(1)\n",
    "    progress_bar.set_postfix(success=success_count)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "rnJbAQdgbXSm"
   },
   "source": [
    "## Build application with Streamlit\n",
    "\n",
     "In the following section, you will build a simple interface using Streamlit.\n",
     "\n",
     "The application displays a search bar where a user can ask a question. Elasticsearch retrieves the relevant documents (the context) matching the question, then OpenAI formulates an answer using that context."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "I55fQHW589RP"
   },
   "source": [
     "Install localtunnel, the dependency we'll use to access the application once it's running."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "LnL-wOdRct5O"
   },
   "outputs": [],
   "source": [
    "!npm install localtunnel"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LkEHb4VMevcc"
   },
   "source": [
     "Create the application. The `%%writefile` magic in the next cell writes its contents to `app.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_J7keMUAewy1"
   },
   "outputs": [],
   "source": [
    "%%writefile app.py\n",
    "\n",
    "import os\n",
    "import streamlit as st\n",
    "import openai\n",
    "from elasticsearch import Elasticsearch\n",
    "\n",
    "\n",
    "# Elastic Cloud\n",
    "es_cloud_id = os.environ['es_cloud_id']\n",
    "es_password = os.environ['es_password']\n",
    "\n",
    "# OpenAI\n",
    "openai.api_key = os.environ['openai_api_key']\n",
    "\n",
    "# Define model\n",
    "EMBEDDING_MODEL = \"text-embedding-ada-002\"\n",
    "\n",
    "# Connect to Elasticsearch\n",
    "client = Elasticsearch(\n",
    "  cloud_id = es_cloud_id,\n",
    "  basic_auth=(\"elastic\", es_password) # Alternatively use `api_key` instead of `basic_auth`\n",
    ")\n",
    "\n",
    "def openai_summarize(query, response):\n",
    "    context = response['hits']['hits'][0]['_source']['text']\n",
    "    summary = openai.ChatCompletion.create(\n",
    "    model=\"gpt-3.5-turbo\",\n",
    "    messages=[\n",
    "            {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
     "            {\"role\": \"user\", \"content\": \"Answer the following question: \" + query + \" by using the following text: \" + context},\n",
    "        ]\n",
    "    )\n",
    "    return summary.choices[0].message.content\n",
    "\n",
    "\n",
    "def search_es(query):\n",
    "    # Create embedding\n",
    "    question_embedding = openai.Embedding.create(input=query, model=EMBEDDING_MODEL)\n",
    "\n",
    "    # Define Elasticsearch query\n",
    "    response = client.search(\n",
    "    index = \"wikipedia_vector_index\",\n",
    "    knn={\n",
    "        \"field\": \"content_vector\",\n",
    "        \"query_vector\":  question_embedding[\"data\"][0][\"embedding\"],\n",
    "        \"k\": 10,\n",
    "        \"num_candidates\": 100\n",
    "        }\n",
    "    )\n",
    "    return response\n",
    "\n",
    "\n",
    "def main():\n",
    "    st.title(\"Gen AI Application\")\n",
    "\n",
    "    # Input for user search query\n",
    "    user_query = st.text_input(\"Enter your question:\")\n",
    "\n",
    "    if st.button(\"Search\"):\n",
    "        if user_query:\n",
    "\n",
    "            st.write(f\"Searching for: {user_query}\")\n",
    "            result = search_es(user_query)\n",
    "\n",
    "            # print(result)\n",
    "            openai_summary = openai_summarize(user_query, result)\n",
    "            st.write(f\"OpenAI Summary: {openai_summary}\")\n",
    "\n",
    "            # Display search results\n",
    "            if result['hits']['total']['value'] > 0:\n",
    "                st.write(\"Search Results:\")\n",
    "                for hit in result['hits']['hits']:\n",
    "                    st.write(hit['_source']['title'])\n",
    "                    st.write(hit['_source']['text'])\n",
    "            else:\n",
    "                st.write(\"No results found.\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()\n",
    "\n"
   ]
  },
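  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a self-contained sketch (lightly tidied, and with no OpenAI call involved), this is how `openai_summarize` above assembles the chat messages from the user's question and the top retrieved passage before handing them to the chat model:\n",
    "\n",
    "```python\n",
    "def build_messages(query, context):\n",
    "    # The system message sets the assistant's behavior; the user message\n",
    "    # carries the question plus the retrieved context passage\n",
    "    return [\n",
    "        {'role': 'system', 'content': 'You are a helpful assistant.'},\n",
    "        {\n",
    "            'role': 'user',\n",
    "            'content': 'Answer the following question: ' + query\n",
    "            + ' by using the following text: ' + context,\n",
    "        },\n",
    "    ]\n",
    "\n",
    "\n",
    "messages = build_messages('Who is Beethoven?', 'Ludwig van Beethoven was a composer.')\n",
    "print(messages[1]['content'])\n",
    "```\n",
    "\n",
    "Grounding the model in retrieved text this way is the core of the RAG pattern: the answer is constrained by the context passage rather than by the model's memory alone."
   ]
  },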
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BU1WKBVGe5ZY"
   },
   "source": [
    "### Run the application\n",
    "\n",
     "Run the application and note your public IP address; you'll need it for the tunnel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-oQa-VV6e40J"
   },
   "outputs": [],
   "source": [
    "!streamlit run app.py &> /content/app.log & curl ipv4.icanhazip.com"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZXLKpEvMe-D2"
   },
   "source": [
    "### Create the tunnel to access it from anywhere\n",
    "\n",
     "Run the tunnel and open the link it prints to connect to the application.\n",
     "\n",
     "When prompted, enter the IP address from the previous step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "background_save": true
    },
    "id": "ertvvtnifAZy"
   },
   "outputs": [],
   "source": [
    "!npx localtunnel --port 8501"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AcvvIMxxGIun"
   },
   "source": [
     "Success! You've built your first Gen AI application.\n",
     "\n",
     "Try it by asking questions such as \"Who is Beethoven?\" or \"What is football?\" and see the answers."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WHBfQus1TfRG"
   },
   "source": [
    "## Next steps\n",
    "\n",
     "Now you know how to quickly put together an interface that lets you ask questions and get answers from a specific dataset (Wikipedia, in this example).\n",
     "\n",
     "You can adapt this example to use your own dataset, and use the Streamlit application as a blueprint for integrating with your own application."
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3.11.3 64-bit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.3"
  },
  "vscode": {
   "interpreter": {
    "hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
