{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e39ce34b",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/tomasonjo/blogs/blob/master/llm/official_langchain_neo4jvector.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ea7390e8-b3b6-449e-9819-9cbec935fbdf",
   "metadata": {
    "id": "ea7390e8-b3b6-449e-9819-9cbec935fbdf",
    "outputId": "d870fe28-93ec-4cae-aa72-28a6b6058ed0"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: langchain in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (0.1.0)\n",
      "Requirement already satisfied: openai in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (1.6.1)\n",
      "Requirement already satisfied: wikipedia in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (1.4.0)\n",
      "Requirement already satisfied: tiktoken in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (0.5.2)\n",
      "Requirement already satisfied: neo4j in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (5.16.0)\n",
      "Requirement already satisfied: langchain_openai in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (0.0.2.post1)\n",
      "Requirement already satisfied: PyYAML>=5.3 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (6.0.1)\n",
      "Requirement already satisfied: SQLAlchemy<3,>=1.4 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (2.0.21)\n",
      "Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (3.9.0)\n",
      "Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (0.6.3)\n",
      "Requirement already satisfied: jsonpatch<2.0,>=1.33 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (1.33)\n",
      "Requirement already satisfied: langchain-community<0.1,>=0.0.9 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (0.0.9)\n",
      "Requirement already satisfied: langchain-core<0.2,>=0.1.7 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (0.1.7)\n",
      "Requirement already satisfied: langsmith<0.1.0,>=0.0.77 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (0.0.77)\n",
      "Requirement already satisfied: numpy<2,>=1 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (1.26.2)\n",
      "Requirement already satisfied: pydantic<3,>=1 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (1.10.12)\n",
      "Requirement already satisfied: requests<3,>=2 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (2.31.0)\n",
      "Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain) (8.2.2)\n",
      "Requirement already satisfied: anyio<5,>=3.5.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from openai) (3.5.0)\n",
      "Requirement already satisfied: distro<2,>=1.7.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from openai) (1.8.0)\n",
      "Requirement already satisfied: httpx<1,>=0.23.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from openai) (0.26.0)\n",
      "Requirement already satisfied: sniffio in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from openai) (1.2.0)\n",
      "Requirement already satisfied: tqdm>4 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from openai) (4.65.0)\n",
      "Requirement already satisfied: typing-extensions<5,>=4.7 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from openai) (4.9.0)\n",
      "Requirement already satisfied: beautifulsoup4 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from wikipedia) (4.12.2)\n",
      "Requirement already satisfied: regex>=2022.1.18 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from tiktoken) (2023.10.3)\n",
      "Requirement already satisfied: pytz in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from neo4j) (2023.3.post1)\n",
      "Requirement already satisfied: attrs>=17.3.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (23.1.0)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (6.0.4)\n",
      "Requirement already satisfied: yarl<2.0,>=1.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.9.3)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.4.0)\n",
      "Requirement already satisfied: aiosignal>=1.1.2 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.2.0)\n",
      "Requirement already satisfied: idna>=2.8 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from anyio<5,>=3.5.0->openai) (3.4)\n",
      "Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (3.20.1)\n",
      "Requirement already satisfied: typing-inspect<1,>=0.4.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (0.9.0)\n",
      "Requirement already satisfied: certifi in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai) (2023.11.17)\n",
      "Requirement already satisfied: httpcore==1.* in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai) (1.0.2)\n",
      "Requirement already satisfied: h11<0.15,>=0.13 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai) (0.14.0)\n",
      "Requirement already satisfied: jsonpointer>=1.9 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from jsonpatch<2.0,>=1.33->langchain) (2.1)\n",
      "Requirement already satisfied: packaging<24.0,>=23.2 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from langchain-core<0.2,>=0.1.7->langchain) (23.2)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from requests<3,>=2->langchain) (2.0.4)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from requests<3,>=2->langchain) (2.1.0)\n",
      "Requirement already satisfied: soupsieve>1.2 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from beautifulsoup4->wikipedia) (2.5)\n",
      "Requirement already satisfied: mypy-extensions>=0.3.0 in /Users/tomazbratanic/anaconda3/lib/python3.11/site-packages (from typing-inspect<1,>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain) (1.0.0)\n"
     ]
    }
   ],
   "source": [
    "!pip install langchain openai wikipedia tiktoken neo4j langchain_openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e7e57380-e2a7-4dce-94a9-76f291c49e78",
   "metadata": {
    "id": "e7e57380-e2a7-4dce-94a9-76f291c49e78"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "from langchain_community.vectorstores.neo4j_vector import Neo4jVector\n",
    "from langchain_community.document_loaders import WikipediaLoader\n",
    "from langchain_openai import OpenAIEmbeddings\n",
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "\n",
    "os.environ['OPENAI_API_KEY'] = \"sk-\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "jmQBn7qB7L8q",
   "metadata": {
    "id": "jmQBn7qB7L8q"
   },
   "source": [
    "# LangChain library adds full support for Neo4j Vector Index\n",
    "## Streamlining data ingestion and querying in Retrieval-Augmented Generation Applications\n",
    "\n",
     "If you have been on vacation for the past six months, first of all, congratulations. Secondly, you should know that since the introduction of ChatGPT-like large language models (LLMs), the technology ecosystem has changed dramatically. Nowadays, it's all about retrieval-augmented generation (RAG) applications. The idea behind RAG applications is to provide the LLM with additional context at query time so that it can generate accurate and up-to-date answers.\n",
    "\n",
     "Neo4j has long been excellent at storing and analyzing structured information in RAG applications. With the recently added vector index search, it moves closer to also supporting RAG applications based on unstructured text.\n",
    "\n",
     "To streamline the use of Neo4j's vector index, I have integrated it into the LangChain library. LangChain is a leading framework for building LLM applications, integrating most LLM providers, databases, and more. It supports both data ingestion and reading workflows, and is especially useful for developing question-answering chatbots using the RAG architecture.\n",
    "\n",
     "In this blog post, I'll guide you through an end-to-end example that demonstrates how to leverage LangChain for efficient data ingestion into the Neo4j vector index, followed by the construction of a straightforward yet effective RAG application.\n",
    "\n",
    "The tutorial will consist of the following steps:\n",
    "* Read a Wikipedia article using a LangChain Document Reader\n",
    "* Chunk the text\n",
    "* Store the text into Neo4j and index it using the newly added vector index\n",
    "* Implement a question-answering workflow to support RAG applications."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "SCk99SCN7sQZ",
   "metadata": {
    "id": "SCk99SCN7sQZ"
   },
   "source": [
    "## Neo4j Environment setup\n",
     "You need to set up Neo4j 5.11 or greater to follow along with the examples in this blog post. The easiest way is to start a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database/), which offers cloud instances of the Neo4j database. Alternatively, you can set up a local instance by downloading the Neo4j Desktop application and creating a local database instance.\n",
    "## Reading and chunking a Wikipedia article\n",
    "We will begin by reading and chunking a Wikipedia article. The process is pretty simple, as LangChain has integrated the Wikipedia document loader as well as the text chunking modules."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "bd210816-659c-4e99-80ed-ce17abd9e409",
   "metadata": {
    "id": "bd210816-659c-4e99-80ed-ce17abd9e409",
    "outputId": "26955e8e-3934-49db-b4b4-3274188a4b21"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Created a chunk of size 1130, which is longer than the specified 1000\n",
      "Created a chunk of size 1221, which is longer than the specified 1000\n",
      "Created a chunk of size 2331, which is longer than the specified 1000\n",
      "Created a chunk of size 1623, which is longer than the specified 1000\n",
      "Created a chunk of size 1572, which is longer than the specified 1000\n"
     ]
    }
   ],
   "source": [
    "# Read the wikipedia article\n",
    "raw_documents = WikipediaLoader(query=\"Leonhard Euler\").load()\n",
    "# Define chunking strategy\n",
    "text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
    "    chunk_size=1000, chunk_overlap=20\n",
    ")\n",
    "# Chunk the document\n",
    "documents = text_splitter.split_documents(raw_documents)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "27c5d780-e0e4-413e-a437-3424d38a7cfd",
   "metadata": {
    "id": "27c5d780-e0e4-413e-a437-3424d38a7cfd"
   },
   "outputs": [],
   "source": [
    "# Remove summary from metadata\n",
    "for d in documents:\n",
    "    del d.metadata['summary']"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "EcHRKYXA71u5",
   "metadata": {
    "id": "EcHRKYXA71u5"
   },
   "source": [
     "Since Neo4j is a graph database, I thought using the Wikipedia article about Leonhard Euler as the example was only fitting. Next, we use the tiktoken text chunking module, which uses a tokenizer made by OpenAI, to split the article into chunks of around 1,000 tokens.\n",
    "\n",
     "LangChain's `WikipediaLoader` adds a summary to each chunk by default. I thought the added summaries were a bit redundant. For example, if a vector similarity search retrieved the top three results from the same article, the summary would be repeated three times. Therefore, I decided to remove it from the dataset.\n",
    "\n",
    "## Store and index the text with Neo4j\n",
    "LangChain makes it easy to import the documents into Neo4j and index them using the newly added vector index. We tried to make it very user-friendly, which means that you don't have to know anything about Neo4j or graphs to use it. On the other hand, we provided several customization options for more experienced users, which will be presented in a separate blog post.\n",
    "\n",
     "The Neo4j vector index is wrapped as a LangChain vector store and therefore follows the same syntax used to interact with other vector databases.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "380059c2-4651-4773-a9e4-db5ad63cb06d",
   "metadata": {
    "id": "380059c2-4651-4773-a9e4-db5ad63cb06d"
   },
   "outputs": [],
   "source": [
    "# Neo4j Aura credentials\n",
    "url=\"bolt://54.236.226.158:7687\"\n",
    "username=\"neo4j\"\n",
    "password=\"radiuses-investment-college\"\n",
    "\n",
    "# Instantiate Neo4j vector from documents\n",
    "neo4j_vector = Neo4jVector.from_documents(\n",
    "    documents,\n",
    "    OpenAIEmbeddings(),\n",
    "    url=url,\n",
    "    username=username,\n",
    "    password=password\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "sxUK19rj8Cy8",
   "metadata": {
    "id": "sxUK19rj8Cy8"
   },
   "source": [
     "The `from_documents` method connects to a Neo4j database, imports and embeds the documents, and creates a vector index. The data will be represented as `Chunk` nodes by default. As mentioned, you can customize how the data is stored and which data is returned; that will be discussed in the following blog post.\n",
    "\n",
     "If you already have an existing vector index with populated data, you can use the `from_existing_index` method instead.\n",
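     "\n",
     "As a minimal sketch (assuming the default index name `vector`, which `from_documents` uses, and the same credentials as above):\n",
     "\n",
     "```python\n",
     "# Connect to an index that already contains embedded documents\n",
     "existing_store = Neo4jVector.from_existing_index(\n",
     "    OpenAIEmbeddings(),\n",
     "    url=url,\n",
     "    username=username,\n",
     "    password=password,\n",
     "    index_name=\"vector\",  # default name; adjust if yours differs\n",
     ")\n",
     "```\n",
     "\n",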
    "## Vector similarity search\n",
    "We will begin with a simple vector similarity search to verify that everything works as intended."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "7d31a6a4-c4f9-4711-90c8-67d61dd7e7b5",
   "metadata": {
    "id": "7d31a6a4-c4f9-4711-90c8-67d61dd7e7b5",
    "outputId": "d593b5c8-e24f-42b5-b26e-0910db7fbb0d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "== Early life ==\n",
      "Leonhard Euler was born on 15 April 1707, in Basel to Paul III Euler, a pastor of the Reformed Church, and Marguerite (née Brucker), whose ancestors include a number of well-known scholars in the classics. He was the oldest of four children, having two younger sisters, An\n"
     ]
    }
   ],
   "source": [
    "query = \"Where did Euler grow up?\"\n",
    "\n",
    "results = neo4j_vector.similarity_search(query, k=1)\n",
    "print(results[0].page_content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "Vf46IoZI8IP0",
   "metadata": {
    "id": "Vf46IoZI8IP0"
   },
   "source": [
     "The LangChain module used the specified embedding function (OpenAI in this example) to embed the question and then found the most similar documents by comparing the cosine similarity between the question embedding and the indexed document embeddings.\n",
    "\n",
     "The Neo4j vector index also supports the Euclidean distance metric in addition to cosine similarity.\n",
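     "\n",
     "As a rough sketch (assuming the `DistanceStrategy` enum from `langchain_community.vectorstores.utils`), you can opt into Euclidean distance when creating the store:\n",
     "\n",
     "```python\n",
     "from langchain_community.vectorstores.utils import DistanceStrategy\n",
     "\n",
     "# Same ingestion as before, but the index compares vectors by Euclidean distance\n",
     "euclidean_store = Neo4jVector.from_documents(\n",
     "    documents,\n",
     "    OpenAIEmbeddings(),\n",
     "    url=url,\n",
     "    username=username,\n",
     "    password=password,\n",
     "    distance_strategy=DistanceStrategy.EUCLIDEAN_DISTANCE,\n",
     ")\n",
     "```\n",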
    "\n",
    "## Question-answer workflow with LangChain\n",
    "\n",
     "The nice thing about LangChain is that it supports question-answering workflows using only a line or two of code. For example, if we wanted to create a question-answering chain that generates answers based on the provided context and also reports which documents it used as the context, we can use the following code.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "9e7f27d1-cfb0-4b7b-8f7d-fb12465a17f9",
   "metadata": {
    "id": "9e7f27d1-cfb0-4b7b-8f7d-fb12465a17f9"
   },
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain.chains import RetrievalQAWithSourcesChain\n",
    "\n",
    "chain = RetrievalQAWithSourcesChain.from_chain_type(\n",
    "    ChatOpenAI(temperature=0),\n",
    "    chain_type=\"stuff\",\n",
    "    retriever=neo4j_vector.as_retriever()\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "abf2fd1f-ee15-468b-8d3d-d934e2697b21",
   "metadata": {
    "id": "abf2fd1f-ee15-468b-8d3d-d934e2697b21",
    "outputId": "eefd6494-5e4a-4958-dce3-efd53b0ea41f"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'answer': \"Euler is credited for popularizing modern notation and terminology in mathematics. He introduced the notation for functions, trigonometric functions, the base of the natural logarithm (Euler's number), and the use of the Greek letter π to represent the ratio of a circle's circumference to its diameter. He also invented the notation i to represent the imaginary unit. Additionally, Euler made important contributions to complex analysis and discovered Euler's formula, which is considered one of the most remarkable formulas in mathematics. He is also known for his work in calculus, graph theory, and number theory.\",\n",
       " 'sources': ''}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "query = \"What is Euler credited for popularizing?\"\n",
    "\n",
    "chain.invoke(\n",
    "    {\"question\": query},\n",
    "    return_only_outputs=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "N2HeqF0h8Pa0",
   "metadata": {
    "id": "N2HeqF0h8Pa0"
   },
   "source": [
    "As you can see, the LLM constructed an accurate answer based on the provided Wikipedia article but also returned which source documents it used. And we only required one line of code to achieve this, which is pretty awesome if you ask me.\n",
    "\n",
    "While testing the code, I noticed that the sources were not always returned. The problem here is not Neo4j Vector implementation but rather GPT-3.5-turbo. Sometimes, it doesn't listen to instructions to return the source documents. However, if you use GPT-4, the problem goes away.\n",
    "\n",
     "Lastly, to replicate the ChatGPT interface, you can add a memory module, which additionally provides the LLM with the dialogue history so that we can ask follow-up questions. Again, we only need two lines of code.\n",
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "5dde9f1e-61a5-4b44-88c7-980c2b8cad86",
   "metadata": {
    "id": "5dde9f1e-61a5-4b44-88c7-980c2b8cad86"
   },
   "outputs": [],
   "source": [
    "from langchain.chains import ConversationalRetrievalChain\n",
    "from langchain.memory import ConversationBufferMemory\n",
    "\n",
    "memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)\n",
    "qa = ConversationalRetrievalChain.from_llm(\n",
    "    ChatOpenAI(temperature=0), neo4j_vector.as_retriever(), memory=memory)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "xMhpHPNt8TND",
   "metadata": {
    "id": "xMhpHPNt8TND"
   },
   "source": [
    "Let's now test it out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "f5cef351-611e-4ac9-8bfd-ad2c435fd44e",
   "metadata": {
    "id": "f5cef351-611e-4ac9-8bfd-ad2c435fd44e",
    "outputId": "5f1c70ed-68bc-452d-aedd-a062ac1245fd"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Euler is credited for popularizing several mathematical concepts and notations. Some of the things he is credited for popularizing include:\n",
      "\n",
      "1. The use of the Greek letter π (pi) to represent the ratio of a circle's circumference to its diameter.\n",
      "2. The notation f(x) to represent a function.\n",
      "3. The use of the letter i to represent the imaginary unit (√-1).\n",
      "4. The modern notation for trigonometric functions.\n",
      "5. The use of the letter e to represent the base of the natural logarithm (Euler's number).\n",
      "6. The use of lowercase letters to represent the sides of a triangle and capital letters to represent the angles.\n",
      "7. The use of the Greek letter Σ (sigma) to represent summations.\n",
      "8. The use of the Greek letter Δ (delta) to represent finite differences.\n",
      "\n",
      "These are just a few examples of the many mathematical concepts and notations that Euler is credited for popularizing.\n"
     ]
    }
   ],
   "source": [
    "print(qa.invoke({\"question\": \"What is Euler credited for popularizing?\"})[\"answer\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "N8njRP_l8VU8",
   "metadata": {
    "id": "N8njRP_l8VU8"
   },
   "source": [
    "And now a follow-up question."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "f8221df3-c2ae-4a43-9d55-a0e472d48d96",
   "metadata": {
    "id": "f8221df3-c2ae-4a43-9d55-a0e472d48d96",
    "outputId": "7da01a8a-b83c-4ec3-a960-164e481763ac"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n",
      "/Users/tomazbratanic/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:189: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
      "  warn_deprecated(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Euler grew up in Basel, Switzerland.\n"
     ]
    }
   ],
   "source": [
    "print(qa.invoke({\"question\": \"Where did he grow up?\"})[\"answer\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "O2Nnchdn8Xw9",
   "metadata": {
    "id": "O2Nnchdn8Xw9"
   },
   "source": [
    "# Summary\n",
     "The vector index is a great addition to Neo4j, making it an excellent solution for handling structured and unstructured data in RAG applications. Hopefully, the LangChain integration will streamline the process of adding the vector index to your existing or new RAG applications, so you don't have to worry about the details. Remember, LangChain already [supports generating Cypher statements](https://towardsdatascience.com/langchain-has-added-cypher-search-cb9d821120d5) and using them to retrieve context, so you can use it today to retrieve both structured and unstructured information. We have many ideas on upgrading the LangChain support for Neo4j, so stay tuned!\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e9c944e-f356-4839-a30e-2cbd4cc46e4c",
   "metadata": {
    "id": "7e9c944e-f356-4839-a30e-2cbd4cc46e4c"
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "include_colab_link": true,
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
