{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/tomasonjo/blogs/blob/master/llm/generic_cypher_gpt4.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "JUDi5el-l8d8",
    "outputId": "9104694a-4f9e-4c3d-a66d-7c013ad333c6"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
      "Requirement already satisfied: openai in /usr/local/lib/python3.9/dist-packages (0.27.4)\n",
      "Requirement already satisfied: neo4j in /usr/local/lib/python3.9/dist-packages (5.7.0)\n",
      "Requirement already satisfied: aiohttp in /usr/local/lib/python3.9/dist-packages (from openai) (3.8.4)\n",
      "Requirement already satisfied: requests>=2.20 in /usr/local/lib/python3.9/dist-packages (from openai) (2.27.1)\n",
      "Requirement already satisfied: tqdm in /usr/local/lib/python3.9/dist-packages (from openai) (4.65.0)\n",
      "Requirement already satisfied: pytz in /usr/local/lib/python3.9/dist-packages (from neo4j) (2022.7.1)\n",
      "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (1.26.15)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (2022.12.7)\n",
      "Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (2.0.12)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/dist-packages (from requests>=2.20->openai) (3.4)\n",
      "Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (4.0.2)\n",
      "Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.9.2)\n",
      "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (23.1.0)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.3.3)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (6.0.4)\n",
      "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.9/dist-packages (from aiohttp->openai) (1.3.1)\n"
     ]
    }
   ],
   "source": [
    "!pip install openai neo4j"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "NYILquGHZrd7"
   },
   "source": [
    "# Generating Cypher queries with GPT-4 on any graph schema\n",
    "\n",
    "Large language models have a great potential to translate natural language into query language. For example, some people like to use GPT models to translate text to SQL, while others want to use GPT models to construct SPARQL queries. Personally, I prefer exploring how to translate natural language to Cypher query language.\n",
    "In my experiments, I have noticed there are two approaches to developing an LLM flow that constructs Cypher statements. One option is to provide example queries in the prompt or use the examples to finetune an LLM model. However, the limitation of this approach is that it requires some work to produce the Cypher examples. Therefore, the example Cypher queries must be generated for each graph schema. On the other hand, we can provide an LLM directly with schema information and let it construct Cypher statements based on graph schema information alone. Using the second approach, we could develop a generic Cypher statement model to produce Cypher statements for any input graph schema, as we eliminate the need for any additional work like generating example Cypher statements.\n",
    "This blog post aims to show you how to implement a Cypher statement-generating model by providing only the graph schema information. We will evaluate the model's Cypher construction capabilities on three graphs with different graph schemas. Currently, the only model I recommend to generate Cypher statements based on only the provided graph schema is GPT-4. Other models like GPT-3.5-turbo or text-davinci-003 aren't that great, and I have yet to find an open-source LLM model that would be good at following instructions in the prompt as well as GPT-4.\n",
    "## Experiment Setup\n",
    "I have implemented a Python class that connects to a Neo4j instance and fetches the schema information when initialized. The graph schema information can then be used as input to GPT-4 model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "67N5Q5-CmuG8"
   },
   "outputs": [],
   "source": [
    "node_properties_query = \"\"\"\n",
    "CALL apoc.meta.data()\n",
    "YIELD label, other, elementType, type, property\n",
    "WHERE NOT type = \"RELATIONSHIP\" AND elementType = \"node\"\n",
    "WITH label AS nodeLabels, collect(property) AS properties\n",
    "RETURN {labels: nodeLabels, properties: properties} AS output\n",
    "\n",
    "\"\"\"\n",
    "\n",
    "rel_properties_query = \"\"\"\n",
    "CALL apoc.meta.data()\n",
    "YIELD label, other, elementType, type, property\n",
    "WHERE NOT type = \"RELATIONSHIP\" AND elementType = \"relationship\"\n",
    "WITH label AS nodeLabels, collect(property) AS properties\n",
    "RETURN {type: nodeLabels, properties: properties} AS output\n",
    "\"\"\"\n",
    "\n",
    "rel_query = \"\"\"\n",
    "CALL apoc.meta.data()\n",
    "YIELD label, other, elementType, type, property\n",
    "WHERE type = \"RELATIONSHIP\" AND elementType = \"node\"\n",
    "RETURN {source: label, relationship: property, target: other} AS output\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "IHY0Kt2-mFFq"
   },
   "outputs": [],
   "source": [
    "from neo4j import GraphDatabase\n",
    "from neo4j.exceptions import CypherSyntaxError\n",
    "import openai\n",
    "\n",
    "\n",
    "def schema_text(node_props, rel_props, rels):\n",
    "    return f\"\"\"\n",
    "  This is the schema representation of the Neo4j database.\n",
    "  Node properties are the following:\n",
    "  {node_props}\n",
    "  Relationship properties are the following:\n",
    "  {rel_props}\n",
    "  Relationship point from source to target nodes\n",
    "  {rels}\n",
    "  Make sure to respect relationship types and directions\n",
    "  \"\"\"\n",
    "\n",
    "\n",
    "class Neo4jGPTQuery:\n",
    "    def __init__(self, url, user, password, openai_api_key):\n",
    "        self.driver = GraphDatabase.driver(url, auth=(user, password))\n",
    "        openai.api_key = openai_api_key\n",
    "        # construct schema\n",
    "        self.schema = self.generate_schema()\n",
    "\n",
    "\n",
    "    def generate_schema(self):\n",
    "        node_props = self.query_database(node_properties_query)\n",
    "        rel_props = self.query_database(rel_properties_query)\n",
    "        rels = self.query_database(rel_query)\n",
    "        return schema_text(node_props, rel_props, rels)\n",
    "\n",
    "    def refresh_schema(self):\n",
    "        self.schema = self.generate_schema()\n",
    "\n",
    "    def get_system_message(self):\n",
    "        return f\"\"\"\n",
    "        Task: Generate Cypher queries to query a Neo4j graph database based on the provided schema definition.\n",
    "        Instructions:\n",
    "        Use only the provided relationship types and properties.\n",
    "        Do not use any other relationship types or properties that are not provided.\n",
    "        If you cannot generate a Cypher statement based on the provided schema, explain the reason to the user.\n",
    "        Schema:\n",
    "        {self.schema}\n",
    "\n",
    "        Note: Do not include any explanations or apologies in your responses.\n",
    "        \"\"\"\n",
    "\n",
    "    def query_database(self, neo4j_query, params={}):\n",
    "        with self.driver.session() as session:\n",
    "            result = session.run(neo4j_query, params)\n",
    "            output = [r.values() for r in result]\n",
    "            output.insert(0, result.keys())\n",
    "            return output\n",
    "\n",
    "    def construct_cypher(self, question, history=None):\n",
    "        messages = [\n",
    "            {\"role\": \"system\", \"content\": self.get_system_message()},\n",
    "            {\"role\": \"user\", \"content\": question},\n",
    "        ]\n",
    "        # Used for Cypher healing flows\n",
    "        if history:\n",
    "            messages.extend(history)\n",
    "\n",
    "        completions = openai.ChatCompletion.create(\n",
    "            model=\"gpt-4\",\n",
    "            temperature=0.0,\n",
    "            max_tokens=1000,\n",
    "            messages=messages\n",
    "        )\n",
    "        return completions.choices[0].message.content\n",
    "\n",
    "    def run(self, question, history=None, retry=True):\n",
    "        # Construct Cypher statement\n",
    "        cypher = self.construct_cypher(question, history)\n",
    "        print(cypher)\n",
    "        try:\n",
    "            return self.query_database(cypher)\n",
    "        # Self-healing flow\n",
    "        except CypherSyntaxError as e:\n",
    "            # If out of retries\n",
    "            if not retry:\n",
    "              return \"Invalid Cypher syntax\"\n",
    "        # Self-healing Cypher flow by\n",
    "        # providing specific error to GPT-4\n",
    "            print(\"Retrying\")\n",
    "            return self.run(\n",
    "                question,\n",
    "                [\n",
    "                    {\"role\": \"assistant\", \"content\": cypher},\n",
    "                    {\n",
    "                        \"role\": \"user\",\n",
    "                        \"content\": f\"\"\"This query returns an error: {str(e)} \n",
    "                        Give me a improved query that works without any explanations or apologies\"\"\",\n",
    "                    },\n",
    "                ],\n",
    "                retry=False\n",
    "            )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LXk1Uy-vZ7cm"
   },
   "source": [
    "It's interesting how I ended with the final system message to get GPT-4 following my instructions. At first, I wrote my directions as plain text and added some constraints. However, the model wasn't doing exactly what I wanted, so I opened ChatGPT in a web browser and asked it to rewrite my instructions in a manner that GPT-4 would understand. Finally, ChatGPT seems to understand what works best as GPT-4 prompts, as the model behaved much better with this new prompt structure.\n",
    "\n",
    "\n",
    "The GPT-4 model uses the ChatCompletion endpoint, which uses a combination of system, user, and optional assistant messages when we want to ask follow-up questions. So, we always start with only the system and user message. However, if the generated Cypher statement has any syntax error, the self-healing flow will be started, where we include the error in the follow-up question so that GPT-4 can fix the query. Therefore, we have included the optional history parameter for Cypher self-healing flow.\n",
    "\n",
    "The run function starts by generating a Cypher statement. Then, the generated Cypher statement is used to query the Neo4j database. If the Cypher syntax is valid, the query results are returned. However, suppose there is a Cypher syntax error. In that case, we do a single follow-up to GPT-4, provide the generated Cypher statement it constructed in the previous call, and include the error from the Neo4j database. GPT-4 is quite good at fixing a Cypher statement when provided with the error.\n",
    "\n",
    "The self-healing Cypher flow was inspired by others who have used similar flows for Python and other code. However, I have limited the follow-up Cypher healing to only a single iteration. If the follow-up doesn't provide a valid Cypher statement, the function returns the \"Invalid Cypher syntax response\".\n",
    "Let's now test the capabilities of GPT-4 to construct Cypher statements based on the provided graph schema only."
   ]
  },
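  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the message ordering concrete, a single self-healing round sends the following sequence to the ChatCompletion endpoint (a schematic sketch of what the code above constructs):\n",
    "\n",
    "```\n",
    "[system]    Task: Generate Cypher queries ... Schema: ...\n",
    "[user]      <original question>\n",
    "[assistant] <first generated Cypher statement>\n",
    "[user]      This query returns an error: <CypherSyntaxError details> ...\n",
    "```"
   ]
  },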
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "CGCS37mZushD"
   },
   "source": [
    "We will begin with a simple airport route graph, which is available [as the GDS project in Neo4j Sandbox](https://sandbox.neo4j.com/?usecase=graph-data-science2).\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Lc7d99b6azdm"
   },
   "source": [
    "![Screenshot from 2023-04-26 20-33-45.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qxzVgleva2dp"
   },
   "source": [
    "This graph schema is relatively simple. The graph contains information about airports and their routes. Additionally, information about the airport's city, region, country, and continent is stored as separate nodes.\n",
    "\n",
    "We can instantiate the Python class used to query the airport graph with the following Python code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "id": "7hg23hmtqIvD"
   },
   "outputs": [],
   "source": [
    "openai_key = \"INSERT_OPENAI_API_KEY\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "id": "NZTTW3TkpKFY"
   },
   "outputs": [],
   "source": [
    "gds_db = Neo4jGPTQuery(\n",
    "    url=\"bolt://18.207.187.166:7687\",\n",
    "    user=\"neo4j\",\n",
    "    password=\"preferences-accomplishments-vent\",\n",
    "    openai_api_key=openai_key,\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "aD4z9PW7a8i-"
   },
   "source": [
    "Now we can begin our experiment. First, we will begin with a simple question."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "TyBNV92QqeUn",
    "outputId": "fca146ab-ac24-4abc-9d6b-3db3db0d4f8d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (a:Airport)-[:IN_CITY]->(c:City)\n",
      "RETURN c.name AS City, COUNT(a) AS NumberOfAirports\n",
      "ORDER BY NumberOfAirports DESC\n",
      "LIMIT 1\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['City', 'NumberOfAirports'], ['London', 6]]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gds_db.run(\"\"\"\n",
    "What is the city with the most airports?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Qm67jPVaa-u4"
   },
   "source": [
    "Great start. The Cypher statement was correctly generated, and we found that London has six airports. Next, let's try something more complex."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "QLbBT4zVrkgS",
    "outputId": "c9a4e5b8-7761-43fe-df29-7829a14d9783"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (a:Airport)-[r:HAS_ROUTE]->(:Airport)\n",
      "WITH a, count(r) as num_flights\n",
      "RETURN min(num_flights) as min_flights, max(num_flights) as max_flights, avg(num_flights) as avg_flights, stDev(num_flights) as stddev_flights\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['min_flights', 'max_flights', 'avg_flights', 'stddev_flights'],\n",
       " [1, 307, 20.905362776025285, 38.28730861505158]]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gds_db.run(\"\"\"\n",
    "calculate the minimum, maximum, average, and standard deviation of the number of flights out of each airport.\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Ci3zSR8ZbBN9"
   },
   "source": [
    "Quite nice. The GPT-4 model correctly assumed that flights relate to the HAS_ROUTE relationship. Additionally, it accurately aggregates flights per airport, then calculates the specified metrics.\n",
    "\n",
    "Let's now throw it a curveball. We will ask the model to calculate the variance since Cypher doesn't have any built-in function to calculate the variance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 111
    },
    "id": "OpnkI_laEMda",
    "outputId": "8ecb6490-5e61-4d94-f803-2d10df74776b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The provided schema does not have information about the number of flights for each airport. Therefore, it is not possible to calculate the variance of the number of flights out of each airport using the given schema.\n",
      "Retrying\n",
      "As mentioned earlier, the provided schema does not have information about the number of flights for each airport. Therefore, it is not possible to create a query to calculate the variance of the number of flights out of each airport using the given schema.\n"
     ]
    },
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "type": "string"
      },
      "text/plain": [
       "'Invalid Cypher syntax'"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gds_db.run(\"\"\"\n",
    "calculate the variance of the number of flights out of each airport.\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PHk4gjdebFEg"
   },
   "source": [
    "First of all, GPT-4 provided explanations when explicitly told not to. Secondly, neither Cypher statements make any sense. In this example, even the self-healing flow didn't succeed since we are not dealing with a Cypher syntax error but a GPT-4 system malfunction.\n",
    "\n",
    "I have noticed that GPT-4 struggles when it needs to perform multiple aggregations using different grouping keys in a single Cypher statement. Here it wanted to split the statement into two parts (which don't work either), but in other cases it wants to borrow syntax from SQL.\n",
    "\n",
    "However, GPT-4 is quite obedient and provides the specified results from the database as instructed by the user."
   ]
  },
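  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the variance can still be computed in Cypher by hand, since the population variance equals the average of the squared values minus the square of the average. A manually written statement (not produced by GPT-4) could look like this:\n",
    "\n",
    "```\n",
    "MATCH (a:Airport)-[r:HAS_ROUTE]->(:Airport)\n",
    "WITH a, count(r) AS num_flights\n",
    "RETURN avg(num_flights * num_flights) - avg(num_flights)^2 AS variance\n",
    "```"
   ]
  },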
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "h_IxUkoIykhd",
    "outputId": "f6e60bb7-7a0c-4d5a-f7a7-448adda75d26"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (atl:Airport {iata: \"ATL\"}), (iah:Airport {iata: \"IAH\"}), path = shortestPath((atl)-[:HAS_ROUTE*]-(iah))\n",
      "WITH nodes(path) AS airports\n",
      "UNWIND airports AS airport\n",
      "RETURN {iata: airport.iata, runways: airport.runways} AS airportInfo\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['airportInfo'],\n",
       " [{'iata': 'ATL', 'runways': 5}],\n",
       " [{'iata': 'IAH', 'runways': 5}]]"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gds_db.run(\"\"\"\n",
    "Find the shortest route between ATL and IAH airports\n",
    "and return only the iata and runways property of the nodes as a map object\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "e2VDhhXxbJyR"
   },
   "source": [
    "Here is where the power of GPT-4 shines. The more specific we are in what we want to find and how we want the results to be structured, the better it works.\n",
    "We can also test if it knows how to use the GDS library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "7S6rkSYzufWM",
    "outputId": "b5d75083-5198-4ca6-b873-e7718f735f73"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CALL gds.betweenness.stream({\n",
      "  nodeProjection: 'Airport',\n",
      "  relationshipProjection: {\n",
      "    HAS_ROUTE: {\n",
      "      type: 'HAS_ROUTE',\n",
      "      orientation: 'UNDIRECTED'\n",
      "    }\n",
      "  }\n",
      "})\n",
      "YIELD nodeId, score\n",
      "RETURN gds.util.asNode(nodeId).id AS airportId, score\n",
      "ORDER BY score DESC\n"
     ]
    }
   ],
   "source": [
    "print(gds_db.construct_cypher(\"\"\"\n",
    "Calculate the betweenness centrality of airports using the Graph Data Science library\n",
    "\"\"\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "55cxuibdbNhl"
   },
   "source": [
    "Well, the constructed Cypher statement looks fine. However, there is only one problem. The generated Cypher statement uses the anonymous graph projection, which was deprecated and removed in GDS v2. Here we see some issues arising from GPT-4's knowledge cutoff date. Unfortunately, it looks like GDS v2 was released after the knowledge cutoff date, and therefore the new syntax is not baked into GPT-4. Therefore, at the moment, the GPT-4 model doesn't provide valid GDS procedures.\n",
    "\n",
    "If you pay attention, you will also notice that GPT-4 never uses the Cypher subquery syntax, which is again another syntax change that was added after the knowledge cutoff date.\n",
    "\n",
    "Interestingly, if you calculate any of the values from graph algorithms and store them as node property, the GPT-4 has no problem retrieving that."
   ]
  },
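  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, GDS v2 requires a named graph projection before running an algorithm. A hand-written equivalent in the newer syntax (not produced by GPT-4) would look roughly like this:\n",
    "\n",
    "```\n",
    "CALL gds.graph.project('airports', 'Airport',\n",
    "  {HAS_ROUTE: {orientation: 'UNDIRECTED'}});\n",
    "CALL gds.betweenness.stream('airports')\n",
    "YIELD nodeId, score\n",
    "RETURN gds.util.asNode(nodeId).id AS airportId, score\n",
    "ORDER BY score DESC;\n",
    "```"
   ]
  },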
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "6DPu-MyYwFa7",
    "outputId": "b3829b72-16a5-40d9-ac95-61d9d11ce17b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (a:Airport)\n",
      "RETURN a.descr, a.pagerank\n",
      "ORDER BY a.pagerank DESC\n",
      "LIMIT 5\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['a.descr', 'a.pagerank'],\n",
       " ['Dallas/Fort Worth International Airport', 11.97978260670334],\n",
       " [\"Chicago O'Hare International Airport\", 11.162988178920267],\n",
       " ['Denver International Airport', 10.997299338126387],\n",
       " ['Hartsfield - Jackson Atlanta International Airport', 10.389948350302957],\n",
       " ['Istanbul International Airport', 8.42580121770578]]"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gds_db.run(\"\"\"\n",
    "Use PageRank to find the five most important airports and return their descr and pagerank value\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PtyWWubP4xLh"
   },
   "source": [
    "It looks like Dallas and Chicago have the highest PageRank scores.\n",
    "## Healthcare sandbox\n",
    "You might say that the airport sandbox might have been part of the training data of GPT-4. That is definitely a possibility. Therefore, let's test GPT-4 ability to construct Cypher statements on the latest Neo4j Sandbox project dealing with healthcare data, published between December 2022 and January 2023. That should be after the GPT-4 knowledge cutoff date.\n",
    "\n",
    "![Screenshot from 2023-04-26 22-10-30.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "D2U9HoJobWBi"
   },
   "source": [
    "The healthcare graph schema revolves around adverse drug event cases. Therefore, each case is related to relevant drugs. In addition, other information is available such as the age group, outcome, and reaction. Here, I took the examples from the sandbox guide as I am not familiar with the adverse drug events domain."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "id": "UsKiPnx1wy8Z"
   },
   "outputs": [],
   "source": [
    "hc_db = Neo4jGPTQuery(\n",
    "    url=\"bolt://3.216.123.73:7687\",\n",
    "    user=\"neo4j\",\n",
    "    password=\"reenlistment-superstructures-shafts\",\n",
    "    openai_api_key=openai_key,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "ch0QpKzb491V",
    "outputId": "14eecb9d-cc15-44a6-bca5-34cefdb02a1a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (c:Case)-[:HAS_REACTION]->(r:Reaction)\n",
      "RETURN r.description as SideEffect, COUNT(*) as Frequency\n",
      "ORDER BY Frequency DESC\n",
      "LIMIT 5\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['SideEffect', 'Frequency'],\n",
       " ['Fatigue', 303],\n",
       " ['Product dose omission issue', 285],\n",
       " ['Headache', 272],\n",
       " ['Nausea', 256],\n",
       " ['Pain', 253]]"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "hc_db.run(\"\"\"\n",
    "What are the top 5 side effects reported?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Tapjpf0HbaO6"
   },
   "source": [
    "It would be interesting to learn how did GPT-4 know that side effects can be found as the Reaction nodes. Even I couldn't find that without any details about the graph. Are there graph out there with similar schema, or is the knowledge cutoff date of GPT-4 not that accurate? Or does it only have great intuition to find relevant data based on node labels and their properties.\n",
    "\n",
    "Let's try something more complex."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "bMEOMI0n5aT5",
    "outputId": "7d91e1a7-69ca-4d9c-c9e1-71401113f74c"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (m:Manufacturer)-[:REGISTERED]->(c:Case)-[:HAS_REACTION]->(r:Reaction)\n",
      "RETURN m.manufacturerName, COUNT(r) as sideEffectsCount\n",
      "ORDER BY sideEffectsCount DESC\n",
      "LIMIT 3\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['m.manufacturerName', 'sideEffectsCount'],\n",
       " ['TAKEDA', 5058],\n",
       " ['PFIZER', 3219],\n",
       " ['NOVARTIS', 1823]]"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "hc_db.run(\"\"\"\n",
    "What are the top 3 manufacturing companies with the most reported side effects?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7-uxtXJWbeGH"
   },
   "source": [
    "Here, we can see that GPT-4 is very specific in our request. Since we are asking for the count of reported side effects, it expands to Reaction nodes and counts them. On the other hand, we could request only the number of cases."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "gUBIsss9PXLd",
    "outputId": "3fdf29a7-e4fb-4985-bc57-ece657f4762b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (m:Manufacturer)-[:REGISTERED]->(c:Case)\n",
      "RETURN m.manufacturerName, COUNT(c) as case_count\n",
      "ORDER BY case_count DESC\n",
      "LIMIT 3\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['m.manufacturerName', 'case_count'],\n",
       " ['TAKEDA', 617],\n",
       " ['CELGENE', 572],\n",
       " ['PFIZER', 513]]"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "hc_db.run(\"\"\"\n",
    "What are the top 3 manufacturing companies with the most reported cases?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qjwYZA2GbgOk"
   },
   "source": [
    "Now, lets do something where GPT-4 has to do both filtering and aggregating."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "0SVUT8l95sVo",
    "outputId": "1f6d09a5-b3d4-47bc-d48d-042b398a81ca"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (d:Drug)-[:IS_PRIMARY_SUSPECT|:IS_SECONDARY_SUSPECT|:IS_CONCOMITANT|:IS_INTERACTING]->(c:Case)-[:RESULTED_IN]->(o:Outcome)\n",
      "WHERE o.outcome = \"Death\"\n",
      "RETURN d.name, COUNT(*) as count\n",
      "ORDER BY count DESC\n",
      "LIMIT 5\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['d.name', 'count']]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "hc_db.run(\"\"\"\n",
    "What are the top 5 drugs whose side effects resulted in Death of patients as an outcome?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "EdFQGZADTTCW"
   },
   "source": [
    "Something that happens sometimes is that GPT-4 messes up the relationship direction. For example, the relationships from the Drug to the Case node should have a reverse direction. Additionally, the Sandbox guide uses only the IS_PRIMARY_SUSPECT relationship type, but we can't blame the GPT-4 model due to the question's ambiguity.\n",
    "\n",
    "Note that GPT-4 is not deterministic. Therefore, it may return correct relationship directions and sometimes not. For me, it worked correctly one day and not the other. However, I got consistent results within the same day, so who knows what is happening behind the scenes.\n",
    "\n",
    "What I found interesting is that the GPT-4 model knew that the outcome property contains information about the death of patients. But more than that, it knew that the death value should be capitalized, which makes me think the model saw this dataset in one form or another.\n",
    "\n",
    "## Custom astronomical dataset\n",
    "I have decided to construct a custom astronomical dataset that the model definitely hasn't seen during its training since it didn't exist until I started writing this post. It is very tiny, but good enough to test out GPT-4 generalization ability. I have created a [blank project on Neo4j Sandbox](https://sandbox.neo4j.com/?usecase=blank-sandbox) and then seeded the database with the following script."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "id": "3Ceh2OEh6JMR"
   },
   "outputs": [],
   "source": [
    "astro_db = Neo4jGPTQuery(\n",
    "    url=\"bolt://35.171.160.87:7687\",\n",
    "    user=\"neo4j\",\n",
    "    password=\"discontinuance-fifths-sports\",\n",
    "    openai_api_key=openai_key,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "id": "w8GjNAPnTd8k"
   },
   "outputs": [],
   "source": [
    "url = \"https://gist.githubusercontent.com/tomasonjo/52b2da916ef5cd1c2adf0ad62cc71a26/raw/a3a8716f7b28f3a82ce59e6e7df28389e3cb33cb/astro.cql\"\n",
    "astro_db.query_database(\"CALL apoc.cypher.runFile($url)\", {'url':url})\n",
    "astro_db.refresh_schema()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "pRsVNCkxbt2Z"
   },
   "source": [
    "The constructed graph has the following schema.\n",
    "\n",
    "![Screenshot from 2023-04-26 22-43-17.png]()\n",
    "\n",
    "The database contains planets within our Solar System that orbit the Sun. Additionally, satellites like the ISS, the Moon, and the Hubble Space Telescope are included."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "i-nFUrbfT7Zp",
    "outputId": "8aeace90-e748-46c4-e5a1-0fba731e8b96"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (ao:AstronomicalObject {name: \"Earth\"})<-[:ORBITS]-(o)\n",
      "RETURN o.name, labels(o)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['o.name', 'labels(o)'],\n",
       " ['Hubble Space Telescope', ['Satellite']],\n",
       " ['ISS', ['Satellite']],\n",
       " ['Moon', ['Satellite']]]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "astro_db.run(\"\"\"\n",
    "What orbits the Earth?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "YQXN8CRNb1j1"
   },
   "source": [
    "Remember, GPT-4 only knows that there are satellites and astronomical objects in the database: astronomical objects orbit other astronomical objects, while satellites can only orbit astronomical objects. It looks like the model used its internal knowledge to assume that only satellites would orbit the Earth, which is impressive. We can observe that GPT-4 probably makes a lot of assumptions based on its baked-in knowledge to help us with our queries.\n",
    "\n",
    "Let's dig deeper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "hU7agLmMUUOa",
    "outputId": "f842f49c-c764-49ac-cc11-39e0720ac178"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH (s:Satellite {name: \"ISS\"})-[:ORBITS]->(a:AstronomicalObject {name: \"Sun\"})\n",
      "RETURN s.name as Satellite, a.name as AstronomicalObject\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['Satellite', 'AstronomicalObject']]"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "astro_db.run(\"\"\"\n",
    "Does ISS orbits the Sun?\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XYizIkKHb4JP"
   },
   "source": [
    "So, the ISS doesn't directly orbit the Sun. We can rephrase our question."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "l6RY0-1jXyyK",
    "outputId": "b244c722-09b4-41d8-960b-f28bd785c3bf"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "MATCH path = (iss:Satellite {name: \"ISS\"})-[:ORBITS*]->(sun:AstronomicalObject {name: \"Sun\"})\n",
      "RETURN [node in nodes(path) | node.name] AS path_names\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[['path_names'], [['ISS', 'Earth', 'Sun']]]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "astro_db.run(\"\"\"\n",
    "Does ISS orbits the Sun? Find any path between them\n",
    "and return names of nodes in the path\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iIwY3cBtb7X3"
   },
   "source": [
    "Now, it uses a variable-length path pattern to check whether the ISS orbits the Sun by proxy. Of course, we gave it a hint to use that, but it is still remarkable. For the final example, let's observe how good GPT-4 is at guessing never-before-seen property values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 239
    },
    "id": "_gdrePY_b_wz",
    "outputId": "66b2469c-0848-49b9-de21-942e214bf0e3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "To find the altitude difference between ISS and Hubble telescope, you can use the following Cypher query:\n",
      "\n",
      "```cypher\n",
      "MATCH (s1:Satellite {name: \"ISS\"}), (s2:Satellite {name: \"Hubble Telescope\"})\n",
      "RETURN abs(s1.altitude - s2.altitude) as altitude_difference\n",
      "```\n",
      "Retrying\n",
      "```cypher\n",
      "MATCH (s1:Satellite {name: \"ISS\"}), (s2:Satellite {name: \"Hubble Telescope\"})\n",
      "RETURN abs(s1.altitude - s2.altitude) as altitude_difference\n",
      "```\n"
     ]
    },
    {
     "data": {
      "application/vnd.google.colaboratory.intrinsic+json": {
       "type": "string"
      },
      "text/plain": [
       "'Invalid Cypher syntax'"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "astro_db.run(\"\"\"\n",
    "What's the altitude difference between ISS and Hubble telescope\n",
    "\"\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Gq1a80BHb-Oc"
   },
   "source": [
    "To tell you the truth, I am kind of relieved GPT-4 didn't correctly guess that Hubble is stored in the database as \"Hubble Space Telescope\". Other than that, the generated Cypher statement is perfectly valid.\n",
    "\n",
    "## Summary\n",
    "\n",
    "GPT-4 has great potential to generate Cypher statements based only on the provided graph schema. My take is that it saw many datasets and graph models during training, so it is quite good at guessing which properties to use and sometimes even their values. However, if the model isn't performing well on your specific graph model, you can always instruct it about which properties to use and specify the exact values. The limitations I observed during this experiment are the following:\n",
    "* Multiple aggregations with different grouping keys are a problem\n",
    "* Version two of the Graph Data Science library is beyond the knowledge cutoff date\n",
    "* Sometimes it messes up the relationship direction (not frequently, though)\n",
    "* The non-deterministic nature of GPT-4 makes it feel like you are dealing with a horoscope-based model, where identical queries work in the morning but not in the afternoon\n",
    "* Sometimes the model bypasses system instructions and provides explanations for queries\n",
    "\n",
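    "One mitigation for the last point is to strip the markdown fences before running the reply against the database. The helper below is a sketch of my own (it is not part of the `Neo4jGPTQuery` class shown earlier):\n",
    "\n",
    "```python\n",
    "def extract_cypher(text):\n",
    "    # If the model wrapped its answer in a fenced code block (possibly\n",
    "    # tagged 'cypher') and surrounded it with prose, keep only the query.\n",
    "    fence = '`' * 3\n",
    "    if fence in text:\n",
    "        body = text.split(fence)[1]  # contents of the first fence\n",
    "        if body.startswith('cypher'):\n",
    "            body = body[len('cypher'):]\n",
    "        return body.strip()\n",
    "    return text.strip()\n",
    "\n",
    "reply = 'Use this query:\\n' + '`' * 3 + 'cypher\\nMATCH (n) RETURN n\\n' + '`' * 3\n",
    "extract_cypher(reply)  # 'MATCH (n) RETURN n'\n",
    "```\n",
    "\n",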
    "The schema-only approach can be useful in experimental setups, helping developers or researchers without malicious intent interact with a graph database. On the other hand, if you want to build something more production-ready, I would recommend providing the model with example Cypher statements."
   ]
  },
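  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the few-shot approach recommended above could look something like the following sketch; the prompt wording, the `few_shot_prompt` helper, and the example pair are made up for illustration:\n",
    "\n",
    "```python\n",
    "def few_shot_prompt(schema, examples):\n",
    "    # Hypothetical prompt builder: prepending question/Cypher pairs nudges the\n",
    "    # model toward the exact labels, directions, and property values you use.\n",
    "    lines = ['Translate questions into Cypher. Schema:', schema, 'Examples:']\n",
    "    for question, cypher in examples:\n",
    "        lines.append('Q: ' + question)\n",
    "        lines.append('Cypher: ' + cypher)\n",
    "    return '\\n'.join(lines)\n",
    "\n",
    "examples = [\n",
    "    ('What orbits the Earth?',\n",
    "     'MATCH (o)-[:ORBITS]->(:AstronomicalObject {name: \"Earth\"}) RETURN o.name'),\n",
    "]\n",
    "prompt = few_shot_prompt('(:Satellite)-[:ORBITS]->(:AstronomicalObject)', examples)\n",
    "```"
   ]
  },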
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "id": "U_WPbddjWVEc"
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "authorship_tag": "ABX9TyMV47wMTNlwFnmIA1EX1mSX",
   "include_colab_link": true,
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
