{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyPNEWtbSUV0s+fvxXH5rm34",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/tomasonjo/blogs/blob/master/llm/graph_based_prefiltering.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "id": "ih-k1Y4QycWI"
      },
      "outputs": [],
      "source": [
        "!pip install --quiet langchain langchain-community langchain-openai neo4j"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "from typing import Dict, List, Optional, Tuple, Type\n",
        "\n",
        "from langchain.agents import AgentExecutor\n",
        "from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
        "from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
        "from langchain.callbacks.manager import CallbackManagerForToolRun\n",
        "from langchain.pydantic_v1 import BaseModel, Field\n",
        "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
        "from langchain_core.messages import AIMessage, HumanMessage\n",
        "from langchain_core.utils.function_calling import convert_to_openai_function\n",
        "from langchain.tools import BaseTool\n",
        "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
        "\n",
        "from langchain_community.graphs import Neo4jGraph\n",
        "from langchain_community.vectorstores import Neo4jVector\n",
        "from langchain_community.vectorstores.neo4j_vector import remove_lucene_chars"
      ],
      "metadata": {
        "id": "K2Q6_5oh8N0w"
      },
      "execution_count": 2,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Graph-based metadata filtering for improving vector search in RAG applications\n",
        "## Optimizing vector retrieval with advanced graph-based metadata techniques using LangChain and Neo4j\n",
        "Text embeddings and vector similarity search help us find documents by their meaning and their similarity to each other. However, text embeddings are less effective when filtering information by structured criteria such as dates or categories; for example, when you need to find all documents created in a particular year or tagged under a specific category like \"science fiction.\" This is where metadata filtering, also known as filtered vector search, comes into play: it handles those structured filters effectively, allowing users to narrow their search results to specific attributes.\n",
        "\n",
        "![prefiltering.png]()\n"
      ],
      "metadata": {
        "id": "arowEhz87fB2"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "In the image provided, the process starts with a user asking whether any new policies were implemented in 2021. A metadata filter is then used to sort through a larger pool of indexed documents by the specified year, in this case 2021, resulting in a filtered subset of documents from that year only. To home in on the most relevant documents, a vector similarity search is performed within this subset. This allows the system to find documents closely related to the topic of interest within the contextually relevant pool of documents from 2021. This two-step process, metadata filtering followed by vector similarity search, increases the accuracy and relevance of the search results.\n",
        "\n",
        "Recently, we introduced [LangChain support for metadata filtering in Neo4j](https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/#metadata-filtering) based on node properties. However, graph databases like Neo4j can store highly complex and connected structured data alongside unstructured data.\n",
        "\n",
        "The unstructured part of the dataset represents articles and their text chunks. Text chunk nodes contain the text and its embedding and are linked to article nodes, which hold more information about the article, such as the date, sentiment, author, etc. The articles are further linked to the organizations they mention. In this example, the article mentions Neo4j. Additionally, our dataset includes a wealth of structured information about Neo4j, such as its investors, board members, suppliers, and beyond.\n",
        "Thus, we can leverage this extensive structured information to execute sophisticated metadata filtering, allowing us to precisely refine our document selection using structured criteria such as:\n",
        "* Did any of the companies where Rod Johnson is a board member implement a new work-from-home policy?\n",
        "* Is there any negative news about companies that Neo4j invested in?\n",
        "* Was there any notable news in connection with supply chain problems for companies that supply Hyundai?\n",
        "\n",
        "With all these example questions, you can greatly narrow down the relevant document subset using a structured graph-based metadata filter.\n",
        "In this blog post, I will show you how to implement graph-based metadata filtering using LangChain in combination with an OpenAI function-calling agent.\n",
        "## Agenda\n",
        "We will use the so-called companies graph dataset, available on a public demo server hosted by Neo4j.\n",
        "\n",
        "The graph schema revolves around Organization nodes. There is vast information available regarding their suppliers, competitors, location, board members, and more. As mentioned before, there are also articles mentioning particular organizations, with their corresponding text chunks.\n",
        "We will implement an OpenAI agent with a single tool, which can dynamically generate Cypher statements based on user input and retrieve relevant text chunks from the graph database. In this example, the tool will have four optional input parameters:\n",
        "* topic: Any specific information or topic besides organization, country, and sentiment that the user is interested in.\n",
        "* organization: Organization that the user wants to find information about.\n",
        "* country: Country of the organizations that the user is interested in. Use full names like United States of America and France.\n",
        "* sentiment: Sentiment of the articles; only positive or negative values are used.\n",
        "\n",
        "Based on the four input parameters, we will dynamically, but deterministically, construct a corresponding Cypher statement to retrieve relevant information from the graph and use it as context to generate the final answer using an LLM.\n",
        "You will require an OpenAI API key to follow along with the code.\n",
        "## Function Implementation\n",
        "We will begin by defining credentials and relevant connections to Neo4j."
      ],
      "metadata": {
        "id": "6ji1PYt-7nXf"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "\n",
        "os.environ[\"OPENAI_API_KEY\"] = \"sk-\"\n",
        "os.environ[\"NEO4J_URI\"] = \"neo4j+s://demo.neo4jlabs.com\"\n",
        "os.environ[\"NEO4J_USERNAME\"] = \"companies\"\n",
        "os.environ[\"NEO4J_PASSWORD\"] = \"companies\"\n",
        "os.environ[\"NEO4J_DATABASE\"] = \"companies\""
      ],
      "metadata": {
        "id": "cbSBFWv6ynpn"
      },
      "execution_count": 3,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "As mentioned, we will use OpenAI embeddings, for which you need an OpenAI API key. Next, we define the graph connection to Neo4j, which allows us to execute arbitrary Cypher statements. Lastly, we instantiate a Neo4jVector connection, which retrieves information by querying the existing vector index. At the time of writing, the vector index cannot be combined with the pre-filtering approach; only post-filtering can be applied in combination with the vector index. However, a discussion of post-filtering is beyond the scope of this article, as we will focus on pre-filtering combined with an exhaustive vector similarity search."
      ],
      "metadata": {
        "id": "SIgvWglM8zWq"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "embeddings = OpenAIEmbeddings()\n",
        "graph = Neo4jGraph()\n",
        "vector_index = Neo4jVector.from_existing_index(\n",
        "    embeddings,\n",
        "    index_name=\"news\"\n",
        ")"
      ],
      "metadata": {
        "id": "wBMSBXvY8Ql_"
      },
      "execution_count": 4,
      "outputs": []
    },
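    {
      "cell_type": "markdown",
      "source": [
        "As a brief aside, the node-property metadata filtering mentioned earlier is exposed through the `filter` argument of `similarity_search`; when a filter is supplied, LangChain falls back to an exhaustive similarity search instead of the vector index. The sketch below is purely illustrative and assumes a hypothetical `year` property on the indexed `Chunk` nodes, which is why it is left commented out."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Illustrative sketch: restrict the search to chunks with a hypothetical `year`\n",
        "# property before ranking them by vector similarity. Uncomment only if such a\n",
        "# property exists on the indexed nodes.\n",
        "# vector_index.similarity_search(\n",
        "#     \"new policies\",\n",
        "#     k=4,\n",
        "#     filter={\"year\": {\"$eq\": 2021}},\n",
        "# )"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },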
    {
      "cell_type": "code",
      "source": [
        "# Code for mapping organizations from user input to database using Full-text index\n",
        "def generate_full_text_query(input: str) -> str:\n",
        "    \"\"\"\n",
        "    Generate a full-text search query for a given input string.\n",
        "\n",
        "    This function constructs a query string suitable for a full-text search.\n",
        "    It processes the input string by splitting it into words and appending a\n",
        "    fuzzy-match operator (~2, allowing up to two edits) to each word, then\n",
        "    combines them using the AND operator. Useful for mapping organizations\n",
        "    from user questions to database values while tolerating misspellings.\n",
        "    \"\"\"\n",
        "    words = [el for el in remove_lucene_chars(input).split() if el]\n",
        "    if not words:  # Guard against inputs containing only Lucene special characters\n",
        "        return \"\"\n",
        "    full_text_query = \"\"\n",
        "    for word in words[:-1]:\n",
        "        full_text_query += f\" {word}~2 AND\"\n",
        "    full_text_query += f\" {words[-1]}~2\"\n",
        "    return full_text_query.strip()\n",
        "\n",
        "candidate_query = \"\"\"\n",
        "CALL db.index.fulltext.queryNodes($index, $fulltextQuery, {limit: $limit})\n",
        "YIELD node\n",
        "WHERE node:Organization // Filter organization nodes\n",
        "RETURN distinct node.name AS candidate\n",
        "\"\"\"\n",
        "\n",
        "\n",
        "def get_candidates(input: str, limit: int = 5) -> List[str]:\n",
        "    \"\"\"\n",
        "    Retrieve a list of candidate entities from database based on the input string.\n",
        "\n",
        "    This function queries the Neo4j database using a full-text search. It takes the\n",
        "    input string, generates a full-text query, and executes this query against the\n",
        "    specified index in the database. The function returns a list of candidates\n",
        "    matching the query.\n",
        "    \"\"\"\n",
        "    ft_query = generate_full_text_query(input)\n",
        "    candidates = graph.query(\n",
        "        candidate_query, {\"fulltextQuery\": ft_query, \"index\": 'entity', \"limit\": limit}\n",
        "    )\n",
        "    # If there is direct match return only that, otherwise return all options\n",
        "    direct_match = [el[\"candidate\"] for el in candidates if el[\"candidate\"].lower() == input.lower()]\n",
        "    if direct_match:\n",
        "        return direct_match\n",
        "\n",
        "    return [el[\"candidate\"] for el in candidates]"
      ],
      "metadata": {
        "id": "FEWb2MoTANNV"
      },
      "execution_count": 5,
      "outputs": []
    },
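    {
      "cell_type": "markdown",
      "source": [
        "To see what the generated Lucene query looks like, we can run `generate_full_text_query` on a sample input: each word receives a `~2` fuzzy-match operator, and the words are joined with `AND`."
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Each word is suffixed with ~2 (fuzzy match, up to two edits) and joined with AND\n",
        "generate_full_text_query(\"neo4j inc\")  # 'neo4j~2 AND inc~2'"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },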
    {
      "cell_type": "code",
      "source": [
        "get_candidates(\"neo4\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "m3U_sk97WLgX",
        "outputId": "8daf9d6f-24a8-464a-87a5-f69bac08cbbf"
      },
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "['Net4', 'Neo4j', 'Neos', 'Neo', 'Neon Software']"
            ]
          },
          "metadata": {},
          "execution_count": 6
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "Most of the work in this blog post boils down to the following `get_organization_news` function, which dynamically generates a Cypher statement and retrieves relevant information."
      ],
      "metadata": {
        "id": "9M_Yh40p88C4"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def get_organization_news(\n",
        "    topic: Optional[str] = None,\n",
        "    organization: Optional[str] = None,\n",
        "    country: Optional[str] = None,\n",
        "    sentiment: Optional[str] = None,\n",
        ") -> str:\n",
        "    # If there is no pre-filtering, we can query the vector index directly\n",
        "    if topic and not organization and not country and not sentiment:\n",
        "        docs = vector_index.similarity_search(topic, k=5)\n",
        "        return \"###Article: \".join(el.page_content for el in docs)\n",
        "    # Uses parallel runtime where available\n",
        "    base_query = (\n",
        "        \"CYPHER runtime = parallel parallelRuntimeSupport=all \"\n",
        "        \"MATCH (c:Chunk)<-[:HAS_CHUNK]-(a:Article) WHERE \"\n",
        "    )\n",
        "    where_queries = []\n",
        "    params = {\"k\": 5}  # Define the number of text chunks to retrieve\n",
        "    if organization:\n",
        "        # Map to database\n",
        "        candidates = get_candidates(organization)\n",
        "        if not candidates:  # No match found in the database\n",
        "            return (\n",
        "                f\"Could not find any organization matching '{organization}' \"\n",
        "                \"in the database. Ask the user to rephrase.\"\n",
        "            )\n",
        "        if len(candidates) > 1:  # Ask for a follow-up if there are too many options\n",
        "            return (\n",
        "                \"Ask a follow-up question to clarify which of the available \"\n",
        "                f\"organizations the user meant. Available options: {candidates}\"\n",
        "            )\n",
        "        where_queries.append(\n",
        "            \"EXISTS {(a)-[:MENTIONS]->(:Organization {name: $organization})}\"\n",
        "        )\n",
        "        params[\"organization\"] = candidates[0]\n",
        "    if country:\n",
        "        # No need to disambiguate\n",
        "        where_queries.append(\n",
        "            \"EXISTS {(a)-[:MENTIONS]->(:Organization)-[:IN_CITY]->()-[:IN_COUNTRY]->(:Country {name: $country})}\"\n",
        "        )\n",
        "        params[\"country\"] = country\n",
        "\n",
        "    if sentiment:\n",
        "        if sentiment == \"positive\":\n",
        "            where_queries.append(\"a.sentiment > $sentiment\")\n",
        "            params[\"sentiment\"] = 0.5\n",
        "        else:\n",
        "            where_queries.append(\"a.sentiment < $sentiment\")\n",
        "            params[\"sentiment\"] = -0.5\n",
        "    if topic:  # Do vector comparison\n",
        "        vector_snippet = (\n",
        "            \" WITH c, a, vector.similarity.cosine(c.embedding,$embedding) AS score \"\n",
        "            \"ORDER BY score DESC LIMIT toInteger($k) \"\n",
        "        )\n",
        "        params[\"embedding\"] = embeddings.embed_query(topic)\n",
        "        params[\"topic\"] = topic\n",
        "    else:  # Just return the latest data\n",
        "        vector_snippet = \" WITH c, a ORDER BY a.date DESC LIMIT toInteger($k) \"\n",
        "\n",
        "    return_snippet = \"RETURN '#title ' + a.title + '\\n#date ' + toString(a.date) + '\\n#text ' + c.text AS output\"\n",
        "\n",
        "    # Use a trivially true predicate if no filters were provided\n",
        "    where_clause = \" AND \".join(where_queries) if where_queries else \"true\"\n",
        "    complete_query = base_query + where_clause + vector_snippet + return_snippet\n",
        "    data = graph.query(complete_query, params)\n",
        "    print(f\"Cypher: {complete_query}\\n\")\n",
        "    # Safely remove embedding before printing\n",
        "    params.pop('embedding', None)\n",
        "    print(f\"Parameters: {params}\")\n",
        "    return \"###Article: \".join([el[\"output\"] for el in data])\n"
      ],
      "metadata": {
        "id": "riQdsgk58S4x"
      },
      "execution_count": 7,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "We begin by defining the input parameters. As you can observe, all of them are optional strings. The topic parameter is used to find specific information within documents. In practice, we embed the value of the topic parameter and use it as input for vector similarity search. The other three parameters will be used to demonstrate the pre-filtering approach.\n",
        "If all of the pre-filtering parameters are empty, we can find the relevant documents using the existing vector index. Otherwise, we start preparing the base Cypher statement that will be used for the pre-filtered metadata approach. The clause `CYPHER runtime = parallel parallelRuntimeSupport=all` instructs the Neo4j database to use parallel runtime where available. Next, we prepare a match statement that selects `Chunk` nodes and their corresponding `Article` nodes.\n",
        "\n",
        "Now we are ready to dynamically append metadata filters to the Cypher statement. We will begin with `Organization` filter.\n",
        "\n",
        "If the LLM identifies any particular organization the user is interested in, we must first map the value to the database with the `get_candidates` function. Under the hood, the `get_candidates` function uses keyword search over a full-text index to find candidate nodes. If multiple candidates are found, we instruct the LLM to ask the user a follow-up question to clarify which organization they meant. Otherwise, we append an existential subquery that filters for articles mentioning the particular organization to the list of filters. To prevent any Cypher injection, we use query parameters instead of concatenating values into the query.\n",
        "\n",
        "Next, we handle situations when a user wants to pre-filter text chunks based on the country of the mentioned organizations.\n",
        "\n",
        "Since countries follow standard naming conventions that LLMs are familiar with, we don't have to map the values to the database.\n",
        "Similarly, we also handle sentiment metadata filtering.\n",
        "\n",
        "We will instruct the LLM to only use two values for a `sentiment` input value, either positive or negative. We then map these two values to appropriate filter values.\n",
        "\n",
        "We handle the `topic` parameter slightly differently, as it is not used for pre-filtering but rather for vector similarity search.\n",
        "\n",
        "If the LLM identifies that the user is interested in a particular topic in the news, we use the topic input's text embedding to find the most relevant documents. On the other hand, if no specific topic is identified, we simply return the latest couple of articles and avoid vector similarity search altogether.\n",
        "\n",
        "Now, we have to put the Cypher statement together and use it to retrieve information from the database.\n",
        "\n",
        "We construct the final complete_query by combining all the query snippets. After that, we use the dynamically generated Cypher statement to retrieve information from the database and return it to the LLM. Let's examine the generated Cypher statement for an example input."
      ],
      "metadata": {
        "id": "5oU9Obs09CTr"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "get_organization_news(\n",
        "    organization='neo4j',\n",
        "    sentiment='positive',\n",
        "    topic='remote work'\n",
        ")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 212
        },
        "id": "csFQe6RLry4_",
        "outputId": "819a25eb-d202-4341-c647-b8bded6867b1"
      },
      "execution_count": 8,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Cypher: CYPHER runtime = parallel parallelRuntimeSupport=all MATCH (c:Chunk)<-[:HAS_CHUNK]-(a:Article) WHERE EXISTS {(a)-[:MENTIONS]->(:Organization {name: $organization})} AND a.sentiment > $sentiment WITH c, a, vector.similarity.cosine(c.embedding,$embedding) AS score ORDER BY score DESC LIMIT toInteger($k) RETURN '#title ' + a.title + '\n",
            "#date ' + toString(a.date) + '\n",
            "#text ' + c.text AS output\n",
            "\n",
            "Parameters: {'k': 5, 'organization': 'Neo4j', 'sentiment': 0.5, 'topic': 'remote work'}\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "'#title Accounts in Transit: Ruder Finn Adds Neo4j\\n#date 2023-04-27T00:00:00Z\\n#text Ruder Finn signs on as North American agency of record for Neo4j, a native graph database and analytics company. The agency will be responsible for implementing an integrated communications program, as well as working to amplify awareness of the company and category. The scope of work will include strategic media relations and executive communications to support corporate and product PR. Antonia Caamaño, SVP of RF Tech, will lead the Ruder Finn team handling the account out of New York. \"We chose Ruder Finn to achieve our next stage of awareness because of the agency\\'s experience in enterprise IT and deep tech, which allows them to deliver smart strategies and creative executions, as well as their long-running relationships with top-tier media,” said Neo4j CMO Chandra Rangan.\\nOak Public Relations is named communications agency of record for Custom Cones USA, which produces supplies for cannabis pre-rolls and other cannabis packaging solutions. The agency will work to expand the presence of Custom Cones USA at conferences and trade shows, as well as publicize the company’s blog content, and introduce its leadership as industry experts to targeted media. “The Custom Cones USA and Oak PR teams align in our passion and dedication working at helping the industry grow, making Oak PR the perfect partner for this next step,” said Custom Cones USA co-founder Harrison Baird.\\nThe Sideways Life signs on as PR agency of record for Utu, a skincare brand targeted at outdoor enthusiasts. The agency will handle all public relations activities for Utu, including media relations, gifting and brand partnerships. Utu founder Richard Welch has worked with brands like Tom Ford, Nokia, Estee Lauder, Axe, Diesel, REN and The North Face. 
Welch said that The Sideways Life’s “expertise in the outdoor and adventure space, combined with their nimble and collaborative approach to PR, makes them the perfect partner for Utu.”\\n•\\nAccounts in Transit: Havas Formula Lands Mixbook\\nFri., Apr. 28, 2023\\nHavas Formula picks up Mixbook, the No. 1-rated photo book brand... Magrino works with Royal Salute, an aged, blended Scotch whisky, and luxury retailer Fortnum & Mason on initiatives centered around the May 6 coronation of King Charles III... Ripson Group adds Near North Health and Banging Gavel Brews to its client roster.\\n•\\nAccounts in Transit: Coyne PR Picks Up Signature Wafer & Chocolate Co.\\nWed., Apr. 26, 2023\\nCoyne PR is partnering with Signature Wafer & Chocolate Co., the largest wafer manufacturer in the U.S... Firecracker PR signs on to represent Bridge 2 Technologies... Quad agencies Periscope and Rise Interactive are named marketing agencies of record for Jelmar, which makes household cleaning products.\\n•\\nAccounts in Transit: Pan Communications Lands Venti Technologies\\nFri., Apr. 21, 2023\\nPAN Communications adds Venti Technologies... 360PR+ lands skin creative product division at consumer goods company BIC... William Mills Agency wins PR duties for fintech startup Union Credit.\\n•\\nAccounts in Transit: Finn Partners Books Blue Diamond Resorts\\nWed., Apr. 19, 2023\\nFINN Partners books Blue Diamond Resorts, which owns all-inclusive hotel brands across the Caribbean region... Zapwater Communications is retained by Activate Games to unveil its new Chicago location... 
Allen & Gerritsen lands brand agency of record status for Freight Farms, a manufacturer of container farming technology.###Article: #title Neo4j Announces New Product Integrations with Generative AI Features in Google Cloud Vertex AI\\n#date 2023-06-07T13:00:00Z\\n#text \\'s partnership with Google represents a powerful union of graph technology and cloud computing excellence in a new era of AI,\" said Emil Eifrem, Co-Founder and CEO, Neo4j. \"Together, we empower enterprises seeking to leverage generative AI to better innovate, provide the best outcome for their customers, and unlock the true power of their connected data at unprecedented speed.\"\\nAbout Neo4j\\nNeo4j, the Graph Database & Analytics leader, helps organizations find hidden relationships and patterns across billions of data connections deeply, easily and quickly. Customers leverage the structure of their connected data to reveal new ways of solving their most pressing business problems, from fraud detection, customer 360, knowledge graphs, supply chain, personalization, IoT, network management, and more – even as their data grows. Neo4j\\'s full graph stack delivers powerful native graph storage, data science, advanced analytics, and visualization, with enterprise-grade security controls, scalable architecture and ACID compliance. Neo4j\\'s community of data leaders comprises a vibrant, open-source community of more than 250,000 developers, data scientists, and architects across hundreds of Fortune 500 companies, government agencies and NGOs. Visit neo4j.com.\\nContact:\\npr@neo4j.com\\nneo4j.com/pr\\n© 2022 Neo4j, Inc., Neo Technology®, Neo4j®, Cypher®, Neo4j® Bloom™, Neo4j Graph Data Science Library™, Neo4j® Aura™, Neo4j® AuraDS™, and Neo4j® AuraDB™ are registered trademarks or a trademark of Neo4j, Inc. 
All other marks are owned by their respective companies.\\nSOURCE Neo4j###Article: #title DXC Technology snags award at Neo4j GraphSummit Australia\\n#date 2023-05-09T03:05:00Z\\n#text Graph database and analytics company Neo4j announced the winners of the 2023 Graphie Awards in Australia and New Zealand, where DXC Technology was awarded.\\nNeo4j aimed to recognise organisations and individuals for “outstanding innovation in implementing Neo4j’s graph technology.”\\nThe awards ceremony was held during the 2023 GraphSummit Australia in Sydney, Canberra and Melbourne from 3 to 9 May.\\nThe successful entrants were selected based on their exemplary use of graph technology to address the most significant enterprise challenges.\\nIT provider DXC technology was awarded for its Excellence in Data Driven Career Development.\\nGraphSummit also featured key Neo4j community leaders and local customers showcasing the most promising applications of graph technology in their respective fields.\\nThis included DXC Technology’s human experience management and workforce management data analytics Michele Howard.\\nOthers acknowledged included GraphAware’s general manager ANZ Dan Newland and InterVenn BioSciences manager Matthew Campbel.\\nNeo4j general manager ANZ Peter Philipp said: “we are excited to celebrate these exceptional organisations who are successfully tackling the complexities of digital ecosystems with graph technology.”\\n“It was a challenge to select the winners from an impressive list of finalists – we can’t wait to see them further excel with Neo4j playing a pivotal role,” he added.###Article: #title This Week in Neo4j – Will it Graph, Python Database Backups, Knowledge Graphs, Kinesis, and Kanye West\\n#date 2021-08-07T00:00:00Z\\n#text It’s August and many of us are thinking about taking a restful break from work for the month, or perhaps returning our kids to school. 
However, our community members are hard at work generating some great things with Neo4j!\\nThis week, we’re taking the opportunity to highlight Katerina Baousi, who gave an excellent talk at NODES 2021 on looking at Twitter trolls using Neo4j. We also have posts ranging from identifying graphy problems and using temporary graphs for unit testing to how to go from AWS Kinesis to Neo4j in Spark. There’s also an article showing how NASA is using knowledge graphs to manage people, skills, and projects. Lastly, you will not want to miss the fun of going through the Six Degrees of Kanye West!\\nFeatured Community Member: Katerina Baousi\\nThis week’s featured community member is Katerina Baousi.\\nKaterina is a solutions engineer at Cambridge Intelligence. She has a great deal of skill in a broad variety of areas, including web development and data visualization. Her work at Cambridge Intelligence is focused on the KronoGraph tool for exploring timeline analysis within graph data. She also gave an excellent talk at NODES 2021 on Timeseries Visualization of Social Networks with Neo4j.\\nWill It Graph? (Part 2)\\nIn this episode of GraphStuff.FM, Neo4j’s own Lju Lazarevic and William Lyon present information on how to identify whether you have a “graphy” problem and how to know whether a graph database is the right fit for your problem. This is the second part of a series on the topic. Part 1 can be found here.\\nOne key indicator they discuss is that having a lot of JOINs in a typical workflow is a big hint that you may have a graphy problem, since multi-hop traversals can be expensive. This is particularly beneficial when you don’t know how many connections you are interested in at query time (i.e. a variable-length graph traversal). 
Some examples that they provided include fraud detection and network and IT management.\\nA New Tool to Back Up a Neo4j Database with Python\\nAre you interested in downloading and uploading data into a Neo4j database where using dump files is not an option? Would you like to be able to store your data in different formats, thus allowing, say, easily changing which version of Neo4j you are using? Would you like an open source Python package that is capable of doing so, installable via pip? Then check out the code that Andres Hyer has written to do just that. You can use it on AuraDB, with Docker, via the command-line, or pretty much any way you want. Check it out!\\nNODES 2021 Extended: Semantic AI Platform; What is the Theta Base\\nWe are now in full swing with the NODES 2021 extended talks, which build off the excitement from NODES 2021 with even more high-quality talks.\\nSo we are taking the opportunity to highlight the talks of two users. The first is Siddharth Karumanchi, founding research scientist at QUIPI, who presented a talk entitled “Semantic AI Platform.” The goal of this work is to present the context for enterprise domain knowledge in a convenient way. He showed how to semantically enrich a knowledge graph to aid in text mining and natural language processing problems like entity extraction and disambiguation.\\nThe second talk was presented by Elias Moosman, co-founder of Youiest, who discussed the Theta Base. In this talk, he shared how pulling together thought, data, and ownership can be used to create apps around measuring and influencing employee engagement. 
This looks at how intentions and values for an organization interact with both positive and negative correlation, managed with Neo4j.\\nHow NASA is Using Knowledge Graphs to Find Talent\\nContinuing their tradition of actively using Neo4j, NASA has detailed their use of a talent mapping database to show the relationships between people, skills, and projects in a knowledge graph.\\nSenior data scientist David Meza described this work to Venture Beat. The aim is to look at identifying things like skills, tasks, and technology within a work role and translate that to employees for things like connecting around training around NASA-specific competencies. It will hopefully give the employees an opportunity to explore how to further their careers and better align people across the organization.\\n(from:Kinesis)-[:VIA_SPARK]→(to:Neo4j)\\nAre you interested in streaming large amounts of real-time data into Neo4j? Davide Fantuzzi of LARUS has written a blog post on how to use the Neo4j Spark connector to###Article: #title This Week in Neo4j – Will it Graph, Python Database Backups, Knowledge Graphs, Kinesis, and Kanye West\\n#date 2021-08-07T00:00:00Z\\n#text  and negative correlation, managed with Neo4j.\\nHow NASA is Using Knowledge Graphs to Find Talent\\nContinuing their tradition of actively using Neo4j, NASA has detailed their use of a talent mapping database to show the relationships between people, skills, and projects in a knowledge graph.\\nSenior data scientist David Meza described this work to Venture Beat. The aim is to look at identifying things like skills, tasks, and technology within a work role and translate that to employees for things like connecting around training around NASA-specific competencies. 
It will hopefully give the employees an opportunity to explore how to further their careers and better align people across the organization.\\n(from:Kinesis)-[:VIA_SPARK]→(to:Neo4j)\\nAre you interested in streaming large amounts of real-time data into Neo4j? Davide Fantuzzi of LARUS has written a blog post on how to use the Neo4j Spark connector to get an AWS Kinesis Data Stream into a Neo4j database. This post includes a complete demonstration of how to set up a proper IAM user, the Kinesis Data Stream, and the Kenesis Data Generator in preparation for data ingested into Neo4j. He then provides the reader with a Docker container that runs an Apache Zeppelin notebook, allowing you to tinker with Spark and Neo4j and then finally explore the graph in the Neo4j browser.\\nSix Degrees of Kanye West\\nHave you ever wanted to be able to calculate the “Kanye Number” for a given artist? Admit it… you have! Neo4j’s own Rik Van Bruggen has written a blog post showing you how (in part 3/3 of this series). Using the data available from Musicbrainz, he has created a fun demo that demonstrates the power of graph databases with some basic Cypher queries to get you started. There are plenty of worked Cypher examples, including calculating the Kanye Number or finding recordings with the most artists, and it concludes with a nice Bloom demonstration.\\nFun to watch the BGG data get put into Neo4j! If you want to use an average that takes into consideration the number of ratings, use the bayes_average_rating (Bayesian Average). This could be quite interesting if the underlying properties of games could be added also. https://t.co/HMXvb5c3XJ\\n— BoardGameGeek (@BoardGameGeek) August 3, 2021'"
            ],
            "application/vnd.google.colaboratory.intrinsic+json": {
              "type": "string"
            }
          },
          "metadata": {},
          "execution_count": 8
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "The dynamic query generation works as expected and can retrieve relevant information from the database.\n",
        "## Defining OpenAI agent\n",
        "Next, we need to wrap the function as an Agent tool. First, we will add input parameter descriptions."
      ],
      "metadata": {
        "id": "NJ5SEUkt9fJF"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "fewshot_examples = \"\"\"{Input:What are the health benefits for Google employees in the news? Topic: Health benefits}\n",
        "{Input: What is the latest positive news about Google? Topic: None}\n",
        "{Input: Are there any news about VertexAI regarding Google? Topic: VertexAI}\n",
        "{Input: Are there any news about new products regarding Google? Topic: new products}\n",
        "\"\"\"\n",
        "\n",
        "class NewsInput(BaseModel):\n",
        "    topic: Optional[str] = Field(\n",
        "        description=\"Any particular topic that the user wants to find information about. Here are some examples: \"\n",
        "        + fewshot_examples\n",
        "    )\n",
        "    organization: Optional[str] = Field(\n",
        "        description=\"Organization that the user wants to find information about\"\n",
        "    )\n",
        "    country: Optional[str] = Field(\n",
        "        description=\"Country of organizations that the user is interested in. Use full names like United States of America and France.\"\n",
        "    )\n",
        "    sentiment: Optional[str] = Field(\n",
        "        description=\"Sentiment of articles\", enum=[\"positive\", \"negative\"]\n",
        "    )\n"
      ],
      "metadata": {
        "id": "ycDERj_0vIOA"
      },
      "execution_count": 9,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The pre-filtering parameters were quite simple to describe, but I had trouble getting the topic parameter to work as expected, so I added a few examples to help the LLM understand it better. Additionally, note that we tell the LLM which country naming format to use and provide an enumeration for the sentiment values.\n",
        "Now, we can define a custom tool by giving it a name and description containing instructions for an LLM on when to use it."
      ],
      "metadata": {
        "id": "Ircj5ZqD9kzz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class NewsTool(BaseTool):\n",
        "    name = \"NewsInformation\"\n",
        "    description = (\n",
        "        \"useful for when you need to find relevant information in the news\"\n",
        "    )\n",
        "    args_schema: Type[BaseModel] = NewsInput\n",
        "\n",
        "    def _run(\n",
        "        self,\n",
        "        topic: Optional[str] = None,\n",
        "        organization: Optional[str] = None,\n",
        "        country: Optional[str] = None,\n",
        "        sentiment: Optional[str] = None,\n",
        "        run_manager: Optional[CallbackManagerForToolRun] = None,\n",
        "    ) -> str:\n",
        "        \"\"\"Use the tool.\"\"\"\n",
        "        return get_organization_news(topic, organization, country, sentiment)\n",
        "\n",
        "    async def _arun(\n",
        "        self,\n",
        "        topic: Optional[str] = None,\n",
        "        organization: Optional[str] = None,\n",
        "        country: Optional[str] = None,\n",
        "        sentiment: Optional[str] = None,\n",
        "        run_manager: Optional[CallbackManagerForToolRun] = None,\n",
        "    ) -> str:\n",
        "        \"\"\"Use the tool asynchronously.\"\"\"\n",
        "        return get_organization_news(topic, organization, country, sentiment)"
      ],
      "metadata": {
        "id": "mRixc7jX9nN7"
      },
      "execution_count": 10,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "One last thing is to define the agent executor. I simply reuse the LCEL implementation of an OpenAI agent that I wrote some time ago."
      ],
      "metadata": {
        "id": "a8p2N1UY92O4"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "llm = ChatOpenAI(temperature=0, model=\"gpt-4-turbo\", streaming=True)\n",
        "tools = [NewsTool()]\n",
        "\n",
        "llm_with_tools = llm.bind(functions=[convert_to_openai_function(t) for t in tools])\n",
        "\n",
        "prompt = ChatPromptTemplate.from_messages(\n",
        "    [\n",
        "        (\n",
        "            \"system\",\n",
        "            \"You are a helpful assistant that finds information about news \"\n",
        "            \"and summarizes it. If tools require follow up questions, \"\n",
        "            \"make sure to ask the user for clarification. Make sure to include any \"\n",
        "            \"available options that need to be clarified in the follow up questions. \"\n",
        "            \"Do only the things the user specifically requested. \",\n",
        "        ),\n",
        "        MessagesPlaceholder(variable_name=\"chat_history\"),\n",
        "        (\"user\", \"{input}\"),\n",
        "        MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
        "    ]\n",
        ")\n",
        "\n",
        "def _format_chat_history(chat_history: List[Tuple[str, str]]):\n",
        "    buffer = []\n",
        "    for human, ai in chat_history:\n",
        "        buffer.append(HumanMessage(content=human))\n",
        "        buffer.append(AIMessage(content=ai))\n",
        "    return buffer\n",
        "\n",
        "\n",
        "agent = (\n",
        "    {\n",
        "        \"input\": lambda x: x[\"input\"],\n",
        "        \"chat_history\": lambda x: _format_chat_history(x[\"chat_history\"])\n",
        "        if x.get(\"chat_history\")\n",
        "        else [],\n",
        "        \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n",
        "            x[\"intermediate_steps\"]\n",
        "        ),\n",
        "    }\n",
        "    | prompt\n",
        "    | llm_with_tools\n",
        "    | OpenAIFunctionsAgentOutputParser()\n",
        ")\n",
        "\n",
        "agent_executor = AgentExecutor(agent=agent, tools=tools)"
      ],
      "metadata": {
        "id": "eVqQKCpYoKD3"
      },
      "execution_count": 11,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "The agent has a single tool it can use to retrieve information about the news. We also added the `chat_history` message placeholder, making the agent conversational and allowing follow-up questions and replies.\n",
        "## Implementation testing\n",
        "Let's run a couple of inputs and examine the generated Cypher statements and parameters."
      ],
      "metadata": {
        "id": "iCua9Yvb9oOJ"
      }
    },
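    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since the prompt contains the `chat_history` placeholder, a follow-up turn passes the previous exchange as a list of `(human, ai)` tuples. A minimal sketch of the payload shape (the question and answer strings are illustrative, and actually invoking the agent requires a live Neo4j instance and an OpenAI API key):\n",
        "```python\n",
        "chat_history = [\n",
        "    (\"What are some positive news regarding neo4j?\",\n",
        "     \"Here are some positive news regarding Neo4j: ...\"),\n",
        "]\n",
        "payload = {\n",
        "    \"input\": \"Are there any negative ones as well?\",\n",
        "    \"chat_history\": chat_history,\n",
        "}\n",
        "# agent_executor.invoke(payload)\n",
        "```"
      ]
    },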
    {
      "cell_type": "code",
      "source": [
        "agent_executor.invoke({\"input\": \"What are some positive news regarding neo4j?\"})"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "hslLZZXDoeIq",
        "outputId": "88364579-8e9c-4740-a34f-eec7081f56ff"
      },
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Cypher: CYPHER runtime = parallel parallelRuntimeSupport=all MATCH (c:Chunk)<-[:HAS_CHUNK]-(a:Article) WHERE EXISTS {(a)-[:MENTIONS]->(:Organization {name: $organization})} AND a.sentiment > $sentiment WITH c, a ORDER BY a.date DESC LIMIT toInteger($k) RETURN '#title ' + a.title + '\n",
            "#date ' + toString(a.date) + '\n",
            "#text ' + c.text AS output\n",
            "\n",
            "Parameters: {'k': 5, 'organization': 'Neo4j', 'sentiment': 0.5}\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'input': 'What are some positive news regarding neo4j?',\n",
              " 'output': \"Here are some positive news regarding Neo4j:\\n\\n1. **New Product Integrations with Generative AI Features in Google Cloud Vertex AI**:\\n   - Neo4j announced a new product integration with Google Cloud's latest generative AI features in Vertex AI. This integration allows enterprise customers to use knowledge graphs built on Neo4j's cloud offerings in Google Cloud Platform for more accurate, transparent, and explainable generative AI insights and recommendations. This partnership, which began in 2019, has enabled various AI use cases across large enterprises and SMBs, ranging from anti-money laundering to personalized recommendations and more.\\n\\n2. **Recognition at the 2023 Graphie Awards in Australia and New Zealand**:\\n   - During the 2023 GraphSummit Australia, Neo4j recognized organizations and individuals for outstanding innovation in implementing Neo4j’s graph technology. DXC Technology was awarded for its Excellence in Data Driven Career Development, highlighting the impactful use of graph technology in addressing significant enterprise challenges.\\n\\nThese developments showcase Neo4j's continued leadership in graph database technology and its strategic partnerships to enhance AI-driven business solutions.\"}"
            ]
          },
          "metadata": {},
          "execution_count": 12
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "The generated Cypher statement is valid. Since we didn't specify any particular topic, it returns the last five text chunks from positive articles mentioning Neo4j. Let's try something a bit more complex:"
      ],
      "metadata": {
        "id": "pM60wp4c95_X"
      }
    },
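    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The prefiltering clauses in the generated statement map directly to the tool parameters. A hedged sketch of how such optional filters can be composed into a WHERE clause and parameter map (`build_filters` is a hypothetical helper for illustration, not the notebook's actual implementation):\n",
        "```python\n",
        "def build_filters(organization=None, country=None, sentiment=None):\n",
        "    clauses, params = [], {}\n",
        "    if organization:\n",
        "        clauses.append(\"EXISTS {(a)-[:MENTIONS]->(:Organization {name: $organization})}\")\n",
        "        params[\"organization\"] = organization\n",
        "    if country:\n",
        "        clauses.append(\"EXISTS {(a)-[:MENTIONS]->(:Organization)-[:IN_CITY]->()-[:IN_COUNTRY]->(:Country {name: $country})}\")\n",
        "        params[\"country\"] = country\n",
        "    if sentiment == \"positive\":\n",
        "        clauses.append(\"a.sentiment > $sentiment\")\n",
        "        params[\"sentiment\"] = 0.5\n",
        "    elif sentiment == \"negative\":\n",
        "        clauses.append(\"a.sentiment < $sentiment\")\n",
        "        params[\"sentiment\"] = -0.5\n",
        "    return \" AND \".join(clauses), params\n",
        "\n",
        "where, params = build_filters(organization=\"Neo4j\", sentiment=\"positive\")\n",
        "```"
      ]
    },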
    {
      "cell_type": "code",
      "source": [
        "agent_executor.invoke({\"input\": \"What are some of the latest negative news about employee happiness for companies from France?\"})"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yEm97IsmpJgo",
        "outputId": "6b073aa1-9675-46f3-d8e8-53aeacd011ef"
      },
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Cypher: CYPHER runtime = parallel parallelRuntimeSupport=all MATCH (c:Chunk)<-[:HAS_CHUNK]-(a:Article) WHERE EXISTS {(a)-[:MENTIONS]->(:Organization)-[:IN_CITY]->()-[:IN_COUNTRY]->(:Country {name: $country})} AND a.sentiment < $sentiment WITH c, a, vector.similarity.cosine(c.embedding,$embedding) AS score ORDER BY score DESC LIMIT toInteger($k) RETURN '#title ' + a.title + '\n",
            "#date ' + toString(a.date) + '\n",
            "#text ' + c.text AS output\n",
            "\n",
            "Parameters: {'k': 5, 'country': 'France', 'sentiment': -0.5, 'topic': 'employee happiness'}\n"
          ]
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "{'input': 'What are some of the latest negative news about employee happiness for companies from France?',\n",
              " 'output': 'Here are some of the latest negative news related to employee happiness for companies from France:\\n\\n1. **IBM Whistleblower Case**:\\n   - **Date**: October 13, 2020\\n   - **Summary**: IBM was ordered to pay £22,000 in compensation and two years\\' salary to a British employee who blew the whistle on unlawful working practices within the company. The employee faced retaliation from managers after speaking up about conditions that potentially amounted to sex discrimination. The tribunal criticized IBM\\'s managers for their lack of understanding of discrimination and the hostile work environment created for the whistleblower.\\n\\n2. **Manufacturing Business Leaders Resist Digital Progress**:\\n   - **Date**: February 1, 2021\\n   - **Summary**: A report titled \"The Connected Enterprise\" highlighted skepticism among manufacturing industry leaders in France regarding the benefits of implementing new business technology. Many leaders doubted the technology\\'s ability to improve efficiency, productivity, and customer relationships. The report also noted that poor implementation of technology could create burdens on employees, indicating a negative impact on employee happiness.\\n\\nThese articles reflect concerns about workplace practices and the impact of management decisions on employee well-being in French companies.'}"
            ]
          },
          "metadata": {},
          "execution_count": 13
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "The LLM agent correctly generated prefiltering parameters but also identified a specific employee happiness topic. This topic is used as input to vector similarity search, allowing us to refine the retrieval process even more.\n",
        "## Summary\n",
        "In this blog post, we've implemented example graph-based metadata filters, enhancing vector search accuracy. However, the dataset has extensive and interconnected options that allow for much more sophisticated pre-filtering queries. With a graph data representation, the possibilities for structured filters are virtually limitless when combined with the LLM function-calling feature to generate Cypher statements dynamically.\n",
        "\n",
        "Additionally, your agent could have tools that retrieve unstructured text, as shown in this blog post, as well as other tools that can retrieve structured information, making a knowledge graph an excellent solution for many RAG applications."
      ],
      "metadata": {
        "id": "eErcfCnv99AZ"
      }
    },
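    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an aside, the score used to order chunks in the topic-filtered query is cosine similarity between the chunk embedding and the topic embedding. A minimal pure-Python sketch of the underlying computation (Neo4j's `vector.similarity.cosine` may rescale the raw value, but the resulting ranking is equivalent):\n",
        "```python\n",
        "import math\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    # dot(a, b) / (|a| * |b|)\n",
        "    dot = sum(x * y for x, y in zip(a, b))\n",
        "    norm_a = math.sqrt(sum(x * x for x in a))\n",
        "    norm_b = math.sqrt(sum(y * y for y in b))\n",
        "    return dot / (norm_a * norm_b)\n",
        "```"
      ]
    },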
    {
      "cell_type": "code",
      "source": [],
      "metadata": {
        "id": "0RZrB6rqqQ0j"
      },
      "execution_count": 13,
      "outputs": []
    }
  ]
}