{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d35ac8ce-0f92-46f5-9ba4-a46970f0ce19",
   "metadata": {
    "id": "d35ac8ce-0f92-46f5-9ba4-a46970f0ce19"
   },
   "source": [
    "# Cognee - Get Started"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bd981778-0c84-4542-8e6f-1a7712184873",
   "metadata": {
    "editable": true,
    "id": "bd981778-0c84-4542-8e6f-1a7712184873",
    "tags": []
   },
   "source": [
    "## Let's talk about the problem first\n",
    "\n",
    "### Large Language Models (LLMs) have become powerful tools for generating text and answering questions, but they still have several limitations and challenges. Below is an overview of some of the biggest problems with the results they produce:\n",
    "\n",
    "### 1. Hallucinations and Misinformation\n",
    "- Hallucinations: LLMs sometimes produce outputs that are factually incorrect or entirely fabricated. This phenomenon is known as \"hallucination.\" Even if an LLM seems confident, the information it provides might not be reliable.\n",
    "- Misinformation: Misinformation can be subtle or glaring, ranging from minor inaccuracies to entirely fictitious events, sources, or data.\n",
    "\n",
    "### 2. Lack of Contextual Understanding\n",
    "- LLMs can recognize and replicate patterns in language but don’t have true comprehension. This can lead to responses that are coherent but miss nuanced context or deeper meaning.\n",
    "- They can misinterpret multi-turn conversations, leading to confusion in maintaining context over a long dialogue.\n",
    "\n",
    "### 3. Inconsistent Reliability\n",
    "- Depending on the prompt, LLMs might produce inconsistent responses to similar questions or tasks. For example, the same query might result in conflicting answers when asked in slightly different ways.\n",
    "- This inconsistency can undermine trust in the model's outputs, especially in professional or academic settings.\n",
    "\n",
    "### 4. Inability to Access Real-Time Information\n",
    "- Most LLMs are trained on data up to a specific point and cannot access or generate information on current events or emerging trends unless updated. This can make them unsuitable for inquiries requiring up-to-date information.\n",
    "- Real-time browsing capabilities can help, but they are not universally available.\n",
    "\n",
    "### 5. Lack of Personalization and Adaptability\n",
    "- LLMs do not naturally adapt to individual preferences or learning styles unless explicitly programmed to do so. This limits their usefulness in providing personalized recommendations or support.\n",
    "\n",
    "### 6. Difficulty with Highly Technical or Niche Domains\n",
    "- LLMs may struggle with highly specialized or technical topics where domain-specific knowledge is required.\n",
    "- They can produce technically plausible but inaccurate or incomplete information, which can be misleading in areas like law, medicine, or scientific research.\n",
    "\n",
    "### 7. Ambiguity in Response Generation\n",
    "- LLMs might not always specify their level of certainty, making it hard to gauge when they are speculating or providing less confident answers.\n",
    "- They lack a mechanism to say “I don’t know,” which can lead to responses that are less useful or potentially misleading."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8e606b1-94d3-43ce-bb4b-dbadff7f4ca6",
   "metadata": {
    "id": "d8e606b1-94d3-43ce-bb4b-dbadff7f4ca6"
   },
   "source": [
     "## The next solution was RAG\n",
     "\n",
     "#### RAG (Retrieval-Augmented Generation) systems connect to a vector store, retrieve data similar to the user's query, and use that data to enrich the LLM's response."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "23e74f22-f43c-4f03-afe0-b423cbaa412a",
   "metadata": {
    "id": "23e74f22-f43c-4f03-afe0-b423cbaa412a"
   },
   "source": [
    "![rag.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b6a98710-a14b-4a14-bb56-d3ae055e94d9",
   "metadata": {
    "id": "b6a98710-a14b-4a14-bb56-d3ae055e94d9"
   },
   "source": [
     "#### The problem lies in the nature of the search. If you simply match keywords and return one or more documents from the vector store that way, you still face the question of how to organise and prioritise those documents.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "RQfw9o-Ege5S",
   "metadata": {
    "id": "RQfw9o-Ege5S"
   },
   "source": [
    "![rag_problem_v2_white.drawio.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3406b8f-cb9a-46ce-9029-2cbae2755795",
   "metadata": {
    "id": "d3406b8f-cb9a-46ce-9029-2cbae2755795"
   },
   "source": [
    "## Semantic similarity search is not magic\n",
    "#### The most similar result isn't the most relevant one.\n",
     "#### If you search for documents in which the sentiment expressed is \"I like apples.\", one of the closest results you get is a document in which the sentiment expressed is \"I don't like apples.\"\n",
    "#### Wouldn't it be nice to have a semantic model LLMs could use?\n"
   ]
  },
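  {
   "cell_type": "markdown",
   "id": "sim-pitfall-md",
   "metadata": {},
   "source": [
    "#### A minimal sketch of the pitfall, using a toy bag-of-words cosine instead of real learned embeddings, so the numbers are only illustrative:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sim-pitfall-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "import math\n",
    "\n",
    "\n",
    "def cosine(a: str, b: str) -> float:\n",
    "    # Toy bag-of-words cosine similarity; real systems use learned embeddings.\n",
    "    va, vb = Counter(a.lower().split()), Counter(b.lower().split())\n",
    "    dot = sum(va[word] * vb[word] for word in va)\n",
    "    norm_a = math.sqrt(sum(count * count for count in va.values()))\n",
    "    norm_b = math.sqrt(sum(count * count for count in vb.values()))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "\n",
    "query = \"I like apples\"\n",
    "print(cosine(query, \"I like apples\"))  # identical sentiment\n",
    "print(cosine(query, \"I don't like apples\"))  # opposite sentiment, still very similar\n",
    "print(cosine(query, \"the weather is nice\"))  # unrelated, much lower"
   ]
  },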
  {
   "cell_type": "markdown",
   "id": "b900f830-8e9e-4272-b198-594606da4457",
   "metadata": {
    "id": "b900f830-8e9e-4272-b198-594606da4457"
   },
   "source": [
    "# That is where Cognee comes in"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3ae099a-1bbb-4f13-9bcb-c0f778d50e91",
   "metadata": {
    "id": "d3ae099a-1bbb-4f13-9bcb-c0f778d50e91"
   },
   "source": [
     "#### Cognee helps developers bring greater predictability and manageability to their Retrieval-Augmented Generation (RAG) workflows through graph architectures, vector stores, and auto-optimizing pipelines. Representing information as a graph is the clearest way to grasp the content of your documents. Crucially, graphs allow systematic navigation and extraction of data from documents based on their hierarchy.\n",
    "\n",
    "#### Cognee lets you create tasks and contextual pipelines of tasks that enable composable GraphRAG, where you have full control of all the elements of the pipeline from ingestion until graph creation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "785383b0-87b5-4a0a-be3f-e809aa284e30",
   "metadata": {
    "id": "785383b0-87b5-4a0a-be3f-e809aa284e30"
   },
   "source": [
    "# Core Concepts"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbaa9223",
   "metadata": {},
   "source": [
    "## Concept 1: Raw to Processed data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3540ce30-2b22-4ece-8516-8d5ff2a405fe",
   "metadata": {
    "id": "3540ce30-2b22-4ece-8516-8d5ff2a405fe"
   },
   "source": [
     "### Most of the data we provide to a system can be categorized as unstructured, semi-structured, or structured. Rows from a database are structured data, JSON documents are semi-structured, and raw logs fed into the system can be considered unstructured. To organize and process this data, we need custom loaders for each data type, which help us unify and organize it properly."
   ]
  },
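  {
   "cell_type": "markdown",
   "id": "loaders-sketch-md",
   "metadata": {},
   "source": [
    "#### As a rough sketch (the loader names below are illustrative, not cognee's actual API), dispatching over the three data categories might look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "loaders-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "\n",
    "# Illustrative loaders, one per data category (not cognee's actual API).\n",
    "def load_record(row: dict) -> dict:\n",
    "    # Structured data: database rows already carry their schema.\n",
    "    return row\n",
    "\n",
    "\n",
    "def load_json(raw: str) -> dict:\n",
    "    # Semi-structured data: JSON payloads are parsed into dictionaries.\n",
    "    return json.loads(raw)\n",
    "\n",
    "\n",
    "def load_text(raw: str) -> dict:\n",
    "    # Unstructured data: logs and free text are wrapped for later enrichment.\n",
    "    return {\"text\": raw.strip()}\n",
    "\n",
    "\n",
    "print(load_record({\"id\": 1, \"name\": \"Emily Carter\"}))\n",
    "print(load_json('{\"id\": 2, \"role\": \"Data Scientist\"}'))\n",
    "print(load_text(\"  2024-09-20 INFO pipeline started  \"))"
   ]
  },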
  {
   "cell_type": "markdown",
   "id": "fe0bfa57-dca7-40aa-9ead-c6852b155878",
   "metadata": {
    "id": "fe0bfa57-dca7-40aa-9ead-c6852b155878"
   },
   "source": [
    "![image.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e47bae4-d27d-4430-a134-e1b381378f5c",
   "metadata": {
    "id": "7e47bae4-d27d-4430-a134-e1b381378f5c"
   },
   "source": [
    "#### In the example above, we have a pipeline in which data has been imported from various sources, normalized, and stored in a database."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2f9c9376-8c68-4397-9081-d260cddcbd25",
   "metadata": {
    "id": "2f9c9376-8c68-4397-9081-d260cddcbd25"
   },
   "source": [
    "## Concept 2: Data Enrichment with LLMs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "oFMidcsB7ap6",
   "metadata": {
    "id": "oFMidcsB7ap6"
   },
   "source": [
    "#### LLMs are adept at processing unstructured data. They can easily extract summaries, keywords, and other useful information from documents. We use function calling with Pydantic models to extract information from the unstructured data."
   ]
  },
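  {
   "cell_type": "markdown",
   "id": "pydantic-sketch-md",
   "metadata": {},
   "source": [
    "#### As a sketch, the extraction target can be described with a Pydantic model like the hypothetical one below; with function calling, the LLM is constrained to return data matching this schema:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "pydantic-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import BaseModel\n",
    "\n",
    "\n",
    "# A hypothetical extraction schema; cognee ships its own data models.\n",
    "class DocumentSummary(BaseModel):\n",
    "    summary: str\n",
    "    keywords: list[str]\n",
    "\n",
    "\n",
    "# The LLM's function-call output is validated against the schema:\n",
    "payload = {\n",
    "    \"summary\": \"CV of a senior data scientist with ML experience.\",\n",
    "    \"keywords\": [\"machine learning\", \"Python\", \"TensorFlow\"],\n",
    "}\n",
    "print(DocumentSummary(**payload))"
   ]
  },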
  {
   "cell_type": "markdown",
   "id": "WXP8KevM7dRT",
   "metadata": {
    "id": "WXP8KevM7dRT"
   },
   "source": [
    "![image.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "A9PMdOc37rbo",
   "metadata": {
    "id": "A9PMdOc37rbo"
   },
   "source": [
    "#### We decompose the loaded content into graphs, allowing us to more precisely map out the relationships between entities and concepts."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "VLDoIXqI7uOD",
   "metadata": {
    "id": "VLDoIXqI7uOD"
   },
   "source": [
    "## Concept 3: Graphs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "t1yh531L7vve",
   "metadata": {
    "id": "t1yh531L7vve"
   },
   "source": [
    "#### Knowledge graphs simply map out knowledge, linking specific facts and their connections. When Large Language Models (LLMs) process text, they infer these links, leading to occasional inaccuracies due to their probabilistic nature. Clearly defined relationships enhance their accuracy. This structured approach can extend beyond concepts to document layouts, pages, or other organizational schemas."
   ]
  },
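  {
   "cell_type": "markdown",
   "id": "graph-sketch-md",
   "metadata": {},
   "source": [
    "#### For intuition, here is a tiny hand-built knowledge graph with explicitly labeled relationships, using networkx (one of the graph providers cognee supports):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "graph-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import networkx as nx\n",
    "\n",
    "# Explicit edges remove the guesswork an LLM would otherwise have to do.\n",
    "graph = nx.DiGraph()\n",
    "graph.add_edge(\"Emily Carter\", \"InnovateAI Labs\", relation=\"works_at\")\n",
    "graph.add_edge(\"Emily Carter\", \"machine learning\", relation=\"specializes_in\")\n",
    "graph.add_edge(\"InnovateAI Labs\", \"NLP applications\", relation=\"builds\")\n",
    "\n",
    "for source, target, data in graph.edges(data=True):\n",
    "    print(f\"{source} --{data['relation']}--> {target}\")"
   ]
  },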
  {
   "cell_type": "markdown",
   "id": "AArlpK0S7x6X",
   "metadata": {
    "id": "AArlpK0S7x6X"
   },
   "source": [
    "![Untitled-2024-10-08-1656(2).png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "XJ-gpI6f76CD",
   "metadata": {
    "id": "XJ-gpI6f76CD"
   },
   "source": [
    "## Concept 4: Vector and Graph Retrieval"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "tJz0QrQe7-hF",
   "metadata": {
    "id": "tJz0QrQe7-hF"
   },
   "source": [
    "#### Cognee lets you use multiple vector and graph retrieval methods to find the most relevant information."
   ]
  },
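  {
   "cell_type": "markdown",
   "id": "hybrid-sketch-md",
   "metadata": {},
   "source": [
    "#### Conceptually (this is a sketch, not cognee's implementation), graph retrieval complements vector retrieval by expanding a vector hit with its graph neighborhood:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "hybrid-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import networkx as nx\n",
    "\n",
    "graph = nx.Graph()\n",
    "graph.add_edges_from([\n",
    "    (\"Sarah Nguyen\", \"QuantumTech\"),\n",
    "    (\"Sarah Nguyen\", \"PyTorch\"),\n",
    "    (\"QuantumTech\", \"financial forecasting\"),\n",
    "])\n",
    "\n",
    "# Pretend the vector search matched this node; the graph supplies its context.\n",
    "vector_hit = \"Sarah Nguyen\"\n",
    "print(sorted(graph.neighbors(vector_hit)))"
   ]
  },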
  {
   "cell_type": "markdown",
   "id": "BKlaAVQx8AyK",
   "metadata": {
    "id": "BKlaAVQx8AyK"
   },
   "source": [
    "## Concept 5: Auto-Optimizing Pipelines"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3h0kmuL88CU4",
   "metadata": {
    "id": "3h0kmuL88CU4"
   },
   "source": [
    "#### Integrating knowledge graphs into Retrieval-Augmented Generation (RAG) pipelines leads to an intriguing outcome: the system's adeptness at contextual understanding allows it to be evaluated in a way Machine Learning (ML) engineers are accustomed to. This involves bombarding the RAG system with hundreds of synthetic questions, enabling the knowledge graph to evolve and refine its context autonomously over time. This method paves the way for developing self-improving memory engines that can adapt to new data and user feedback."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "074f0ea8-c659-4736-be26-be4b0e5ac665",
   "metadata": {
    "id": "074f0ea8-c659-4736-be26-be4b0e5ac665"
   },
   "source": [
    "# Demo time"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "hVVgc9KZmk3v",
   "metadata": {
    "id": "hVVgc9KZmk3v"
   },
   "source": [
     "#### First, we need to install all the dependencies:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ytkuIkFmeiE",
   "metadata": {
    "id": "7ytkuIkFmeiE"
   },
   "outputs": [],
   "source": [
    "!pip install onnxruntime-gpu==1.17.1 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/ -qqq"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cVPQTKcWmgJ0",
   "metadata": {
    "id": "cVPQTKcWmgJ0"
   },
   "outputs": [],
   "source": [
    "!pip install cognee==0.1.18"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0587d91d",
   "metadata": {
    "id": "0587d91d"
   },
   "source": [
     "#### Then let's define some data that we will cognify and perform a search on:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df16431d0f48b006",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:48.519686Z",
     "start_time": "2024-09-20T14:02:48.515589Z"
    },
    "id": "df16431d0f48b006"
   },
   "outputs": [],
   "source": [
    "job_position = \"\"\"Senior Data Scientist (Machine Learning)\n",
    "\n",
    "Company: TechNova Solutions\n",
    "Location: San Francisco, CA\n",
    "\n",
    "Job Description:\n",
    "\n",
    "TechNova Solutions is seeking a Senior Data Scientist specializing in Machine Learning to join our dynamic analytics team. The ideal candidate will have a strong background in developing and deploying machine learning models, working with large datasets, and translating complex data into actionable insights.\n",
    "\n",
    "Responsibilities:\n",
    "\n",
    "Develop and implement advanced machine learning algorithms and models.\n",
    "Analyze large, complex datasets to extract meaningful patterns and insights.\n",
    "Collaborate with cross-functional teams to integrate predictive models into products.\n",
    "Stay updated with the latest advancements in machine learning and data science.\n",
    "Mentor junior data scientists and provide technical guidance.\n",
    "Qualifications:\n",
    "\n",
    "Master’s or Ph.D. in Data Science, Computer Science, Statistics, or a related field.\n",
    "5+ years of experience in data science and machine learning.\n",
    "Proficient in Python, R, and SQL.\n",
    "Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).\n",
    "Strong problem-solving skills and attention to detail.\n",
    "Candidate CVs\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9086abf3af077ab4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:49.120838Z",
     "start_time": "2024-09-20T14:02:49.118294Z"
    },
    "id": "9086abf3af077ab4"
   },
   "outputs": [],
   "source": [
    "job_1 = \"\"\"\n",
    "CV 1: Relevant\n",
    "Name: Dr. Emily Carter\n",
    "Contact Information:\n",
    "\n",
    "Email: emily.carter@example.com\n",
    "Phone: (555) 123-4567\n",
    "Summary:\n",
    "\n",
    "Senior Data Scientist with over 8 years of experience in machine learning and predictive analytics. Expertise in developing advanced algorithms and deploying scalable models in production environments.\n",
    "\n",
    "Education:\n",
    "\n",
    "Ph.D. in Computer Science, Stanford University (2014)\n",
    "B.S. in Mathematics, University of California, Berkeley (2010)\n",
    "Experience:\n",
    "\n",
    "Senior Data Scientist, InnovateAI Labs (2016 – Present)\n",
    "Led a team in developing machine learning models for natural language processing applications.\n",
    "Implemented deep learning algorithms that improved prediction accuracy by 25%.\n",
    "Collaborated with cross-functional teams to integrate models into cloud-based platforms.\n",
    "Data Scientist, DataWave Analytics (2014 – 2016)\n",
    "Developed predictive models for customer segmentation and churn analysis.\n",
    "Analyzed large datasets using Hadoop and Spark frameworks.\n",
    "Skills:\n",
    "\n",
    "Programming Languages: Python, R, SQL\n",
    "Machine Learning: TensorFlow, Keras, Scikit-Learn\n",
    "Big Data Technologies: Hadoop, Spark\n",
    "Data Visualization: Tableau, Matplotlib\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a9de0cc07f798b7f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:49.675003Z",
     "start_time": "2024-09-20T14:02:49.671615Z"
    },
    "id": "a9de0cc07f798b7f"
   },
   "outputs": [],
   "source": [
    "job_2 = \"\"\"\n",
    "CV 2: Relevant\n",
    "Name: Michael Rodriguez\n",
    "Contact Information:\n",
    "\n",
    "Email: michael.rodriguez@example.com\n",
    "Phone: (555) 234-5678\n",
    "Summary:\n",
    "\n",
    "Data Scientist with a strong background in machine learning and statistical modeling. Skilled in handling large datasets and translating data into actionable business insights.\n",
    "\n",
    "Education:\n",
    "\n",
    "M.S. in Data Science, Carnegie Mellon University (2013)\n",
    "B.S. in Computer Science, University of Michigan (2011)\n",
    "Experience:\n",
    "\n",
    "Senior Data Scientist, Alpha Analytics (2017 – Present)\n",
    "Developed machine learning models to optimize marketing strategies.\n",
    "Reduced customer acquisition cost by 15% through predictive modeling.\n",
    "Data Scientist, TechInsights (2013 – 2017)\n",
    "Analyzed user behavior data to improve product features.\n",
    "Implemented A/B testing frameworks to evaluate product changes.\n",
    "Skills:\n",
    "\n",
    "Programming Languages: Python, Java, SQL\n",
    "Machine Learning: Scikit-Learn, XGBoost\n",
    "Data Visualization: Seaborn, Plotly\n",
    "Databases: MySQL, MongoDB\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "185ff1c102d06111",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:50.286828Z",
     "start_time": "2024-09-20T14:02:50.284369Z"
    },
    "id": "185ff1c102d06111"
   },
   "outputs": [],
   "source": [
    "job_3 = \"\"\"\n",
    "CV 3: Relevant\n",
    "Name: Sarah Nguyen\n",
    "Contact Information:\n",
    "\n",
    "Email: sarah.nguyen@example.com\n",
    "Phone: (555) 345-6789\n",
    "Summary:\n",
    "\n",
    "Data Scientist specializing in machine learning with 6 years of experience. Passionate about leveraging data to drive business solutions and improve product performance.\n",
    "\n",
    "Education:\n",
    "\n",
    "M.S. in Statistics, University of Washington (2014)\n",
    "B.S. in Applied Mathematics, University of Texas at Austin (2012)\n",
    "Experience:\n",
    "\n",
    "Data Scientist, QuantumTech (2016 – Present)\n",
    "Designed and implemented machine learning algorithms for financial forecasting.\n",
    "Improved model efficiency by 20% through algorithm optimization.\n",
    "Junior Data Scientist, DataCore Solutions (2014 – 2016)\n",
    "Assisted in developing predictive models for supply chain optimization.\n",
    "Conducted data cleaning and preprocessing on large datasets.\n",
    "Skills:\n",
    "\n",
    "Programming Languages: Python, R\n",
    "Machine Learning Frameworks: PyTorch, Scikit-Learn\n",
    "Statistical Analysis: SAS, SPSS\n",
    "Cloud Platforms: AWS, Azure\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d55ce4c58f8efb67",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:50.950343Z",
     "start_time": "2024-09-20T14:02:50.946378Z"
    },
    "id": "d55ce4c58f8efb67"
   },
   "outputs": [],
   "source": [
    "job_4 = \"\"\"\n",
    "CV 4: Not Relevant\n",
    "Name: David Thompson\n",
    "Contact Information:\n",
    "\n",
    "Email: david.thompson@example.com\n",
    "Phone: (555) 456-7890\n",
    "Summary:\n",
    "\n",
    "Creative Graphic Designer with over 8 years of experience in visual design and branding. Proficient in Adobe Creative Suite and passionate about creating compelling visuals.\n",
    "\n",
    "Education:\n",
    "\n",
    "B.F.A. in Graphic Design, Rhode Island School of Design (2012)\n",
    "Experience:\n",
    "\n",
    "Senior Graphic Designer, CreativeWorks Agency (2015 – Present)\n",
    "Led design projects for clients in various industries.\n",
    "Created branding materials that increased client engagement by 30%.\n",
    "Graphic Designer, Visual Innovations (2012 – 2015)\n",
    "Designed marketing collateral, including brochures, logos, and websites.\n",
    "Collaborated with the marketing team to develop cohesive brand strategies.\n",
    "Skills:\n",
    "\n",
    "Design Software: Adobe Photoshop, Illustrator, InDesign\n",
    "Web Design: HTML, CSS\n",
    "Specialties: Branding and Identity, Typography\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca4ecc32721ad332",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:51.548191Z",
     "start_time": "2024-09-20T14:02:51.545520Z"
    },
    "id": "ca4ecc32721ad332"
   },
   "outputs": [],
   "source": [
    "job_5 = \"\"\"\n",
    "CV 5: Not Relevant\n",
    "Name: Jessica Miller\n",
    "Contact Information:\n",
    "\n",
    "Email: jessica.miller@example.com\n",
    "Phone: (555) 567-8901\n",
    "Summary:\n",
    "\n",
    "Experienced Sales Manager with a strong track record in driving sales growth and building high-performing teams. Excellent communication and leadership skills.\n",
    "\n",
    "Education:\n",
    "\n",
    "B.A. in Business Administration, University of Southern California (2010)\n",
    "Experience:\n",
    "\n",
    "Sales Manager, Global Enterprises (2015 – Present)\n",
    "Managed a sales team of 15 members, achieving a 20% increase in annual revenue.\n",
    "Developed sales strategies that expanded customer base by 25%.\n",
    "Sales Representative, Market Leaders Inc. (2010 – 2015)\n",
    "Consistently exceeded sales targets and received the 'Top Salesperson' award in 2013.\n",
    "Skills:\n",
    "\n",
    "Sales Strategy and Planning\n",
    "Team Leadership and Development\n",
    "CRM Software: Salesforce, Zoho\n",
    "Negotiation and Relationship Building\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "onKOiY1ksR30",
   "metadata": {
    "id": "onKOiY1ksR30"
   },
   "source": [
     "#### Please add the necessary environment information below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bce39dc6",
   "metadata": {
    "id": "bce39dc6"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
     "# Setting environment variables\n",
    "os.environ[\"GRAPHISTRY_USERNAME\"] = \"\"\n",
    "os.environ[\"GRAPHISTRY_PASSWORD\"] = \"\"\n",
    "\n",
    "os.environ[\"LLM_API_KEY\"] = \"\"\n",
    "\n",
    "# \"neo4j\" or \"networkx\"\n",
    "os.environ[\"GRAPH_DATABASE_PROVIDER\"] = \"networkx\"\n",
    "# Not needed if using networkx\n",
    "# GRAPH_DATABASE_URL=\"\"\n",
    "# GRAPH_DATABASE_USERNAME=\"\"\n",
    "# GRAPH_DATABASE_PASSWORD=\"\"\n",
    "\n",
    "# \"qdrant\", \"weaviate\" or \"lancedb\"\n",
    "os.environ[\"VECTOR_ENGINE_PROVIDER\"] = \"lancedb\"\n",
    "# Not needed if using \"lancedb\"\n",
    "# os.environ[\"VECTOR_DB_URL\"]=\"\"\n",
    "# os.environ[\"VECTOR_DB_KEY\"]=\"\"\n",
    "\n",
    "# Database provider \"sqlite\" or \"postgres\"\n",
    "os.environ[\"DB_PROVIDER\"] = \"sqlite\"\n",
    "\n",
    "# Database name\n",
    "os.environ[\"DB_NAME\"] = \"cognee_db\"\n",
    "\n",
    "# Postgres specific parameters (Only if Postgres is run)\n",
    "# os.environ[\"DB_HOST\"]=\"127.0.0.1\"\n",
    "# os.environ[\"DB_PORT\"]=\"5432\"\n",
    "# os.environ[\"DB_USERNAME\"]=\"cognee\"\n",
    "# os.environ[\"DB_PASSWORD\"]=\"cognee\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f1a1dbd",
   "metadata": {
    "id": "9f1a1dbd"
   },
   "outputs": [],
   "source": [
    "# Reset the cognee system with the following command:\n",
    "\n",
    "import cognee\n",
    "\n",
    "await cognee.prune.prune_data()\n",
    "await cognee.prune.prune_system(metadata=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "383d6971",
   "metadata": {
    "id": "383d6971"
   },
   "source": [
     "#### After we have defined and gathered our data, let's add it to cognee."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "904df61ba484a8e5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:54.243987Z",
     "start_time": "2024-09-20T14:02:52.498195Z"
    },
    "id": "904df61ba484a8e5"
   },
   "outputs": [],
   "source": [
    "import cognee\n",
    "\n",
    "await cognee.add([job_1, job_2, job_3, job_4, job_5, job_position], \"example\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f15c5b1",
   "metadata": {
    "id": "0f15c5b1"
   },
   "source": [
    "#### All good, let's cognify it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c431fdef4921ae0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:57.925667Z",
     "start_time": "2024-09-20T14:02:57.922353Z"
    },
    "id": "7c431fdef4921ae0"
   },
   "outputs": [],
   "source": [
    "from cognee.shared.data_models import KnowledgeGraph\n",
    "from cognee.modules.data.models import Dataset, Data\n",
    "from cognee.modules.data.methods.get_dataset_data import get_dataset_data\n",
    "from cognee.modules.cognify.config import get_cognify_config\n",
    "from cognee.modules.pipelines.tasks.Task import Task\n",
    "from cognee.modules.pipelines import run_tasks, run_tasks_parallel\n",
    "from cognee.modules.users.models import User\n",
    "from cognee.tasks.summarization import summarize_text\n",
    "from cognee.tasks import (\n",
    "    chunk_remove_disconnected,\n",
    "    infer_data_ontology,\n",
    "    save_chunks_to_store,\n",
    "    chunk_update_check,\n",
    "    chunks_into_graph,\n",
    "    source_documents_to_chunks,\n",
    "    check_permissions_on_documents,\n",
    "    classify_documents,\n",
    "    chunk_naive_llm_classifier,\n",
    ")\n",
    "\n",
    "\n",
    "async def run_cognify_pipeline(dataset: Dataset, user: User = None):\n",
    "    data_documents: list[Data] = await get_dataset_data(dataset_id=dataset.id)\n",
    "\n",
    "    try:\n",
    "        root_node_id = None\n",
    "\n",
    "        cognee_config = get_cognify_config()\n",
    "\n",
    "        tasks = [\n",
    "            Task(classify_documents),\n",
    "            Task(check_permissions_on_documents, user=user, permissions=[\"write\"]),\n",
    "            Task(\n",
    "                infer_data_ontology,\n",
    "                root_node_id=root_node_id,\n",
    "                ontology_model=KnowledgeGraph,\n",
    "            ),\n",
    "            Task(\n",
    "                source_documents_to_chunks, chunk_size=800, parent_node_id=root_node_id\n",
     "            ),  # Classify documents and save them as nodes in the graph db, then extract text chunks based on the document type\n",
    "            Task(\n",
    "                chunks_into_graph,\n",
    "                graph_model=KnowledgeGraph,\n",
    "                collection_name=\"entities\",\n",
    "                task_config={\"batch_size\": 10},\n",
     "            ),  # Generate knowledge graphs from the document chunks and attach them to the chunk nodes\n",
    "            Task(\n",
    "                chunk_update_check, collection_name=\"chunks\"\n",
    "            ),  # Find all affected chunks, so we don't process unchanged chunks\n",
    "            Task(\n",
    "                save_chunks_to_store,\n",
    "                collection_name=\"chunks\",\n",
    "            ),\n",
    "            Task(\n",
    "                summarize_text,\n",
    "                summarization_model=cognee_config.summarization_model,\n",
    "                collection_name=\"summaries\",\n",
    "            ),\n",
    "            Task(\n",
    "                chunk_naive_llm_classifier,\n",
    "                classification_model=cognee_config.classification_model,\n",
    "            ),\n",
    "            Task(chunk_remove_disconnected),  # Remove the obsolete document chunks.\n",
    "        ]\n",
    "\n",
    "        pipeline = run_tasks(tasks, data_documents)\n",
    "\n",
    "        async for result in pipeline:\n",
    "            print(result)\n",
     "    except Exception:\n",
     "        raise"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a91b99c6215e09",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-20T14:02:58.905774Z",
     "start_time": "2024-09-20T14:02:58.625915Z"
    },
    "id": "f0a91b99c6215e09"
   },
   "outputs": [],
   "source": [
    "from cognee.modules.users.methods import get_default_user\n",
    "from cognee.modules.data.methods import get_datasets_by_name\n",
    "\n",
    "user = await get_default_user()\n",
    "\n",
    "datasets = await get_datasets_by_name([\"example\"], user.id)\n",
    "\n",
    "await run_cognify_pipeline(datasets[0], user)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "219a6d41",
   "metadata": {
    "id": "219a6d41"
   },
   "source": [
     "#### The cell below prints a URL to the graph on Graphistry, showing the nodes and connections created by the cognify process."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "080389e5",
   "metadata": {
    "id": "080389e5"
   },
   "outputs": [],
   "source": [
    "import os\n",
    "from cognee.shared.utils import render_graph\n",
    "from cognee.infrastructure.databases.graph import get_graph_engine\n",
    "import graphistry\n",
    "\n",
    "graphistry.login(\n",
    "    username=os.getenv(\"GRAPHISTRY_USERNAME\"), password=os.getenv(\"GRAPHISTRY_PASSWORD\")\n",
    ")\n",
    "\n",
    "graph_engine = await get_graph_engine()\n",
    "\n",
    "graph_url = await render_graph(graph_engine.graph)\n",
    "print(graph_url)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59e6c3c3",
   "metadata": {
    "id": "59e6c3c3"
   },
   "source": [
    "#### We can also do a search on the data to explore the knowledge."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5e7dfc8",
   "metadata": {
    "id": "e5e7dfc8"
   },
   "outputs": [],
   "source": [
    "async def search(\n",
    "    vector_engine,\n",
    "    collection_name: str,\n",
    "    query_text: str = None,\n",
    "):\n",
    "    query_vector = (await vector_engine.embedding_engine.embed_text([query_text]))[0]\n",
    "\n",
    "    connection = await vector_engine.get_connection()\n",
    "    collection = await connection.open_table(collection_name)\n",
    "\n",
    "    results = await collection.vector_search(query_vector).limit(10).to_pandas()\n",
    "\n",
    "    result_values = list(results.to_dict(\"index\").values())\n",
    "\n",
    "    return [\n",
    "        dict(\n",
    "            id=str(result[\"id\"]),\n",
    "            payload=result[\"payload\"],\n",
    "            score=result[\"_distance\"],\n",
    "        )\n",
    "        for result in result_values\n",
    "    ]\n",
    "\n",
    "\n",
    "from cognee.infrastructure.databases.vector import get_vector_engine\n",
    "\n",
    "vector_engine = get_vector_engine()\n",
    "results = await search(vector_engine, \"entities\", \"sarah.nguyen@example.com\")\n",
    "for result in results:\n",
    "    print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "F4s9pJyqhgtP",
   "metadata": {
    "id": "F4s9pJyqhgtP"
   },
   "source": [
     "#### Search scores are normalized so that a lower score means a higher chance that the result is what you're looking for. In the example above, we searched for entity nodes in the knowledge graph related to \"sarah.nguyen@example.com\"."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "v3KZN1J38g9c",
   "metadata": {
    "id": "v3KZN1J38g9c"
   },
   "source": [
     "#### In the example below, we'll use cognee search to summarize information about the node most closely related to \"sarah.nguyen@example.com\" in the knowledge graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "o9Cdt1IF8jjH",
   "metadata": {
    "id": "o9Cdt1IF8jjH"
   },
   "outputs": [],
   "source": [
    "from cognee.api.v1.search import SearchType\n",
    "\n",
    "node = (await vector_engine.search(\"entities\", \"sarah.nguyen@example.com\"))[0]\n",
    "node_name = node.payload[\"name\"]\n",
    "\n",
    "search_results = await cognee.search(SearchType.SUMMARIES, query=node_name)\n",
    "print(\"\\n\\nExtracted summaries are:\\n\")\n",
    "for result in search_results:\n",
    "    print(f\"{result}\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "JiqylD3K8n3_",
   "metadata": {
    "id": "JiqylD3K8n3_"
   },
   "source": [
     "#### In this example, we'll use cognee search to find the chunks that the node most closely related to \"sarah.nguyen@example.com\" is part of."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "j54MkQxQ8nBg",
   "metadata": {
    "id": "j54MkQxQ8nBg"
   },
   "outputs": [],
   "source": [
    "search_results = await cognee.search(SearchType.CHUNKS, query=node_name)\n",
    "print(\"\\n\\nExtracted chunks are:\\n\")\n",
    "for result in search_results:\n",
    "    print(f\"{result}\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "zBeVLjubFrOI",
   "metadata": {
    "id": "zBeVLjubFrOI"
   },
   "source": [
     "#### In this example, we'll use cognee search to extract insights from the knowledge graph about the node most closely related to \"sarah.nguyen@example.com\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0FSaecZ-FrzF",
   "metadata": {
    "id": "0FSaecZ-FrzF"
   },
   "outputs": [],
   "source": [
    "search_results = await cognee.search(SearchType.INSIGHTS, query=node_name)\n",
    "print(\"\\n\\nExtracted insights are:\\n\")\n",
    "for result in search_results:\n",
    "    print(f\"{result}\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4W1W_Om880Db",
   "metadata": {
    "id": "4W1W_Om880Db"
   },
   "source": [
     "#### Below is a diagram of the cognee process for the data used in this example notebook."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2gpysOFT816c",
   "metadata": {
    "id": "2gpysOFT816c"
   },
   "source": [
    "![cognee_final.drawio.png]()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "288ab570",
   "metadata": {
    "id": "288ab570"
   },
   "source": [
    "## Give us a star if you like it!\n",
    "https://github.com/topoteretes/cognee"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "cognee-bGi0WgSG-py3.9",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
