{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Tensorlake + Qdrant + LangGraph RAG Demo with Academic Research Papers\n",
        "\n",
        "Learn more about Qdrant and Tensorlake on the [Tensorlake docs](https://tlake.link/qdrant-tensorlake).\n",
        "\n",
        "Prefer a video walkthrough? Check out this [YouTube tutorial](https://www.youtube.com/watch?v=Segv3wI1PdM)."
      ],
      "metadata": {
        "id": "q2_s2Pr2NPsX"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Setup and Dependencies"
      ],
      "metadata": {
        "id": "1WNKUPJfD1pV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install tensorlake qdrant-client sentence-transformers pandas numpy langgraph langsmith langchain-openai"
      ],
      "metadata": {
        "id": "rTvDxOvONSLn"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "%env TENSORLAKE_API_KEY=YOUR_TENSORLAKE_API_KEY\n",
        "%env QDRANT_API_KEY=YOUR_QDRANT_API_KEY\n",
        "%env QDRANT_DATABASE_URL=YOUR_QDRANT_DATABASE_URL\n",
        "%env OPENAI_API_KEY=YOUR_OPENAI_API_KEY"
      ],
      "metadata": {
        "id": "rS35dp1PwpHd"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# TensorLake DocAI setup\n",
        "from tensorlake.documentai import (\n",
        "    DocumentAI,\n",
        "    EnrichmentOptions,\n",
        "    ParsingOptions,\n",
        "    StructuredExtractionOptions,\n",
        "    ChunkingStrategy,\n",
        "    TableOutputMode,\n",
        "    TableParsingFormat,\n",
        "    ParseStatus,\n",
        ")\n",
        "\n",
        "# Qdrant client setup\n",
        "from qdrant_client import QdrantClient\n",
        "from qdrant_client.http import models\n",
        "from qdrant_client.http.models import Filter, FieldCondition, MatchValue, MatchText\n",
        "\n",
        "# LangGraph agent setup\n",
        "from langgraph.prebuilt import create_react_agent\n",
        "\n",
        "# Helper packages\n",
        "from pydantic import BaseModel, Field\n",
        "from sentence_transformers import SentenceTransformer\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "from typing import List, Dict, Any\n",
        "import re\n",
        "from uuid import uuid4\n",
        "import time\n",
        "import os"
      ],
      "metadata": {
        "id": "12h7ju5W_abo"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Parse all of the documents with Tensorlake\n",
        "1. Set up your Tensorlake client\n",
        "2. Create two lists to store structured data and chunks\n",
        "3. Create a list of the file URLs"
      ],
      "metadata": {
        "id": "X987seOp_wS7"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Initialize the Tensorlake DocAI Client"
      ],
      "metadata": {
        "id": "tFirbtzdxWro"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "doc_ai = DocumentAI(api_key=os.getenv('TENSORLAKE_API_KEY'))"
      ],
      "metadata": {
        "id": "tFexJk6wIIKf"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "all_structured_data = []\n",
        "all_chunks = []\n",
        "\n",
        "files = [\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/CHI_13.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/CSCW_14_1.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/CSCW_14_2.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/CSCW_14_3.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/ICER_11.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/ICER_12_2.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/ICER_13.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/ITICSE_13.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/Koli_14.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/SIGCSE_13.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/SarahEsper_ResearchExam.pdf\",\n",
        "    \"https://pub-226479de18b2493f96b64c6674705dd8.r2.dev/UCSDTechReport_11.pdf\"\n",
        "]"
      ],
      "metadata": {
        "id": "KjCySzEC_pht"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Define your JSON Schema for Structured Data Extraction"
      ],
      "metadata": {
        "id": "Lu5Vx-JjA7Q5"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "class Author(BaseModel):\n",
        "    \"\"\"Author information for a research paper\"\"\"\n",
        "    name: str = Field(description=\"Full name of the author\")\n",
        "    affiliation: str = Field(description=\"Institution or organization affiliation\")\n",
        "\n",
        "class Conference(BaseModel):\n",
        "    \"\"\"Conference or journal information\"\"\"\n",
        "    name: str = Field(description=\"Name of the conference or journal\")\n",
        "    year: str = Field(description=\"Year of publication\")\n",
        "    location: str = Field(description=\"Location of the conference or journal publication\")\n",
        "\n",
        "class Reference(BaseModel):\n",
        "    \"\"\"Reference to another publication\"\"\"\n",
        "    author_names: List[str] = Field(description=\"List of author names for this reference\")\n",
        "    title: str = Field(description=\"Title of the referenced publication\")\n",
        "    publication: str = Field(description=\"Name of the publication venue (journal, conference, etc.)\")\n",
        "    year: str = Field(description=\"Year of publication\")\n",
        "\n",
        "class ResearchPaper(BaseModel):\n",
        "    \"\"\"Complete schema for extracting research paper information\"\"\"\n",
        "    authors: List[Author] = Field(description=\"List of authors with their affiliations. Authors will be listed below the title and above the main text of the paper. Authors will often be in multiple columns and there may be multiple authors associated to a single affiliation.\")\n",
        "    conference_journal: Conference = Field(description=\"Conference or journal information\")\n",
        "    title: str = Field(description=\"Title of the research paper\")\n",
        "    abstract: str = Field(description=\"Abstract or summary of the paper\")\n",
        "    keywords: List[str] = Field(description=\"List of keywords associated with the paper\")\n",
        "    acm_classification: str = Field(description=\"ACM classification code or category\")\n",
        "    general_terms: List[str] = Field(description=\"List of general terms or categories\")\n",
        "    acknowledgments: str = Field(description=\"Acknowledgments section\")\n",
        "    references: List[Reference] = Field(description=\"List of references cited in the paper\")\n",
        "\n",
        "# Convert to JSON schema for Tensorlake\n",
        "json_schema = ResearchPaper.model_json_schema()"
      ],
      "metadata": {
        "id": "cT-FdWaFA49B"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Define a function for extracting data for each research paper"
      ],
      "metadata": {
        "id": "JU9m4a8NA_KG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def process_research_paper(file_url):\n",
        "    doc_structured_data = []\n",
        "    doc_chunks = []\n",
        "\n",
        "    # Configure parsing options\n",
        "    parsing_options = ParsingOptions(\n",
        "        chunking_strategy=ChunkingStrategy.SECTION,\n",
        "        table_parsing_strategy=TableParsingFormat.TSR,\n",
        "        table_output_mode=TableOutputMode.MARKDOWN,\n",
        "    )\n",
        "    # Create structured extraction options with the JSON schema\n",
        "    structured_extraction_options = [StructuredExtractionOptions(\n",
        "        schema_name=\"ResearchPaper\",\n",
        "        json_schema=json_schema,\n",
        "    )]\n",
        "    # Create enrichment options\n",
        "    enrichment_options = EnrichmentOptions(\n",
        "        figure_summarization=True,\n",
        "        figure_summarization_prompt=\"Summarize the figure beyond the caption by describing the data as it relates to the context of the research paper.\",\n",
        "        table_summarization=True,\n",
        "        table_summarization_prompt=\"Summarize the table beyond the caption by describing the data as it relates to the context of the research paper.\",\n",
        "    )\n",
        "\n",
        "    # Parse the document\n",
        "    parse_id = doc_ai.parse(file_url, parsing_options, structured_extraction_options, enrichment_options)\n",
        "    print(f\"Started parsing job: {parse_id} for document {file_url}\")\n",
        "    result = doc_ai.wait_for_completion(parse_id)\n",
        "\n",
        "    if result:\n",
        "        print(f\"Job {parse_id} completed successfully for {file_url}\")\n",
        "        if result.structured_data:\n",
        "            print(f\"Extracted {len(result.structured_data)} structured data items\")\n",
        "        if result.chunks:\n",
        "            print(f\"Extracted {len(result.chunks)} chunks\")\n",
        "\n",
        "    # Extract structured data and chunks from the result\n",
        "    if result and result.structured_data:\n",
        "        # Add metadata to structured data\n",
        "        structured_data = result.structured_data\n",
        "        doc_structured_data.append(structured_data)\n",
        "        print(f\"Extracted structured data for {file_url}\")\n",
        "    else:\n",
        "        print(f\"No structured data found for {file_url}\")\n",
        "\n",
        "    if result and result.chunks:\n",
        "        # Process document chunks\n",
        "        chunks = result.chunks\n",
        "        doc_chunks.extend(chunks)\n",
        "        print(f\"Extracted {len(chunks)} chunks for {file_url}\")\n",
        "    else:\n",
        "        print(f\"No chunks found for {file_url}\")\n",
        "\n",
        "    print(f\"Processed {file_url}\")\n",
        "\n",
        "    # Return structured data and chunks\n",
        "    return doc_structured_data, doc_chunks"
      ],
      "metadata": {
        "id": "hgG0FVYMBIw6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Parse each of the files"
      ],
      "metadata": {
        "id": "zYGj0uBwB0KA"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "for file in files:\n",
        "  print(f\"Processing file: {file}\")\n",
        "\n",
        "  # Process the research paper\n",
        "  structured_data, chunks = process_research_paper(file)\n",
        "\n",
        "  # Store results\n",
        "  all_structured_data.append(structured_data)\n",
        "  all_chunks.append(chunks)"
      ],
      "metadata": {
        "id": "YRwvmWkoBssc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Upload the points to Qdrant"
      ],
      "metadata": {
        "id": "sEW9vPteCUCl"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Initialize the Qdrant Client"
      ],
      "metadata": {
        "id": "D74VSTLMB7Et"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Initialize Qdrant client (Cloud version)\n",
        "qdrant_client = QdrantClient(\n",
        "    url=os.getenv('QDRANT_DATABASE_URL'),\n",
        "    api_key=os.getenv('QDRANT_API_KEY')\n",
        ")\n",
        "\n",
        "# Initialize sentence transformer for embeddings\n",
        "model = SentenceTransformer('all-MiniLM-L6-v2')"
      ],
      "metadata": {
        "id": "-du0dUzmCAl8"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Step 1: Create the collection if it doesn't exist"
      ],
      "metadata": {
        "id": "5WuiVXUXChmV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create the collection if it doesn't exist\n",
        "collection_name = \"research_paper_example\"\n",
        "if not qdrant_client.collection_exists(collection_name=collection_name):\n",
        "    qdrant_client.create_collection(\n",
        "        collection_name=collection_name,\n",
        "        vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE)\n",
        "    )"
      ],
      "metadata": {
        "id": "7vMU-m5SCcNz"
      },
      "execution_count": null,
      "outputs": []
    },
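    {
      "cell_type": "code",
      "source": [
        "# Optional sanity check (a sketch relying on qdrant-client's get_collection API):\n",
        "# confirm the collection exists and that its vector size matches the 384-dimensional\n",
        "# embeddings produced by all-MiniLM-L6-v2.\n",
        "info = qdrant_client.get_collection(collection_name=collection_name)\n",
        "print(f\"Vector size: {info.config.params.vectors.size}, distance: {info.config.params.vectors.distance}\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },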
    {
      "cell_type": "markdown",
      "source": [
        "### Step 2: Create the Embeddings and Payloads, then Upsert to Qdrant\n",
        "From Tensorlake's structured data output, create a payload to associate with each chunk of each document."
      ],
      "metadata": {
        "id": "kFu76igPCkp9"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Flatten all chunks and match them with their corresponding structured data\n",
        "all_points = []\n",
        "\n",
        "for doc_idx, (structured_data_list, chunks) in enumerate(zip(all_structured_data, all_chunks)):\n",
        "    if not chunks:\n",
        "        print(f\"No chunks found for document {doc_idx}\")\n",
        "        continue\n",
        "\n",
        "    # Get the structured data for this document (assuming first item in list)\n",
        "    structured_data = structured_data_list[0][0] if structured_data_list else None\n",
        "\n",
        "    # Extract metadata from structured data\n",
        "    authors = []\n",
        "    author_names = []  # For searchable text field\n",
        "    references = []\n",
        "    conference_name = \"\"\n",
        "    conference_year = \"\"\n",
        "    conference_location = \"\"\n",
        "    title = \"\"\n",
        "    keywords = []\n",
        "\n",
        "    if structured_data:\n",
        "        print(\"Found structured data\")\n",
        "        # Extract author information\n",
        "        if 'authors' in structured_data.data:\n",
        "            print(f\"Extracting {len(structured_data.data['authors'])} authors\")\n",
        "            for author in structured_data.data['authors']:\n",
        "                print(f\"Processing author: {author}\")\n",
        "                if isinstance(author, dict):\n",
        "                    author_name = author.get('name', '')\n",
        "                    author_affiliation = author.get('affiliation', '')\n",
        "                    authors.append(f\"{author_name} ({author_affiliation})\")\n",
        "                    author_names.append(author_name)  # For searchable text field\n",
        "\n",
        "        # Extract conference information\n",
        "        if 'conference_journal' in structured_data.data:\n",
        "            print(\"Extracting conference information\")\n",
        "            conf = structured_data.data['conference_journal']\n",
        "            print(f\"Processing conference: {conf}\")\n",
        "            if isinstance(conf, dict):\n",
        "                conference_name = conf.get('name', '')\n",
        "                conference_year = conf.get('year', '')\n",
        "                conference_location = conf.get('location', '')\n",
        "                print(f\"Conference: {conference_name} ({conference_year}) at {conference_location}\")\n",
        "\n",
        "        # Extract other metadata\n",
        "        title = structured_data.data.get('title', '')\n",
        "        print(f\"Title: {title}\")\n",
        "        keywords = structured_data.data.get('keywords', [])\n",
        "        print(f\"Keywords: {keywords}\")\n",
        "\n",
        "        # Extract references\n",
        "        if 'references' in structured_data.data:\n",
        "            print(f\"Extracting {len(structured_data.data['references'])} references\")\n",
        "            for ref in structured_data.data['references']:\n",
        "                if isinstance(ref, dict):\n",
        "                    ref_authors = ref.get('author_names', [])\n",
        "                    ref_title = ref.get('title', '')\n",
        "                    ref_publication = ref.get('publication', '')\n",
        "                    ref_year = ref.get('year', '')\n",
        "                    references.append({\n",
        "                        \"authors\": ref_authors,\n",
        "                        \"title\": ref_title,\n",
        "                        \"publication\": ref_publication,\n",
        "                        \"year\": ref_year\n",
        "                    })\n",
        "\n",
        "    # Extract the markdown chunks and table and figure summaries for this document\n",
        "    texts = [chunk.content for chunk in chunks]\n",
        "\n",
        "    # Create embeddings for all of the chunks and summaries\n",
        "    vectors = model.encode(texts).tolist()\n",
        "\n",
        "    for i, data in enumerate(texts):\n",
        "        # Enhanced payload with structured data\n",
        "        payload = {\n",
        "            \"content\": data,\n",
        "            \"document_index\": doc_idx,\n",
        "            # Structured data fields for filtering\n",
        "            \"title\": title,\n",
        "            \"authors\": authors,  # List of \"Name (Affiliation)\" strings\n",
        "            \"author_names\": author_names,  # List of just names for easier filtering\n",
        "            \"conference_name\": conference_name,\n",
        "            \"conference_year\": conference_year,\n",
        "            \"conference_location\": conference_location,\n",
        "            \"keywords\": keywords,\n",
        "            \"references\": references,  # List of reference dicts\n",
        "            # Create searchable text fields\n",
        "            \"authors_text\": \" \".join(author_names),  # For author search (just names)\n",
        "            \"authors_full\": \" \".join(authors),  # Full author info with affiliations\n",
        "            \"conference_text\": f\"{conference_name} {conference_year}\",  # For conference search\n",
        "        }\n",
        "\n",
        "        all_points.append(models.PointStruct(\n",
        "            id=str(uuid4()),  # Unique ID\n",
        "            vector=vectors[i],\n",
        "            payload=payload\n",
        "        ))\n",
        "\n",
        "if not all_points:\n",
        "    raise ValueError(\"No points to upload. Ensure your parsing worked and chunks were generated.\")"
      ],
      "metadata": {
        "id": "ROtMjSQ1Clut"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Upsert into Qdrant\n",
        "qdrant_client.upsert(collection_name=collection_name, points=all_points)\n",
        "print(f\"Inserted {len(all_points)} chunks into Qdrant with enhanced metadata\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "ITqWCBAwtycz",
        "outputId": "9d382672-3e23-4970-9a6d-9c6631bf4f75"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Inserted 566 chunks into Qdrant with enhanced metadata\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Step 3: Create Qdrant payload indexes for filtering on the extracted structured data"
      ],
      "metadata": {
        "id": "EHJs_pKWDJS2"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create full-text index for author search\n",
        "qdrant_client.create_payload_index(\n",
        "    collection_name=collection_name,\n",
        "    field_name=\"authors_text\",\n",
        "    field_schema=\"text\",\n",
        ")\n",
        "\n",
        "# Create index for conference names\n",
        "qdrant_client.create_payload_index(\n",
        "    collection_name=collection_name,\n",
        "    field_name=\"conference_name\",\n",
        "    field_schema=\"keyword\",\n",
        ")\n",
        "\n",
        "# Create index for conference years\n",
        "qdrant_client.create_payload_index(\n",
        "    collection_name=collection_name,\n",
        "    field_name=\"conference_year\",\n",
        "    field_schema=\"keyword\",\n",
        ")\n",
        "\n",
        "# Create index for author names\n",
        "qdrant_client.create_payload_index(\n",
        "    collection_name=collection_name,\n",
        "    field_name=\"author_names\",\n",
        "    field_schema=\"keyword\",\n",
        ")\n",
        "\n",
        "# Create index for keywords\n",
        "qdrant_client.create_payload_index(\n",
        "    collection_name=collection_name,\n",
        "    field_name=\"keywords\",\n",
        "    field_schema=\"keyword\",\n",
        ")\n",
        "\n",
        "# Create index for paper titles\n",
        "qdrant_client.create_payload_index(\n",
        "    collection_name=collection_name,\n",
        "    field_name=\"title\",\n",
        "    field_schema=\"keyword\",\n",
        ")\n",
        "\n",
        "print(f\"Created indices for {collection_name} collection\")"
      ],
      "metadata": {
        "id": "trOpVnWODRHe"
      },
      "execution_count": null,
      "outputs": []
    },
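    {
      "cell_type": "code",
      "source": [
        "# Optional check (a sketch relying on qdrant-client's get_collection API): list the\n",
        "# payload indexes just created and their schema types.\n",
        "payload_schema = qdrant_client.get_collection(collection_name=collection_name).payload_schema\n",
        "for field, schema in payload_schema.items():\n",
        "    print(f\"{field} -> {schema.data_type}\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },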
    {
      "cell_type": "markdown",
      "source": [
        "## Query and filter your Qdrant collection"
      ],
      "metadata": {
        "id": "NcMzRe6iDVll"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Search the Qdrant collection with a query"
      ],
      "metadata": {
        "id": "UaVLvLoaFAFE"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "points = qdrant_client.query_points(\n",
        "    collection_name=collection_name,\n",
        "    query=model.encode(\"Does computer science education improve problem solving skills?\").tolist(),\n",
        "    limit=3,\n",
        ").points\n",
        "\n",
        "for point in points:\n",
        "    print(point.payload.get('title', 'Unknown'), \"score:\", point.score)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "EXnzPX7QKxou",
        "outputId": "58f3ca32-07f9-45a5-bc09-6a68ac377738"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CodeSpells: Bridging Educational Language Features with Industry-Standard Languages score: 0.57552844\n",
            "CHILDREN'S PERCEPTIONS OF WHAT COUNTS AS A PROGRAMMING LANGUAGE score: 0.55624765\n",
            "Experience Report: an AP CS Principles University Pilot score: 0.54369175\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Filter the Qdrant collection\n"
      ],
      "metadata": {
        "id": "AdSO8piSFsMD"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "search_results = qdrant_client.query_points(\n",
        "    collection_name=collection_name,\n",
        "    query_filter=models.Filter(\n",
        "        must=[\n",
        "            models.FieldCondition(\n",
        "                key=\"author_names\",\n",
        "                match=models.MatchValue(\n",
        "                    value=\"William G. Griswold\",\n",
        "                ),\n",
        "            )\n",
        "        ]\n",
        "    ),\n",
        "    search_params=models.SearchParams(exact=False),\n",
        "    limit=3,\n",
        ")\n",
        "points = search_results.points\n",
        "\n",
        "print(f\"Found {len(points)} results:\")\n",
        "\n",
        "for point in points:\n",
        "    print(f\" - {point.payload.get('title', 'Unknown')} | {point.payload.get('authors_text', 'Unknown')}\")"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "sNEMh1MyDZMa",
        "outputId": "ef116b86-5b6e-46d6-fe47-eb455f053611"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Found 3 results:\n",
            " - CodeSpells: Embodying the Metaphor of Wizardry for Programming | Sarah Esper Stephen R. Foster William G. Griswold\n",
            " - CODESPELLS: HOW TO DESIGN QUESTS TO TEACH JAVA CONCEPTS * | Sarah Esper Samantha R. Wood Stephen R. Foster Sorin Lerner William G. Griswold\n",
            " - CodeSpells: Bridging Educational Language Features with Industry-Standard Languages | Sarah Esper Stephen R. Foster William G. Griswold Carlos Herrera Wyatt Snyder\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Filter, then search the Qdrant collection"
      ],
      "metadata": {
        "id": "Ij7U4uTSFjD8"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "points = qdrant_client.query_points(\n",
        "    collection_name=collection_name,\n",
        "    query=model.encode(\"Does computer science education improve problem solving skills?\").tolist(),\n",
        "    query_filter=models.Filter(\n",
        "        must=[\n",
        "            models.FieldCondition(\n",
        "                key=\"author_names\",\n",
        "                match=models.MatchValue(\n",
        "                    value=\"William G. Griswold\",\n",
        "                ),\n",
        "            )\n",
        "        ]\n",
        "    ),\n",
        "    limit=3,\n",
        ").points\n",
        "\n",
        "for point in points:\n",
        "    print(point.payload.get('title', 'Unknown'), point.payload.get('conference_name', 'Unknown'), \"score:\", point.score)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wUEflCmGLF0y",
        "outputId": "68e49e85-dbb0-446c-c7db-eae27095f9ed"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "CodeSpells: Bridging Educational Language Features with Industry-Standard Languages Koli Calling '14 score: 0.57552844\n",
            "CODESPELLS: HOW TO DESIGN QUESTS TO TEACH JAVA CONCEPTS Consortium for Computing Sciences in Colleges score: 0.4907498\n",
            "CodeSpells: Bridging Educational Language Features with Industry-Standard Languages Koli Calling '14 score: 0.4823265\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Integrate with a LangGraph Agent"
      ],
      "metadata": {
        "id": "KzxDakse7hV0"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Create simple tools that will query Qdrant"
      ],
      "metadata": {
        "id": "dvg0pJN1DE77"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "This tool will query without filtering"
      ],
      "metadata": {
        "id": "rsZwOjBiDLM0"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def query_qdrant(question):\n",
        "  \"\"\"Query the Qdrant vector database with the question and return relevant markdown chunks, along with metadata about where each chunk was found: the paper's title and authors, the conference it was published at, and the year it was published.\"\"\"\n",
        "  print(f\"Asking {question}\")\n",
        "  search_results = qdrant_client.query_points(\n",
        "      collection_name=collection_name,\n",
        "      query=model.encode(question).tolist(),\n",
        "      limit=3,\n",
        "  )\n",
        "  print(f\"Found {len(search_results.points)} results:\")\n",
        "  for point in search_results.points:\n",
        "    print(f\" - {point.payload.get('title', 'Unknown')} (Score: {point.score:.4f})\")\n",
        "  return search_results.points"
      ],
      "metadata": {
        "id": "6DcSmvXh8Vgr"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "This tool will first filter, then query"
      ],
      "metadata": {
        "id": "CLCRKsZoDP79"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def filtered_qdrant_query(question, filter_field, filter_value):\n",
        "  \"\"\"Filter the Qdrant vector database on filter_field and filter_value, then query with the question and return relevant markdown chunks, along with metadata about where each chunk was found: the paper's title and authors, the conference it was published at, and the year it was published. If the question mentions a person's name, assume it is an author name. If it mentions a conference where papers are published, assume it is a conference name. If it mentions a year, assume it is the conference year. filter_field must be one of: author_names, title, conference_name, conference_year, or keywords.\"\"\"\n",
        "  print(f\"Asking {question} by first filtering {filter_field} by {filter_value}\")\n",
        "  search_results = qdrant_client.query_points(\n",
        "      collection_name=collection_name,\n",
        "      query_filter=models.Filter(\n",
        "          must=[\n",
        "              models.FieldCondition(\n",
        "                  key=filter_field,\n",
        "                  match=models.MatchValue(\n",
        "                      value=filter_value,\n",
        "                  ),\n",
        "              )\n",
        "          ]\n",
        "      ),\n",
        "      query=model.encode(question).tolist(),\n",
        "      search_params=models.SearchParams(exact=False),\n",
        "      limit=3,\n",
        "  )\n",
        "  print(f\"Found {len(search_results.points)} results:\")\n",
        "  for point in search_results.points:\n",
        "    print(f\" - {point.payload.get('title', 'Unknown')} (Score: {point.score:.4f})\")\n",
        "  return search_results.points"
      ],
      "metadata": {
        "id": "wQOuA8CU-dAg"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Create the LangGraph agent with the tools"
      ],
      "metadata": {
        "id": "Z_V4eOX3F1uo"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "agent = create_react_agent(\n",
        "    model=\"openai:gpt-4o-mini\",\n",
        "    tools=[query_qdrant, filtered_qdrant_query],\n",
        "    # A static prompt that never changes\n",
        "    prompt=\"Answer the question asked using the data retrieved from either the query_qdrant or filtered_qdrant_query tool. In your response, always include metadata of the research paper where the information was found. The metadata will be available in the data from the tool.\"\n",
        ")"
      ],
      "metadata": {
        "id": "Fp4fNGJK195t"
      },
      "execution_count": null,
      "outputs": []
    },
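    {
      "cell_type": "markdown",
      "source": [
        "`create_react_agent` leaves the routing decision to the model: for each question, the LLM decides whether to call the plain `query_qdrant` search or `filtered_qdrant_query` with a filter it infers from the question. The cell below is a toy, stdlib-only sketch of that routing decision, with a hypothetical keyword heuristic (`choose_tool`) standing in for the LLM's tool choice; it is not how LangGraph actually selects tools:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import re\n",
        "\n",
        "def choose_tool(question, known_authors):\n",
        "  # Stand-in for the LLM's tool selection: route to the filtered tool\n",
        "  # when the question names a known author or a four-digit year.\n",
        "  for author in known_authors:\n",
        "    if author.lower() in question.lower():\n",
        "      return (\"filtered_qdrant_query\", \"author_names\", author)\n",
        "  year = re.search(r\"\\b(19|20)\\d{2}\\b\", question)\n",
        "  if year:\n",
        "    return (\"filtered_qdrant_query\", \"conference_year\", int(year.group()))\n",
        "  return (\"query_qdrant\", None, None)\n",
        "\n",
        "authors = [\"William G. Griswold\", \"Sarah Esper\"]\n",
        "print(choose_tool(\"What did William G. Griswold publish?\", authors))\n",
        "print(choose_tool(\"Key findings in papers from 2013?\", authors))\n",
        "print(choose_tool(\"Does CS education improve problem solving?\", authors))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },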
    {
      "cell_type": "markdown",
      "source": [
        "## Ask the agent questions"
      ],
      "metadata": {
        "id": "gNhAVB74754s"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "question = \"Does computer science education improve problem solving skills?\"\n",
        "\n",
        "result = agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": question}]})\n",
        "\n",
        "print(result[\"messages\"][-1].content)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "p3pSd61pBNUv",
        "outputId": "77ef21dd-85e9-48dc-8da7-6a00b352d226"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Asking Does computer science education improve problem solving skills?\n",
            "Found 3 results:\n",
            " - CodeSpells: Bridging Educational Language Features with Industry-Standard Languages (Score: 0.5755)\n",
            " - CHILDREN'S PERCEPTIONS OF WHAT COUNTS AS A PROGRAMMING LANGUAGE (Score: 0.5562)\n",
            " - Experience Report: an AP CS Principles University Pilot (Score: 0.5437)\n",
            "Computer science education has been shown to improve problem-solving skills, particularly through structured programs and innovative teaching methods. For instance, there are studies emphasizing the incorporation of programming languages such as Scratch and Java in educational curricula, which encourage problem-solving abilities among students. The introduction of educational programming environments allows students to engage in computational thinking and improve their ability to analyze problems and create solutions.\n",
            "\n",
            "Here are some insights drawn from relevant academic papers:\n",
            "\n",
            "1. In the paper titled **\"CodeSpells: Bridging Educational Language Features with Industry-Standard Languages\"** by Sarah Esper and colleagues (2014), it reports on efforts to motivate middle school students and bridge educational gaps through structured programming courses, enhancing students' engagement and skills in problem solving.  \n",
            "   - **Conference:** Koli Calling '14  \n",
            "   - **Authors:** Sarah Esper, Stephen R. Foster, William G. Griswold, Carlos Herrera, Wyatt Snyder\n",
            "\n",
            "2. Another paper, **\"CHILDREN'S PERCEPTIONS OF WHAT COUNTS AS A PROGRAMMING LANGUAGE,\"** conducted a study among sixth-grade students, examining their understanding of programming languages and their related problem-solving concepts. This study highlighted the importance of structured intervention in understanding programming and problem-solving connections.  \n",
            "   - **Authors:** Colleen Lewis, Sarah Esper, Victor Bhattacharyya, Noelle Fa-Kaji, Neftali Dominguez, Arielle Schlesinger  \n",
            "   - **Conference:** JCSC 29, 4 (April 2014)\n",
            "\n",
            "3. In **\"Experience Report: an AP CS Principles University Pilot,\"** Beth Simon and her co-authors developed a curriculum focusing on computational concepts, emphasizing problem-solving approaches that involve programming within a contextualized framework.  \n",
            "   - **Authors:** Beth Simon, Sarah Esper, Quintin Cutts\n",
            "\n",
            "These studies collectively indicate that computer science education, particularly through hands-on experience and tailored instructional methods, positively affects the development of problem-solving skills in students.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "question = \"Does William G. Griswold think computer science education improve problem solving skills?\"\n",
        "\n",
        "result = agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": question}]})\n",
        "\n",
        "print(result[\"messages\"][-1].content)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wl-PAscSwi-Q",
        "outputId": "bf6d55f6-15f6-4392-bed5-3ec4e25dfed5"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Asking Does William G. Griswold think computer science education improve problem solving skills? by first filtering author_names by William G. Griswold\n",
            "Found 3 results:\n",
            " - CodeSpells: Bridging Educational Language Features with Industry-Standard Languages (Score: 0.5294)\n",
            " - CodeSpells: Bridging Educational Language Features with Industry-Standard Languages (Score: 0.5065)\n",
            " - On the Nature of Fires and How to Spark Them When You’re Not There (Score: 0.5038)\n",
            "William G. Griswold, in collaboration with other researchers, has presented research indicating that computer science education can significantly influence not only programming skills but also broader problem-solving abilities. In the paper titled \"CodeSpells: Bridging Educational Language Features with Industry-Standard Languages,\" they discuss an educational initiative aimed at engaging students in programming and altering their perception of what it means to be a computer scientist. Specifically, the curriculum developed and tested in their study was intended to cultivate a fundamental understanding of programming that ultimately aids in enhancing problem-solving skills. \n",
            "\n",
            "Here are the relevant details from the research paper:\n",
            "\n",
            "- **Title**: CodeSpells: Bridging Educational Language Features with Industry-Standard Languages\n",
            "- **Authors**: Sarah Esper, Stephen R. Foster, William G. Griswold, Carlos Herrera, Wyatt Snyder\n",
            "- **Conference Name**: Koli Calling\n",
            "- **Conference Year**: 2014\n",
            "- **Location**: Koli, Finland\n",
            "\n",
            "This paper presents evidence that involvement in computer science education can lead to the development of critical skills, including problem-solving. \n",
            "\n",
            "For a deeper dive into this subject, you can refer to the full paper for detailed methodologies and findings.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "question = \"What are the key findings in papers published in 2013?\"\n",
        "\n",
        "result = agent.invoke({\"messages\": [{\"role\": \"user\", \"content\": question}]})\n",
        "\n",
        "print(result[\"messages\"][-1].content)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "lgujQDeeBZUP",
        "outputId": "7079780a-a637-42ff-dc8f-6b063b03e711"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Asking key findings by first filtering conference_year by 2013\n",
            "Found 3 results:\n",
            " - On the Nature of Fires and How to Spark Them When You’re Not There (Score: 0.3649)\n",
            " - CodeSpells: Embodying the Metaphor of Wizardry for Programming (Score: 0.3294)\n",
            " - From Competition to Metacognition: Designing Diverse, Sustainable Educational Games (Score: 0.3256)\n",
            "Here are some key findings from papers published in 2013:\n",
            "\n",
            "1. **Title:** On the Nature of Fires and How to Spark Them When You’re Not There\n",
            "   - **Authors:** Sarah Esper, Stephen R. Foster, William G. Griswold\n",
            "   - **Conference:** SIGCSE\n",
            "   - **Location:** Denver, Colorado, USA\n",
            "   - **Key Findings:** This research discussed the grounded theory on CS0 and CS1 education, emphasizing the role of gamification and active learning in informal learning spaces.\n",
            "   - **Citation Information:** [SIGCSE 2013]\n",
            "\n",
            "2. **Title:** CodeSpells: Embodying the Metaphor of Wizardry for Programming\n",
            "   - **Authors:** Sarah Esper, Stephen R. Foster, William G. Griswold\n",
            "   - **Conference:** ITiCSE\n",
            "   - **Location:** Canterbury, England, UK\n",
            "   - **Key Findings:** The study revealed intriguing results on gameplay behavior and students' interaction with programming environments, highlighting a sense of immersion in game-based learning.\n",
            "   - **Citation Information:** [ITiCSE 2013]\n",
            "\n",
            "3. **Title:** From Competition to Metacognition: Designing Diverse, Sustainable Educational Games\n",
            "   - **Authors:** Stephen R. Foster, Sarah Esper, William G. Griswold\n",
            "   - **Conference:** CHI 2013: Changing Perspectives\n",
            "   - **Location:** Paris, France\n",
            "   - **Key Findings:** The research explored the perceptions of players regarding the skills that contribute to success in gaming, emphasizing the importance of practice and knowledge over innate abilities.\n",
            "   - **Citation Information:** [CHI 2013]\n",
            "\n",
            "These findings collectively point to a strong emphasis on gamification, hands-on learning, and the psychological aspects of learning within tech education in 2013.\n"
          ]
        }
      ]
    }
  ]
}