{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/Avtr99/GenAI_Agents/blob/main/EU_Green_Compliance_FAQ_Bot.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **EU Green Deal Compliance FAQ Bot**"
      ],
      "metadata": {
        "id": "jbl2b8rCnaTG"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "A RAG-based AI agent that helps SMEs and other businesses quickly find answers to common questions about EU Green Deal policies. The bot focuses on frequently asked questions (FAQs) about the most relevant regulations, providing short, clear answers that help businesses understand and meet compliance standards."
      ],
      "metadata": {
        "id": "A-DRamJpnmXH"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Functionality:** The bot answers basic questions about key EU environmental regulations, focusing on common requirements like waste management, carbon footprint reporting, and renewable energy.\n",
        "\n"
      ],
      "metadata": {
        "id": "5gz5uN54n9tc"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Motivation**"
      ],
      "metadata": {
        "id": "EjlQCwDLmZ8Z"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Navigating EU green compliance can be overwhelming for businesses, especially smaller ones without dedicated resources. This project aims to simplify the process with a smart, accessible FAQ bot that provides instant, accurate answers to common questions about the EU Green Deal, emissions reporting, and waste management. By helping businesses understand and meet green regulations, the bot makes compliance easier and contributes to a more sustainable future for everyone."
      ],
      "metadata": {
        "id": "kbAOrle-3KjZ"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Method Details**"
      ],
      "metadata": {
        "id": "nzP_O5Aw4kb5"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### **Document Storage and Embedding:**\n",
        "Large documents are preprocessed into manageable chunks using an embedding model for semantic chunking and stored in a FAISS vectorstore.\n",
        "### **Query Processing:**\n",
        "User queries are first rephrased to improve clarity and intent matching. The rephrased queries are then embedded using the same model. Using vector similarity and semantic relevance, the system retrieves the most relevant document chunks from the FAISS vectorstore.\n",
        "\n",
        "### **Summarization:**\n",
        "Context-aware, concise responses are generated from the retrieved chunks using an LLM. This summarization step emphasizes clarity and ensures the answer directly aligns with the user’s query, distilling only the most relevant information.\n",
        "### **Evaluation:**\n",
        "Generated answers are evaluated against a gold Q&A dataset for factual accuracy and contextual relevance. The evaluation process includes metrics such as cosine similarity, F1 score, and semantic match.\n",
        "### **Key Agents:**\n",
        "Retriever Agent:\n",
        "Retrieves the most semantically relevant chunks from the FAISS vectorstore based on the processed and embedded user query.\n",
        "\n",
        "Summarizer Agent:\n",
        "Generates a coherent, concise response from the retrieved content.\n",
        "\n",
        "Evaluation Agent:\n",
        "Evaluates the quality of the generated response using gold-standard answers and similarity metrics."
      ],
      "metadata": {
        "id": "_rnRFFTQ4o57"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Benefits of the Approach**"
      ],
      "metadata": {
        "id": "JEEUMXYK45Of"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "\n",
        "### **Accuracy and Fact-Checking:**\n",
        "Reduces hallucination by grounding answers in external knowledge.\n",
        "\n",
        "### **Modularity:**\n",
        "The system's components (retriever, summarizer, evaluator) are independently designed, allowing seamless improvements or replacements as needed.\n",
        "\n",
        "### **Better evaluation:**\n",
        "Combines advanced metrics such as cosine similarity and F1 score with a gold Q&A benchmark.\n",
        "\n",
        "### **Flexibility:**\n",
        "Adaptable across various domains and use cases with minimal pipeline changes, accommodating tailored retriever and summarizer configurations.\n",
        "\n",
        "### **Context-Aware Responses:**\n",
        "Incorporates context from both the query and the retrieved information."
      ],
      "metadata": {
        "id": "5DTH7fO447Kg"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Setup**"
      ],
      "metadata": {
        "id": "UFUC77hGx40C"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Import the required libraries"
      ],
      "metadata": {
        "id": "tvUFkzg-x_Yz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install langchain langchain-openai python-dotenv openai\n",
        "!pip install langchain-experimental\n",
        "!pip install faiss-cpu"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "I12hsUcadQNA",
        "outputId": "6d2b7022-e79c-48a9-cbcb-91217c06f277"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Requirement already satisfied: langchain in /usr/local/lib/python3.10/dist-packages (0.3.7)\n",
            "Collecting langchain-openai\n",
            "  Downloading langchain_openai-0.2.8-py3-none-any.whl.metadata (2.6 kB)\n",
            "Collecting python-dotenv\n",
            "  Downloading python_dotenv-1.0.1-py3-none-any.whl.metadata (23 kB)\n",
            "Requirement already satisfied: openai in /usr/local/lib/python3.10/dist-packages (1.54.4)\n",
            "Requirement already satisfied: PyYAML>=5.3 in /usr/local/lib/python3.10/dist-packages (from langchain) (6.0.2)\n",
            "Requirement already satisfied: SQLAlchemy<3,>=1.4 in /usr/local/lib/python3.10/dist-packages (from langchain) (2.0.36)\n",
            "Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in /usr/local/lib/python3.10/dist-packages (from langchain) (3.10.10)\n",
            "Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in /usr/local/lib/python3.10/dist-packages (from langchain) (4.0.3)\n",
            "Requirement already satisfied: langchain-core<0.4.0,>=0.3.15 in /usr/local/lib/python3.10/dist-packages (from langchain) (0.3.17)\n",
            "Requirement already satisfied: langchain-text-splitters<0.4.0,>=0.3.0 in /usr/local/lib/python3.10/dist-packages (from langchain) (0.3.2)\n",
            "Requirement already satisfied: langsmith<0.2.0,>=0.1.17 in /usr/local/lib/python3.10/dist-packages (from langchain) (0.1.142)\n",
            "Requirement already satisfied: numpy<2,>=1 in /usr/local/lib/python3.10/dist-packages (from langchain) (1.26.4)\n",
            "Requirement already satisfied: pydantic<3.0.0,>=2.7.4 in /usr/local/lib/python3.10/dist-packages (from langchain) (2.9.2)\n",
            "Requirement already satisfied: requests<3,>=2 in /usr/local/lib/python3.10/dist-packages (from langchain) (2.32.3)\n",
            "Requirement already satisfied: tenacity!=8.4.0,<10,>=8.1.0 in /usr/local/lib/python3.10/dist-packages (from langchain) (9.0.0)\n",
            "Collecting tiktoken<1,>=0.7 (from langchain-openai)\n",
            "  Downloading tiktoken-0.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.6 kB)\n",
            "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from openai) (3.7.1)\n",
            "Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.10/dist-packages (from openai) (1.9.0)\n",
            "Requirement already satisfied: httpx<1,>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from openai) (0.27.2)\n",
            "Requirement already satisfied: jiter<1,>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from openai) (0.7.1)\n",
            "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from openai) (1.3.1)\n",
            "Requirement already satisfied: tqdm>4 in /usr/local/lib/python3.10/dist-packages (from openai) (4.66.6)\n",
            "Requirement already satisfied: typing-extensions<5,>=4.11 in /usr/local/lib/python3.10/dist-packages (from openai) (4.12.2)\n",
            "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (2.4.3)\n",
            "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.3.1)\n",
            "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (24.2.0)\n",
            "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.5.0)\n",
            "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (6.1.0)\n",
            "Requirement already satisfied: yarl<2.0,>=1.12.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp<4.0.0,>=3.8.3->langchain) (1.17.1)\n",
            "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->openai) (3.10)\n",
            "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->openai) (1.2.2)\n",
            "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx<1,>=0.23.0->openai) (2024.8.30)\n",
            "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.10/dist-packages (from httpx<1,>=0.23.0->openai) (1.0.6)\n",
            "Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.10/dist-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai) (0.14.0)\n",
            "Requirement already satisfied: jsonpatch<2.0,>=1.33 in /usr/local/lib/python3.10/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain) (1.33)\n",
            "Requirement already satisfied: packaging<25,>=23.2 in /usr/local/lib/python3.10/dist-packages (from langchain-core<0.4.0,>=0.3.15->langchain) (24.2)\n",
            "Requirement already satisfied: orjson<4.0.0,>=3.9.14 in /usr/local/lib/python3.10/dist-packages (from langsmith<0.2.0,>=0.1.17->langchain) (3.10.11)\n",
            "Requirement already satisfied: requests-toolbelt<2.0.0,>=1.0.0 in /usr/local/lib/python3.10/dist-packages (from langsmith<0.2.0,>=0.1.17->langchain) (1.0.0)\n",
            "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.10/dist-packages (from pydantic<3.0.0,>=2.7.4->langchain) (0.7.0)\n",
            "Requirement already satisfied: pydantic-core==2.23.4 in /usr/local/lib/python3.10/dist-packages (from pydantic<3.0.0,>=2.7.4->langchain) (2.23.4)\n",
            "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2->langchain) (3.4.0)\n",
            "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests<3,>=2->langchain) (2.2.3)\n",
            "Requirement already satisfied: greenlet!=0.4.17 in /usr/local/lib/python3.10/dist-packages (from SQLAlchemy<3,>=1.4->langchain) (3.1.1)\n",
            "Requirement already satisfied: regex>=2022.1.18 in /usr/local/lib/python3.10/dist-packages (from tiktoken<1,>=0.7->langchain-openai) (2024.9.11)\n",
            "Requirement already satisfied: jsonpointer>=1.9 in /usr/local/lib/python3.10/dist-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.4.0,>=0.3.15->langchain) (3.0.0)\n",
            "Requirement already satisfied: propcache>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from yarl<2.0,>=1.12.0->aiohttp<4.0.0,>=3.8.3->langchain) (0.2.0)\n",
            "Downloading langchain_openai-0.2.8-py3-none-any.whl (50 kB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.4/50.4 kB\u001b[0m \u001b[31m3.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hDownloading python_dotenv-1.0.1-py3-none-any.whl (19 kB)\n",
            "Downloading tiktoken-0.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.2 MB)\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.2/1.2 MB\u001b[0m \u001b[31m40.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hInstalling collected packages: python-dotenv, tiktoken, langchain-openai\n",
            "Successfully installed langchain-openai-0.2.8 python-dotenv-1.0.1 tiktoken-0.8.0\n",
            "Collecting rank_bm25\n",
            "  Downloading rank_bm25-0.2.2-py3-none-any.whl.metadata (3.2 kB)\n",
            "Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from rank_bm25) (1.26.4)\n",
            "Downloading rank_bm25-0.2.2-py3-none-any.whl (8.6 kB)\n",
            "Installing collected packages: rank_bm25\n",
            "Successfully installed rank_bm25-0.2.2\n"
          ]
        }
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "\n",
        "# Then import necessary modules\n",
        "import os  # Add this import first\n",
        "import time\n",
        "\n",
        "from langchain_openai import ChatOpenAI\n",
        "from langchain.schema import HumanMessage, SystemMessage, AIMessage\n",
        "from typing import List, Dict\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "# Set your API key\n",
        "os.environ[\"OPENAI_API_KEY\"] = \"ADD your key here\"  # Replace with your OpenAI API key"
      ],
      "metadata": {
        "id": "5RZSMaoxx7ce"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "Initialize the language model"
      ],
      "metadata": {
        "id": "pXfxFrXyyELP"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "llm = ChatOpenAI(model=\"gpt-4o-mini\", max_tokens=1000, temperature=0.7)"
      ],
      "metadata": {
        "id": "PpZGFMB3yGnA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Graph**"
      ],
      "metadata": {
        "id": "P2RwfGU_PY1V"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from IPython.display import Image, display\n",
        "\n",
        "def render_mermaid(graph_definition: str, width: int = 800, height: int = 600):\n",
        "    \"\"\"\n",
        "    Render a mermaid graph as an image using mermaid.ink and scale it.\n",
        "\n",
        "    Args:\n",
        "        graph_definition (str): The mermaid graph definition in string format.\n",
        "        width (int): Desired width of the graph.\n",
        "        height (int): Desired height of the graph.\n",
        "    \"\"\"\n",
        "    import base64\n",
        "    graph_bytes = graph_definition.encode(\"utf-8\")\n",
        "    base64_bytes = base64.urlsafe_b64encode(graph_bytes)\n",
        "    base64_string = base64_bytes.decode(\"ascii\")\n",
        "    image_url = f\"https://mermaid.ink/img/{base64_string}\"\n",
        "    display(Image(url=image_url, width=width, height=height))\n",
        "\n",
        "# Modified Mermaid Graph\n",
        "mermaid_graph = \"\"\"\n",
        "graph TD\n",
        "    subgraph User_Query\n",
        "        U[User Input Query] -->|Initiates Process| E[Rephrased Query]\n",
        "    end\n",
        "    subgraph Knowledge_Base_Processing\n",
        "        A[EU Compliance Documents] -->|Text Splitter| B[Document Chunks]\n",
        "        B -->|OpenAI Embedding| C[Vector Embeddings]\n",
        "        C -->|Embeddings to Retriever| G[Retriever Agent]\n",
        "    end\n",
        "    subgraph Retriever_Agent\n",
        "        E -->|Query Rephrasing| F[Processed Query]\n",
        "        F -->|Vector Similarity Search| H[Retriever Search]\n",
        "        H -->|Top-K Relevant Chunks| J[Retrieved Chunks]\n",
        "    end\n",
        "    subgraph Summarizer_Agent\n",
        "        J -->|Contextual Summary| K[Context-Aware Summary]\n",
        "        K -->|OpenAI LLM| L[Generated Summary]\n",
        "        L -->|Summary for User| M[Final Summary]\n",
        "    end\n",
        "    subgraph Evaluation_Agent\n",
        "        L -->|Evaluate Answer| N{Evaluation Metrics}\n",
        "        P[(Gold Q&A Dictionary)] -->|Benchmark for Evaluation| N\n",
        "        N -->|Cosine Similarity, F1 Score| O{Score Evaluation}\n",
        "        N -->|Precision@1, Semantic Match| O\n",
        "        O -->|Displayed Answer| M\n",
        "    end\n",
        "    M -->|Final Answer| T[User]\n",
        "\"\"\"\n",
        "render_mermaid(mermaid_graph, width=1200, height=1600)\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "ByATOTIQcayE",
        "outputId": "536ea66f-25d0-46c4-efb2-263214aad201"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/html": [
              "<img src=\"https://mermaid.ink/img/CmdyYXBoIFRECiAgICBzdWJncmFwaCBVc2VyX1F1ZXJ5CiAgICAgICAgVVtVc2VyIElucHV0IFF1ZXJ5XSAtLT58SW5pdGlhdGVzIFByb2Nlc3N8IEVbUmVwaHJhc2VkIFF1ZXJ5XQogICAgZW5kCiAgICBzdWJncmFwaCBLbm93bGVkZ2VfQmFzZV9Qcm9jZXNzaW5nCiAgICAgICAgQVtFVSBDb21wbGlhbmNlIERvY3VtZW50c10gLS0-fFRleHQgU3BsaXR0ZXJ8IEJbRG9jdW1lbnQgQ2h1bmtzXQogICAgICAgIEIgLS0-fE9wZW5BSSBFbWJlZGRpbmd8IENbVmVjdG9yIEVtYmVkZGluZ3NdCiAgICAgICAgQyAtLT58RW1iZWRkaW5ncyB0byBSZXRyaWV2ZXJ8IEZbUmV0cmlldmVyIEFnZW50XQogICAgZW5kCiAgICBzdWJncmFwaCBSZXRyaWV2ZXJfQWdlbnQKICAgICAgICBFIC0tPnxRdWVyeSBSZXBocmFzaW5nfCBGW1Byb2Nlc3NlZCBRdWVyeV0KICAgICAgICBGIC0tPnxWZWN0b3IgU2ltaWxhcml0eSBTZWFyY2h8IEhbUmV0cmlldmVyIFNlYXJjaF0KICAgICAgICBIIC0tPnxUb3AtSyBSZWxldmFudCBDaHVua3N8IEpbUmV0cmlldmVkIENodW5rc10KICAgIGVuZAogICAgc3ViZ3JhcGggU3VtbWFyaXplcl9BZ2VudAogICAgICAgIEogLS0-fENvbnRleHR1YWwgU3VtbWFyeXwgS1tDb250ZXh0LUF3YXJlIFN1bW1hcnldCiAgICAgICAgSyAtLT58T3BlbkFJIExMTXwgTFtHZW5lcmF0ZWQgU3VtbWFyeV0KICAgICAgICBMIC0tPnxTdW1tYXJ5IGZvciBVc2VyfCBNW0ZpbmFsIFN1bW1hcnldCiAgICBlbmQKICAgIHN1YmdyYXBoIEV2YWx1YXRpb25fQWdlbnQKICAgICAgICBMIC0tPnxFdmFsdWF0ZSBBbnN3ZXJ8IE57RXZhbHVhdGlvbiBNZXRyaWNzfQogICAgICAgIFBbKEdvbGQgUSZBIERpY3Rpb25hcnkpXSAtLT58QmVuY2htYXJrIGZvciBFdmFsdWF0aW9ufCBOCiAgICAgICAgTiAtLT58Q29zaW5lIFNpbWlsYXJpdHksIEYxIFNjb3JlfCBPe1Njb3JlIEV2YWx1YXRpb259CiAgICAgICAgTiAtLT58UHJlY2lzaW9uQDEsIFNlbWFudGljIE1hdGNofCBPCiAgICAgICAgTyAtLT58RGlzcGxheWVkIEFuc3dlcnwgTQogICAgZW5kCiAgICBNIC0tPnxGaW5hbCBBbnN3ZXJ8IFRbVXNlcl0K\" width=\"1200\" height=\"1600\"/>"
            ],
            "text/plain": [
              "<IPython.core.display.Image object>"
            ]
          },
          "metadata": {}
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Chunking the documents and Vector store\n"
      ],
      "metadata": {
        "id": "oS_t3XONKNEK"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Semantic chunking using an embedding model, with the resulting chunks stored in a vectorstore"
      ],
      "metadata": {
        "id": "LC1Iu8IEwOPK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "import sys\n",
        "from langchain_experimental.text_splitter import SemanticChunker\n",
        "from langchain_openai.embeddings import OpenAIEmbeddings\n",
        "from langchain.vectorstores import FAISS\n",
        "\n",
        "# Step 1: Set the folder path containing the documents\n",
        "folder_path = \"/content/data\"  # Path to the folder containing documents\n",
        "\n",
        "# Step 2: Read and combine content from all documents in the folder\n",
        "def load_documents(folder_path):\n",
        "    \"\"\"\n",
        "    Load and combine content from all text documents in the specified folder.\n",
        "\n",
        "    Args:\n",
        "        folder_path (str): Path to the folder containing documents.\n",
        "\n",
        "    Returns:\n",
        "        str: Combined content of all documents.\n",
        "    \"\"\"\n",
        "    combined_content = \"\"\n",
        "    for filename in os.listdir(folder_path):\n",
        "        file_path = os.path.join(folder_path, filename)\n",
        "        if os.path.isfile(file_path) and filename.endswith((\".txt\", \".md\", \".docx\")):  # Adjust extensions as needed\n",
        "            with open(file_path, 'r', encoding='utf-8') as file:\n",
        "                combined_content += file.read() + \"\\n\"\n",
        "    return combined_content\n",
        "\n",
        "content = load_documents(folder_path)\n",
        "if not content:\n",
        "    raise ValueError(\"No valid documents found in the folder.\")\n",
        "\n",
        "# Step 3: Initialize SemanticChunker with the custom embedding model\n",
        "embedding_model = OpenAIEmbeddings(model=\"text-embedding-3-small\")  # Specify the desired embedding model\n",
        "text_splitter = SemanticChunker(\n",
        "    embeddings=embedding_model,  # Use the custom embedding model here\n",
        "    breakpoint_threshold_type='percentile',  # Use percentile-based semantic shifts for splitting\n",
        "    breakpoint_threshold_amount=90  # Define the threshold value (90th percentile)\n",
        ")\n",
        "\n",
        "# Step 4: Create semantic chunks from the combined document content\n",
        "docs = text_splitter.create_documents([content])  # Semantic chunks as documents\n",
        "print(f\"Generated {len(docs)} semantic chunks.\")\n",
        "\n",
        "# Step 5: Embed and store chunks in FAISS vectorstore using the custom embedding model\n",
        "vectorstore = FAISS.from_documents(docs, embedding_model)\n",
        "\n",
        "# Step 6: Configure a retriever for the chunks\n",
        "chunks_query_retriever = vectorstore.as_retriever(search_kwargs={\"k\": 3})  # Retrieve top-3 relevant chunks\n",
        "\n",
        "# Step 7: Example Query\n",
        "query = \"What are the goals of the European Green Deal?\"\n",
        "retrieved_chunks = chunks_query_retriever.invoke(query)\n",
        "\n",
        "# Output the retrieved chunks for the query\n",
        "print(\"Retrieved Chunks for the Query:\")\n",
        "for idx, chunk in enumerate(retrieved_chunks, start=1):\n",
        "    print(f\"Chunk {idx}: {chunk.page_content}\")\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "hA1WoJ9MwQz0",
        "outputId": "0408ffe1-4d04-44aa-b3c6-3d86c937c759"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Generated 1531 semantic chunks.\n",
            "Retrieved Chunks for the Query:\n",
            "Chunk 1: What is the European Green Deal?\n",
            "Chunk 2: Forests and oceans are being polluted and destroyed 1 . The European Green Deal is a response to these challenges. It is a new growth strategy that aims to transform the EU into a fair and prosperous society, with a modern, resource-efficient and competitive economy where there are no net emissions of greenhouse gases in 2050 and where economic growth is decoupled from resource use. It also aims to protect, conserve and enhance the EU's natural capital, and protect the health and well-being of citizens from environment-related risks and impacts. At the same time, this transition must be just and inclusive.\n",
            "Chunk 3: The policy response must be bold and comprehensive and seek to maximise benefits for health, quality of life, resilience and competitiveness. It will require intense coordination to exploit the available synergies across all policy areas 2 . The Green Deal is an integral part of this Commission’s strategy to implement the United Nation’s 2030 Agenda and the sustainable development goals 3 , and the other priorities announced in President von der Leyen’s political guidelines 4 . As part of the Green Deal, the Commission will refocus the European Semester process of macroeconomic coordination to integrate the United Nations’ sustainable development goals, to put sustainability and the well-being of citizens at the centre of economic policy, and the sustainable development goals at the heart of the EU’s policymaking and action. The figure below illustrates the various elements of the Green Deal. Figure 1: The European Green Deal\n",
            "\n",
            "2.Transforming the EU’s economy for a sustainable future\n",
            "\n",
            "2.1.Designing a set of deeply transformative policies\n",
            "\n",
            "To deliver the European Green Deal, there is a need to rethink policies for clean energy supply across the economy, industry, production and consumption, large-scale infrastructure, transport, food and agriculture, construction, taxation and social benefits. To achieve these aims, it is essential to increase the value given to protecting and restoring natural ecosystems, to the sustainable use of resources and to improving human health. This is where transformational change is most needed and potentially most beneficial for the EU economy, society and natural environment. The EU should also promote and invest in the necessary digital transformation and tools as these are essential enablers of the changes. While all of these areas for action are strongly interlinked and mutually reinforcing, careful attention will have to be paid when there are potential trade-offs between economic, environmental and social objectives. The Green Deal will make consistent use of all policy levers: regulation and standardisation, investment and innovation, national reforms, dialogue with social partners and international cooperation. The European Pillar of Social Rights will guide action in ensuring that no one is left behind. New measures on their own will not be enough to achieve the European Green Deal’s objectives.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Define the different functions for the collaboration system**"
      ],
      "metadata": {
        "id": "gFK08ZnHC5w-"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Next, the retriever agent retrieves the relevant chunks, using both vector similarity and LLM-based grading."
      ],
      "metadata": {
        "id": "EhyL20Y3AHj2"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## **Retriever agent**"
      ],
      "metadata": {
        "id": "4NJaXioG_Ep_"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from typing import List, Dict\n",
        "import json\n",
        "import os\n",
        "import numpy as np\n",
        "from langchain_openai import ChatOpenAI\n",
        "\n",
        "class RetrieverAgent:\n",
        "    def __init__(self, vectorstore, model=\"gpt-4o-mini\", temperature=0.0):\n",
        "        \"\"\"\n",
        "        Initialize the Retriever Agent with a FAISS vectorstore and OpenAI model.\n",
        "\n",
        "        Args:\n",
        "            vectorstore: FAISS vectorstore containing document chunks and their embeddings\n",
        "            model (str): OpenAI model to use for relevance scoring (default: gpt-4o-mini)\n",
        "            temperature (float): Sampling temperature for the grading model (default: 0.0)\n",
        "        \"\"\"\n",
        "        self.vectorstore = vectorstore\n",
        "        self.model = model\n",
        "        self.temperature = temperature\n",
        "\n",
        "        # Initialize the chat model used for relevance grading; the OpenAI API key\n",
        "        # is read from the OPENAI_API_KEY environment variable set earlier\n",
        "        self.llm = ChatOpenAI(model=self.model, temperature=self.temperature)\n",
        "\n",
        "        # Define the system prompt for grading\n",
        "        self.system = \"\"\"You are a grader assessing relevance of a retrieved document to a user question.\n",
        "                         If the document contains keyword(s) or semantic meaning related to the user question,\n",
        "                         grade it as relevant.\n",
        "                         It does not need to be a stringent test. The goal is to filter out erroneous retrievals.\n",
        "                         Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.\"\"\"\n",
        "\n",
        "    def _get_relevance_score(self, query: str, chunk_text: str) -> str:\n",
        "        \"\"\"\n",
        "        Use the LLM with function call to grade the relevance of the chunk.\n",
        "\n",
        "        Args:\n",
        "            query (str): User query\n",
        "            chunk_text (str): Text content of the chunk\n",
        "\n",
        "        Returns:\n",
        "            str: 'yes' or 'no' indicating whether the chunk is relevant or not\n",
        "        \"\"\"\n",
        "        prompt = f\"\"\"Query: {query}\n",
        "                    Chunk: {chunk_text}\n",
        "                    Grade the relevance of this chunk to the query. Respond only with 'yes' or 'no'.\"\"\"\n",
        "\n",
        "        try:\n",
        "            # Ask the chat model to grade the chunk's relevance\n",
        "            response = self.llm.invoke(prompt)\n",
        "            grade = response.content.strip()\n",
        "            return grade.lower()\n",
        "\n",
        "        except Exception as e:\n",
        "            print(f\"Error in grading: {e}\")\n",
        "            return \"no\"  # Default to no if there's an error\n",
        "\n",
        "    def retrieve_relevant_chunks(self, query: str, top_k: int = 3, rerank: bool = True) -> List[Dict]:\n",
        "        \"\"\"\n",
        "        Retrieve and optionally rerank the most relevant chunks using both vector similarity\n",
        "        and LLM-based grading.\n",
        "\n",
        "        Args:\n",
        "            query (str): User query\n",
        "            top_k (int): Number of top relevant chunks to return\n",
        "            rerank (bool): Whether to rerank results using LLM grading\n",
        "\n",
        "        Returns:\n",
        "            list: List of dictionaries containing similarity scores and chunk text\n",
        "        \"\"\"\n",
        "        # First, get candidates using vector similarity\n",
        "        retrieved_docs = self.vectorstore.similarity_search_with_score(\n",
        "            query,\n",
        "            k=top_k * (2 if rerank else 1)  # Get more candidates if reranking\n",
        "        )\n",
        "\n",
        "        relevant_chunks = []\n",
        "\n",
        "        for doc, vector_score in retrieved_docs:\n",
        "            # FAISS returns a distance-style score: lower means a closer match\n",
        "            chunk_info = {\n",
        "                \"vector_similarity\": float(vector_score),\n",
        "                \"chunk_text\": doc.page_content,\n",
        "                \"metadata\": doc.metadata\n",
        "            }\n",
        "\n",
        "            if rerank:\n",
        "                # Get LLM-based relevance grade ('yes' or 'no')\n",
        "                relevance_grade = self._get_relevance_score(query, doc.page_content)\n",
        "\n",
        "                # Only keep chunks graded as relevant ('yes')\n",
        "                if relevance_grade == \"yes\":\n",
        "                    chunk_info[\"relevance_grade\"] = relevance_grade\n",
        "                    chunk_info[\"combined_score\"] = 1 - vector_score  # Invert so higher = better\n",
        "                    relevant_chunks.append(chunk_info)\n",
        "            else:\n",
        "                # If reranking is disabled, rank by inverted vector distance alone\n",
        "                chunk_info[\"combined_score\"] = 1 - vector_score\n",
        "                relevant_chunks.append(chunk_info)\n",
        "\n",
        "        # Sort by combined score and take top_k\n",
        "        relevant_chunks.sort(key=lambda x: x[\"combined_score\"], reverse=True)\n",
        "        return relevant_chunks[:top_k]\n",
        "\n",
        "\n",
        "    def batch_retrieve(self, queries: List[str], top_k: int = 3, rerank: bool = True) -> Dict[str, List[Dict]]:\n",
        "        \"\"\"\n",
        "        Batch process multiple queries.\n",
        "\n",
        "        Args:\n",
        "            queries (List[str]): List of queries to process\n",
        "            top_k (int): Number of top relevant chunks to return per query\n",
        "            rerank (bool): Whether to rerank results using LLM grading\n",
        "\n",
        "        Returns:\n",
        "            Dict[str, List[Dict]]: Dictionary mapping queries to their relevant chunks\n",
        "        \"\"\"\n",
        "        results = {}\n",
        "        for query in queries:\n",
        "            results[query] = self.retrieve_relevant_chunks(query, top_k, rerank)\n",
        "        return results\n",
        "\n",
        "def create_retriever_agent(vectorstore, model=\"gpt-4o-mini\", temperature=0.0):\n",
        "    \"\"\"\n",
        "    Factory function to create a RetrieverAgent instance.\n",
        "\n",
        "    Args:\n",
        "        vectorstore: FAISS vectorstore containing document chunks\n",
        "        model (str): OpenAI model to use for scoring (default: gpt-4o-mini)\n",
        "        temperature (float): Sampling temperature for grading (default: 0.0)\n",
        "\n",
        "    Returns:\n",
        "        RetrieverAgent: Initialized retriever agent\n",
        "    \"\"\"\n",
        "    return RetrieverAgent(vectorstore, model, temperature)\n"
      ],
      "metadata": {
        "id": "6YmfxEeyAGqj"
      },
      "execution_count": null,
      "outputs": []
    },
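    {
      "cell_type": "markdown",
      "source": [
        "A quick standalone sketch of the ranking step above (the scores are made up for illustration). `similarity_search_with_score` returns `(doc, score)` pairs where a *lower* score means a closer match, so `1 - score` turns it into a higher-is-better combined score before sorting:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Hypothetical (chunk_text, distance) pairs standing in for FAISS output\n",
        "retrieved = [\n",
        "    (\"chunk about waste management\", 0.31),\n",
        "    (\"chunk about carbon reporting\", 0.12),\n",
        "    (\"unrelated chunk\", 0.88),\n",
        "]\n",
        "\n",
        "# Invert the distance so that a higher combined_score means a better match\n",
        "ranked = [{\"chunk_text\": t, \"combined_score\": 1 - d} for t, d in retrieved]\n",
        "ranked.sort(key=lambda c: c[\"combined_score\"], reverse=True)\n",
        "\n",
        "print([c[\"chunk_text\"] for c in ranked[:2]])\n",
        "# → ['chunk about carbon reporting', 'chunk about waste management']\n"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },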
    {
      "cell_type": "markdown",
      "source": [
        "## **Summarizer Agent**"
      ],
      "metadata": {
        "id": "VDh3RWRW-9Pa"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Context-aware summarization using an LLM"
      ],
      "metadata": {
        "id": "nlTRqu7zjXjQ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "import requests\n",
        "from typing import List, Dict\n",
        "\n",
        "class SummarizerAgent:\n",
        "    def __init__(self, model=\"gpt-4o-mini\"):\n",
        "        \"\"\"\n",
        "        Initialize the Summarizer Agent with an OpenAI chat model.\n",
        "\n",
        "        Args:\n",
        "            model (str): OpenAI model to use for summarization (default: gpt-4o-mini)\n",
        "        \"\"\"\n",
        "        self.model = model\n",
        "        self.api_key = os.getenv(\"OPENAI_API_KEY\")  # The OpenAI API key must be set in the environment\n",
        "\n",
        "    def summarize_text(self, query: str, text: str) -> str:\n",
        "        \"\"\"\n",
        "        Summarize the given text in the context of the query, keeping the summary to two clear, concise sentences.\n",
        "\n",
        "        Args:\n",
        "            query (str): User query.\n",
        "            text (str): Text content to summarize.\n",
        "\n",
        "        Returns:\n",
        "            str: Concise summary relevant to the query.\n",
        "        \"\"\"\n",
        "        url = \"https://api.openai.com/v1/chat/completions\"\n",
        "        headers = {\n",
        "            \"Content-Type\": \"application/json\",\n",
        "            \"Authorization\": f\"Bearer {self.api_key}\"\n",
        "        }\n",
        "\n",
        "        prompt = f\"\"\"Summarize the following text based on the query. Focus on extracting the most relevant details in a clear and concise manner, ensuring the summary is no more than two sentences.\n",
        "\n",
        "        Query: {query}\n",
        "\n",
        "        Text to summarize: {text}\n",
        "\n",
        "        Please make sure the summary is brief, clear, and focuses on the key information, avoiding unnecessary details and providing a direct answer to the query.\n",
        "        \"\"\"\n",
        "\n",
        "        data = {\n",
        "            \"model\": self.model,\n",
        "            \"messages\": [\n",
        "                {\"role\": \"system\", \"content\": \"You are a summarization assistant. Your task is to summarize text into two sentences, focusing on the key points and ensuring clarity and conciseness.\"},\n",
        "                {\"role\": \"user\", \"content\": prompt}\n",
        "            ],\n",
        "            \"temperature\": 0.3,  # Low temperature for more focused responses\n",
        "            \"max_tokens\": 150  # Ensure a concise summary\n",
        "        }\n",
        "\n",
        "        try:\n",
        "            # Make the request to OpenAI's API\n",
        "            response = requests.post(url, headers=headers, json=data)\n",
        "            response.raise_for_status()  # Raise an exception if the request fails\n",
        "\n",
        "            # Extract the summarized content from the response\n",
        "            result = response.json()\n",
        "            summarized_text = result['choices'][0]['message']['content'].strip()\n",
        "            return summarized_text\n",
        "\n",
        "        except requests.exceptions.RequestException as e:\n",
        "            print(f\"Error in summarization: {e}\")\n",
        "            return \"Sorry, I could not generate the summary at the moment.\"\n",
        "\n",
        "    def batch_summarize(self, queries: List[str], texts: List[str]) -> Dict[str, str]:\n",
        "        \"\"\"\n",
        "        Batch process multiple queries and summarize corresponding texts.\n",
        "\n",
        "        Args:\n",
        "            queries (List[str]): List of queries to process.\n",
        "            texts (List[str]): List of texts to summarize.\n",
        "\n",
        "        Returns:\n",
        "            Dict[str, str]: Dictionary mapping each query to its summarized text.\n",
        "        \"\"\"\n",
        "        summaries = {}\n",
        "        for query, text in zip(queries, texts):\n",
        "            summaries[query] = self.summarize_text(query, text)\n",
        "        return summaries\n",
        "\n",
        "# Example usage of the SummarizerAgent\n",
        "summarizer = SummarizerAgent(model=\"gpt-4o-mini\")  # Use the same model or another available model\n",
        "\n",
        "query = \"What is the European Green Deal?\"\n",
        "text = \"\"\"The European Green Deal is a set of policy initiatives by the European Commission to address climate change, promote sustainability, and reduce carbon emissions by 2030. The Deal includes measures to promote clean energy, sustainable agriculture, and investments in green technologies. It aims to make Europe the first carbon-neutral continent by 2050.\"\"\"\n",
        "\n",
        "summary = summarizer.summarize_text(query, text)\n",
        "print(f\"Summary: {summary}\")\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "WVEj6pjlYkp9",
        "outputId": "15483a22-bd9e-42c4-f068-f92d763bc9a2"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Summary: The European Green Deal is a comprehensive set of policy initiatives by the European Commission aimed at combating climate change and achieving carbon neutrality by 2050. It includes measures to promote clean energy, sustainable agriculture, and investments in green technologies, with a target to reduce carbon emissions by 2030.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Evaluation Agent**"
      ],
      "metadata": {
        "id": "ibbbv-OnN5M2"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Gold Q&A: a curated list of questions and answers used to evaluate the generated answers"
      ],
      "metadata": {
        "id": "mqz-G1HYO2uR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "gold_qa_dict = [\n",
        "    {\"query\": \"What is the European Green Deal (EGD)?\", \"answer\": \"The EGD is the EU’s strategy to reach net zero greenhouse gas emissions by 2050 while achieving sustainable economic growth. It covers policies across sectors like agriculture, energy, and manufacturing to ensure products meet higher sustainability standards.\"},\n",
        "    {\"query\": \"What is the Farm to Fork (F2F) Strategy?\", \"answer\": \"The F2F strategy is part of the EGD, focusing on making the EU’s food system fair, healthy, and environmentally friendly. It targets reducing pesticide use, nutrient loss, and promoting organic farming.\"},\n",
        "    {\"query\": \"What is the Circular Economy Action Plan (CEAP)?\", \"answer\": \"CEAP aims to eliminate waste by promoting the reuse, repair, and recycling of materials. It emphasizes creating sustainable products and reducing waste generation in industries like packaging, textiles, and electronics.\"},\n",
        "    {\"query\": \"What is the EU Green Deal Industrial Plan?\", \"answer\": \"The Plan aims to enhance Europe’s net-zero industrial base by simplifying regulations, increasing funding, developing skills, and fostering trade. It focuses on manufacturing key technologies like batteries, hydrogen systems, and wind turbines to achieve climate neutrality by 2050.\"},\n",
        "    {\"query\": \"What is the Net-Zero Industry Act (NZIA)?\", \"answer\": \"The NZIA aims to boost the EU's manufacturing capacity for net-zero technologies, such as solar panels, batteries, and electrolysers. It sets goals like manufacturing at least 40% of strategic net-zero technologies domestically by 2030.\"},\n",
        "    {\"query\": \"What is the EU Biodiversity Strategy for 2030?\", \"answer\": \"A key part of the Green Deal, it focuses on reversing biodiversity loss by restoring degraded ecosystems, reducing pesticide use by 50%, and ensuring 25% of farmland is organic by 2030.\"},\n",
        "    {\"query\": \"What is the Carbon Border Adjustment Mechanism (CBAM)?\", \"answer\": \"CBAM is a policy tool designed to prevent carbon leakage by imposing carbon costs on imports of certain goods from countries with less stringent climate policies. It ensures that imported products are priced similarly to EU-manufactured goods under the EU's carbon pricing system.\"},\n",
        "    {\"query\": \"Which sectors does CBAM initially cover?\", \"answer\": \"CBAM applies to high-emission sectors such as cement, iron and steel, fertilizers, electricity, and aluminum. Additional sectors may be included in the future.\"},\n",
        "    {\"query\": \"How does CBAM impact SMEs exporting to the EU?\", \"answer\": \"SMEs exporting CBAM-regulated goods must report the carbon emissions embedded in their products and potentially pay a carbon price. This may require investment in cleaner technologies and better transparency in production processes.\"},\n",
        "    {\"query\": \"When will CBAM come into effect?\", \"answer\": \"CBAM will be implemented in stages, starting with a reporting phase in 2023 and transitioning to full operation with financial obligations by 2026.\"},\n",
        "    {\"query\": \"How can exporters mitigate CBAM costs?\", \"answer\": \"Exporters can invest in low-carbon production methods or provide evidence of carbon taxes already paid in their home countries to reduce or eliminate CBAM charges.\"},\n",
        "    {\"query\": \"What sustainability standards must SMEs exporting to the EU meet?\", \"answer\": \"SMEs must meet standards for reduced waste, traceable production, eco-friendly packaging, and compliance with the new Ecodesign for Sustainable Products Regulation.\"},\n",
        "    {\"query\": \"What are the traceability requirements for exporters?\", \"answer\": \"Exporters must provide detailed information on product life cycles, including manufacturing, materials used, and compliance with sustainability criteria.\"},\n",
        "    {\"query\": \"How does the Carbon Border Adjustment Mechanism (CBAM) affect imports?\", \"answer\": \"CBAM imposes carbon taxes on imported goods with high greenhouse gas footprints, ensuring imports align with EU environmental standards.\"},\n",
        "    {\"query\": \"What is required under the new EU organic regulations?\", \"answer\": \"Imported organic products must display control body codes, follow strict organic certification rules, and meet labeling requirements.\"},\n",
        "    {\"query\": \"How does the Green Deal Industrial Plan simplify regulations for SMEs?\", \"answer\": \"The Plan introduces streamlined permitting processes and 'one-stop shops' to reduce red tape for projects related to renewable technologies.\"},\n",
        "    {\"query\": \"What is the Digital Product Passport (DPP)?\", \"answer\": \"The DPP provides detailed information about a product’s lifecycle, ensuring traceability and compliance with sustainability standards. It helps SMEs align with EU buyers' expectations.\"},\n",
        "    {\"query\": \"What are the biodiversity-related commitments for agricultural land?\", \"answer\": \"By 2030, 10% of farmland must feature biodiversity-friendly measures, and pesticide use must be cut by 50%.\"},\n",
        "    {\"query\": \"What challenges might SMEs face due to the EGD?\", \"answer\": \"SMEs may encounter higher production costs, complex sustainability reporting requirements, and the need to adapt to new eco-friendly technologies.\"},\n",
        "    {\"query\": \"What are the compliance deadlines for key regulations?\", \"answer\": \"Major regulations like the revision of pesticide use directives and the CBAM will be implemented in stages, with some taking effect by 2024.\"},\n",
        "    {\"query\": \"How does the EU support skill development for the green transition?\", \"answer\": \"The EU is establishing Net-Zero Industry Academies to train workers in net-zero technologies, with funding for reskilling and upskilling programs.\"},\n",
        "    {\"query\": \"What is the timeline for major Green Deal initiatives?\", \"answer\": \"Key initiatives like the NZIA and biodiversity commitments have milestones up to 2030, with significant mid-term reviews and funding disbursements expected between 2023 and 2026.\"},\n",
        "    {\"query\": \"What funding mechanisms are available for SMEs under the Green Deal?\", \"answer\": \"SMEs can access funding through programs like the Innovation Fund, InvestEU, and the European Sovereignty Fund. These mechanisms support green technology projects and offer tax breaks.\"},\n",
        "    {\"query\": \"What is the European Hydrogen Bank?\", \"answer\": \"It is a financial instrument to support renewable hydrogen production and imports. The Bank offers subsidies to bridge the cost gap between renewable and fossil hydrogen.\"},\n",
        "    {\"query\": \"What trade opportunities does the Green Deal provide?\", \"answer\": \"The Plan promotes open and fair trade through partnerships, free trade agreements, and initiatives like the Critical Raw Materials Club to ensure supply chain resilience.\"},\n",
        "    {\"query\": \"How can SMEs benefit from the EU Green Deal?\", \"answer\": \"SMEs can capitalize on increased demand for sustainable products, gain partnerships with EU companies, and access new markets driven by sustainability goals.\"},\n",
        "    {\"query\": \"What support is available for SMEs transitioning to sustainable practices?\", \"answer\": \"EU-based programs provide subsidies, technical support, and resources like the Digital Product Passport to help SMEs adapt.\"},\n",
        "    {\"query\": \"What opportunities do CEAP and F2F provide?\", \"answer\": \"These initiatives create markets for sustainable products, such as organic food and recycled textiles, enhancing SME competitiveness.\"},\n",
        "    {\"query\": \"What is the role of the EU Digital Product Passport?\", \"answer\": \"This tool standardizes and simplifies compliance, providing detailed product information to buyers while promoting transparency.\"},\n",
        "    {\"query\": \"What are Net-Zero Strategic Projects?\", \"answer\": \"These are priority projects essential for the EU's energy transition, such as large-scale solar or battery manufacturing plants. They benefit from accelerated permitting and funding.\"},\n",
        "    {\"query\": \"How does the EU address biodiversity in urban planning?\", \"answer\": \"Through the Green City Accord, urban planning integrates green spaces and biodiversity-focused infrastructure.\"},\n",
        "    {\"query\": \"What role does hydrogen play in the EU's climate strategy?\", \"answer\": \"Hydrogen is a cornerstone for reducing industrial emissions, with a target of producing 10 million tonnes of renewable hydrogen in the EU and importing an additional 10 million tonnes by 2030.\"},\n",
        "    {\"query\": \"What are the packaging requirements under the EGD?\", \"answer\": \"All packaging must be reusable or recyclable by 2024, with reduced material complexity and increased recycled content.\"},\n",
        "    {\"query\": \"How does the EU Biodiversity Strategy impact exporters?\", \"answer\": \"Exporters must ensure their products do not contribute to deforestation or biodiversity loss and comply with due diligence laws.\"}\n",
        "]\n"
      ],
      "metadata": {
        "id": "bRw44HKHO3BF"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Evaluation Agent:** Evaluates the generated answer"
      ],
      "metadata": {
        "id": "vB7nBll9Ns5Z"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from collections import Counter\n",
        "from sklearn.metrics.pairwise import cosine_similarity\n",
        "\n",
        "class EvaluationAgent:\n",
        "    def __init__(self, gold_qa_dict, similarity_threshold=0.85):\n",
        "        \"\"\"\n",
        "        Initialize the Evaluation Agent with a cosine similarity-based approach.\n",
        "\n",
        "        Args:\n",
        "            gold_qa_dict (list): A list of dictionaries containing gold Q&A where each\n",
        "                                  dictionary has keys \"query\" and \"answer\".\n",
        "            similarity_threshold (float): Minimum cosine similarity score to accept an answer\n",
        "                                           without human review (default is 0.85).\n",
        "        \"\"\"\n",
        "        self.gold_qa_dict = gold_qa_dict\n",
        "        self.similarity_threshold = similarity_threshold\n",
        "\n",
        "    def _tokenize_text(self, text):\n",
        "        \"\"\"\n",
        "        Tokenize the text by splitting it into words and converting to lowercase.\n",
        "\n",
        "        Args:\n",
        "            text (str): The text to tokenize.\n",
        "\n",
        "        Returns:\n",
        "            list: List of tokens (words).\n",
        "        \"\"\"\n",
        "        return text.lower().split()\n",
        "\n",
        "    def _vectorize_text(self, text):\n",
        "        \"\"\"\n",
        "        Convert tokenized text into a term frequency (TF) vector.\n",
        "\n",
        "        Args:\n",
        "            text (str): The text to vectorize.\n",
        "\n",
        "        Returns:\n",
        "            dict: Term frequency (TF) vector.\n",
        "        \"\"\"\n",
        "        tokens = self._tokenize_text(text)\n",
        "        return Counter(tokens)\n",
        "\n",
        "    def _cosine_similarity(self, vec1, vec2):\n",
        "        \"\"\"\n",
        "        Calculate cosine similarity between two term frequency vectors.\n",
        "\n",
        "        Args:\n",
        "            vec1 (dict): Term frequency vector of the first text.\n",
        "            vec2 (dict): Term frequency vector of the second text.\n",
        "\n",
        "        Returns:\n",
        "            float: Cosine similarity score between 0 and 1.\n",
        "        \"\"\"\n",
        "        # Build aligned count vectors over the union vocabulary of both texts\n",
        "        all_tokens = set(vec1.keys()).union(set(vec2.keys()))\n",
        "        vec1_list = [vec1.get(token, 0) for token in all_tokens]\n",
        "        vec2_list = [vec2.get(token, 0) for token in all_tokens]\n",
        "\n",
        "        # Compute cosine similarity\n",
        "        return cosine_similarity([vec1_list], [vec2_list])[0][0]\n",
        "\n",
        "    def _calculate_f1_score(self, generated_answer, gold_answer):\n",
        "        \"\"\"\n",
        "        Calculate F1 score based on token overlap between generated and gold answers.\n",
        "\n",
        "        Args:\n",
        "            generated_answer (str): The answer generated by the system.\n",
        "            gold_answer (str): The gold standard answer.\n",
        "\n",
        "        Returns:\n",
        "            float: F1 score based on token overlap.\n",
        "        \"\"\"\n",
        "        gen_tokens = set(self._tokenize_text(generated_answer))\n",
        "        gold_tokens = set(self._tokenize_text(gold_answer))\n",
        "\n",
        "        # Calculate Precision and Recall\n",
        "        precision = len(gen_tokens & gold_tokens) / len(gen_tokens) if len(gen_tokens) > 0 else 0\n",
        "        recall = len(gen_tokens & gold_tokens) / len(gold_tokens) if len(gold_tokens) > 0 else 0\n",
        "\n",
        "        # Calculate F1 Score\n",
        "        f1 = 2 * (precision * recall) / (precision + recall) if (precision + recall) > 0 else 0\n",
        "        return f1\n",
        "\n",
        "    def evaluate_answer(self, generated_answer, query):\n",
        "        \"\"\"\n",
        "        Evaluate the generated answer using multiple metrics including F1 score, Precision@1, and cosine similarity.\n",
        "\n",
        "        Args:\n",
        "            generated_answer (str): The answer generated by the system.\n",
        "            query (str): The user query to evaluate.\n",
        "\n",
        "        Returns:\n",
        "            dict: Evaluation results with various metrics.\n",
        "        \"\"\"\n",
        "        # Normalize query to lowercase and strip extra spaces\n",
        "        normalized_query = query.strip().lower()\n",
        "\n",
        "        # Check if the normalized query exists in the gold QA list\n",
        "        gold_answer = None\n",
        "        for qa in self.gold_qa_dict:\n",
        "            gold_query = qa[\"query\"].strip().lower()\n",
        "            if normalized_query == gold_query:\n",
        "                gold_answer = qa[\"answer\"]\n",
        "                break\n",
        "\n",
        "        if not gold_answer:\n",
        "            return {\"error\": \"No Gold Standard: The query is not in the gold Q&A dictionary.\"}\n",
        "\n",
        "        # Vectorize both the generated answer and the gold standard answer\n",
        "        gen_vec = self._vectorize_text(generated_answer)\n",
        "        gold_vec = self._vectorize_text(gold_answer)\n",
        "\n",
        "        # Calculate cosine similarity between the vectors\n",
        "        cosine_sim = self._cosine_similarity(gen_vec, gold_vec)\n",
        "\n",
        "        # Calculate F1 Score (overlap) based on tokenized text\n",
        "        f1 = self._calculate_f1_score(generated_answer, gold_answer)\n",
        "\n",
        "        # Evaluate based on the similarity score\n",
        "        semantic_match = cosine_sim >= self.similarity_threshold\n",
        "        precision_at_1 = 1 if semantic_match else 0\n",
        "\n",
        "        # Human review only if the similarity score is below the threshold\n",
        "        human_review_needed = cosine_sim < self.similarity_threshold\n",
        "\n",
        "        # Return a dictionary with the evaluation results\n",
        "        return {\n",
        "            \"cosine_similarity\": cosine_sim,\n",
        "            \"f1_score\": f1,\n",
        "            \"precision_at_1\": precision_at_1,\n",
        "            \"semantic_match\": semantic_match,\n",
        "            \"human_review_needed\": human_review_needed,\n",
        "            \"generated_answer\": generated_answer,\n",
        "            \"gold_answer\": gold_answer\n",
        "        }\n",
        "\n",
        "\n",
        "# Example Usage\n",
        "\n",
        "# A small example gold Q&A list, kept separate so it does not shadow the\n",
        "# full gold_qa_dict defined earlier\n",
        "example_gold_qa = [\n",
        "    {\"query\": \"What is the European Green Deal (EGD)?\", \"answer\":\n",
        "     \"The EGD is the EU’s strategy to reach net zero greenhouse gas emissions by 2050 while achieving sustainable economic growth. It covers policies across sectors like agriculture, energy, and manufacturing to ensure products meet higher sustainability standards.\"},\n",
        "    {\"query\": \"What is the Farm to Fork strategy (F2F)?\", \"answer\":\n",
        "     \"The F2F strategy is part of the European Green Deal, focusing on making the EU’s food system fair, healthy, and environmentally friendly. It targets reducing pesticide use, nutrient loss, and promoting organic farming.\"}\n",
        "]\n",
        "\n",
        "# Initialize the evaluation agent with the example gold Q&A list\n",
        "evaluation_agent = EvaluationAgent(example_gold_qa, similarity_threshold=0.85)\n",
        "\n",
        "# Assume `generated_answer` is the answer from the system and `user_question` is the query\n",
        "generated_answer = \"The F2F strategy is part of the EGD, focusing on making the EU’s food system fair, healthy, and environmentally friendly. It targets reducing pesticide use, nutrient loss, and promoting organic farming.\"\n",
        "user_question = \"What is the Farm to Fork strategy (F2F)?\"\n",
        "\n",
        "# Evaluate the generated answer\n",
        "evaluation_result = evaluation_agent.evaluate_answer(generated_answer, user_question)\n",
        "\n",
        "# Print the evaluation result\n",
        "print(f\"Cosine Similarity: {evaluation_result['cosine_similarity']:.2f}\")\n",
        "print(f\"F1 Score (Overlap): {evaluation_result['f1_score']:.2f}\")\n",
        "print(f\"Precision@1: {evaluation_result['precision_at_1']}\")\n",
        "print(f\"Semantic Match: {evaluation_result['semantic_match']}\")\n",
        "print(f\"Human Review Needed: {evaluation_result['human_review_needed']}\")\n",
        "print(f\"Generated Answer: {evaluation_result['generated_answer']}\")\n",
        "print(f\"Gold Answer: {evaluation_result['gold_answer']}\")\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Ngkj5brHNvN3",
        "outputId": "b3915818-e192-45d9-f441-68d03706bec4"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Cosine Similarity: 0.95\n",
            "F1 Score (Overlap): 0.93\n",
            "Precision@1: 1\n",
            "Semantic Match: True\n",
            "Human Review Needed: False\n",
            "Generated Answer: The F2F strategy is part of the EGD, focusing on making the EU’s food system fair, healthy, and environmentally friendly. It targets reducing pesticide use, nutrient loss, and promoting organic farming.\n",
            "Gold Answer: The F2F strategy is part of the European Green Deal, focusing on making the EU’s food system fair, healthy, and environmentally friendly. It targets reducing pesticide use, nutrient loss, and promoting organic farming.\n"
          ]
        }
      ]
    },
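    {
      "cell_type": "markdown",
      "source": [
        "The token-overlap F1 used above can be checked by hand on a toy pair (made-up strings, not drawn from the gold set):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "gen = \"reduce pesticide use and nutrient loss\"\n",
        "gold = \"reduce pesticide use by half\"\n",
        "\n",
        "gen_tokens = set(gen.lower().split())    # 6 distinct tokens\n",
        "gold_tokens = set(gold.lower().split())  # 5 distinct tokens\n",
        "overlap = gen_tokens & gold_tokens       # {'reduce', 'pesticide', 'use'}\n",
        "\n",
        "precision = len(overlap) / len(gen_tokens)  # 3/6 = 0.5\n",
        "recall = len(overlap) / len(gold_tokens)    # 3/5 = 0.6\n",
        "f1 = 2 * precision * recall / (precision + recall)\n",
        "print(round(f1, 3))  # 0.545\n"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },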
    {
      "cell_type": "markdown",
      "source": [
        "# **RelevanceSummarySystem Class**"
      ],
      "metadata": {
        "id": "SZpj62HjvzRT"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Brings together all the agents. A rephraser function also rewrites the user query to improve retrieval accuracy."
      ],
      "metadata": {
        "id": "fz4CExNumYqF"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "import os\n",
        "import requests\n",
        "\n",
        "class RelevanceSummarizationSystem:\n",
        "    def __init__(self, retriever_agent, summarizer_agent, evaluation_agent, relevance_threshold=0.6, openai_api_key=None):\n",
        "        \"\"\"\n",
        "        Initialize the Relevance Summarization System.\n",
        "        \"\"\"\n",
        "        self.retriever_agent = retriever_agent\n",
        "        self.summarizer_agent = summarizer_agent\n",
        "        self.evaluation_agent = evaluation_agent\n",
        "        self.relevance_threshold = relevance_threshold\n",
        "        self.openai_api_key = openai_api_key or os.getenv(\"OPENAI_API_KEY\")\n",
        "\n",
        "        if not self.openai_api_key:\n",
        "            raise ValueError(\"OpenAI API key is required for rephrasing queries.\")\n",
        "\n",
        "    def _send_openai_request(self, prompt: str, model=\"gpt-4o-mini\", temperature=0.7, max_tokens=150):\n",
        "        \"\"\"\n",
        "        Helper function to send a request to OpenAI's API and handle the response.\n",
        "        \"\"\"\n",
        "        url = \"https://api.openai.com/v1/chat/completions\"\n",
        "        headers = {\n",
        "            \"Content-Type\": \"application/json\",\n",
        "            \"Authorization\": f\"Bearer {self.openai_api_key}\"\n",
        "        }\n",
        "\n",
        "        data = {\n",
        "            \"model\": model,\n",
        "            \"messages\": [\n",
        "                {\"role\": \"system\", \"content\": \"You are an assistant.\"},\n",
        "                {\"role\": \"user\", \"content\": prompt}\n",
        "            ],\n",
        "            \"temperature\": temperature,\n",
        "            \"max_tokens\": max_tokens\n",
        "        }\n",
        "\n",
        "        try:\n",
        "            response = requests.post(url, headers=headers, json=data)\n",
        "            response.raise_for_status()\n",
        "            return response.json()['choices'][0]['message']['content'].strip()\n",
        "        except requests.exceptions.RequestException as e:\n",
        "            print(f\"❌ Error during API request: {e}\")\n",
        "            return None\n",
        "\n",
        "    def rephrase_query(self, query: str) -> str:\n",
        "        \"\"\"\n",
        "        Rephrase the query using OpenAI's API to improve retrieval accuracy.\n",
        "        \"\"\"\n",
        "        prompt = f\"You are a rephrasing expert. Rephrase the following question to make it clearer and more likely to retrieve relevant information: {query}\"\n",
        "        rephrased_query = self._send_openai_request(prompt, model=\"gpt-4o-mini\", max_tokens=60)\n",
        "\n",
        "        if rephrased_query:\n",
        "            print(f\"🔄 Rephrased query: {rephrased_query}\")\n",
        "            return rephrased_query\n",
        "        return query  # Fallback to the original query if rephrasing fails\n",
        "\n",
        "    def process_query(self, query: str, top_k: int = 3):\n",
        "        \"\"\"\n",
        "        Process a user query by retrieving relevant chunks and summarizing them.\n",
        "        \"\"\"\n",
        "        print(f\"🔍 Processing query: {query}\\n\")\n",
        "\n",
        "        # Step 1: Rephrase the query\n",
        "        rephrased_query = self.rephrase_query(query)\n",
        "\n",
        "        # Step 2: Retrieve relevant chunks for both original and rephrased query\n",
        "        try:\n",
        "            original_chunks = self.retriever_agent.retrieve_relevant_chunks(query, top_k=top_k)\n",
        "            rephrased_chunks = self.retriever_agent.retrieve_relevant_chunks(rephrased_query, top_k=top_k)\n",
        "        except Exception as e:\n",
        "            print(f\"❌ Error during retrieval: {e}\")\n",
        "            return \"An error occurred while processing your query. Please try again later.\"\n",
        "\n",
        "        # Merge both sets of retrieved chunks\n",
        "        all_chunks = sorted(original_chunks + rephrased_chunks, key=lambda x: x[\"combined_score\"], reverse=True)\n",
        "\n",
        "        if not all_chunks:\n",
        "            print(\"⚠️ No relevant chunks found.\\n\")\n",
        "            return \"I don't know the answer to this question. Can you try rephrasing your question and try again?\"\n",
        "\n",
        "        # Step 3: Check relevance of the top chunk\n",
        "        top_relevance = all_chunks[0][\"combined_score\"]\n",
        "        print(f\"📊 Top relevance score: {top_relevance:.2f}\")\n",
        "\n",
        "        if top_relevance < self.relevance_threshold:\n",
        "            print(f\"⚠️ Relevance score too low (Score: {top_relevance:.2f}).\\n\")\n",
        "            return \"I don't know the answer to this question. Can you try rephrasing your question and try again?\"\n",
        "\n",
        "        # Step 4: Summarize the retrieved chunks\n",
        "        try:\n",
        "            summary = self.summarizer_agent.summarize_retrieved_chunks(all_chunks, query)\n",
        "        except Exception as e:\n",
        "            print(f\"❌ Error during summarization: {e}\")\n",
        "            return \"An error occurred while summarizing the information. Please try again later.\"\n",
        "\n",
        "        # Step 5: Evaluate the answer\n",
        "        evaluation_result = self.evaluation_agent.evaluate_answer(summary, query)\n",
        "\n",
        "        # Print the concise output\n",
        "        print(f\"📝 Evaluation Results: {evaluation_result}\\n\")\n",
        "\n",
        "        # Return only the final summary and evaluation results as output\n",
        "        return summary.strip(), evaluation_result\n"
      ],
      "metadata": {
        "id": "XNXaTBFyv8md"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# **Example Usage**"
      ],
      "metadata": {
        "id": "vhbV7AsWmm_Y"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "Try executing the code to type your question"
      ],
      "metadata": {
        "id": "i-owDHcLmqFx"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Example Usage:\n",
        "\n",
        "# Initialize the Evaluation Agent\n",
        "evaluation_agent = EvaluationAgent(gold_qa_dict)\n",
        "\n",
        "# Initialize the RelevanceSummarizationSystem with the retriever, summarizer, and evaluation agent\n",
        "relevance_system = RelevanceSummarizationSystem(\n",
        "    retriever_agent=retriever_agent,  # Assuming this is already defined\n",
        "    summarizer_agent=summarizer_agent,  # Assuming this is already defined\n",
        "    evaluation_agent=evaluation_agent,\n",
        "    relevance_threshold=0.6\n",
        ")\n",
        "\n",
        "# Take user input (query)\n",
        "user_question = input(\"Enter your question: \")  # User-provided query\n",
        "\n",
        "# Process the user query and get the response\n",
        "final_summary, evaluation_results = relevance_system.process_query(user_question, top_k=3)\n",
        "\n",
        "# Print the result\n",
        "print(\"\\nResponse:\")\n",
        "print(final_summary)  # Clean and concise summary\n",
        "print(\"\\nEvaluation Results:\")\n",
        "print(evaluation_results)  # Evaluation metrics\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "fkhw_wVDyU6z",
        "outputId": "bda686fb-1ed2-4ae5-eaf8-6801c94b01c0"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Enter your question: What is the Farm to Fork (F2F) Strategy?\n",
            "🔍 Processing query: What is the Farm to Fork (F2F) Strategy?\n",
            "\n",
            "🔄 Rephrased query: What does the Farm to Fork (F2F) Strategy entail?\n",
            "📊 Top relevance score: 0.95\n",
            "📝 Summarizer Agent: Summarizing retrieved content with context...\n",
            "✅ Context-aware summary generated:\n",
            "The Farm to Fork (F2F) Strategy is an initiative launched by the European Commission on May 20, 2020, aimed at making Europe’s food systems more sustainable and climate-neutral by 2050. It addresses the climate crisis by promoting a fair, healthy, and environmentally-friendly food system for all Europeans, with specific actions to reduce the environmental and climate footprint of food production and reverse biodiversity loss.\n",
            "\n",
            "F2F outlines five key targets to be achieved by 2030:\n",
            "\n",
            "1. Reduce the use and risk of chemical pesticides by 50%.\n",
            "2. Reduce nutrient losses by at least 50%.\n",
            "3. Reduce fertilizer use by at least 20%.\n",
            "4. Cut sales of antibiotics for farm animals by 50%.\n",
            "5. Increase organic farming area to at least 25% of total arable land.\n",
            "\n",
            "The strategy emphasizes the importance of sustainable practices such as precision agriculture and aims to ensure fair prices for food producers while promoting affordable, healthy food for all citizens. It also includes initiatives to reduce food waste, enhance consumer information regarding food sources and environmental impacts, and protect ecosystems. Overall, the F2F Strategy is a comprehensive approach to transforming the European food system in alignment with sustainability goals.\n",
            "\n",
            "📝 Evaluation Results: {'error': 'No Gold Standard: The query is not in the gold Q&A dictionary.'}\n",
            "\n",
            "\n",
            "Response:\n",
            "The Farm to Fork (F2F) Strategy is an initiative launched by the European Commission on May 20, 2020, aimed at making Europe’s food systems more sustainable and climate-neutral by 2050. It addresses the climate crisis by promoting a fair, healthy, and environmentally-friendly food system for all Europeans, with specific actions to reduce the environmental and climate footprint of food production and reverse biodiversity loss.\n",
            "\n",
            "F2F outlines five key targets to be achieved by 2030:\n",
            "\n",
            "1. Reduce the use and risk of chemical pesticides by 50%.\n",
            "2. Reduce nutrient losses by at least 50%.\n",
            "3. Reduce fertilizer use by at least 20%.\n",
            "4. Cut sales of antibiotics for farm animals by 50%.\n",
            "5. Increase organic farming area to at least 25% of total arable land.\n",
            "\n",
            "The strategy emphasizes the importance of sustainable practices such as precision agriculture and aims to ensure fair prices for food producers while promoting affordable, healthy food for all citizens. It also includes initiatives to reduce food waste, enhance consumer information regarding food sources and environmental impacts, and protect ecosystems. Overall, the F2F Strategy is a comprehensive approach to transforming the European food system in alignment with sustainability goals.\n",
            "\n",
            "Evaluation Results:\n",
            "{'error': 'No Gold Standard: The query is not in the gold Q&A dictionary.'}\n"
          ]
        }
      ]
    }
  ]
}
