{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Know Your Customer Use Case - Gemini Grounding with Google Search\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fuse-cases%2Fkyc%2Fkyc-with-grounding.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/use-cases/kyc/kyc-with-grounding.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Author |\n",
        "| --- |\n",
        "| [Lukas Geiger](https://github.com/ljogeiger) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook demonstrates how to use the Gemini API (specifically the `gemini-2.5-flash` model) to find and summarize negative news articles related to a specified entity. The entity can be a person, company, or ship. The notebook leverages Google Search as a tool for the Gemini API to ground its responses in real-world information.\n",
        "\n",
        "You will learn how to:\n",
        "* Configure the Gemini API client.\n",
        "* Define a detailed system instruction to guide the model's behavior.\n",
        "* Craft a prompt that incorporates an input entity.\n",
        "* Use Google Search as a grounding tool for the model.\n",
        "* Process the model's response to extract the generated text and grounding metadata (sources).\n",
        "* Evaluate the responses with the Vertex AI Gen AI Evaluation service and create custom metrics.\n",
        "\n",
        "## Use Case Definition\n",
        "\"Know Your Customer\" (KYC) is a crucial due diligence process used by businesses, particularly in regulated industries, to verify the identity of their clients and assess potential risks associated with doing business with them. The primary goal of KYC is to prevent financial crimes like money laundering, terrorist financing, and fraud.\n",
        "\n",
        "For example, imagine you are interviewing candidates for a board seat. As part of this process, you might run a background check to determine whether a candidate has been involved in any illegal activities, and then use the resulting report to make a more informed decision about whether to proceed. The same approach applies across many other use cases and entity types (companies, people, vessels, governments, etc.)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install Google Gen AI SDK and other required packages\n",
        "The following command installs the Google Generative AI SDK, which is necessary to interact with the Gemini API."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-genai"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment. This allows the notebook to access Google Cloud services."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c19f5974a642"
      },
      "source": [
        "### Authenticate your notebook environment (local only)\n",
        "\n",
        "If you're running this notebook in a local environment, run the following command to set up Application Default Credentials."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1307a625f802"
      },
      "outputs": [],
      "source": [
        "!gcloud auth application-default login"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information and initialize the API client\n",
        "\n",
        "To get started using Vertex AI (which hosts the Gemini models used here), you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).\n",
        "\n",
        "This cell also initializes the `genai.Client` which will be used to interact with the Gemini API."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "from google import genai\n",
        "\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    # Attempt to get project ID from environment variable if not set by user\n",
        "    PROJECT_ID = os.environ.get(\"GOOGLE_CLOUD_PROJECT\", \"\")\n",
        "    if not PROJECT_ID:\n",
        "        raise ValueError(\"Please set your Google Cloud Project ID.\")\n",
        "print(f\"Using Project ID: {PROJECT_ID}\")\n",
        "\n",
        "LOCATION = os.environ.get(\n",
        "    \"GOOGLE_CLOUD_REGION\", \"us-central1\"\n",
        ")  # Default to us-central1 if not set\n",
        "print(f\"Using Location: {LOCATION}\")\n",
        "\n",
        "# Initialize the Google Gen AI client\n",
        "client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5303c05f7aa6"
      },
      "source": [
        "### Import libraries\n",
        "Import necessary libraries, including `google.genai.types` for defining specific configurations for the API call, and `IPython.display` for better rendering of markdown in the notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "id": "6fc324893334"
      },
      "outputs": [],
      "source": [
        "from IPython.display import Markdown, display\n",
        "from google.genai import types"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EdvJRUWRNGHE"
      },
      "source": [
        "## Generating Negative News Reports with Gemini"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e43229f3ad4f"
      },
      "source": [
        "### Load model\n",
        "Specify the model ID to be used. We are using `gemini-2.5-flash`, a fast and versatile model with reasoning capabilities."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "id": "cf93d5f0ce00"
      },
      "outputs": [],
      "source": [
        "MODEL_ID = \"gemini-2.5-flash\"  # @param {type:\"string\"}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "new_markdown_cell_prompt_system"
      },
      "source": [
        "### Define Prompt Template and System Instructions\n",
        "\n",
        "**System Instructions:** These provide high-level guidance on the model's role: it should act as a professional news analyst whose reports are thorough, accurate, and professionally presented.\n",
        "\n",
        "**Prompt Template:** This is the specific query sent to the model for each entity. It lists the activities to search for, the steps to follow, and the expected output format, and includes a placeholder `{input_entity}` that is filled in with the actual entity name at execution time."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "id": "new_code_cell_prompt_system_def"
      },
      "outputs": [],
      "source": [
        "prompt_template = \"\"\"\n",
        "Your task is to provide a comprehensive and professional report of negative news articles for a given input entity. The input entity can be a person, company, or ship.\n",
        "\n",
        "Input Entity:\n",
        "{input_entity}\n",
        "\n",
        "Activities:\n",
        "Money Laundering\n",
        "Forgery\n",
        "Bribery and Corruption\n",
        "Human Trafficking\n",
        "\n",
        "Follow these steps:\n",
        "\n",
        "1.  If the input entity is a company, map it to its legal business name.\n",
        "2.  Thoroughly search Google News for negative news articles related to the input entity and the specified activities across all time.\n",
        "3.  Summarize and interpret the Google Search results for the given input entity and each activity. If there are no results for a given activity, skip it.\n",
        "4.  For each activity with search results, create a headline that summarizes the event.\n",
        "5.  Group the search results under the corresponding headline, including the date of each news article.\n",
        "6.  For person names, strictly follow the entity names.\n",
        "7.  If there are no negative news articles associated with the input entity, respond with: \"There are no results found for {input_entity}.\"\n",
        "\n",
        "\n",
        "Output Format:\n",
        "\n",
        "Headline: [Summary of the event]\n",
        "Date: [Date of the news article]\n",
        "Summary: [Brief summary of the news article]\n",
        "\n",
        "Example:\n",
        "\n",
        "Headline: John Doe Accused of Money Laundering\n",
        "Date: 2023-01-15\n",
        "Summary: John Doe is accused of laundering money through offshore accounts, according to a report by the International Consortium of Investigative Journalists.\n",
        "\n",
        "Ensure that the report is comprehensive, accurate, and professionally presented.\n",
        "\"\"\"\n",
        "\n",
        "system_instructions_text = \"\"\"\n",
        "You are a professional news analyst tasked with providing comprehensive reports on negative news articles related to a given entity. Your reports must be thorough, accurate, and professionally presented.\n",
        "\"\"\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "new_markdown_cell_helpers"
      },
      "source": [
        "### Helper Function to Get Sources\n",
        "The `get_sources` function processes the `grounding_metadata` from the model's response. This metadata contains information about the web pages the model used to generate its answer (when using Google Search as a tool). The function extracts titles and URLs for these sources and formats them for display. This is crucial for verifying the information provided by the model."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "id": "new_code_cell_get_sources"
      },
      "outputs": [],
      "source": [
        "def get_sources(response):\n",
        "    \"\"\"Return a formatted string of sources corresponding to the response's citations.\n",
        "\n",
        "    Args:\n",
        "        response: The response from the Gemini API containing grounding metadata\n",
        "\n",
        "    Returns:\n",
        "        A formatted string containing the sources with their titles and URLs\n",
        "    \"\"\"\n",
        "    source_text = \"\\n\\n**Sources:**\\n\"\n",
        "    if not response.candidates or not response.candidates[0].grounding_metadata:\n",
        "        return source_text + \"No grounding metadata found.\\n\"\n",
        "\n",
        "    metadata = response.candidates[0].grounding_metadata\n",
        "    sources = {}\n",
        "    source_titles = {}\n",
        "    max_chunk_index = -1\n",
        "\n",
        "    if not metadata.grounding_supports:\n",
        "        return source_text + \"No grounding supports found in metadata.\\n\"\n",
        "\n",
        "    for support in metadata.grounding_supports:\n",
        "        for chunk_index in support.grounding_chunk_indices:\n",
        "            display_chunk_index = chunk_index + 1  # convert 0-based chunk index to 1-based citation number\n",
        "            if display_chunk_index > max_chunk_index:\n",
        "                max_chunk_index = display_chunk_index\n",
        "            if display_chunk_index not in source_titles and chunk_index < len(\n",
        "                metadata.grounding_chunks\n",
        "            ):\n",
        "                chunk = metadata.grounding_chunks[chunk_index]\n",
        "                source_titles[display_chunk_index] = chunk.web.title\n",
        "                sources[display_chunk_index] = chunk.web.uri\n",
        "            elif chunk_index >= len(metadata.grounding_chunks):\n",
        "                print(\n",
        "                    f\"Warning: chunk_index {chunk_index} out of bounds for grounding_chunks (len: {len(metadata.grounding_chunks)}).\"\n",
        "                )\n",
        "    sorted_source_titles = dict(sorted(source_titles.items()))\n",
        "\n",
        "    if sources:\n",
        "        for i in sorted_source_titles:\n",
        "            source_text += f\"[[{i}] {sorted_source_titles[i]}]({sources[i]})\\n\"\n",
        "    else:\n",
        "        source_text += \"No sources extracted from grounding metadata.\\n\"\n",
        "\n",
        "    # Debugging information (optional, can be commented out)\n",
        "    # print(f\"Max Chunk Index: {max_chunk_index}\")\n",
        "    # print(f\"Length of GroundingChunks: {len(metadata.grounding_chunks)}\")\n",
        "    return source_text"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "new_markdown_cell_generate_func"
      },
      "source": [
        "### Define Generation Function\n",
        "The `generate_kyc_report_from_entity` function encapsulates the logic for calling the Gemini API. It takes an entity name, the system instructions, and the prompt template as input.\n",
        "Key configurations:\n",
        "* **`model`**: Uses the `MODEL_ID` defined earlier.\n",
        "* **`contents`**: The user prompt, formatted with the specific entity.\n",
        "* **`tools`**: Configured to use `types.GoogleSearch()`, enabling the model to perform Google searches to find relevant information.\n",
        "* **`generate_content_config`**: \n",
        "    * `temperature=1`, `top_p=0.95`: These parameters control the randomness and creativity of the output. Higher temperature and top_p values lead to more diverse responses.\n",
        "    * `max_output_tokens=8192`: Sets the maximum length of the generated response.\n",
        "    * `response_modalities=[\"TEXT\"]`: Specifies that we expect a text response.\n",
        "    * **`safety_settings`**: **Important Note:** All harm categories (`HATE_SPEECH`, `DANGEROUS_CONTENT`, `SEXUALLY_EXPLICIT`, `HARASSMENT`) have their threshold set to `BLOCK_NONE`, which disables safety filtering. This is done to ensure the model can retrieve and report on potentially sensitive topics related to negative news. However, in a production environment or for other use cases, you should carefully consider and configure appropriate safety settings based on your application's requirements and responsible AI practices.\n",
        "    * `system_instruction`: The detailed instructions for the model's task.\n",
        "\n",
        "The function calls `client.models.generate_content` and returns the API response."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "id": "new_code_cell_generate_def"
      },
      "outputs": [],
      "source": [
        "def generate_kyc_report_from_entity(\n",
        "    entity_name: str, system_instructions: str, prompt_template: str\n",
        "):\n",
        "    \"\"\"Generate a KYC report for a given entity using the Gemini API.\n",
        "\n",
        "    Args:\n",
        "        entity_name: The name of the entity to generate a report for\n",
        "        system_instructions: The system instructions to guide the model's behavior\n",
        "        prompt_template: The prompt template to use for the report\n",
        "    Returns:\n",
        "        response: The response from the Gemini API containing the generated report\n",
        "    \"\"\"\n",
        "    current_prompt = prompt_template.format(input_entity=entity_name)\n",
        "    contents = [current_prompt]\n",
        "\n",
        "    tools = [\n",
        "        types.Tool(google_search=types.GoogleSearch()),\n",
        "    ]\n",
        "\n",
        "    generate_content_config = types.GenerateContentConfig(\n",
        "        temperature=1,\n",
        "        top_p=0.95,\n",
        "        max_output_tokens=8192,\n",
        "        response_modalities=[\"TEXT\"],  # Expect text modality output\n",
        "        safety_settings=[\n",
        "            types.SafetySetting(\n",
        "                category=\"HARM_CATEGORY_HATE_SPEECH\", threshold=\"BLOCK_NONE\"\n",
        "            ),  # BLOCK_NONE disables filtering for this category\n",
        "            types.SafetySetting(\n",
        "                category=\"HARM_CATEGORY_DANGEROUS_CONTENT\", threshold=\"BLOCK_NONE\"\n",
        "            ),\n",
        "            types.SafetySetting(\n",
        "                category=\"HARM_CATEGORY_SEXUALLY_EXPLICIT\", threshold=\"BLOCK_NONE\"\n",
        "            ),\n",
        "            types.SafetySetting(\n",
        "                category=\"HARM_CATEGORY_HARASSMENT\", threshold=\"BLOCK_NONE\"\n",
        "            ),\n",
        "        ],\n",
        "        tools=tools,\n",
        "        system_instruction=types.Content(\n",
        "            parts=[types.Part(text=system_instructions)]\n",
        "        ),  # System instructions should be Content object\n",
        "        thinking_config=types.ThinkingConfig(\n",
        "            include_thoughts=True,\n",
        "        ),\n",
        "    )\n",
        "\n",
        "    print(f\"\\n--- Generating report for: {entity_name} ---\")\n",
        "    response = client.models.generate_content(\n",
        "        model=MODEL_ID,\n",
        "        contents=contents,\n",
        "        config=generate_content_config,\n",
        "    )\n",
        "\n",
        "    return response"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "new_markdown_cell_run"
      },
      "source": [
        "### Define Entities and Run Analysis\n",
        "Define a list of entities for which to generate reports. The code then iterates through this list, calls the `generate_kyc_report_from_entity` function for each entity, and displays the model's text response along with the extracted sources. Using `display(Markdown(...))` helps in rendering the output in a more readable format."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "new_code_cell_run_analysis"
      },
      "outputs": [],
      "source": [
        "entities_to_check = [\n",
        "    \"Ricardo Martinelli\",\n",
        "    \"Robert Burke\",\n",
        "]  # Example entities, you can change or extend this list\n",
        "\n",
        "\n",
        "def generate_kyc_report_from_entity_list(\n",
        "    entity_list: list[str], system_instructions_text: str, prompt_template: str\n",
        "):\n",
        "    \"\"\"Generate and display KYC reports for a list of entities using the Gemini API.\n",
        "\n",
        "    Args:\n",
        "        entity_list: The list of entities to generate reports for\n",
        "        system_instructions_text: The system instructions to guide the model's behavior\n",
        "        prompt_template: The prompt template to use for each report\n",
        "    \"\"\"\n",
        "    for entity in entity_list:\n",
        "        response = generate_kyc_report_from_entity(\n",
        "            entity, system_instructions_text, prompt_template\n",
        "        )\n",
        "\n",
        "        # Display the model's text response\n",
        "        try:\n",
        "            if response.candidates and len(response.candidates) > 0:\n",
        "                parts = response.candidates[0].content.parts\n",
        "                # First display any thought parts\n",
        "                thought_parts = [\n",
        "                    part for part in parts if hasattr(part, \"thought\") and part.thought\n",
        "                ]\n",
        "                if thought_parts:\n",
        "                    display(Markdown(\"**Model Thoughts:**\"))\n",
        "                    for part in thought_parts:\n",
        "                        if hasattr(part, \"text\") and part.text:\n",
        "                            display(Markdown(part.text))\n",
        "\n",
        "                # Then display the final response\n",
        "                response_parts = [\n",
        "                    part\n",
        "                    for part in parts\n",
        "                    if hasattr(part, \"thought\")\n",
        "                    and not part.thought\n",
        "                    and hasattr(part, \"text\")\n",
        "                    and part.text\n",
        "                ]\n",
        "                if response_parts:\n",
        "                    display(Markdown(\"**Model Response:**\"))\n",
        "                    for part in response_parts:\n",
        "                        display(Markdown(part.text))\n",
        "                else:\n",
        "                    display(\n",
        "                        Markdown(\n",
        "                            \"**Model Response:**\\nNo text content found in response parts.\"\n",
        "                        )\n",
        "                    )\n",
        "        except Exception as e:\n",
        "            display(Markdown(f\"**Error displaying response:** {str(e)}\"))\n",
        "            print(\"Full response object:\", response)\n",
        "\n",
        "        # Get and display grounding information (sources)\n",
        "        sources_text = get_sources(response)\n",
        "        display(Markdown(sources_text))\n",
        "\n",
        "        print(\"------------------------------------------------------\")\n",
        "\n",
        "\n",
        "generate_kyc_report_from_entity_list(\n",
        "    entities_to_check, system_instructions_text, prompt_template\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c8d94bdf9283"
      },
      "source": [
        "## Evaluation\n",
        "This section evaluates the generated responses using the Vertex AI Gen AI Evaluation service with custom pointwise metrics."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "37e677bd9264"
      },
      "source": [
        "Import necessary libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {
        "id": "f05ee5c78e48"
      },
      "outputs": [],
      "source": [
        "from IPython.display import Markdown, display\n",
        "import pandas as pd\n",
        "import plotly.graph_objects as go\n",
        "from vertexai.evaluation import EvalTask, PointwiseMetric, PointwiseMetricPromptTemplate"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "64b671ead9cd"
      },
      "source": [
        "Define helper functions"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {
        "id": "8567afdb0b28"
      },
      "outputs": [],
      "source": [
        "def display_eval_result(eval_result, metrics=None):\n",
        "    \"\"\"Display the evaluation results.\n",
        "\n",
        "    Args:\n",
        "        eval_result: The evaluation result object containing metrics\n",
        "        metrics: Optional list of metric names to filter the display\n",
        "    Returns:\n",
        "        metrics_df: DataFrame containing summary metrics\n",
        "        metrics_table: DataFrame containing detailed metrics\n",
        "    \"\"\"\n",
        "    summary_metrics, metrics_table = (\n",
        "        eval_result.summary_metrics,\n",
        "        eval_result.metrics_table,\n",
        "    )\n",
        "\n",
        "    metrics_df = pd.DataFrame.from_dict(summary_metrics, orient=\"index\").T\n",
        "    if metrics:\n",
        "        metrics_df = metrics_df.filter(\n",
        "            [\n",
        "                metric\n",
        "                for metric in metrics_df.columns\n",
        "                if any(selected_metric in metric for selected_metric in metrics)\n",
        "            ]\n",
        "        )\n",
        "        metrics_table = metrics_table.filter(\n",
        "            [\n",
        "                metric\n",
        "                for metric in metrics_table.columns\n",
        "                if any(selected_metric in metric for selected_metric in metrics)\n",
        "            ]\n",
        "        )\n",
        "\n",
        "    # Display the summary metrics\n",
        "    display(Markdown(\"### Summary Metrics\"))\n",
        "    display(metrics_df)\n",
        "    # Display the metrics table\n",
        "    display(Markdown(\"### Row-based Metrics\"))\n",
        "    display(metrics_table)\n",
        "\n",
        "\n",
        "def plot_bar_plot(eval_results, metrics=None):\n",
        "    \"\"\"Create a bar plot of evaluation results.\n",
        "\n",
        "    Args:\n",
        "        eval_results: List of tuples containing (title, summary_metrics, metrics_table)\n",
        "        metrics: Optional list of metric names to filter the plot\n",
        "    \"\"\"\n",
        "    data = []\n",
        "\n",
        "    for eval_result in eval_results:\n",
        "        title, summary_metrics, _ = eval_result\n",
        "        if metrics:\n",
        "            summary_metrics = {\n",
        "                k: summary_metrics[k]\n",
        "                for k, v in summary_metrics.items()\n",
        "                if any(selected_metric in k for selected_metric in metrics)\n",
        "            }\n",
        "\n",
        "        data.append(\n",
        "            go.Bar(\n",
        "                x=list(summary_metrics.keys()),\n",
        "                y=list(summary_metrics.values()),\n",
        "                name=title,\n",
        "            )\n",
        "        )\n",
        "\n",
        "    fig = go.Figure(data=data)\n",
        "    fig.update_layout(barmode=\"group\")\n",
        "    fig.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3f2c996dcc6c"
      },
      "source": [
        "### Test Evaluation 1: Response Completeness and Accuracy"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "11ccdd014218"
      },
      "outputs": [],
      "source": [
        "completeness_accuracy_template = PointwiseMetricPromptTemplate(\n",
        "    criteria={\n",
        "        \"category_coverage\": \"The response should cover all relevant negative news categories (Money Laundering, Forgery, Bribery, Human Trafficking) if they exist for the entity.\",\n",
        "        \"source_citation\": \"The response should properly cite sources for each claim.\",\n",
        "    },\n",
        "    rating_rubric={\n",
        "        \"5\": \"Response covers all relevant categories and properly cites sources.\",\n",
        "        \"3\": \"Response covers most categories but may miss some citations.\",\n",
        "        \"1\": \"Response is incomplete or contains inaccuracies.\",\n",
        "    },\n",
        ")\n",
        "\n",
        "completeness_accuracy_metric = PointwiseMetric(\n",
        "    metric=\"completeness_accuracy\",\n",
        "    metric_prompt_template=completeness_accuracy_template,\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cbb8467d904e"
      },
      "outputs": [],
      "source": [
        "def extract_response_text(response) -> str:\n",
        "    \"\"\"Concatenate the text parts of a response's first candidate, if any.\"\"\"\n",
        "    if response.candidates:\n",
        "        parts = [\n",
        "            part.text\n",
        "            for part in response.candidates[0].content.parts\n",
        "            if hasattr(part, \"text\") and part.text\n",
        "        ]\n",
        "        return \"\\n\".join(parts)\n",
        "    return \"No response generated.\"\n",
        "\n",
        "\n",
        "def generate_ground_truth_data(\n",
        "    entities: list[str], system_instructions: str, prompt_template: str\n",
        ") -> dict[str, list[str]]:\n",
        "    \"\"\"Generate ground truth data for evaluation using generate_kyc_report_from_entity.\n",
        "\n",
        "    Args:\n",
        "        entities: List of entity names to generate reports for\n",
        "        system_instructions: System instructions for the model\n",
        "        prompt_template: The prompt template to use for the report\n",
        "\n",
        "    Returns:\n",
        "        Dictionary containing prompts, references, and responses for each entity\n",
        "    \"\"\"\n",
        "    ground_truth_data = {\"prompt\": [], \"reference\": [], \"response\": []}\n",
        "\n",
        "    for entity in entities:\n",
        "        # Generate the prompt\n",
        "        prompt = prompt_template.format(input_entity=entity)\n",
        "        ground_truth_data[\"prompt\"].append(prompt)\n",
        "\n",
        "        # Generate the report\n",
        "        response = generate_kyc_report_from_entity(\n",
        "            entity, system_instructions, prompt_template\n",
        "        )\n",
        "        ground_truth_data[\"response\"].append(extract_response_text(response))\n",
        "\n",
        "        # Note: we use the response from a second model run as the reference.\n",
        "        # In a real scenario, you might want to use human-verified references.\n",
        "        ref_response = generate_kyc_report_from_entity(\n",
        "            entity, system_instructions, prompt_template\n",
        "        )\n",
        "        ground_truth_data[\"reference\"].append(extract_response_text(ref_response))\n",
        "\n",
        "    return ground_truth_data\n",
        "\n",
        "\n",
        "# Example usage:\n",
        "entities_to_check = [\n",
        "    \"Ricardo Martinelli\",\n",
        "    \"Robert Burke\",\n",
        "]\n",
        "\n",
        "ground_truth_data = generate_ground_truth_data(\n",
        "    entities_to_check, system_instructions_text, prompt_template\n",
        ")"
      ]
    },
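    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7e1f4a2b9c03"
      },
      "source": [
        "Before running the evaluation, it can help to spot-check the generated dataset. The cell below is a small sanity check (it assumes `ground_truth_data` was produced by the previous cell): each column should have one row per entity."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8f2a5b3c0d14"
      },
      "outputs": [],
      "source": [
        "# Quick sanity check of the generated ground truth data:\n",
        "# one row per entity, with prompt / reference / response columns.\n",
        "preview_df = pd.DataFrame(ground_truth_data)\n",
        "print(preview_df.shape)\n",
        "preview_df.head()"
      ]
    },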
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "794b91041df4"
      },
      "outputs": [],
      "source": [
        "# Create evaluation dataset\n",
        "eval_dataset = pd.DataFrame(ground_truth_data)\n",
        "\n",
        "# Run evaluation\n",
        "eval_task = EvalTask(\n",
        "    dataset=eval_dataset,\n",
        "    metrics=[completeness_accuracy_metric],\n",
        "    experiment=\"kyc-completeness-accuracy\",\n",
        ")\n",
        "\n",
        "eval_result = eval_task.evaluate()\n",
        "\n",
        "# Display results\n",
        "display_eval_result(eval_result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2f1bc78ea45c"
      },
      "source": [
        "### Test Evaluation 2: Response Structure and Professionalism"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cea9aad4cad0"
      },
      "outputs": [],
      "source": [
        "structure_professionalism_template = PointwiseMetricPromptTemplate(\n",
        "    criteria={\n",
        "        \"formatting\": \"The response should follow the specified format with clear headlines, dates, and summaries.\",\n",
        "        \"professional_tone\": \"The response should maintain a professional and objective tone throughout.\",\n",
        "        \"clarity\": \"The information should be presented clearly and be easy to understand.\",\n",
        "    },\n",
        "    rating_rubric={\n",
        "        \"5\": \"Response is well-structured, professional, and clear.\",\n",
        "        \"3\": \"Response has good structure but could be more professional or clearer.\",\n",
        "        \"1\": \"Response lacks proper structure or professionalism.\",\n",
        "    },\n",
        ")\n",
        "\n",
        "structure_professionalism_metric = PointwiseMetric(\n",
        "    metric=\"structure_professionalism\",\n",
        "    metric_prompt_template=structure_professionalism_template,\n",
        ")\n",
        "\n",
        "# Create evaluation dataset\n",
        "eval_dataset = pd.DataFrame(ground_truth_data)\n",
        "\n",
        "# Run evaluation\n",
        "eval_task = EvalTask(\n",
        "    dataset=eval_dataset,\n",
        "    metrics=[structure_professionalism_metric],\n",
        "    experiment=\"kyc-structure-professionalism\",\n",
        ")\n",
        "\n",
        "eval_result = eval_task.evaluate()\n",
        "\n",
        "# Display results\n",
        "display_eval_result(eval_result)\n",
        "\n",
        "# Create visualization of results\n",
        "plot_bar_plot(\n",
        "    [\n",
        "        (\n",
        "            \"Structure and Professionalism\",\n",
        "            eval_result.summary_metrics,\n",
        "            eval_result.metrics_table,\n",
        "        )\n",
        "    ]\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2a4e033321ad"
      },
      "source": [
        "## Cleaning up\n",
        "This notebook primarily makes API calls to Google Gemini models and does not create persistent resources (such as VMs or storage buckets) in your Google Cloud project. As a result, no specific cleanup steps are required after running this notebook.\n",
        "\n",
        "If you want to disable the Vertex AI API used, you can do so from the Google Cloud Console, but this would affect any other services or notebooks relying on it."
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "kyc-with-grounding.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
