{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3a0f14e9",
   "metadata": {},
   "source": [
    "# Azure AI Search Blob knowledge source sample\n",
    "\n",
    "This Python notebook demonstrates the [blob knowledge source](https://learn.microsoft.com/azure/search/search-knowledge-source-how-to-blob) feature of Azure AI Search that is currently in public preview.\n",
    "\n",
    "Blob knowledge source takes a dependency on [integrated vectorization](https://learn.microsoft.com/azure/search/vector-search-integrated-vectorization). It automatically:\n",
    "* Generates a data source, skillset, indexer, and index to represent the underlying blob content in Azure AI Search\n",
    "* Connects an embedding model and chat completion model you bring with a skillset using any of the following skills:\n",
    "   * [Azure OpenAI Embedding skill](https://learn.microsoft.com/azure/search/cognitive-search-skill-azure-openai-embedding)\n",
    "   * [GenAI Prompt skill](https://learn.microsoft.com/azure/search/cognitive-search-skill-genai-prompt)\n",
    "   * [Azure AI Vision multimodal embeddings skill](https://learn.microsoft.com/azure/search/cognitive-search-skill-vision-vectorize)\n",
    "   * [AML Skill](https://learn.microsoft.com/azure/search/cognitive-search-aml-skill)\n",
    "\n",
    "These example uses PDFs from the `data/documents` folder to create a blob knowledge source. A knowledge agent is then used to chat with these PDFs.\n",
    "\n",
    "## Prerequisites\n",
    "+ An Azure subscription, with [access to Azure OpenAI](https://aka.ms/oai/access).\n",
    " \n",
    "+ Azure AI Search, Basic or higher for this workload. [Semantic ranker](https://learn.microsoft.com/azure/search/semantic-how-to-enable-disable) is required for [agentic retrieval](https://learn.microsoft.com/azure/search/search-agentic-retrieval-concept) and it's not available on the free tier.\n",
    "\n",
    "+ A deployment of the `text-embedding-3-large` model on Azure OpenAI.\n",
    "\n",
    "+ A deployment of the `gpt-5` model on Azure OpenAI.\n",
    "\n",
    "+ Azure Blob Storage. This notebook connects to your storage account and loads a container with the sample PDFs."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b693336f",
   "metadata": {},
   "source": [
    "### Set up a Python virtual environment in Visual Studio Code\n",
    "\n",
    "1. Open the Command Palette (Ctrl+Shift+P).\n",
    "1. Search for **Python: Create Environment**.\n",
    "1. Select **Venv**.\n",
    "1. Select a Python interpreter. Choose 3.10 or later.\n",
    "\n",
    "It can take a minute to set up. If you run into problems, see [Python environments in VS Code](https://code.visualstudio.com/docs/python/environments)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf271f16",
   "metadata": {},
   "source": [
    "### Install packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6a43d171",
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install -r requirements.txt --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "13a923ac",
   "metadata": {},
   "source": [
    "### Set and load environment variables\n",
    "\n",
    "1. Copy .env-sample from the demo-python folder to the knowledge subfolder.\n",
    "1. Rename the file to `.env`.\n",
    "1. Set the knowledge variables to your Azure resources, API keys, connections strings, and provide names for the new objects created by the code.\n",
    "1. Execute the cell to load the environment variables.\n",
    "\n",
    "This example includes image verbalization that uses an LLM to describe embedded images in your source content. You can enable image verbalization by setting `USE_VERBALIZATION` to true in the `.env` file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b6498ea2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from dotenv import load_dotenv\n",
    "from azure.identity.aio import DefaultAzureCredential\n",
    "from azure.core.credentials import AzureKeyCredential\n",
    "import os\n",
    "\n",
    "load_dotenv(override=True) # take environment variables from .env.\n",
    "\n",
    "# Variables not used here do not need to be updated in your .env file\n",
    "endpoint = os.environ[\"AZURE_SEARCH_SERVICE_ENDPOINT\"]\n",
    "credential = AzureKeyCredential(os.getenv(\"AZURE_SEARCH_ADMIN_KEY\")) if os.getenv(\"AZURE_SEARCH_ADMIN_KEY\") else DefaultAzureCredential()\n",
    "knowledge_source_name = os.getenv(\"AZURE_SEARCH_KNOWLEDGE_SOURCE\", \"blob-knowledge-source\")\n",
    "knowledge_agent_name = os.getenv(\"AZURE_SEARCH_KNOWLEDGE_AGENT\", \"blob-knowledge-agent\")\n",
    "blob_connection_string = os.environ[\"BLOB_CONNECTION_STRING\"]\n",
    "# search blob datasource connection string is optional - defaults to blob connection string\n",
    "# This field is only necessary if you are using MI to connect to the data source\n",
    "# https://learn.microsoft.com/azure/search/search-howto-indexing-azure-blob-storage#supported-credentials-and-connection-strings\n",
    "search_blob_connection_string = os.getenv(\"SEARCH_BLOB_DATASOURCE_CONNECTION_STRING\", blob_connection_string)\n",
    "blob_container_name = os.getenv(\"BLOB_CONTAINER_NAME\", \"documents\")\n",
    "azure_openai_endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
    "azure_openai_key = os.getenv(\"AZURE_OPENAI_KEY\")\n",
    "azure_openai_embedding_deployment = os.getenv(\"AZURE_OPENAI_EMBEDDING_DEPLOYMENT\", \"text-embedding-3-large\")\n",
    "azure_openai_embedding_model_name = os.getenv(\"AZURE_OPENAI_EMBEDDING_MODEL_NAME\", \"text-embedding-3-large\")\n",
    "azure_openai_chatgpt_deployment = os.getenv(\"AZURE_OPENAI_CHATGPT_DEPLOYMENT\", \"gpt-5\")\n",
    "azure_openai_chatgpt_model_name = os.getenv(\"AZURE_OPENAI_CHATGPT_MODEL_NAME\", \"gpt-5\")\n",
    "use_verbalization = os.getenv(\"USE_VERBALIZATION\", \"false\") == \"true\"\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b986409f",
   "metadata": {},
   "source": [
    "## Connect to Blob Storage and load documents\n",
    "\n",
    "Retrieve documents from Blob Storage. You can use the sample documents in the data/documents folder.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4fc383e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Setup sample data in documents\n"
     ]
    }
   ],
   "source": [
    "from azure.storage.blob.aio import BlobServiceClient  \n",
    "import glob\n",
    "\n",
    "sample_docs_directory = os.path.join(\"..\", \"..\", \"..\", \"data\", \"benefitdocs\")\n",
    "\n",
    "async def upload_sample_documents(\n",
    "        blob_connection_string: str,\n",
    "        blob_container_name: str,\n",
    "        documents_directory: str,\n",
    "        # Set to false if you want to use credentials included in the blob connection string\n",
    "        # Otherwise your identity will be used as credentials\n",
    "        use_user_identity: bool = True\n",
    "    ):\n",
    "        # Connect to Blob Storage\n",
    "        async with DefaultAzureCredential() as user_credential, BlobServiceClient.from_connection_string(logging_enable=True, conn_str=blob_connection_string, credential=user_credential if use_user_identity else None) as blob_service_client:\n",
    "            async with blob_service_client.get_container_client(blob_container_name) as container_client:\n",
    "                if not await container_client.exists():\n",
    "                    await container_client.create_container()\n",
    "\n",
    "                files = glob.glob(os.path.join(documents_directory, '*'))\n",
    "                for file in files:\n",
    "                    with open(file, \"rb\") as data:\n",
    "                        name = os.path.basename(file)\n",
    "                        async with container_client.get_blob_client(name) as blob_client:\n",
    "                            if not await blob_client.exists():\n",
    "                                await blob_client.upload_blob(data)\n",
    "\n",
    "docs_directory = sample_docs_directory\n",
    "\n",
    "await upload_sample_documents(\n",
    "    blob_connection_string = blob_connection_string,\n",
    "    blob_container_name = blob_container_name,\n",
    "    documents_directory = docs_directory)\n",
    "\n",
    "print(f\"Setup sample data in {blob_container_name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31bb33a4",
   "metadata": {},
   "source": [
    "## Create a blob knowledge source on Azure AI Search\n",
    "\n",
    "Creating a [blob knowledge source](https://learn.microsoft.com/azure/search/search-knowledge-source-how-to-blob) sets up all necessary resources to index the uploaded documents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "fa342b91",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Created knowledge source: blob-knowledge-source\n"
     ]
    }
   ],
   "source": [
    "from azure.search.documents.indexes.models import AzureBlobKnowledgeSource, AzureBlobKnowledgeSourceParameters, AzureOpenAIVectorizer, AzureOpenAIVectorizerParameters, KnowledgeAgentAzureOpenAIModel\n",
    "from azure.search.documents.indexes.aio import SearchIndexClient\n",
    "\n",
    "chat_model = KnowledgeAgentAzureOpenAIModel(\n",
    "    azure_open_ai_parameters=AzureOpenAIVectorizerParameters(\n",
    "        resource_url=azure_openai_endpoint,\n",
    "        deployment_name=azure_openai_chatgpt_deployment,\n",
    "        api_key=azure_openai_key,\n",
    "        model_name=azure_openai_chatgpt_model_name\n",
    "    )\n",
    ")\n",
    "\n",
    "knowledge_source = AzureBlobKnowledgeSource(\n",
    "    name=knowledge_source_name,\n",
    "    azure_blob_parameters=AzureBlobKnowledgeSourceParameters(\n",
    "        connection_string=search_blob_connection_string,\n",
    "        container_name=blob_container_name,\n",
    "        embedding_model=AzureOpenAIVectorizer(\n",
    "            vectorizer_name=\"blob-vectorizer\",\n",
    "            parameters=AzureOpenAIVectorizerParameters(\n",
    "                resource_url=azure_openai_endpoint,\n",
    "                deployment_name=azure_openai_embedding_deployment,\n",
    "                api_key=azure_openai_key,\n",
    "                model_name=azure_openai_embedding_model_name\n",
    "            )\n",
    "        ),\n",
    "        chat_completion_model=chat_model if use_verbalization else None,\n",
    "        disable_image_verbalization=not use_verbalization\n",
    "    )\n",
    ")\n",
    "\n",
    "async with SearchIndexClient(endpoint=endpoint, credential=credential) as client:\n",
    "    await client.create_or_update_knowledge_source(knowledge_source)\n",
    "    print(f\"Created knowledge source: {knowledge_source.name}\")"
   ]
  },
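  {
   "cell_type": "markdown",
   "id": "d1e2f3a4",
   "metadata": {},
   "source": [
    "Indexing runs asynchronously after the knowledge source is created. As an optional check, the following cell lists the indexers on the service and prints the status of any whose name contains the knowledge source name. This is a minimal sketch; it assumes the auto-generated indexer name includes the knowledge source name, which can vary by service version."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f6a7b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.aio import SearchIndexerClient\n",
    "\n",
    "# Optional: report the status of indexers created for this knowledge source\n",
    "async with SearchIndexerClient(endpoint=endpoint, credential=credential) as indexer_client:\n",
    "    for name in await indexer_client.get_indexer_names():\n",
    "        # Assumption: the auto-generated indexer name includes the knowledge source name\n",
    "        if knowledge_source_name in name:\n",
    "            status = await indexer_client.get_indexer_status(name)\n",
    "            last_status = status.last_result.status if status.last_result else \"not started\"\n",
    "            print(f\"Indexer '{name}': {last_status}\")"
   ]
  },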
  {
   "cell_type": "markdown",
   "id": "e0cdb045",
   "metadata": {},
   "source": [
    "## Create a knowledge agent on Azure AI Search\n",
    "\n",
    "This step creates a knowledge agent, which acts as a wrapper for your knowledge source and LLM deployment.\n",
    "\n",
    "`EXTRACTIVE_DATA` is the default modality and returns content from your knowledge sources without generative alteration. Use the `ANSWER_SYNTHESIS` modality for [LLM-generated answers that cite the retrieved content](https://learn.microsoft.com/azure/search/search-agentic-retrieval-how-to-synthesize)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "17f5a2c6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Created knowledge agent: blob-knowledge-agent\n"
     ]
    }
   ],
   "source": [
    "from azure.search.documents.indexes.models import KnowledgeAgent, KnowledgeSourceReference, KnowledgeAgentOutputConfiguration, KnowledgeAgentOutputConfigurationModality\n",
    "\n",
    "output_config = KnowledgeAgentOutputConfiguration(\n",
    "    modality=KnowledgeAgentOutputConfigurationModality.ANSWER_SYNTHESIS,\n",
    "    include_activity=True\n",
    ")\n",
    "\n",
    "agent = KnowledgeAgent(\n",
    "    name=knowledge_agent_name,\n",
    "    models=[chat_model],\n",
    "    knowledge_sources=[\n",
    "        KnowledgeSourceReference(\n",
    "            name=knowledge_source.name,\n",
    "            include_reference_source_data=True,\n",
    "            always_query_source=True\n",
    "        )\n",
    "    ],\n",
    "    output_configuration=output_config\n",
    ")\n",
    "\n",
    "async with SearchIndexClient(endpoint=endpoint, credential=credential) as index_client:\n",
    "    await index_client.create_or_update_agent(agent)\n",
    "    print(f\"Created knowledge agent: {agent.name}\")"
   ]
  },
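  {
   "cell_type": "markdown",
   "id": "f9a0b1c2",
   "metadata": {},
   "source": [
    "If you want raw grounding data instead of synthesized answers, you can configure the agent with the default `EXTRACTIVE_DATA` modality. The following cell is a sketch that only builds the alternative configuration without applying it, because the rest of this notebook expects synthesized answers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3b4c5d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Alternative output configuration: return retrieved content without generative alteration\n",
    "extractive_output_config = KnowledgeAgentOutputConfiguration(\n",
    "    modality=KnowledgeAgentOutputConfigurationModality.EXTRACTIVE_DATA,\n",
    "    include_activity=True\n",
    ")\n",
    "# To apply it, set agent.output_configuration = extractive_output_config\n",
    "# and call create_or_update_agent(agent) again"
   ]
  },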
  {
   "cell_type": "markdown",
   "id": "e78ca732",
   "metadata": {},
   "source": [
    "## Use agentic retrieval to fetch results\n",
    "\n",
    "This step runs the agentic retrieval pipeline to produce a grounded, citation-backed answer. Given the conversation history and retrieval parameters, your knowledge agent:\n",
    "\n",
    "* Analyzes the entire conversation to infer the user's information need.\n",
    "* Decomposes the compound query into focused subqueries.\n",
    "* Executes the subqueries concurrently against your knowledge source.\n",
    "* Uses semantic ranker to rerank and filter the results.\n",
    "* Synthesizes the top results into a natural-language answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "cda61b67",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.agent.aio import KnowledgeAgentRetrievalClient\n",
    "from azure.search.documents.agent.models import KnowledgeAgentRetrievalRequest, KnowledgeAgentMessage, KnowledgeAgentMessageTextContent, SearchIndexKnowledgeSourceParams\n",
    "\n",
    "messages = [\n",
    "    KnowledgeAgentMessage(\n",
    "        role=\"user\",\n",
    "        content=[KnowledgeAgentMessageTextContent(\n",
    "            text=\"Differences between Northwind Health Plus and Standard\"\n",
    "        )]\n",
    "    )\n",
    "]\n",
    "\n",
    "agent_client = KnowledgeAgentRetrievalClient(endpoint=endpoint, agent_name=knowledge_agent_name, credential=credential)\n",
    "result = await agent_client.retrieve(KnowledgeAgentRetrievalRequest(messages=messages))"
   ]
  },
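  {
   "cell_type": "markdown",
   "id": "b7c8d9e0",
   "metadata": {},
   "source": [
    "Because the agent plans over the entire conversation, you can continue the thread by appending the synthesized answer as an assistant turn and asking a follow-up question. The following cell is a minimal sketch; the follow-up text is illustrative."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1d2e3f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Continue the conversation: append the previous answer as an assistant turn,\n",
    "# then ask an illustrative follow-up question\n",
    "messages.append(KnowledgeAgentMessage(\n",
    "    role=\"assistant\",\n",
    "    content=[KnowledgeAgentMessageTextContent(text=result.response[0].content[0].text)]\n",
    "))\n",
    "messages.append(KnowledgeAgentMessage(\n",
    "    role=\"user\",\n",
    "    content=[KnowledgeAgentMessageTextContent(text=\"Which plan covers emergency services?\")]\n",
    "))\n",
    "followup_result = await agent_client.retrieve(KnowledgeAgentRetrievalRequest(messages=messages))\n",
    "print(followup_result.response[0].content[0].text)"
   ]
  },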
  {
   "cell_type": "markdown",
   "id": "8c3e3e31",
   "metadata": {},
   "source": [
    "## Review the retrieval response, activity, and results\n",
    "Because your knowledge agent is configured for answer synthesis, the retrieval response contains the following values:\n",
    "\n",
    "* `response_content`: An LLM-generated answer to the query that cites the retrieved documents.\n",
    "* `activity_content`: Detailed planning and execution information, including subqueries, reranking decisions, and intermediate steps.\n",
    "* `references_content`: Source documents and chunks that contributed to the answer.\n",
    "\n",
    "*Tip:* Retrieval parameters, such as reranker thresholds and knowledge source parameters, influence how aggressively your agent reranks and which sources it queries. Inspect the activity and references to validate grounding and build traceable citations.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "94562da2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Northwind Health Plus and Northwind Standard differ in several key areas:\n",
      "\n",
      "- **Coverage Scope**: Northwind Health Plus is a comprehensive plan that covers medical, vision, and dental services, as well as prescription drugs, mental health and substance abuse services, preventive care, and emergency services (both in-network and out-of-network). Northwind Standard is a more basic plan that covers medical, vision, and dental services, preventive care, and prescription drugs, but does not cover emergency services, mental health and substance abuse services, or out-of-network services [ref_id:0][ref_id:2][ref_id:1].\n",
      "\n",
      "- **Prescription Drugs**: Northwind Health Plus covers a wider range of prescription drugs, including generic, brand-name, and specialty drugs. Northwind Standard only covers generic and brand-name drugs [ref_id:0].\n",
      "\n",
      "- **Vision and Dental**: Both plans cover vision and dental services, but Northwind Health Plus includes coverage for vision exams, glasses, contact lenses, and dental exams, cleanings, and fillings. Northwind Standard only covers vision exams and glasses [ref_id:0].\n",
      "\n",
      "- **Medical Services**: Northwind Health Plus covers hospital stays, doctor visits, lab tests, and X-rays. Northwind Standard covers only doctor visits and lab tests [ref_id:0].\n",
      "\n",
      "- **Cost Structure**: The cost to the employee depends on the selected plan and the number of people covered. Costs are deducted from each paycheck throughout the year [ref_id:1].\n",
      "\n",
      "- **Network**: Northwind Health Plus offers both in-network and out-of-network coverage, while Northwind Standard only covers in-network services [ref_id:0][ref_id:2].\n",
      "\n",
      "In summary, Northwind Health Plus offers broader and more comprehensive coverage compared to Northwind Standard [ref_id:0][ref_id:1][ref_id:2].\n"
     ]
    }
   ],
   "source": [
    "print(result.response[0].content[0].text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "1b53cc71",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "activity_content:\n",
      " [\n",
      "  {\n",
      "    \"id\": 0,\n",
      "    \"type\": \"modelQueryPlanning\",\n",
      "    \"elapsed_ms\": 913,\n",
      "    \"input_tokens\": 2021,\n",
      "    \"output_tokens\": 48\n",
      "  },\n",
      "  {\n",
      "    \"id\": 1,\n",
      "    \"type\": \"azureBlob\",\n",
      "    \"elapsed_ms\": 555,\n",
      "    \"knowledge_source_name\": \"blob-knowledge-source\",\n",
      "    \"query_time\": \"2025-09-09T05:40:40.117Z\",\n",
      "    \"count\": 50,\n",
      "    \"azure_blob_arguments\": {\n",
      "      \"search\": \"Differences between Northwind Health Plus and Northwind Health Standard plans\"\n",
      "    }\n",
      "  },\n",
      "  {\n",
      "    \"id\": 2,\n",
      "    \"type\": \"semanticReranker\",\n",
      "    \"input_tokens\": 0\n",
      "  },\n",
      "  {\n",
      "    \"id\": 3,\n",
      "    \"type\": \"modelAnswerSynthesis\",\n",
      "    \"elapsed_ms\": 3968,\n",
      "    \"input_tokens\": 7010,\n",
      "    \"output_tokens\": 402\n",
      "  }\n",
      "] \n",
      "\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "# Activity -> JSON string of activity as list of dicts\n",
    "\n",
    "activity_content = json.dumps([a.as_dict() for a in result.activity], indent=2)\n",
    "print(\"activity_content:\\n\", activity_content, \"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23f64b89",
   "metadata": {},
   "outputs": [],
   "source": [
    "# References -> JSON string of references as list of dicts\n",
    "references_content = json.dumps([r.as_dict() for r in result.references], indent=2)\n",
    "print(\"references_content:\\n\", references_content, \"\\n\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "knowledge",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
