{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f37c771a",
   "metadata": {},
   "source": [
    "### Set up a Python virtual environment in Visual Studio Code\n",
    "\n",
    "1. Open the Command Palette (Ctrl+Shift+P).\n",
    "1. Search for **Python: Create Environment**.\n",
    "1. Select **Venv**.\n",
    "1. Select a Python interpreter. Choose 3.10 or later.\n",
    "\n",
    "It can take a minute to set up. If you run into problems, see [Python environments in VS Code](https://code.visualstudio.com/docs/python/environments)."
   ]
  },
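  {
   "cell_type": "markdown",
   "id": "3f2a9c1b",
   "metadata": {},
   "source": [
    "If you prefer the terminal, you can also create and activate the environment manually. A minimal sketch, assuming a `python3` executable of version 3.10 or later is on your PATH:\n",
    "\n",
    "```bash\n",
    "python3 -m venv .venv\n",
    "source .venv/bin/activate  # On Windows: .venv\\\\Scripts\\\\activate\n",
    "```"
   ]
  },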
  {
   "cell_type": "markdown",
   "id": "b5a25479",
   "metadata": {},
   "source": [
    "### Install packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df57746a",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install -r requirements.txt --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "02ef9952",
   "metadata": {},
   "source": [
    "### Load the .env file (copy .env-sample to .env and update it accordingly)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6761f3c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from dotenv import load_dotenv\n",
    "from azure.identity.aio import DefaultAzureCredential\n",
    "from azure.core.credentials import AzureKeyCredential\n",
    "import os\n",
    "\n",
    "load_dotenv(override=True) # take environment variables from .env.\n",
    "\n",
    "# Variables not used here do not need to be updated in your .env file\n",
    "endpoint = os.environ[\"AZURE_SEARCH_SERVICE_ENDPOINT\"]\n",
    "credential = AzureKeyCredential(os.getenv(\"AZURE_SEARCH_ADMIN_KEY\")) if os.getenv(\"AZURE_SEARCH_ADMIN_KEY\") else DefaultAzureCredential()\n",
    "knowledge_source_name = os.getenv(\"AZURE_SEARCH_KNOWLEDGE_SOURCE\", \"json-knowledge-source\")\n",
    "knowledge_agent_name = os.getenv(\"AZURE_SEARCH_KNOWLEDGE_AGENT\", \"json-knowledge-agent\")\n",
    "index_name = os.getenv(\"AZURE_SEARCH_INDEX_NAME\", \"json-knowledge-index\")\n",
    "blob_connection_string = os.environ[\"BLOB_CONNECTION_STRING\"]\n",
    "# search blob datasource connection string is optional - defaults to blob connection string\n",
    "# This setting is only necessary if you use a managed identity to connect to the data source\n",
    "# https://learn.microsoft.com/azure/search/search-howto-indexing-azure-blob-storage#supported-credentials-and-connection-strings\n",
    "search_blob_connection_string = os.getenv(\"SEARCH_BLOB_DATASOURCE_CONNECTION_STRING\", blob_connection_string)\n",
    "blob_container_name = os.getenv(\"BLOB_CONTAINER_NAME\", \"json-documents\")\n",
    "azure_openai_endpoint = os.environ[\"AZURE_OPENAI_ENDPOINT\"]\n",
    "azure_openai_key = os.getenv(\"AZURE_OPENAI_KEY\")\n",
    "azure_openai_embedding_deployment = os.getenv(\"AZURE_OPENAI_EMBEDDING_DEPLOYMENT\", \"text-embedding-3-large\")\n",
    "azure_openai_embedding_model_name = os.getenv(\"AZURE_OPENAI_EMBEDDING_MODEL_NAME\", \"text-embedding-3-large\")\n",
    "azure_openai_embedding_model_dimensions = int(os.getenv(\"AZURE_OPENAI_EMBEDDING_MODEL_DIMENSIONS\", \"3072\"))\n",
    "azure_openai_chatgpt_deployment = os.getenv(\"AZURE_OPENAI_CHATGPT_DEPLOYMENT\", \"gpt-5-mini\")\n",
    "azure_openai_chatgpt_model_name = os.getenv(\"AZURE_OPENAI_CHATGPT_MODEL_NAME\", \"gpt-5-mini\")\n"
   ]
  },
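  {
   "cell_type": "markdown",
   "id": "4a3b0d2c",
   "metadata": {},
   "source": [
    "As an optional sanity check, you can confirm which endpoint and credential type were resolved from your .env file before continuing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b4c1e3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: print the settings resolved from .env\n",
    "print(f\"Search endpoint: {endpoint}\")\n",
    "print(f\"Credential type: {type(credential).__name__}\")\n",
    "print(f\"Index name: {index_name}\")"
   ]
  },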
  {
   "cell_type": "markdown",
   "id": "5a6c81f6",
   "metadata": {},
   "source": [
    "## Connect to Blob Storage and load documents\n",
    "\n",
    "Upload documents to Blob Storage. You can use the sample documents in the data/jsondocuments folder."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9dc7654a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.storage.blob.aio import BlobServiceClient  \n",
    "import glob\n",
    "\n",
    "sample_docs_directory = os.path.join(\"..\", \"..\", \"..\", \"data\", \"jsondocuments\")\n",
    "\n",
    "async def upload_sample_documents(\n",
    "        blob_connection_string: str,\n",
    "        blob_container_name: str,\n",
    "        documents_directory: str,\n",
    "        # Set to false if you want to use credentials included in the blob connection string\n",
    "        # Otherwise your identity will be used as credentials\n",
    "        use_user_identity: bool = True\n",
    "    ):\n",
    "        # Connect to Blob Storage\n",
    "        async with DefaultAzureCredential() as user_credential, BlobServiceClient.from_connection_string(conn_str=blob_connection_string, credential=user_credential if use_user_identity else None) as blob_service_client:\n",
    "            async with blob_service_client.get_container_client(blob_container_name) as container_client:\n",
    "                if not await container_client.exists():\n",
    "                    await container_client.create_container()\n",
    "\n",
    "                files = glob.glob(os.path.join(documents_directory, '*'))\n",
    "                for file in files:\n",
    "                    with open(file, \"rb\") as data:\n",
    "                        name = os.path.basename(file)\n",
    "                        async with container_client.get_blob_client(name) as blob_client:\n",
    "                            if not await blob_client.exists():\n",
    "                                await blob_client.upload_blob(data)\n",
    "\n",
    "docs_directory = sample_docs_directory\n",
    "\n",
    "await upload_sample_documents(\n",
    "    blob_connection_string = blob_connection_string,\n",
    "    blob_container_name = blob_container_name,\n",
    "    documents_directory = docs_directory)\n",
    "\n",
    "print(f\"Uploaded sample documents to {blob_container_name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10735c34",
   "metadata": {},
   "source": [
    "## Create a blob data source connector on Azure AI Search"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "05e7a3ea",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.aio import SearchIndexerClient\n",
    "from azure.search.documents.indexes.models import (\n",
    "    SearchIndexerDataContainer,\n",
    "    SearchIndexerDataSourceConnection\n",
    ")\n",
    "from azure.search.documents.indexes.models import SoftDeleteColumnDeletionDetectionPolicy\n",
    "\n",
    "# Create a data source \n",
    "async with SearchIndexerClient(endpoint, credential) as indexer_client:\n",
    "    container = SearchIndexerDataContainer(name=blob_container_name)\n",
    "    data_source_connection = SearchIndexerDataSourceConnection(\n",
    "        name=f\"{index_name}-blob\",\n",
    "        type=\"azureblob\",\n",
    "        connection_string=search_blob_connection_string,\n",
    "        container=container,\n",
    "        data_deletion_detection_policy=SoftDeleteColumnDeletionDetectionPolicy(soft_delete_column_name=\"is_deleted\", soft_delete_marker_value=\"true\")\n",
    "    )\n",
    "    data_source = await indexer_client.create_or_update_data_source_connection(data_source_connection)\n",
    "\n",
    "    print(f\"Data source '{data_source.name}' created or updated\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf9f50b0",
   "metadata": {},
   "source": [
    "## Create a search index\n",
    "\n",
    "Vector and nonvector content is stored in a search index."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f237879b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.aio import SearchIndexClient\n",
    "from azure.search.documents.indexes.models import (\n",
    "    SearchField,\n",
    "    SearchFieldDataType,\n",
    "    VectorSearch,\n",
    "    HnswAlgorithmConfiguration,\n",
    "    VectorSearchProfile,\n",
    "    AzureOpenAIVectorizer,\n",
    "    AzureOpenAIVectorizerParameters,\n",
    "    SemanticConfiguration,\n",
    "    SemanticSearch,\n",
    "    SemanticPrioritizedFields,\n",
    "    SemanticField,\n",
    "    SearchIndex,\n",
    "    BinaryQuantizationCompression\n",
    ")\n",
    "\n",
    "# Create a search index  \n",
    "fields = [\n",
    "    SearchField(name=\"parent_id\", type=SearchFieldDataType.String, sortable=True, filterable=True, facetable=True),  \n",
    "    SearchField(name=\"chunk_id\", type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True, analyzer_name=\"keyword\"),\n",
    "    SearchField(name=\"event_id\", type=SearchFieldDataType.String, filterable=True, facetable=True, searchable=True, analyzer_name=\"keyword\"),\n",
    "    SearchField(name=\"event_name\", type=SearchFieldDataType.String, searchable=True),\n",
    "    SearchField(name=\"playlist_id\", type=SearchFieldDataType.String, filterable=True, facetable=True, searchable=True, analyzer_name=\"keyword\"),\n",
    "    SearchField(name=\"playlist_name\", type=SearchFieldDataType.String, searchable=True),\n",
    "    SearchField(name=\"video_id\", type=SearchFieldDataType.String, filterable=True, facetable=True, searchable=True, analyzer_name=\"keyword\"),\n",
    "    SearchField(name=\"session_title\", type=SearchFieldDataType.String, searchable=True, sortable=True),\n",
    "    SearchField(name=\"speaker\", type=SearchFieldDataType.String, searchable=True, facetable=True),\n",
    "    SearchField(name=\"content\", type=SearchFieldDataType.String, searchable=True),\n",
    "    SearchField(name=\"timestamp_start\", type=SearchFieldDataType.String, filterable=True, sortable=True, searchable=False),\n",
    "    SearchField(name=\"timestamp_end\", type=SearchFieldDataType.String, filterable=True, sortable=True, searchable=False),\n",
    "    SearchField(name=\"chunk_index\", type=SearchFieldDataType.Int32, filterable=True, sortable=True),\n",
    "    SearchField(name=\"duration\", type=SearchFieldDataType.Int32, filterable=True, sortable=True),\n",
    "    SearchField(name=\"upload_date\", type=SearchFieldDataType.String, filterable=True, sortable=True, searchable=False),\n",
    "    SearchField(name=\"view_count\", type=SearchFieldDataType.Int64, filterable=True, sortable=True, facetable=True, searchable=False),\n",
    "    SearchField(name=\"processed_at\", type=SearchFieldDataType.DateTimeOffset, filterable=True, sortable=True),\n",
    "    SearchField(name=\"content_length\", type=SearchFieldDataType.Int64, filterable=True, sortable=True),\n",
    "\n",
    "    # Vector field for semantic / vector search (dimensions variable from env)\n",
    "    SearchField(\n",
    "        name=\"vector\",\n",
    "        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
    "        # See https://learn.microsoft.com/azure/search/vector-search-how-to-storage-options\n",
    "        stored=False,\n",
    "        vector_search_dimensions=azure_openai_embedding_model_dimensions,\n",
    "        vector_search_profile_name=\"myHnswProfile\"\n",
    "    ),\n",
    "]\n",
    "\n",
    "# Configure the vector search configuration  \n",
    "vector_search = VectorSearch(  \n",
    "    algorithms=[  \n",
    "        HnswAlgorithmConfiguration(name=\"myHnsw\"),\n",
    "    ],  \n",
    "    profiles=[  \n",
    "        VectorSearchProfile(  \n",
    "            name=\"myHnswProfile\",  \n",
    "            algorithm_configuration_name=\"myHnsw\",  \n",
    "            vectorizer_name=\"myOpenAI\",  \n",
    "            compression_name=\"myBinaryCompression\",\n",
    "        )\n",
    "    ],  \n",
    "    vectorizers=[  \n",
    "        AzureOpenAIVectorizer(  \n",
    "            vectorizer_name=\"myOpenAI\",  \n",
    "            kind=\"azureOpenAI\",  \n",
    "            parameters=AzureOpenAIVectorizerParameters(  \n",
    "                resource_url=azure_openai_endpoint,  \n",
    "                deployment_name=azure_openai_embedding_deployment,\n",
    "                model_name=azure_openai_embedding_model_name,\n",
    "                api_key=azure_openai_key,\n",
    "            ),\n",
    "        ),  \n",
    "    ],\n",
    "    compressions=[\n",
    "        # See https://learn.microsoft.com/azure/search/vector-search-how-to-quantization\n",
    "        BinaryQuantizationCompression(compression_name=\"myBinaryCompression\")\n",
    "    ]\n",
    ")  \n",
    "  \n",
    "semantic_config = SemanticConfiguration(  \n",
    "    name=\"my-semantic-config\",\n",
    "    prioritized_fields=SemanticPrioritizedFields(  \n",
    "        content_fields=[SemanticField(field_name=\"content\")]\n",
    "    ),  \n",
    ")\n",
    "  \n",
    "# Create the semantic search with the configuration  \n",
    "semantic_search = SemanticSearch(configurations=[semantic_config], default_configuration_name=semantic_config.name)  \n",
    "  \n",
    "# Create the search index\n",
    "index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search, semantic_search=semantic_search) \n",
    "async with SearchIndexClient(endpoint=endpoint, credential=credential) as index_client:\n",
    "    result = await index_client.create_or_update_index(index)  \n",
    "    print(f\"{result.name} created\")  \n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "433f93c8",
   "metadata": {},
   "source": [
    "## Create a skillset\n",
    "\n",
    "Skills drive integrated vectorization. [Text Split](https://learn.microsoft.com/azure/search/cognitive-search-skill-textsplit) provides data chunking. [AzureOpenAIEmbedding](https://learn.microsoft.com/azure/search/cognitive-search-skill-azure-openai-embedding) handles calls to Azure OpenAI, using the connection information you provide in the environment variables. An [index projection](https://learn.microsoft.com/azure/search/index-projections-concept-intro) specifies the secondary indexes used for chunked data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "332ea588",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.models import (\n",
    "    SplitSkill,\n",
    "    InputFieldMappingEntry,\n",
    "    OutputFieldMappingEntry,\n",
    "    AzureOpenAIEmbeddingSkill,\n",
    "    SearchIndexerIndexProjection,\n",
    "    SearchIndexerIndexProjectionSelector,\n",
    "    SearchIndexerIndexProjectionsParameters,\n",
    "    IndexProjectionMode,\n",
    "    SearchIndexerSkillset\n",
    ")\n",
    "\n",
    "# Create a skillset name \n",
    "skillset_name = f\"{index_name}-skillset\"\n",
    "\n",
    "def create_skillset():\n",
    "    split_skill = SplitSkill(  \n",
    "        description=\"Split skill to chunk documents\",  \n",
    "        text_split_mode=\"pages\",  \n",
    "        context=\"/document\",  \n",
    "        maximum_page_length=2000,  \n",
    "        page_overlap_length=500,  \n",
    "        inputs=[  \n",
    "            InputFieldMappingEntry(name=\"text\", source=\"/document/content\"),  \n",
    "        ],  \n",
    "        outputs=[  \n",
    "            OutputFieldMappingEntry(name=\"textItems\", target_name=\"pages\")  \n",
    "        ]\n",
    "    )\n",
    "\n",
    "    embedding_skill = AzureOpenAIEmbeddingSkill(  \n",
    "        description=\"Skill to generate embeddings via Azure OpenAI\",  \n",
    "        context=\"/document/pages/*\",  \n",
    "        resource_url=azure_openai_endpoint,  \n",
    "        deployment_name=azure_openai_embedding_deployment,  \n",
    "        model_name=azure_openai_embedding_model_name,\n",
    "        dimensions=azure_openai_embedding_model_dimensions,\n",
    "        api_key=azure_openai_key,  \n",
    "        inputs=[  \n",
    "            InputFieldMappingEntry(name=\"text\", source=\"/document/pages/*\"),  \n",
    "        ],  \n",
    "        outputs=[\n",
    "            OutputFieldMappingEntry(name=\"embedding\", target_name=\"vector\")  \n",
    "        ]\n",
    "    )\n",
    "\n",
    "    index_projections = SearchIndexerIndexProjection(  \n",
    "        selectors=[  \n",
    "            SearchIndexerIndexProjectionSelector(  \n",
    "                target_index_name=index_name,  \n",
    "                parent_key_field_name=\"parent_id\",  \n",
    "                source_context=\"/document/pages/*\",  \n",
    "                mappings=[\n",
    "                    InputFieldMappingEntry(name=\"content\", source=\"/document/pages/*\"),\n",
    "                    InputFieldMappingEntry(name=\"vector\", source=\"/document/pages/*/vector\"),\n",
    "                    InputFieldMappingEntry(name=\"timestamp_start\", source=\"/document/timestamp_start\"),\n",
    "                    InputFieldMappingEntry(name=\"timestamp_end\", source=\"/document/timestamp_end\"),\n",
    "                    InputFieldMappingEntry(name=\"chunk_index\", source=\"/document/chunk_index\"),\n",
    "                    InputFieldMappingEntry(name=\"duration\", source=\"/document/duration\"),\n",
    "                    InputFieldMappingEntry(name=\"content_length\", source=\"/document/content_length\"),\n",
    "                    InputFieldMappingEntry(name=\"event_id\", source=\"/document/event_id\"),\n",
    "                    InputFieldMappingEntry(name=\"event_name\", source=\"/document/event_name\"),\n",
    "                    InputFieldMappingEntry(name=\"playlist_id\", source=\"/document/playlist_id\"),\n",
    "                    InputFieldMappingEntry(name=\"playlist_name\", source=\"/document/playlist_name\"),\n",
    "                    InputFieldMappingEntry(name=\"video_id\", source=\"/document/video_id\"),\n",
    "                    InputFieldMappingEntry(name=\"session_title\", source=\"/document/session_title\"),\n",
    "                    InputFieldMappingEntry(name=\"speaker\", source=\"/document/speaker\"),\n",
    "                    InputFieldMappingEntry(name=\"upload_date\", source=\"/document/upload_date\"),\n",
    "                    InputFieldMappingEntry(name=\"view_count\", source=\"/document/view_count\"),\n",
    "                    InputFieldMappingEntry(name=\"processed_at\", source=\"/document/processed_at\"),\n",
    "                ]\n",
    "            )\n",
    "        ],  \n",
    "        parameters=SearchIndexerIndexProjectionsParameters(  \n",
    "            projection_mode=IndexProjectionMode.SKIP_INDEXING_PARENT_DOCUMENTS  \n",
    "        )  \n",
    "    )\n",
    "\n",
    "    skills = [split_skill, embedding_skill]\n",
    "\n",
    "    return SearchIndexerSkillset(  \n",
    "        name=skillset_name,  \n",
    "        description=\"Skillset to chunk documents and generate embeddings\",\n",
    "        skills=skills,  \n",
    "        index_projection=index_projections\n",
    "    )\n",
    "\n",
    "skillset = create_skillset()\n",
    "async with SearchIndexerClient(endpoint, credential) as client:\n",
    "    await client.create_or_update_skillset(skillset)\n",
    "    print(f\"{skillset.name} created\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98cda259",
   "metadata": {},
   "source": [
    "## Create an indexer\n",
    "\n",
    "Use the JSON array parsing mode to parse the included transcript documents, which are stored as arrays of transcript chunks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d9cb303b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.models import (\n",
    "    SearchIndexer,\n",
    "    IndexingParameters,\n",
    "    IndexingParametersConfiguration,\n",
    "    BlobIndexerParsingMode,\n",
    ")\n",
    "\n",
    "# Create an indexer  \n",
    "indexer_name = f\"{index_name}-indexer\"  \n",
    "\n",
    "indexer_parameters = IndexingParameters(\n",
    "        configuration=IndexingParametersConfiguration(\n",
    "            parsing_mode=BlobIndexerParsingMode.JSON_ARRAY,\n",
    "            query_timeout=None))\n",
    "\n",
    "indexer = SearchIndexer(  \n",
    "    name=indexer_name,  \n",
    "    description=\"Indexer to index documents and generate embeddings\",  \n",
    "    skillset_name=skillset_name,  \n",
    "    target_index_name=index_name,  \n",
    "    data_source_name=data_source.name,\n",
    "    parameters=indexer_parameters\n",
    ")  \n",
    "\n",
    "async with SearchIndexerClient(endpoint, credential) as indexer_client:\n",
    "    indexer_result = await indexer_client.create_or_update_indexer(indexer)\n",
    "\n",
    "    # Run the indexer  \n",
    "    await indexer_client.run_indexer(indexer_name)  \n",
    "    print(f'{indexer_name} is created and running. If queries return no results, please wait a bit and try again.')\n"
   ]
  },
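  {
   "cell_type": "markdown",
   "id": "6c5d2f4e",
   "metadata": {},
   "source": [
    "Indexer runs are asynchronous. As a sketch, you can poll the indexer status to check whether the run has finished and how many documents were processed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d6e3a5f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Poll the indexer status; rerun this cell until the last run reports success\n",
    "async with SearchIndexerClient(endpoint, credential) as indexer_client:\n",
    "    status = await indexer_client.get_indexer_status(indexer_name)\n",
    "    print(f\"Indexer status: {status.status}\")\n",
    "    if status.last_result:\n",
    "        print(f\"Last run: {status.last_result.status}, {status.last_result.item_count} item(s) processed, {status.last_result.failed_item_count} failed\")"
   ]
  },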
  {
   "cell_type": "markdown",
   "id": "a7b25666",
   "metadata": {},
   "source": [
     "## Create a knowledge source on Azure AI Search\n",
     "\n",
     "This step creates an index knowledge source that wraps the index you created so a knowledge agent can query it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e0a2fbe4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.models import SearchIndexKnowledgeSource, SearchIndexKnowledgeSourceParameters\n",
    "from azure.search.documents.indexes.aio import SearchIndexClient\n",
    "\n",
    "knowledge_source = SearchIndexKnowledgeSource(\n",
    "    name=knowledge_source_name,\n",
    "    search_index_parameters=SearchIndexKnowledgeSourceParameters(\n",
    "        search_index_name=index_name,\n",
    "        source_data_select=\"chunk_id,content,session_title,playlist_name\"\n",
    "    )\n",
    ")\n",
    "\n",
    "async with SearchIndexClient(endpoint=endpoint, credential=credential) as client:\n",
    "    await client.create_or_update_knowledge_source(knowledge_source)\n",
    "    print(f\"Created knowledge source: {knowledge_source.name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f32c8b67",
   "metadata": {},
   "source": [
    "## Create a knowledge agent on Azure AI Search\n",
    "\n",
    "This step creates a knowledge agent, which acts as a wrapper for your knowledge source and LLM deployment.\n",
    "\n",
    "`EXTRACTIVE_DATA` is the default modality and returns content from your knowledge sources without generative alteration. Use the `ANSWER_SYNTHESIS` modality for LLM-generated answers that cite the retrieved content."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f7f61c4a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.indexes.models import KnowledgeAgent, KnowledgeSourceReference, KnowledgeAgentOutputConfiguration, KnowledgeAgentOutputConfigurationModality, KnowledgeAgentAzureOpenAIModel\n",
    "\n",
    "chat_model = KnowledgeAgentAzureOpenAIModel(\n",
    "    azure_open_ai_parameters=AzureOpenAIVectorizerParameters(\n",
    "        resource_url=azure_openai_endpoint,\n",
    "        deployment_name=azure_openai_chatgpt_deployment,\n",
    "        api_key=azure_openai_key,\n",
    "        model_name=azure_openai_chatgpt_model_name\n",
    "    )\n",
    ")\n",
    "\n",
    "output_config = KnowledgeAgentOutputConfiguration(\n",
    "    modality=KnowledgeAgentOutputConfigurationModality.ANSWER_SYNTHESIS,\n",
    "    include_activity=True\n",
    ")\n",
    "\n",
    "agent = KnowledgeAgent(\n",
    "    name=knowledge_agent_name,\n",
    "    models=[chat_model],\n",
    "    knowledge_sources=[\n",
    "        KnowledgeSourceReference(\n",
    "            name=knowledge_source.name,\n",
    "            include_reference_source_data=True,\n",
    "            always_query_source=True\n",
    "        )\n",
    "    ],\n",
    "    output_configuration=output_config\n",
    ")\n",
    "\n",
    "async with SearchIndexClient(endpoint=endpoint, credential=credential) as index_client:\n",
    "    await index_client.create_or_update_agent(agent)\n",
    "    print(f\"Created knowledge agent: {agent.name}\")"
   ]
  },
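  {
   "cell_type": "markdown",
   "id": "8e7f4b6a",
   "metadata": {},
   "source": [
    "If you prefer raw grounding data over a synthesized answer, you can configure the agent with the `EXTRACTIVE_DATA` modality instead. A sketch of the alternative configuration (not applied by default):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f8a5c7b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Alternative output configuration: return retrieved content without LLM answer synthesis\n",
    "extractive_config = KnowledgeAgentOutputConfiguration(\n",
    "    modality=KnowledgeAgentOutputConfigurationModality.EXTRACTIVE_DATA,\n",
    "    include_activity=True\n",
    ")\n",
    "# To apply it, set agent.output_configuration = extractive_config and rerun create_or_update_agent"
   ]
  },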
  {
   "cell_type": "markdown",
   "id": "015dd196",
   "metadata": {},
   "source": [
    "## Use agentic retrieval to fetch results\n",
    "\n",
    "This step runs the agentic retrieval pipeline to produce a grounded, citation-backed answer. Given the conversation history and retrieval parameters, your knowledge agent:\n",
    "\n",
    "* Analyzes the entire conversation to infer the user's information need.\n",
    "* Decomposes the compound query into focused subqueries.\n",
    "* Executes the subqueries concurrently against your knowledge source.\n",
    "* Uses semantic ranker to rerank and filter the results.\n",
    "* Synthesizes the top results into a natural-language answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9e527a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.search.documents.agent.aio import KnowledgeAgentRetrievalClient\n",
     "from azure.search.documents.agent.models import KnowledgeAgentRetrievalRequest, KnowledgeAgentMessage, KnowledgeAgentMessageTextContent\n",
    "\n",
    "messages = [\n",
    "    KnowledgeAgentMessage(\n",
    "        role=\"user\",\n",
    "        content=[KnowledgeAgentMessageTextContent(\n",
    "            text=\"Name a few announcements\"\n",
    "        )]\n",
    "    )\n",
    "]\n",
    "\n",
    "agent_client = KnowledgeAgentRetrievalClient(endpoint=endpoint, agent_name=knowledge_agent_name, credential=credential)\n",
    "result = await agent_client.retrieve(KnowledgeAgentRetrievalRequest(messages=messages))\n",
    "await agent_client.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66bde7de",
   "metadata": {},
   "source": [
    "## Review the retrieval response, activity, and results\n",
    "Because your knowledge agent is configured for answer synthesis, the retrieval response contains the following values:\n",
    "\n",
    "* `response_content`: An LLM-generated answer to the query that cites the retrieved documents.\n",
    "* `activity_content`: Detailed planning and execution information, including subqueries, reranking decisions, and intermediate steps.\n",
    "* `references_content`: Source documents and chunks that contributed to the answer.\n",
    "\n",
    "*Tip:* Retrieval parameters, such as reranker thresholds and knowledge source parameters, influence how aggressively your agent reranks and which sources it queries. Inspect the activity and references to validate grounding and build traceable citations.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4bcdf38f",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(result.response[0].content[0].text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f41a7c73",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Activity -> JSON string of activity as list of dicts\n",
    "\n",
    "activity_content = json.dumps([a.as_dict() for a in result.activity], indent=2)\n",
    "print(\"activity_content:\\n\", activity_content, \"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c18ea06",
   "metadata": {},
   "outputs": [],
   "source": [
    "# References -> JSON string of references as list of dicts\n",
    "references_content = json.dumps([r.as_dict() for r in result.references], indent=2)\n",
    "print(\"references_content:\\n\", references_content, \"\\n\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "knowledge",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
