{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "6db70b5c",
   "metadata": {},
   "source": [
    "# Using Image Data\n",
    "\n",
    "This notebook is a spike exploring how to ingest images and ask questions about the data within them using Azure AI \n",
    "Vision and GPT-4-Vision.\n",
    "\n",
    "## How to use\n",
    "\n",
    "To use this you need the following resources provisioned:\n",
    "- Azure AI Search\n",
    "- Azure Storage Account\n",
    "- Azure OpenAI - Check the supported regions here https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#standard-deployment-model-availability\n",
    "    - A `text-embedding-ada-002` model deployment\n",
    "    - A `gpt-4-vision` model deployment\n",
    "- Azure Computer Vision\n",
    "\n",
    "You can reuse some of the components deployed as part of the Chat With Your Data project. However, at the time of \n",
    "writing you will also need to deploy Azure Computer Vision and a `gpt-4-vision` model.\n",
    "\n",
    "Once provisioned, you then need to update the configuration below to match your setup.\n",
    "\n",
    "## Data\n",
    "\n",
    "This spike only uploads one image, which is a flow chart detailing when to use various Azure services. This image was \n",
    "sourced from https://learn.microsoft.com/en-us/azure/architecture/guide/technology-choices/compute-decision-tree.\n",
    "\n",
    "Feel free to use a different image and question.\n",
    "\n",
    "## How it works\n",
    "\n",
    "### Ingestion\n",
    "\n",
    "To ingest an image into Azure AI Search, the following steps are performed:\n",
    "1. Create the search index\n",
    "2. Upload the image to blob storage\n",
    "3. Generate image embeddings using the Computer Vision `vectorizeImage` API\n",
    "4. Generate a caption for the image using `gpt-4-vision`\n",
    "5. Generate embeddings of the caption using `text-embedding-ada-002`\n",
    "6. Store the data in the search index\n",
    "\n",
    "\n",
    "### Question\n",
    "\n",
    "To ask a question using this data, the following steps are performed:\n",
    "1. Generate embeddings for the question using the Computer Vision `vectorizeText` API\n",
    "2. Generate embeddings for the question using `text-embedding-ada-002`\n",
    "3. Search the index using both embeddings\n",
    "4. Generate a blob SAS URL from the returned search results\n",
    "5. Pass the question, along with the blob SAS URL, to the `gpt-4-vision` chat completions endpoint\n",
    "\n",
    "\n",
    "## Why do we need two different embedding models?\n",
    "\n",
    "Using two different embedding models is not strictly required. However, combining Azure Computer Vision embeddings of\n",
    "the image itself with a `gpt-4-vision`-generated description that is then embedded by `text-embedding-ada-002` provides\n",
    "richer data and better search results. This is particularly useful for diagrams and flow charts, which capture\n",
    "relationships and decision points.\n",
    "\n",
    "When embedding the question, we need to embed it with both models so that we can search over both sets of previously\n",
    "generated vectors. The two models produce vectors of different sizes (1024 dimensions for Computer Vision, 1536 for\n",
    "`text-embedding-ada-002`), so one model's query vector cannot be compared against the other model's stored vectors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "e0da0335-4787-499a-abe9-bd99971bb98c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Common imports\n",
    "import re\n",
    "import datetime\n",
    "import base64\n",
    "import json\n",
    "from urllib.parse import urljoin\n",
    "from azure.core.credentials import AzureKeyCredential\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a2fc464c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Config\n",
    "## Credentials\n",
    "search_credential = AzureKeyCredential(\"\")\n",
    "storage_credential = \"\"\n",
    "computer_vision_credential = \"\"\n",
    "openai_credential = \"\"\n",
    "\n",
    "## Image\n",
    "image_file_dir = \"./\"\n",
    "image_file = \"azure-services.png\"\n",
    "\n",
    "## Question\n",
    "question = \"Which service should I use if I'm building a new application and require full control?\"\n",
    "\n",
    "## Search\n",
    "search_service = \"\"\n",
    "search_endpoint = f\"https://{search_service}.search.windows.net/\"\n",
    "index_name = \"image-index\"\n",
    "\n",
    "## Storage\n",
    "storage_account = \"\"\n",
    "storage_endpoint = f\"https://{storage_account}.blob.core.windows.net\"\n",
    "storage_container = \"imagecontent\"\n",
    "\n",
    "## Vision\n",
    "computer_vision = \"\"\n",
    "computer_vision_endpoint = f\"https://{computer_vision}.cognitiveservices.azure.com/\"\n",
    "computer_vision_vectorize_image_url = urljoin(computer_vision_endpoint, \"computervision/retrieval:vectorizeImage\")\n",
    "\n",
    "## OpenAI\n",
    "openai_service = \"\"\n",
    "openai_endpoint = f\"https://{openai_service}.openai.azure.com/openai/\"\n",
    "gpt4v_deployment_name = \"gpt-4v\"\n",
    "embeddings_deployment_name = \"text-embedding-ada-002\"\n"
   ]
  },
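  {
   "cell_type": "code",
   "execution_count": null,
   "id": "config-check-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check (not part of the original spike): fail fast if any of\n",
    "# the placeholder resource names above are still empty. The variable names used\n",
    "# here are the ones defined in the config cell.\n",
    "required = {\n",
    "    \"search_service\": search_service,\n",
    "    \"storage_account\": storage_account,\n",
    "    \"computer_vision\": computer_vision,\n",
    "    \"openai_service\": openai_service,\n",
    "}\n",
    "missing = [name for name, value in required.items() if not value]\n",
    "if missing:\n",
    "    print(\"Fill in these config values before continuing:\", \", \".join(missing))"
   ]
  },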
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "05f8b3e2-9b46-4cea-9fc9-a6981ff21c67",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create Search Clients\n",
    "from azure.search.documents.indexes.aio import SearchIndexClient, SearchIndexerClient\n",
    "from azure.search.documents.aio import SearchClient\n",
    "\n",
    "search_client = SearchClient(endpoint=search_endpoint, index_name=index_name, credential=search_credential)\n",
    "search_index_client = SearchIndexClient(endpoint=search_endpoint, credential=search_credential)\n",
    "search_indexer_client = SearchIndexerClient(endpoint=search_endpoint, credential=search_credential)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e2171a4b-09a6-43a1-89cb-2b4cecfcadbc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create Index\n",
    "from azure.search.documents.indexes.models import (\n",
    "    HnswAlgorithmConfiguration,\n",
    "    HnswParameters,\n",
    "    SearchableField,\n",
    "    SearchField,\n",
    "    SearchFieldDataType,\n",
    "    SearchIndex,\n",
    "    SemanticConfiguration,\n",
    "    SemanticField,\n",
    "    SemanticPrioritizedFields,\n",
    "    SemanticSearch,\n",
    "    SimpleField,\n",
    "    VectorSearch,\n",
    "    VectorSearchProfile,\n",
    ")\n",
    "\n",
    "fields = [\n",
    "    SimpleField(name=\"id\", type=\"Edm.String\", key=True),\n",
    "    SearchableField(\n",
    "        name=\"content\",\n",
    "        type=\"Edm.String\",\n",
    "        analyzer_name=\"en.microsoft\",\n",
    "    ),\n",
    "    SearchField(\n",
    "        name=\"embedding\",\n",
    "        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
    "        hidden=False,\n",
    "        searchable=True,\n",
    "        filterable=False,\n",
    "        sortable=False,\n",
    "        facetable=False,\n",
    "        vector_search_dimensions=1536,\n",
    "        vector_search_profile_name=\"embedding_config\",\n",
    "    ),\n",
    "    SimpleField(name=\"category\", type=\"Edm.String\", filterable=True, facetable=True),\n",
    "    SimpleField(\n",
    "        name=\"sourcepage\",\n",
    "        type=\"Edm.String\",\n",
    "        filterable=True,\n",
    "        facetable=True,\n",
    "    ),\n",
    "    SimpleField(\n",
    "        name=\"sourcefile\",\n",
    "        type=\"Edm.String\",\n",
    "        filterable=True,\n",
    "        facetable=True,\n",
    "    ),\n",
    "]\n",
    "\n",
    "# NEW ADDITIONAL FIELD\n",
    "fields.append(\n",
    "    SearchField(\n",
    "        name=\"imageEmbedding\",\n",
    "        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),\n",
    "        hidden=False,\n",
    "        searchable=True,\n",
    "        filterable=False,\n",
    "        sortable=False,\n",
    "        facetable=False,\n",
    "        vector_search_dimensions=1024,\n",
    "        vector_search_profile_name=\"embedding_config\",\n",
    "    ),\n",
    ")\n",
    "\n",
    "index = SearchIndex(\n",
    "    name=index_name,\n",
    "    fields=fields,\n",
    "    semantic_search=SemanticSearch(\n",
    "        configurations=[\n",
    "            SemanticConfiguration(\n",
    "                name=\"default\",\n",
    "                prioritized_fields=SemanticPrioritizedFields(\n",
    "                    title_field=None, content_fields=[SemanticField(field_name=\"content\")]\n",
    "                ),\n",
    "            )\n",
    "        ]\n",
    "    ),\n",
    "    vector_search=VectorSearch(\n",
    "        algorithms=[\n",
    "            HnswAlgorithmConfiguration(\n",
    "                name=\"hnsw_config\",\n",
    "                parameters=HnswParameters(metric=\"cosine\"),\n",
    "            )\n",
    "        ],\n",
    "        profiles=[\n",
    "            VectorSearchProfile(\n",
    "                name=\"embedding_config\",\n",
    "                algorithm_configuration_name=\"hnsw_config\",\n",
    "            ),\n",
    "        ],\n",
    "    ),\n",
    ")\n",
    "\n",
    "if index_name not in [name async for name in search_index_client.list_index_names()]:\n",
    "    print(f\"Creating {index_name} search index\")\n",
    "    await search_index_client.create_index(index)\n",
    "else:\n",
    "    print(f\"Search index {index_name} already exists\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e76958c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create Blob Storage Clients\n",
    "from azure.storage.blob.aio import BlobServiceClient\n",
    "\n",
    "blob_client = BlobServiceClient(account_url=storage_endpoint, credential=storage_credential)\n",
    "container_client = blob_client.get_container_client(storage_container)\n",
    "if not await container_client.exists():\n",
    "    await container_client.create_container()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1da97a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Upload file\n",
    "from azure.storage.blob import generate_blob_sas\n",
    "\n",
    "file_path = f\"{image_file_dir}{image_file}\"\n",
    "with open(file_path, mode=\"rb\") as file_contents:\n",
    "    blob_client = await container_client.upload_blob(name=image_file, data=file_contents, overwrite=True)\n",
    "blob_sas = generate_blob_sas(\n",
    "    container_client.account_name,\n",
    "    container_client.container_name,\n",
    "    image_file,\n",
    "    account_key=storage_credential,\n",
    "    permission=\"r\",\n",
    "    # SAS expiry times are interpreted as UTC, so use an explicit UTC timestamp\n",
    "    expiry=datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),\n",
    ")\n",
    "blob_url = f\"{blob_client.url}?{blob_sas}\"\n",
    "print(blob_url)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aed5502b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate image embeddings\n",
    "import aiohttp\n",
    "\n",
    "headers = {\"Content-Type\": \"application/json\", \"Ocp-Apim-Subscription-Key\": computer_vision_credential}\n",
    "params = {\"api-version\": \"2023-02-01-preview\", \"modelVersion\": \"latest\"}\n",
    "\n",
    "async with aiohttp.ClientSession(headers=headers) as session:\n",
    "    body = {\"url\": blob_url}\n",
    "    async with session.post(url=computer_vision_vectorize_image_url, params=params, json=body) as resp:\n",
    "        resp_json = await resp.json()\n",
    "        image_embeddings = resp_json[\"vector\"]\n",
    "\n",
    "print(image_embeddings)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "644e645b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# OpenAI client\n",
    "from openai import AzureOpenAI\n",
    "\n",
    "openai_client = AzureOpenAI(base_url=openai_endpoint, api_key=openai_credential, api_version=\"2024-02-01\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a0d76ad",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate caption and embeddings for it\n",
    "caption_system_message = \"\"\"You are an assistant that generates rich descriptions of images.\n",
    "You need to be accurate in the information you extract and detailed in the descriptions you generate.\n",
    "Do not abbreviate anything and do not shorten sentences. Explain the image completely.\n",
    "If you are provided with an image of a flow chart, describe the flow chart in detail.\n",
    "If the image is mostly text, use OCR to extract the text as it is displayed in the image.\n",
    "\"\"\"\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": caption_system_message},\n",
    "    {\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"Describe this image in detail\"}, {\"type\": \"image_url\", \"image_url\": {\"url\": blob_url}}]},\n",
    "]\n",
    "\n",
    "print(json.dumps(messages, indent=4))\n",
    "\n",
    "chat_completion = openai_client.chat.completions.create(\n",
    "    model=gpt4v_deployment_name,\n",
    "    messages=messages,\n",
    "    temperature=0.0,\n",
    "    max_tokens=1024,\n",
    "    n=1\n",
    ")\n",
    "\n",
    "print(chat_completion.choices[0].message.content)\n",
    "caption = chat_completion.choices[0].message.content\n",
    "\n",
    "# Generate embeddings for caption\n",
    "caption_embeddings = openai_client.embeddings.create(\n",
    "    model=embeddings_deployment_name,\n",
    "    input=caption,\n",
    ").data[0].embedding\n",
    "\n",
    "print(caption_embeddings)"
   ]
  },
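  {
   "cell_type": "code",
   "execution_count": null,
   "id": "embedding-dims-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check (not part of the original spike): the two models emit\n",
    "# differently sized vectors, which is why the index defines two vector fields.\n",
    "# 1024 and 1536 match the vector_search_dimensions set when creating the index.\n",
    "assert len(image_embeddings) == 1024, \"expected a 1024-dim Computer Vision vector\"\n",
    "assert len(caption_embeddings) == 1536, \"expected a 1536-dim text-embedding-ada-002 vector\"\n",
    "print(f\"image: {len(image_embeddings)} dims, caption: {len(caption_embeddings)} dims\")"
   ]
  },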
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5b01390",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Store embeddings in search index\n",
    "index_doc = {\n",
    "    \"id\": re.sub(\"[^0-9a-zA-Z_-]\", \"_\", image_file),\n",
    "    \"content\": caption,\n",
    "    \"embedding\": caption_embeddings,\n",
    "    \"sourcefile\": image_file,\n",
    "    \"imageEmbedding\": image_embeddings,\n",
    "}\n",
    "await search_client.upload_documents([index_doc])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "93f67dc2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate embeddings for question\n",
    "computer_vision_vectorize_text_url = urljoin(computer_vision_endpoint, \"computervision/retrieval:vectorizeText\")\n",
    "headers = {\"Content-Type\": \"application/json\", \"Ocp-Apim-Subscription-Key\": computer_vision_credential}\n",
    "params = {\"api-version\": \"2023-02-01-preview\", \"modelVersion\": \"latest\"}\n",
    "\n",
    "async with aiohttp.ClientSession(headers=headers) as session:\n",
    "    body = {\"text\": question}\n",
    "    async with session.post(url=computer_vision_vectorize_text_url, params=params, json=body) as resp:\n",
    "        resp_json = await resp.json()\n",
    "        question_image_embeddings = resp_json[\"vector\"]\n",
    "\n",
    "print(question_image_embeddings)\n",
    "\n",
    "question_embeddings = openai_client.embeddings.create(\n",
    "    model=embeddings_deployment_name,\n",
    "    input=question,\n",
    ").data[0].embedding\n",
    "\n",
    "print(question_embeddings)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c81c235d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Search Index\n",
    "from azure.search.documents.models import VectorizedQuery\n",
    "import json\n",
    "\n",
    "vector_image_query = VectorizedQuery(vector=question_image_embeddings, k_nearest_neighbors=50, fields=\"imageEmbedding\")\n",
    "vector_query = VectorizedQuery(vector=question_embeddings, k_nearest_neighbors=50, fields=\"embedding\")\n",
    "search_results = await search_client.search(search_text=question, top=1, vector_queries=[vector_query, vector_image_query])\n",
    "\n",
    "\n",
    "async for page in search_results.by_page():\n",
    "    async for document in page:\n",
    "        print(document.get(\"@search.score\"))\n",
    "        print(json.dumps(document, indent=2))\n",
    "        found_document = document"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6cd2e9b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download source image\n",
    "source_file = found_document.get(\"sourcefile\")\n",
    "print(source_file)\n",
    "blob_client = container_client.get_blob_client(source_file)\n",
    "\n",
    "# Download the image from blob storage\n",
    "# blob = await container_client.get_blob_client(source_file).download_blob()\n",
    "# base64_image = base64.b64encode(await blob.readall()).decode(\"utf-8\")\n",
    "# formatted_image = f\"data:image/png;base64,{base64_image}\"\n",
    "# print(formatted_image)\n",
    "\n",
    "# image_url = {\"url\": formatted_image, \"detail\": \"auto\"}\n",
    "\n",
    "# Alternatively provide a URL\n",
    "blob_sas = generate_blob_sas(\n",
    "    container_client.account_name,\n",
    "    container_client.container_name,\n",
    "    source_file,\n",
    "    account_key=storage_credential,\n",
    "    permission=\"r\",\n",
    "    # SAS expiry times are interpreted as UTC, so use an explicit UTC timestamp\n",
    "    expiry=datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1),\n",
    ")\n",
    "blob_url = f\"{blob_client.url}?{blob_sas}\"\n",
    "image_url = {\"url\": blob_url}\n",
    "print(image_url)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a46fd752",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Chat\n",
    "system_chat_template_gpt4v = \"\"\"You are an intelligent assistant helping software engineers use Microsoft Azure.\n",
    "Answer the following question using only the data provided in the sources.\n",
    "If you cannot answer using the sources, say you don't know. Return just the answer without any of the input text.\n",
    "\n",
    "When you give your answer, you MUST ALWAYS include the image url of one or more of the sources in your response, in the following format: <answer> [urlX]\n",
    "Always use square brackets to reference the document source url. When you create the answer from multiple sources, list each source separately, e.g. <answer> [urlX][urlY] and so on.\n",
    "Always reply in English.\n",
    "\"\"\"\n",
    "\n",
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": system_chat_template_gpt4v},\n",
    "    {\"role\": \"user\", \"content\": [{\"text\": question, \"type\": \"text\"},{ \"image_url\": image_url, \"type\": \"image_url\" }]},\n",
    "]\n",
    "\n",
    "print(json.dumps(messages, indent=4))\n",
    "\n",
    "chat_completion = openai_client.chat.completions.create(\n",
    "    model=gpt4v_deployment_name,\n",
    "    messages=messages,\n",
    "    temperature=0.0,\n",
    "    max_tokens=1024,\n",
    "    n=1\n",
    ")\n",
    "\n",
    "print(chat_completion)\n",
    "print(chat_completion.choices[0].message)\n",
    "print(chat_completion.choices[0].message.content)\n"
   ]
  }
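  ,
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "citation-extract-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sketch (not part of the original spike): pull the [urlX] style\n",
    "# citations that the system prompt asks the model to append. The exact citation\n",
    "# format depends on the model's output, so treat this as best-effort.\n",
    "answer = chat_completion.choices[0].message.content\n",
    "citations = re.findall(r\"\\[(url[^\\]]*)\\]\", answer)\n",
    "print(\"Citations found:\", citations)"
   ]
  }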
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
