{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5cda9807-dbdd-496c-b663-af3d77f9bb70",
   "metadata": {},
   "source": [
    "# Module 4 - Working with **Titan Multimodal Embeddings**\n",
    "\n",
    "---\n",
    "\n",
     "This notebook demonstrates how to generate and use embeddings for images and text using Amazon Titan Multimodal Embedding Models. We'll walk through how to extract these embeddings and perform similarity search with a query, laying out a path for building intelligent search and recommendation applications.\n",
    "\n",
    "---\n",
    "\n",
    "### Introduction\n",
    "\n",
    "Amazon Titan Multimodal Embedding Models provide a simple and scalable way to represent images and text as embeddings—dense numerical vectors that capture semantic meaning. These models are ideal for building intelligent systems where understanding the similarity between images, texts, or both is critical.\n",
    "\n",
    "Some key features of the Amazon Titan Multimodal Embedding Models include:\n",
    "\n",
    "- **Multi-modal input support** - Encode text, images, or a combination of both into the same semantic space.\n",
    "\n",
    "- **Enterprise-ready** - Built-in mechanisms to help mitigate bias in search results, support for multiple embedding dimensions for optimizing latency/accuracy trade-offs, and strong privacy and data security guarantees.\n",
    "\n",
    "- **Flexible deployment** - Available through real-time inference and asynchronous batch transform APIs, and easily integrated with vector databases such as **Amazon OpenSearch Service**.\n",
    "\n",
    "These models are pre-trained on large and diverse datasets, making them powerful out-of-the-box. For more specialized applications, you can also customize the embeddings using your own data, without needing to annotate large volumes of training examples.\n",
    "\n",
    "This module will guide you through using Amazon Titan’s multimodal embeddings to extract image and text embeddings, store them in an index, and build a simple semantic search demo. Let’s get started!\n",
    "\n",
    "### Pre-requisites\n",
    "\n",
     "Please make sure that you have enabled access to the following models in the _Amazon Bedrock Console_:\n",
    "- `Amazon Titan Multimodal Embeddings G1` (model ID: `amazon.titan-embed-image-v1`)\n",
    "- `Amazon Titan Image Generator G1 (V2)` (model ID: `amazon.titan-image-generator-v2:0`)"
   ]
  },
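   {
    "cell_type": "markdown",
    "id": "f0e1d2c3-1111-4222-8333-444455556666",
    "metadata": {},
    "source": [
     "Before diving in, here is a minimal sketch of the core idea behind embedding similarity (not part of the lab code): two embeddings are typically compared with cosine similarity, the dot product of the L2-normalized vectors. The 4-dimensional vectors below are made-up toy values; real Titan embeddings have 256, 384, or 1024 dimensions.\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Toy \"embeddings\" for illustration only\n",
     "a = np.array([0.1, 0.3, 0.5, 0.2])\n",
     "b = np.array([0.2, 0.25, 0.45, 0.1])\n",
     "\n",
     "# Cosine similarity: dot product of the L2-normalized vectors\n",
     "cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))\n",
     "print(cos_sim)  # values close to 1.0 indicate high similarity\n",
     "```"
    ]
   },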
  {
   "cell_type": "markdown",
   "id": "5a948c5c-3d9f-43f6-aaea-b2291313115e",
   "metadata": {},
   "source": [
    "## 1. Setup\n",
    "\n",
    "### 1.1 Install and import the required libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d33343d",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install -q -r requirements.txt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "58dfef2a-fe91-401e-a030-dbe977cccdd0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Restart kernel\n",
    "from IPython.core.display import HTML\n",
    "HTML(\"<script>Jupyter.notebook.kernel.restart()</script>\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ef07a00-9f50-44d2-bdf4-f0a09dfd7152",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Standard library imports\n",
    "import os\n",
    "import re\n",
    "import sys\n",
    "import json\n",
    "import base64\n",
    "from io import BytesIO\n",
    "\n",
    "# Other library imports\n",
    "import boto3\n",
    "import numpy as np\n",
    "import seaborn as sns\n",
    "from PIL import Image\n",
    "from scipy.spatial.distance import cdist\n",
    "\n",
    "# Print SDK versions\n",
    "print(f\"Python version: {sys.version.split()[0]}\")\n",
    "print(f\"Boto3 SDK version: {boto3.__version__}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "29bfa65a-b27f-4fa6-abca-2e2523df6ae6",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Init boto session\n",
    "boto3_session = boto3.session.Session()\n",
    "region_name = boto3_session.region_name\n",
    "\n",
    "# Init Bedrock Runtime client\n",
    "bedrock_client = boto3.client(\"bedrock-runtime\", region_name)\n",
    "\n",
    "print(\"AWS Region:\", region_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7d3d4673-f1ca-437d-825f-0f7fe06ace0d",
   "metadata": {
    "tags": []
   },
   "source": [
    "## 2. Synthetic Dataset\n",
    "\n",
    "### 2.1 Generating Textual Description of Dataset Items with LLM"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "037173d2-5747-4e0c-ade9-c5c3ecf868ef",
   "metadata": {},
   "source": [
     "We can leverage Amazon Bedrock Language Models to randomly generate 7 different products, each with 3 variants, using the following prompt:\n",
    "\n",
    "```\n",
    "Generate a list of 7 items description for an online e-commerce shop, each comes with 3 variants of color or type. All with separate full sentence description.\n",
    "```\n",
    "\n",
     "Note that different language models may return different responses. For illustration purposes, suppose we receive the response below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a535a5a5-11eb-4857-953e-0005ebbca646",
   "metadata": {},
   "outputs": [],
   "source": [
     "response = 'Here is a list of 7 items with 3 variants each for an online e-commerce shop, with separate full sentence descriptions:\\n\\n1. T-shirt\\n- A red cotton t-shirt with a crew neck and short sleeves. \\n- A blue cotton t-shirt with a v-neck and short sleeves.\\n- A black polyester t-shirt with a scoop neck and cap sleeves.\\n\\n2. Jeans\\n- Classic blue relaxed fit denim jeans with a mid-rise waist. \\n- Black skinny fit denim jeans with a high-rise waist and ripped details at the knees.  \\n- Stonewash straight leg denim jeans with a standard waist and front pockets.\\n\\n3. Sneakers  \\n- White leather low-top sneakers with an almond toe cap and thick rubber outsole.\\n- Gray mesh high-top sneakers with neon green laces and a padded ankle collar. \\n- Tan suede mid-top sneakers with a round toe and ivory rubber cupsole.  \\n\\n4. Backpack\\n- A purple nylon backpack with padded shoulder straps, front zipper pocket and laptop sleeve.\\n- A gray canvas backpack with brown leather trims, side water bottle pockets and drawstring top closure.  \\n- A black leather backpack with multiple interior pockets, top carry handle and adjustable padded straps.\\n\\n5. Smartwatch\\n- A silver stainless steel smartwatch with heart rate monitor, GPS tracker and sleep analysis.  \\n- A space gray aluminum smartwatch with step counter, phone notifications and calendar syncing. \\n- A rose gold smartwatch with activity tracking, music controls and customizable watch faces.  \\n\\n6. Coffee maker\\n- A 12-cup programmable coffee maker in brushed steel with removable water tank and keep warm plate.  \\n- A compact 5-cup single serve coffee maker in matt black with travel mug auto-dispensing feature.\\n- A retro style stovetop percolator coffee pot in speckled enamel with stay-cool handle and glass knob lid.  \\n\\n7. Yoga mat \\n- A teal 4mm thick yoga mat made of natural tree rubber with moisture-wicking microfiber top.\\n- A purple 6mm thick yoga mat made of eco-friendly TPE material with integrated carrying strap. \\n- A patterned 5mm thick yoga mat made of PVC-free material with towel cover included.'\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "600e05f3-fd2a-4ba7-941e-9d77d7e489ca",
   "metadata": {},
   "source": [
     "The following function converts the response to a list of descriptions. You may need to adapt this function depending on the actual response you receive."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aea9f3df-c55e-4a88-af13-d777ad768ac7",
   "metadata": {},
   "outputs": [],
   "source": [
    "def extract_text(input_string):\n",
    "    pattern = r\"- (.*?)($|\\n)\"\n",
    "    matches = re.findall(pattern, input_string)\n",
    "    extracted_texts = [match[0] for match in matches]\n",
    "    return extracted_texts"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e70d6bc-7616-4e73-9d05-611b56212b01",
   "metadata": {},
   "source": [
    "Convert the response to a list of product descriptions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd7996cd-2ef2-42ae-8144-5fcb312ad236",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "product_descriptions = extract_text(response)\n",
    "product_descriptions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f5685bdf-9fcb-47dd-9e8c-40b568b8a4a6",
   "metadata": {},
   "source": [
    "### 2.2 Generating Image Pairs for the Textual Descriptions\n",
    "\n",
     "The following function calls Amazon Bedrock to generate images using the `amazon.titan-image-generator-v2:0` model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "90e4395e-da68-4c4c-bd99-609a3b12741f",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def titan_generate_image(payload, num_image=2, cfg=10.0, seed=2024):\n",
    "\n",
    "    body = json.dumps(\n",
    "        {\n",
    "            **payload,\n",
    "            \"imageGenerationConfig\": {\n",
    "                \"numberOfImages\": num_image,   # Number of images to be generated. Range: 1 to 5 \n",
    "                \"quality\": \"premium\",          # Quality of generated images. Can be standard or premium.\n",
    "                \"height\": 1024,                # Height of output image(s)\n",
    "                \"width\": 1024,                 # Width of output image(s)\n",
    "                \"cfgScale\": cfg,               # Scale for classifier-free guidance. Range: 1.0 (exclusive) to 10.0\n",
     "            \"seed\": seed                   # The seed to use for reproducibility. Range: 0 to 2147483646\n",
    "            }\n",
    "        }\n",
    "    )\n",
    "\n",
    "    response = bedrock_client.invoke_model(\n",
    "        body=body, \n",
    "        modelId=\"amazon.titan-image-generator-v2:0\",\n",
    "        accept=\"application/json\", \n",
    "        contentType=\"application/json\"\n",
    "    )\n",
    "\n",
    "    response_body = json.loads(response.get(\"body\").read())\n",
    "    images = [\n",
    "        Image.open(\n",
    "            BytesIO(base64.b64decode(base64_image))\n",
    "        ) for base64_image in response_body.get(\"images\")\n",
    "    ]\n",
    "\n",
    "    return images"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fdf9d81e-64e1-4939-a83f-3cbaf78b09fe",
   "metadata": {},
   "source": [
     "Then we use the Titan Image Generator model to create a product image for each of the descriptions. The following cell may take a few minutes to run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dded2456-e2fd-400a-bb96-88868c1f8db5",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "embed_dir = \"data/titan-embed\"\n",
    "os.makedirs(embed_dir, exist_ok=True)\n",
    "\n",
    "titles = []\n",
    "for i, prompt in enumerate(product_descriptions, 1):\n",
    "    images = titan_generate_image(\n",
    "        {\n",
    "            \"taskType\": \"TEXT_IMAGE\",\n",
    "            \"textToImageParams\": {\n",
    "                \"text\": prompt, # Required\n",
    "            }\n",
    "        },\n",
    "        num_image=1\n",
    "    )\n",
    "    title = \"_\".join(prompt.split()[:4]).lower()\n",
    "    title = f\"{embed_dir}/{title}.png\"\n",
    "    titles.append(title)\n",
    "    images[0].save(title, format=\"png\")\n",
    "    print(f\"[{i}/{len(product_descriptions)}] Generated: '{title}'..\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "900115e4-5be3-40c4-9982-b579c3f3f863",
   "metadata": {},
   "source": [
    "## 3. Multimodal Dataset Indexing\n",
    "\n",
    "### 3.1 Embedding Images with Titan Multimodal Embeddings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcac42d5-810e-4ad4-bd09-5ed89991e9d9",
   "metadata": {},
   "source": [
     "The following function converts an image, and optionally a text description, into a multimodal embedding."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "801aa752-afe0-47bc-b73a-ec2667c9559a",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def titan_multimodal_embedding(\n",
    "    image_path=None,  # maximum 2048 x 2048 pixels\n",
    "    description=None, # English only and max input tokens 128\n",
    "    dimension=1024,   # 1024 (default), 384, 256\n",
    "    model_id=\"amazon.titan-embed-image-v1\"\n",
    "):\n",
    "    payload_body = {}\n",
    "    embedding_config = {\n",
    "        \"embeddingConfig\": { \n",
    "             \"outputEmbeddingLength\": dimension\n",
    "         }\n",
    "    }\n",
    "\n",
    "    # You can specify either text or image or both\n",
    "    if image_path:\n",
    "        with open(image_path, \"rb\") as image_file:\n",
    "            input_image = base64.b64encode(image_file.read()).decode('utf8')\n",
    "        payload_body[\"inputImage\"] = input_image\n",
    "    if description:\n",
    "        payload_body[\"inputText\"] = description\n",
    "\n",
     "    assert payload_body, \"please provide an image, a text description, or both\"\n",
    "    print(\"\\n\".join(payload_body.keys()))\n",
    "\n",
    "    response = bedrock_client.invoke_model(\n",
    "        body=json.dumps({**payload_body, **embedding_config}), \n",
    "        modelId=model_id,\n",
    "        accept=\"application/json\", \n",
    "        contentType=\"application/json\"\n",
    "    )\n",
    "\n",
    "    return json.loads(response.get(\"body\").read())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "32306437-4931-4450-bc84-999dc8f478a4",
   "metadata": {},
   "source": [
    "Now we can create embeddings for the generated images:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8d3eb696-b5b0-41e5-982b-8da18b9e3978",
   "metadata": {},
   "outputs": [],
   "source": [
    "multimodal_embeddings = []\n",
    "for title in titles:\n",
    "    embedding = titan_multimodal_embedding(image_path=title, dimension=1024)[\"embedding\"]\n",
    "    multimodal_embeddings.append(embedding)\n",
    "    print(f\"generated embedding for {title}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8b6c1f25-dac5-4053-afb8-33b52db7506b",
   "metadata": {},
   "source": [
    "### 3.2 Analyze the Generated Image Embeddings\n",
    "\n",
    "Let's see what we have generated so far:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "43f08305-caa0-43ed-abc5-d7ad71cefb55",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "print(\"Number of generated embeddings for images:\", len(multimodal_embeddings))\n",
    "print(\"Dimension of each image embedding:\", len(multimodal_embeddings[-1]))\n",
    "print(\"Example of generated embedding:\\n\", np.array(multimodal_embeddings[-1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c8b9942-ca89-4d7c-912f-2e9352a4b12c",
   "metadata": {
    "tags": []
   },
   "source": [
    "The following function produces a heatmap to display the inner product of the embeddings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e4da941c-2e89-452b-bae6-acff49f41cd2",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def plot_similarity_heatmap(embeddings_a, embeddings_b):\n",
    "    inner_product = np.inner(embeddings_a, embeddings_b)\n",
    "    sns.set(font_scale=1.1)\n",
     "    sns.heatmap(\n",
    "        inner_product,\n",
    "        vmin=np.min(inner_product),\n",
    "        vmax=1,\n",
    "        cmap=\"OrRd\",\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de7e129d-f11b-4b83-b6aa-4533383de0d5",
   "metadata": {},
   "source": [
     "Generate a heatmap of the pairwise inner products. The diagonal is dark red because every embedding is maximally similar to itself. Notice the 3×3 blocks that are lighter than the diagonal but darker than the rest: each block corresponds to the 3 variants of one product, which are more similar to each other than to items of other product types. This matches how the dataset was generated: 7 product types with 3 variants each."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "faac6ff7-2aa9-4af7-af3f-9171f442ed6c",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "plot_similarity_heatmap(multimodal_embeddings, multimodal_embeddings)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "409f2a79-337e-4994-a505-835819ab03a4",
   "metadata": {},
   "source": [
    "## 4. Multimodal Search"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "231cd8fa-75be-4062-9f50-5b3f91743db8",
   "metadata": {},
   "source": [
     "We can now showcase the basic functionality of a multimodal search engine.\n",
     "\n",
     "The following function returns the indices of the most similar indexed embeddings for a given query embedding. Note that in practice you would typically use a managed vector database such as Amazon OpenSearch Service; the in-memory implementation here is for illustration purposes only."
   ]
  },
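   {
    "cell_type": "markdown",
    "id": "a9b8c7d6-2222-4333-8444-555566667777",
    "metadata": {},
    "source": [
     "For reference, a rough sketch of what the equivalent query could look like against Amazon OpenSearch Service (not runnable in this lab; the endpoint, the index name `products`, and the `knn_vector` field name `embedding` are hypothetical placeholders, and the `opensearch-py` client is assumed):\n",
     "\n",
     "```python\n",
     "# Hypothetical example -- host, index, and field names are placeholders\n",
     "from opensearchpy import OpenSearch\n",
     "\n",
     "client = OpenSearch(hosts=[{\"host\": \"my-domain.us-east-1.es.amazonaws.com\", \"port\": 443}], use_ssl=True)\n",
     "response = client.search(\n",
     "    index=\"products\",\n",
     "    body={\"size\": 3, \"query\": {\"knn\": {\"embedding\": {\"vector\": query_emb, \"k\": 3}}}}\n",
     ")\n",
     "```"
    ]
   },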
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "91ec2feb-436d-425d-9d54-6822dc66ad90",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
     "def search(query_emb: np.ndarray, indexes: np.ndarray, top_k: int = 1):\n",
     "    # Cosine distance between the query and every indexed embedding\n",
     "    dist = cdist(query_emb, indexes, metric=\"cosine\")\n",
     "    # Return the indices of the top_k closest items and their distances\n",
     "    return dist.argsort(axis=-1)[0, :top_k], np.sort(dist, axis=-1)[0, :top_k]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f3a8951-53b9-4807-951b-ffafa9380fc0",
   "metadata": {},
   "source": [
     "Now that we have created the embeddings, we can search the index with a text query to find the product it best describes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4aece41e-40fa-46dc-8edc-f366f5d4136b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "query_prompt = \"suede sneaker\"\n",
    "query_emb = titan_multimodal_embedding(description=query_prompt, dimension=1024)[\"embedding\"]\n",
    "len(query_emb)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ca6aa36-ed0b-4aad-b102-aad340118e88",
   "metadata": {},
   "outputs": [],
   "source": [
    "idx_returned, dist = search(\n",
    "    np.array(query_emb)[None], \n",
    "    np.array(multimodal_embeddings)\n",
    ")\n",
    "idx_returned, dist"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "86138c8e-fdd0-4cf1-ab16-393690828064",
   "metadata": {},
   "outputs": [],
   "source": [
    "for idx in idx_returned[:1]:\n",
    "    display(Image.open(f\"{titles[idx]}\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b276f6a8-59ce-42a0-b511-e905b5e551a1",
   "metadata": {},
   "source": [
    "Let's convert the above cells to a helper function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3fc9cfde-b51c-40dc-ba38-c4809d264d10",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "def multimodal_search(description:str, dimension:int):\n",
    "    query_emb = titan_multimodal_embedding(description=description, dimension=dimension)[\"embedding\"]\n",
    "\n",
    "    idx_returned, dist = search(\n",
    "        np.array(query_emb)[None], \n",
    "        np.array(multimodal_embeddings)\n",
    "    )\n",
    "\n",
    "    for idx in idx_returned[:1]:\n",
    "        display(Image.open(f\"{titles[idx]}\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a711b6b-edd2-49b1-8c57-08f4d1c3214b",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "multimodal_search(description=\"white sneaker\", dimension=1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cecb9b78-c4e6-4087-9487-55fdd2e914b6",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "multimodal_search(description=\"mesh sneaker\", dimension=1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6db23053-dec8-48fd-a281-bf00865ccf08",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "multimodal_search(description=\"leather backpack\", dimension=1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d73627b-6680-4380-813b-6fb4e9080455",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "multimodal_search(description=\"nylon backpack\", dimension=1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "625e66dd-e7c2-43c8-9dcc-78d73183ca90",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "multimodal_search(description=\"canvas backpack\", dimension=1024)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "95eec24b-073d-4ce8-ad88-f7ed49c41632",
   "metadata": {},
   "outputs": [],
   "source": [
    "multimodal_search(description=\"running shoes\", dimension=1024)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2bfa641b-c0ac-489c-b2bb-a3adf4279508",
   "metadata": {},
   "source": [
    "## 5. Conclusions and Next Steps\n",
    "\n",
    "In this module, we explored how to work with multimodal embeddings using Amazon Titan models. By embedding images and text into a shared semantic space, we demonstrated how to build powerful similarity search capabilities that go beyond traditional keyword matching. This approach opens up a wide range of possibilities for intelligent search, recommendation, and classification systems across industries such as e-commerce, media, and enterprise content management.\n",
    "\n",
    "### Next Steps\n",
    "\n",
     "Next, explore the capabilities of the Amazon Nova Canvas model for creating compelling visual imagery for use cases such as product prototyping, dynamic content generation, and marketing asset creation:\n",
     "\n",
     "&nbsp; **NEXT ▶** [2_nova-canvas-lab.ipynb](./2_nova-canvas-lab.ipynb)."
   ]
  }
 ],
 "metadata": {
  "availableInstances": [
   {
    "_defaultOrder": 0,
    "_isFastLaunch": true,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 4,
    "name": "ml.t3.medium",
    "vcpuNum": 2
   },
   {
    "_defaultOrder": 1,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 8,
    "name": "ml.t3.large",
    "vcpuNum": 2
   },
   {
    "_defaultOrder": 2,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.t3.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 3,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.t3.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 4,
    "_isFastLaunch": true,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 8,
    "name": "ml.m5.large",
    "vcpuNum": 2
   },
   {
    "_defaultOrder": 5,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.m5.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 6,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.m5.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 7,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 64,
    "name": "ml.m5.4xlarge",
    "vcpuNum": 16
   },
   {
    "_defaultOrder": 8,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 128,
    "name": "ml.m5.8xlarge",
    "vcpuNum": 32
   },
   {
    "_defaultOrder": 9,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 192,
    "name": "ml.m5.12xlarge",
    "vcpuNum": 48
   },
   {
    "_defaultOrder": 10,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 256,
    "name": "ml.m5.16xlarge",
    "vcpuNum": 64
   },
   {
    "_defaultOrder": 11,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 384,
    "name": "ml.m5.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 12,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 8,
    "name": "ml.m5d.large",
    "vcpuNum": 2
   },
   {
    "_defaultOrder": 13,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.m5d.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 14,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.m5d.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 15,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 64,
    "name": "ml.m5d.4xlarge",
    "vcpuNum": 16
   },
   {
    "_defaultOrder": 16,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 128,
    "name": "ml.m5d.8xlarge",
    "vcpuNum": 32
   },
   {
    "_defaultOrder": 17,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 192,
    "name": "ml.m5d.12xlarge",
    "vcpuNum": 48
   },
   {
    "_defaultOrder": 18,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 256,
    "name": "ml.m5d.16xlarge",
    "vcpuNum": 64
   },
   {
    "_defaultOrder": 19,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 384,
    "name": "ml.m5d.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 20,
    "_isFastLaunch": false,
    "category": "General purpose",
    "gpuNum": 0,
    "hideHardwareSpecs": true,
    "memoryGiB": 0,
    "name": "ml.geospatial.interactive",
    "supportedImageNames": [
     "sagemaker-geospatial-v1-0"
    ],
    "vcpuNum": 0
   },
   {
    "_defaultOrder": 21,
    "_isFastLaunch": true,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 4,
    "name": "ml.c5.large",
    "vcpuNum": 2
   },
   {
    "_defaultOrder": 22,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 8,
    "name": "ml.c5.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 23,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.c5.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 24,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.c5.4xlarge",
    "vcpuNum": 16
   },
   {
    "_defaultOrder": 25,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 72,
    "name": "ml.c5.9xlarge",
    "vcpuNum": 36
   },
   {
    "_defaultOrder": 26,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 96,
    "name": "ml.c5.12xlarge",
    "vcpuNum": 48
   },
   {
    "_defaultOrder": 27,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 144,
    "name": "ml.c5.18xlarge",
    "vcpuNum": 72
   },
   {
    "_defaultOrder": 28,
    "_isFastLaunch": false,
    "category": "Compute optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 192,
    "name": "ml.c5.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 29,
    "_isFastLaunch": true,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.g4dn.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 30,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.g4dn.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 31,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 64,
    "name": "ml.g4dn.4xlarge",
    "vcpuNum": 16
   },
   {
    "_defaultOrder": 32,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 128,
    "name": "ml.g4dn.8xlarge",
    "vcpuNum": 32
   },
   {
    "_defaultOrder": 33,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 4,
    "hideHardwareSpecs": false,
    "memoryGiB": 192,
    "name": "ml.g4dn.12xlarge",
    "vcpuNum": 48
   },
   {
    "_defaultOrder": 34,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 256,
    "name": "ml.g4dn.16xlarge",
    "vcpuNum": 64
   },
   {
    "_defaultOrder": 35,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 61,
    "name": "ml.p3.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 36,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 4,
    "hideHardwareSpecs": false,
    "memoryGiB": 244,
    "name": "ml.p3.8xlarge",
    "vcpuNum": 32
   },
   {
    "_defaultOrder": 37,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 8,
    "hideHardwareSpecs": false,
    "memoryGiB": 488,
    "name": "ml.p3.16xlarge",
    "vcpuNum": 64
   },
   {
    "_defaultOrder": 38,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 8,
    "hideHardwareSpecs": false,
    "memoryGiB": 768,
    "name": "ml.p3dn.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 39,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.r5.large",
    "vcpuNum": 2
   },
   {
    "_defaultOrder": 40,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.r5.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 41,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 64,
    "name": "ml.r5.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 42,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 128,
    "name": "ml.r5.4xlarge",
    "vcpuNum": 16
   },
   {
    "_defaultOrder": 43,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 256,
    "name": "ml.r5.8xlarge",
    "vcpuNum": 32
   },
   {
    "_defaultOrder": 44,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 384,
    "name": "ml.r5.12xlarge",
    "vcpuNum": 48
   },
   {
    "_defaultOrder": 45,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 512,
    "name": "ml.r5.16xlarge",
    "vcpuNum": 64
   },
   {
    "_defaultOrder": 46,
    "_isFastLaunch": false,
    "category": "Memory Optimized",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 768,
    "name": "ml.r5.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 47,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 16,
    "name": "ml.g5.xlarge",
    "vcpuNum": 4
   },
   {
    "_defaultOrder": 48,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.g5.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 49,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 64,
    "name": "ml.g5.4xlarge",
    "vcpuNum": 16
   },
   {
    "_defaultOrder": 50,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 128,
    "name": "ml.g5.8xlarge",
    "vcpuNum": 32
   },
   {
    "_defaultOrder": 51,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 1,
    "hideHardwareSpecs": false,
    "memoryGiB": 256,
    "name": "ml.g5.16xlarge",
    "vcpuNum": 64
   },
   {
    "_defaultOrder": 52,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 4,
    "hideHardwareSpecs": false,
    "memoryGiB": 192,
    "name": "ml.g5.12xlarge",
    "vcpuNum": 48
   },
   {
    "_defaultOrder": 53,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 4,
    "hideHardwareSpecs": false,
    "memoryGiB": 384,
    "name": "ml.g5.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 54,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 8,
    "hideHardwareSpecs": false,
    "memoryGiB": 768,
    "name": "ml.g5.48xlarge",
    "vcpuNum": 192
   },
   {
    "_defaultOrder": 55,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 8,
    "hideHardwareSpecs": false,
    "memoryGiB": 1152,
    "name": "ml.p4d.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 56,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 8,
    "hideHardwareSpecs": false,
    "memoryGiB": 1152,
    "name": "ml.p4de.24xlarge",
    "vcpuNum": 96
   },
   {
    "_defaultOrder": 57,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 32,
    "name": "ml.trn1.2xlarge",
    "vcpuNum": 8
   },
   {
    "_defaultOrder": 58,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 512,
    "name": "ml.trn1.32xlarge",
    "vcpuNum": 128
   },
   {
    "_defaultOrder": 59,
    "_isFastLaunch": false,
    "category": "Accelerated computing",
    "gpuNum": 0,
    "hideHardwareSpecs": false,
    "memoryGiB": 512,
    "name": "ml.trn1n.32xlarge",
    "vcpuNum": 128
   }
  ],
  "instance_type": "ml.t3.medium",
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
