{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0720be9d-2dd5-49d9-9bb4-d1ee7aa030d2",
   "metadata": {},
   "source": [
    "# Fine-tuning Llama 3.2 with Vision Capabilities - Data Preparation\n",
    "\n",
    "## Introduction\n",
    "\n",
    "Fine-tuning multi-modal models allows you to enhance their capabilities for specific visual understanding tasks. This notebook demonstrates how to prepare data for fine-tuning Meta Llama 3.2 with vision capabilities using Amazon Bedrock. We'll use a subset of the llava-instruct dataset to create training, validation, and test sets in the required format.\n",
    "\n",
    "The Llama 3.2 vision model can process and understand both text and images, enabling it to answer questions about visual content. Fine-tuning can improve the model's performance on domain-specific visual tasks.\n",
    "\n",
    "In this notebook, we'll:\n",
    "\n",
    "- Download a subset of the llava-instruct dataset\n",
    "- Process the images and upload them to Amazon S3\n",
    "- Format the data according to the Bedrock conversation schema\n",
    "- Prepare the dataset for fine-tuning\n",
    "\n",
    "## Prerequisites\n",
    "\n",
    "Before starting, ensure you have:\n",
    "\n",
    "- An AWS account with access to Amazon Bedrock\n",
    "- Appropriate IAM permissions for Bedrock and S3\n",
    "- A working Python environment with the necessary libraries\n",
    "\n",
    "You'll need to create an IAM role with the following permissions:\n",
    "\n",
    "```\n",
    "{\n",
    "    \"Version\": \"2012-10-17\",\n",
    "    \"Statement\": [\n",
    "        {\n",
    "            \"Effect\": \"Allow\",\n",
    "            \"Action\": [\n",
    "                \"s3:GetObject\",\n",
    "                \"s3:PutObject\",\n",
    "                \"s3:ListBucket\"\n",
    "            ],\n",
    "            \"Resource\": [\n",
    "                \"arn:aws:s3:::YOUR_BUCKET_NAME\",\n",
    "                \"arn:aws:s3:::YOUR_BUCKET_NAME/*\"\n",
    "            ]\n",
    "        },\n",
    "        {\n",
    "            \"Effect\": \"Allow\",\n",
    "            \"Action\": [\n",
    "                \"bedrock:CreateModelCustomizationJob\",\n",
    "                \"bedrock:GetModelCustomizationJob\",\n",
    "                \"bedrock:ListModelCustomizationJobs\",\n",
    "                \"bedrock:StopModelCustomizationJob\"\n",
    "            ],\n",
    "            \"Resource\": \"arn:aws:bedrock:us-west-2:YOUR_ACCOUNT_ID:model-customization-job/*\"\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "91f6b171-c82e-4348-ab5c-b37b20ab334f",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "First, let's install and import the necessary libraries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ac6f7002-28bf-48f0-8c71-2fa834c0bacb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install required libraries\n",
    "%pip install --upgrade pip\n",
    "%pip install boto3 datasets pillow tqdm --upgrade --quiet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a9138744-e0be-4cb7-86bb-c4b988051420",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Restart the kernel so the upgraded packages take effect.\n",
    "# (The old HTML/JS restart hack only works in classic Notebook; this works in JupyterLab too.)\n",
    "import IPython\n",
    "IPython.Application.instance().kernel.do_shutdown(True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "521efe64-84fc-4268-aad2-e486a0fe7e87",
   "metadata": {},
   "outputs": [],
   "source": [
    "import boto3\n",
    "import os\n",
    "import json\n",
    "import time\n",
    "import shutil\n",
    "from tqdm import tqdm\n",
    "from datasets import load_dataset\n",
    "from PIL import Image\n",
    "import io\n",
    "import uuid\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e2293dad-f4a1-4d06-b9ae-641b6ce022e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set AWS region\n",
    "region = \"us-west-2\"  # Llama 3.2 fine-tuning is currently only available in us-west-2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "921e0489-0e61-4567-ad62-06a016256790",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create AWS clients\n",
    "session = boto3.session.Session(region_name=region)\n",
    "s3_client = session.client('s3')\n",
    "sts_client = session.client('sts')\n",
    "bedrock = session.client(service_name=\"bedrock\")\n",
    "\n",
    "# Get account ID\n",
    "account_id = sts_client.get_caller_identity()[\"Account\"]\n",
    "\n",
    "# Generate bucket name with account ID for uniqueness\n",
    "bucket_name = f\"llama32-vision-ft-{account_id}-{region}\"\n",
    "\n",
    "print(f\"Account ID: {account_id}\")\n",
    "print(f\"Bucket name: {bucket_name}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a542933f-fd17-4a08-925b-107db28b2c7d",
   "metadata": {},
   "source": [
    "## Create S3 Bucket\n",
    "\n",
    "Let's create an S3 bucket to store our images and processed data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "124b2256-afef-42d8-a8ed-6093201a7947",
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    if region == 'us-east-1':\n",
    "        s3_client.create_bucket(\n",
    "            Bucket=bucket_name\n",
    "        )\n",
    "    else:\n",
    "        # For all other regions, specify the LocationConstraint\n",
    "        s3_client.create_bucket(\n",
    "            Bucket=bucket_name,\n",
    "            CreateBucketConfiguration={'LocationConstraint': region}\n",
    "        )\n",
    "    print(f\"Bucket {bucket_name} created successfully\")\n",
    "except s3_client.exceptions.BucketAlreadyExists:\n",
    "    print(f\"Bucket {bucket_name} already exists\")\n",
    "except s3_client.exceptions.BucketAlreadyOwnedByYou:\n",
    "    print(f\"Bucket {bucket_name} already owned by you\")\n",
    "except Exception as e:\n",
    "    print(f\"Error creating bucket: {e}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "43d8d040-56e6-4f67-8e3c-445bd0750e53",
   "metadata": {},
   "source": [
    "## Download and Prepare the Dataset\n",
    "\n",
    "For this example, we'll use a subset of the llava-instruct dataset from Hugging Face. We'll limit the data to 1000 samples for training, 100 for validation, and 100 for testing to keep this demonstration manageable."
   ]
  },
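  {
   "cell_type": "markdown",
   "id": "b1c2d3e4-9a01-4a2b-8c3d-5e6f7a8b9c0d",
   "metadata": {},
   "source": [
    "To make the processing below easier to follow, here is a sketch of what a single llava-instruct record typically looks like (the field values here are illustrative, not taken from the actual file):\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"id\": \"000000215677\",\n",
    "  \"image\": \"000000215677.jpg\",\n",
    "  \"conversations\": [\n",
    "    {\"from\": \"human\", \"value\": \"<image>\\nWhat are the colors of the bus in the image?\"},\n",
    "    {\"from\": \"gpt\", \"value\": \"The bus in the image is white and red.\"}\n",
    "  ]\n",
    "}\n",
    "```\n",
    "\n",
    "Each record names a COCO image file and carries one or more human/assistant turns; later in this notebook we map the first pair of turns to a Bedrock user/assistant exchange."
   ]
  },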
  {
   "cell_type": "markdown",
   "id": "35aed5d4-1494-40a0-84b4-9fb064efa294",
   "metadata": {},
   "source": [
    "<div style=\"background-color: #FFFFCC; color: #856404; padding: 15px; border-left: 6px solid #FFD700; margin-bottom: 15px;\">\n",
    "<h3 style=\"margin-top: 0; color: #856404;\">⚠️ Large Dataset Warning</h3>\n",
    "<p>This cell downloads the COCO image dataset which:</p>\n",
    "<ul>\n",
    "  <li>Is approximately <b>19.3 GB</b> in size</li>\n",
    "  <li>May take <b>~10 minutes</b> to download depending on your internet connection</li>\n",
    "  <li>Requires at least <b>25 GB</b> of free disk space for download, extraction, and processing</li>\n",
    "</ul>\n",
    "<p>Please ensure you have sufficient storage and a stable internet connection before proceeding.</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd63e2b5-0255-4e3e-bdf4-f12a1d89f828",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "import zipfile\n",
    "from tqdm import tqdm\n",
    "\n",
    "# Create directories to store images and metadata\n",
    "os.makedirs('llava_images/train', exist_ok=True)\n",
    "os.makedirs('llava_images/val', exist_ok=True)\n",
    "os.makedirs('llava_images/test', exist_ok=True)\n",
    "\n",
    "# Function to download a file with progress bar\n",
    "def download_file(url, save_path):\n",
    "    print(f\"Downloading {url}...\")\n",
    "    response = requests.get(url, stream=True, timeout=60)\n",
    "    response.raise_for_status()  # fail fast on a bad URL or server error\n",
    "    total_size = int(response.headers.get('content-length', 0))\n",
    "    \n",
    "    with open(save_path, 'wb') as f:\n",
    "        with tqdm(total=total_size, unit='B', unit_scale=True, desc=os.path.basename(save_path)) as pbar:\n",
    "            for chunk in response.iter_content(chunk_size=8192):\n",
    "                if chunk:\n",
    "                    f.write(chunk)\n",
    "                    pbar.update(len(chunk))\n",
    "    return save_path\n",
    "\n",
    "# Step 1: Download the LLaVA dataset JSON file\n",
    "json_path = 'llava_instruct_150k.json'\n",
    "if not os.path.exists(json_path):\n",
    "    json_url = \"https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/resolve/main/llava_instruct_150k.json\"\n",
    "    download_file(json_url, json_path)\n",
    "\n",
    "# Step 2: Download COCO images if not already downloaded\n",
    "coco_zip_path = 'train2017.zip'\n",
    "images_dir = 'images'\n",
    "os.makedirs(images_dir, exist_ok=True)\n",
    "\n",
    "# Only download if the directory is empty\n",
    "if not os.listdir(images_dir):\n",
    "    if not os.path.exists(coco_zip_path):\n",
    "        images_url = \"http://images.cocodataset.org/zips/train2017.zip\"\n",
    "        download_file(images_url, coco_zip_path)\n",
    "    \n",
    "    print(\"Extracting images...\")\n",
    "    with zipfile.ZipFile(coco_zip_path, 'r') as zip_ref:\n",
    "        zip_ref.extractall('.')\n",
    "    \n",
    "    # Move images to the images directory\n",
    "    print(\"Organizing files...\")\n",
    "    for img in tqdm(os.listdir('train2017'), desc=\"Moving images\"):\n",
    "        shutil.move(os.path.join('train2017', img), os.path.join(images_dir, img))\n",
    "    \n",
    "    # Clean up extraction directory\n",
    "    if os.path.exists('train2017'):\n",
    "        os.rmdir('train2017')\n",
    "        \n",
    "print(\"Loading the LLaVA dataset from JSON...\")\n",
    "# Load the dataset\n",
    "with open(json_path, 'r') as f:\n",
    "    dataset = json.load(f)\n",
    "\n",
    "# Select a subset for our fine-tuning task.\n",
    "# We need 1200 successful examples (1000 train, 100 val, 100 test), so take\n",
    "# a few hundred extra records in case some referenced images are missing.\n",
    "dataset = dataset[:1500]\n",
    "\n",
    "# Process and organize the data\n",
    "dataset_list = []\n",
    "successful_copies = 0\n",
    "failed_copies = 0\n",
    "\n",
    "print(\"Processing images...\")\n",
    "for example in tqdm(dataset, desc=\"Processing dataset\"):\n",
    "    if successful_copies >= 1200:\n",
    "        break\n",
    "    \n",
    "    # Determine if this is for train, val, or test\n",
    "    if successful_copies < 1000:\n",
    "        subset = 'train'\n",
    "    elif successful_copies < 1100:\n",
    "        subset = 'val'\n",
    "    else:\n",
    "        subset = 'test'\n",
    "    \n",
    "    # Get image filename from the example\n",
    "    if \"image\" in example:\n",
    "        image_path = example[\"image\"]\n",
    "        image_filename = os.path.basename(image_path)\n",
    "        \n",
    "        # Source and destination paths\n",
    "        source_path = os.path.join(images_dir, image_filename)\n",
    "        dest_path = f\"llava_images/{subset}/{image_filename}\"\n",
    "        \n",
    "        # Copy the image if it exists\n",
    "        if os.path.exists(source_path):\n",
    "            shutil.copy2(source_path, dest_path)\n",
    "            \n",
    "            # Update example with local path\n",
    "            example_copy = dict(example)\n",
    "            example_copy['image_path'] = dest_path\n",
    "            dataset_list.append(example_copy)\n",
    "            successful_copies += 1\n",
    "        else:\n",
    "            failed_copies += 1\n",
    "\n",
    "print(f\"\\nProcessing complete:\")\n",
    "print(f\"Successful copies: {successful_copies}\")\n",
    "print(f\"Failed copies: {failed_copies}\")\n",
    "\n",
    "# Split into train, validation, and test sets\n",
    "train_data = dataset_list[:1000]\n",
    "val_data = dataset_list[1000:1100]\n",
    "test_data = dataset_list[1100:]\n",
    "\n",
    "print(f\"\\nNumber of training examples: {len(train_data)}\")\n",
    "print(f\"Number of validation examples: {len(val_data)}\")\n",
    "print(f\"Number of test examples: {len(test_data)}\")"
   ]
  },
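  {
   "cell_type": "markdown",
   "id": "c2d3e4f5-9a02-4b3c-9d4e-6f7a8b9c0d1e",
   "metadata": {},
   "source": [
    "As a quick sanity check, let's inspect one processed example and confirm its copied image opens correctly. This is an optional sketch that assumes the cell above completed with a non-empty `train_data`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3e4f5a6-9a03-4c4d-8e5f-7a8b9c0d1e2f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Look at one processed example (metadata plus the first conversation turn)\n",
    "sample = train_data[0]\n",
    "print(json.dumps({k: v for k, v in sample.items() if k != 'conversations'}, indent=2))\n",
    "print(sample['conversations'][0]['value'])\n",
    "\n",
    "# Confirm the copied file opens as an image\n",
    "with Image.open(sample['image_path']) as img:\n",
    "    print(img.format, img.size)"
   ]
  },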
  {
   "cell_type": "markdown",
   "id": "ead76317-fe8e-4a48-bfec-c0090f7bb2b1",
   "metadata": {},
   "source": [
    "## Upload Images to S3\n",
    "\n",
    "Now, let's upload the train, validation, and test images to S3:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6951fdfc-fdc9-4630-800b-1b01f887bd7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def upload_images_to_s3(data_list, subset):\n",
    "    \"\"\"Upload images to S3 and return paths\"\"\"\n",
    "    print(f\"Uploading {subset} images to S3...\")\n",
    "    \n",
    "    s3_paths = []\n",
    "    \n",
    "    for i, example in enumerate(tqdm(data_list)):\n",
    "        try:\n",
    "            # Get local image path\n",
    "            local_path = example['image_path']\n",
    "            \n",
    "            # Create S3 key\n",
    "            file_name = os.path.basename(local_path)\n",
    "            s3_key = f\"images/{subset}/{file_name}\"\n",
    "            \n",
    "            # Upload to S3\n",
    "            s3_client.upload_file(local_path, bucket_name, s3_key)\n",
    "            \n",
    "            # Store S3 path\n",
    "            s3_uri = f\"s3://{bucket_name}/{s3_key}\"\n",
    "            s3_paths.append({\n",
    "                'local_path': local_path,\n",
    "                's3_uri': s3_uri,\n",
    "                'example': example\n",
    "            })\n",
    "            \n",
    "        except Exception as e:\n",
    "            print(f\"Error uploading image {i}: {e}\")\n",
    "    \n",
    "    return s3_paths\n",
    "\n",
    "# Upload images to S3\n",
    "train_s3_paths = upload_images_to_s3(train_data, 'train')\n",
    "val_s3_paths = upload_images_to_s3(val_data, 'val')\n",
    "test_s3_paths = upload_images_to_s3(test_data, 'test')\n",
    "\n",
    "print(f\"Uploaded {len(train_s3_paths)} training images\")\n",
    "print(f\"Uploaded {len(val_s3_paths)} validation images\")\n",
    "print(f\"Uploaded {len(test_s3_paths)} test images\")"
   ]
  },
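  {
   "cell_type": "markdown",
   "id": "e4f5a6b7-9a04-4d5e-9f6a-8b9c0d1e2f3a",
   "metadata": {},
   "source": [
    "To double-check the uploads, we can count the objects under each prefix with an S3 paginator. This optional sketch reuses the `s3_client` and `bucket_name` defined earlier:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5a6b7c8-9a05-4e6f-8a7b-9c0d1e2f3a4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count uploaded objects under each subset prefix\n",
    "paginator = s3_client.get_paginator('list_objects_v2')\n",
    "\n",
    "for subset in ('train', 'val', 'test'):\n",
    "    count = 0\n",
    "    for page in paginator.paginate(Bucket=bucket_name, Prefix=f\"images/{subset}/\"):\n",
    "        count += page.get('KeyCount', 0)\n",
    "    print(f\"{subset}: {count} objects in s3://{bucket_name}/images/{subset}/\")"
   ]
  },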
  {
   "cell_type": "markdown",
   "id": "a0c2c66c-6f05-437a-966b-747e5000c906",
   "metadata": {},
   "source": [
    "## Format Data for Fine-tuning\n",
    "\n",
    "Let's prepare the data in the required format for Bedrock Llama 3.2 fine-tuning:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a5b78d50-264d-405e-843d-ca676624ed5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_jsonl_entry(example, s3_uri):\n",
    "    \"\"\"Create a JSONL entry in the Bedrock conversation schema format\"\"\"\n",
    "    \n",
    "    # Extract conversation components\n",
    "    conversations = example.get('conversations', [])\n",
    "    \n",
    "    if len(conversations) >= 2:\n",
    "        question = conversations[0].get('value', \"What's in this image?\")\n",
    "        answer = conversations[1].get('value', \"This is an image.\")\n",
    "    else:\n",
    "        question = \"What's in this image?\"\n",
    "        answer = \"This is an image.\"\n",
    "\n",
    "    # LLaVA prompts embed an '<image>' placeholder token; the Bedrock schema\n",
    "    # carries the image as a separate content block, so strip the token here\n",
    "    question = question.replace('<image>', '').strip()\n",
    "    \n",
    "    # Create entry in the required format\n",
    "    return {\n",
    "        \"schemaVersion\": \"bedrock-conversation-2024\",\n",
    "        \"system\": [\n",
    "            {\n",
    "                \"text\": \"You are a helpful assistant that can answer questions about images accurately and concisely.\"\n",
    "            }\n",
    "        ],\n",
    "        \"messages\": [\n",
    "            {\n",
    "                \"role\": \"user\",\n",
    "                \"content\": [\n",
    "                    {\n",
    "                        \"text\": question\n",
    "                    },\n",
    "                    {\n",
    "                        \"image\": {\n",
    "                            \"format\": \"jpeg\",  # COCO images are JPEG files\n",
    "                            \"source\": {\n",
    "                                \"s3Location\": {\n",
    "                                    \"uri\": s3_uri,\n",
    "                                    \"bucketOwner\": account_id\n",
    "                                }\n",
    "                            }\n",
    "                        }\n",
    "                    }\n",
    "                ]\n",
    "            },\n",
    "            {\n",
    "                \"role\": \"assistant\",\n",
    "                \"content\": [\n",
    "                    {\n",
    "                        \"text\": answer\n",
    "                    }\n",
    "                ]\n",
    "            }\n",
    "        ]\n",
    "    }\n",
    "\n",
    "def prepare_dataset_jsonl(s3_paths, output_file):\n",
    "    \"\"\"Prepare dataset in JSONL format for fine-tuning\"\"\"\n",
    "    \n",
    "    with open(output_file, 'w') as f:\n",
    "        for item in s3_paths:\n",
    "            # Create JSONL entry\n",
    "            entry = create_jsonl_entry(item['example'], item['s3_uri'])\n",
    "            \n",
    "            # Write to file\n",
    "            f.write(json.dumps(entry) + '\\n')\n",
    "    \n",
    "    print(f\"Created {output_file} with {len(s3_paths)} samples\")\n",
    "\n",
    "# Prepare JSONL files\n",
    "prepare_dataset_jsonl(train_s3_paths, 'train.jsonl')\n",
    "prepare_dataset_jsonl(val_s3_paths, 'validation.jsonl')\n",
    "prepare_dataset_jsonl(test_s3_paths, 'test.jsonl')"
   ]
  },
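  {
   "cell_type": "markdown",
   "id": "a6b7c8d9-9a06-4f7a-9b8c-0d1e2f3a4b5c",
   "metadata": {},
   "source": [
    "Before uploading, it's worth validating the files we just wrote: every line should parse as JSON and carry the expected top-level fields. This is a minimal check of our own output, not an exhaustive validation of the Bedrock schema:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7c8d9e0-9a07-4a8b-8c9d-1e2f3a4b5c6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Parse each JSONL file back and spot-check the required fields\n",
    "for file_name in ('train.jsonl', 'validation.jsonl', 'test.jsonl'):\n",
    "    with open(file_name) as f:\n",
    "        entries = [json.loads(line) for line in f]\n",
    "    assert all(e['schemaVersion'] == 'bedrock-conversation-2024' for e in entries)\n",
    "    assert all(len(e['messages']) == 2 for e in entries)\n",
    "    print(f\"{file_name}: {len(entries)} valid entries\")"
   ]
  },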
  {
   "cell_type": "markdown",
   "id": "7a50119a-37d5-4043-a4e7-4bce5ab4283e",
   "metadata": {},
   "source": [
    "## Upload JSONL Files to S3\n",
    "\n",
    "Let's upload our prepared JSONL files to S3:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "89f1b3ed-ea71-4648-a6f3-0bfe40d5e13b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Upload JSONL files to S3\n",
    "s3_client.upload_file('train.jsonl', bucket_name, 'data/train.jsonl')\n",
    "s3_client.upload_file('validation.jsonl', bucket_name, 'data/validation.jsonl')\n",
    "s3_client.upload_file('test.jsonl', bucket_name, 'data/test.jsonl')\n",
    "\n",
    "# Store S3 URIs for later use\n",
    "train_data_uri = f\"s3://{bucket_name}/data/train.jsonl\"\n",
    "validation_data_uri = f\"s3://{bucket_name}/data/validation.jsonl\"\n",
    "test_data_uri = f\"s3://{bucket_name}/data/test.jsonl\"\n",
    "\n",
    "print(f\"Training data URI: {train_data_uri}\")\n",
    "print(f\"Validation data URI: {validation_data_uri}\")\n",
    "print(f\"Test data URI: {test_data_uri}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3f0dc50-5487-4c76-b58a-b9b8474d171e",
   "metadata": {},
   "source": [
    "## Create IAM Role for Model Fine-tuning\n",
    "\n",
    "Let's create an IAM role that will be used for the fine-tuning job:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c93714a5-7870-4d34-87c8-5c7de5cd5de3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate policy documents\n",
    "trust_policy_doc = {\n",
    "    \"Version\": \"2012-10-17\",\n",
    "    \"Statement\": [\n",
    "        {\n",
    "            \"Effect\": \"Allow\",\n",
    "            \"Principal\": {\n",
    "                \"Service\": \"bedrock.amazonaws.com\"\n",
    "            },\n",
    "            \"Action\": \"sts:AssumeRole\",\n",
    "            \"Condition\": {\n",
    "                \"StringEquals\": {\n",
    "                    \"aws:SourceAccount\": account_id\n",
    "                },\n",
    "                \"ArnLike\": {\n",
    "                    \"aws:SourceArn\": f\"arn:aws:bedrock:{region}:{account_id}:model-customization-job/*\"\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\n",
    "access_policy_doc = {\n",
    "    \"Version\": \"2012-10-17\",\n",
    "    \"Statement\": [\n",
    "        {\n",
    "            \"Effect\": \"Allow\",\n",
    "            \"Action\": [\n",
    "                \"s3:GetObject\",\n",
    "                \"s3:PutObject\",\n",
    "                \"s3:ListBucket\",\n",
    "                \"s3:GetBucketLocation\"\n",
    "            ],\n",
    "            \"Resource\": [\n",
    "                f\"arn:aws:s3:::{bucket_name}\",\n",
    "                f\"arn:aws:s3:::{bucket_name}/*\"\n",
    "            ]\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "\n",
    "\n",
    "# Create IAM client\n",
    "iam = session.client('iam')\n",
    "\n",
    "# Role name for fine-tuning\n",
    "role_name = f\"Llama32VisionFineTuningRole-{int(time.time())}\"\n",
    "policy_name = f\"Llama32VisionFineTuningPolicy-{int(time.time())}\"\n",
    "\n",
    "# Create role\n",
    "try:\n",
    "    response = iam.create_role(\n",
    "        RoleName=role_name,\n",
    "        AssumeRolePolicyDocument=json.dumps(trust_policy_doc),\n",
    "        Description=\"Role for fine-tuning Llama 3.2 vision model with Amazon Bedrock\"\n",
    "    )\n",
    "    \n",
    "    role_arn = response[\"Role\"][\"Arn\"]\n",
    "    print(f\"Created role: {role_arn}\")\n",
    "    \n",
    "    # Create policy\n",
    "    response = iam.create_policy(\n",
    "        PolicyName=policy_name,\n",
    "        PolicyDocument=json.dumps(access_policy_doc)\n",
    "    )\n",
    "    \n",
    "    policy_arn = response[\"Policy\"][\"Arn\"]\n",
    "    print(f\"Created policy: {policy_arn}\")\n",
    "    \n",
    "    # Attach policy to role\n",
    "    iam.attach_role_policy(\n",
    "        RoleName=role_name,\n",
    "        PolicyArn=policy_arn\n",
    "    )\n",
    "    \n",
    "    print(f\"Attached policy to role\")\n",
    "    \n",
    "except Exception as e:\n",
    "    print(f\"Error creating IAM resources: {e}\")\n",
    "\n",
    "# Allow time for IAM role propagation\n",
    "print(\"Waiting for IAM role to propagate...\")\n",
    "time.sleep(60)  # IAM changes can take up to a minute to propagate\n"
   ]
  },
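  {
   "cell_type": "markdown",
   "id": "c8d9e0f1-9a08-4b9c-8d0e-2f3a4b5c6d7e",
   "metadata": {},
   "source": [
    "As an optional check before moving on, we can confirm the role exists and inspect its trust policy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d9e0f1a2-9a09-4c0d-9e1f-3a4b5c6d7e8f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fetch the role back and print its ARN and trust policy\n",
    "response = iam.get_role(RoleName=role_name)\n",
    "print(response['Role']['Arn'])\n",
    "print(json.dumps(response['Role']['AssumeRolePolicyDocument'], indent=2))"
   ]
  },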
  {
   "cell_type": "markdown",
   "id": "5c980d52-e78b-4873-9f95-59f5a19e5602",
   "metadata": {},
   "source": [
    "## Save Variables for the Next Notebook\n",
    "\n",
    "Let's save the important variables we'll need in the next notebook:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f2092f97-9043-40cf-8171-8b8f0862a8eb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Store variables for the next notebook\n",
    "%store bucket_name\n",
    "%store train_data_uri\n",
    "%store validation_data_uri\n",
    "%store test_data_uri\n",
    "%store role_arn\n",
    "%store role_name\n",
    "%store policy_arn\n",
    "\n",
    "print(\"Variables saved for use in the next notebook\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "35e18555-b448-4556-869a-2a3940e98d24",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "In this notebook, we prepared the data needed for fine-tuning a Llama 3.2 multi-modal model. We:\n",
    "\n",
    "- Downloaded a subset of the llava-instruct dataset with COCO images\n",
    "- Uploaded images to S3\n",
    "- Formatted the data according to the Bedrock conversation schema\n",
    "- Created an IAM role with the necessary permissions\n",
    "\n",
    "The data is now ready for fine-tuning, which we'll perform in the next notebook."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "conda_python3",
   "language": "python",
   "name": "conda_python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
