{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e70e3322",
   "metadata": {},
   "source": [
    "# Hosting CrewAI multi-agent crew with Amazon Bedrock models in Amazon Bedrock AgentCore Runtime\n",
    "\n",
    "## Overview\n",
    "\n",
    "In this tutorial we will learn how to host your existing multi-agent crew, using Amazon Bedrock AgentCore Runtime. \n",
    "\n",
    "We will focus on a CrewAI with Amazon Bedrock model example. For Strands Agents with Amazon Bedrock model check [here](../01-strands-with-bedrock-model) and for a Strands Agents with an OpenAI model check [here](../03-strands-with-openai-model).\n",
    "\n",
    "\n",
    "### Tutorial Details\n",
    "\n",
    "| Information | Details |\n",
    "|:--------------------|:-----------------------------------------------------------------------------|\n",
    "| Tutorial type | Conversational |\n",
    "| Agent type | Multi-agent crew |\n",
    "| Agentic Framework | CrewAI |\n",
    "| LLM model | Anthropic Claude 3.7 Sonnet |\n",
    "| Tutorial components | Hosting agent on AgentCore Runtime. Using CrewAI and Amazon Bedrock Model |\n",
    "| Tutorial vertical | Cross-vertical |\n",
    "| Example complexity | Easy |\n",
    "| SDK used | Amazon BedrockAgentCore Python SDK and boto3 |\n",
    "\n",
    "### Tutorial Architecture\n",
    "\n",
    "In this tutorial we will describe how to deploy an existing multi-agent crew to AgentCore runtime. \n",
    "\n",
    "For demonstration purposes, we will use a CrewAI crew using Amazon Bedrock models\n",
    "\n",
    "In our example we will use a research crew with two agents: a researcher and an analyst.\n",
    "<div style=\"text-align:left\">\n",
    "    <img src=\"images/architecture_runtime.png\" width=\"60%\"/>\n",
    "</div>\n",
    "\n",
    "\n",
    "### Tutorial Key Features\n",
    "\n",
    "* Hosting Agents on Amazon Bedrock AgentCore Runtime\n",
    "* Using Amazon Bedrock models\n",
    "* Using CrewAI"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3520cf3a",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "To execute this tutorial you will need:\n",
    "* Python 3.10+\n",
    "* uv package manager\n",
    "* AWS credentials\n",
    "* Docker running\n",
    "\n",
    "Further, we need to install a few dependencies: \n",
    "* Amazon Bedrock AgentCore SDK\n",
    "* CrewAI \n",
    "* Langchain community package\n",
    "* Duckduckgo search\n",
    "\n",
    "We have packaged all necessary dependencies in a pyproject.toml file so they can be installed conveniently. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "99c5ae06-5009-4bee-bfd0-88bd991e0368",
   "metadata": {},
   "outputs": [],
   "source": [
    "!uv sync --active --force-reinstall"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f55133c2",
   "metadata": {},
   "source": [
    "## Creating your multi-agent crew and experimenting locally\n",
    "\n",
    "Before we deploy our agents to AgentCore Runtime, let's develop and run them locally for experimentation purposes.\n",
    "\n",
    "In this guide, we’ll walk through creating a research crew that will help us research and analyze a topic, then create a comprehensive report. This practical example demonstrates how AI agents can collaborate to accomplish complex tasks. The example is adapted from a [getting started guide](https://docs.crewai.com/en/guides/crews/first-crew) provided directly by CrewAI.\n",
    "\n",
    "The local architecture looks as following:\n",
    "\n",
    "<div style=\"text-align:left\">\n",
    "    <img src=\"images/architecture_local.png\" width=\"60%\"/>\n",
    "</div>\n",
    "\n",
    "\n",
    "### Defining agents, tasks, crew\n",
    "\n",
    "We will first create the artifacts defining a local CrewAI agent, including: \n",
    "* agents.yaml, defining the two agents involved in our crew\n",
    "* tasks.yaml, defining the tasks to be executed by the agents in our crew\n",
    "* crew.py, defining our crew consisting of agents working on tasks as defined\n",
    "* main.py, our local entrypoint kicking off the crew run"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c491cc8",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile research_crew/config/agents.yaml\n",
    "researcher:\n",
    "  role: >\n",
    "    Senior Research Specialist for {topic}\n",
    "  goal: >\n",
    "    Find comprehensive and accurate information about {topic}\n",
    "    with a focus on recent developments and key insights\n",
    "  backstory: >\n",
    "    You are an experienced research specialist with a talent for\n",
    "    finding relevant information from various sources. You excel at\n",
    "    organizing information in a clear and structured manner, making\n",
    "    complex topics accessible to others.\n",
    "  llm: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0\n",
    "\n",
    "analyst:\n",
    "  role: >\n",
    "    Data Analyst and Report Writer for {topic}\n",
    "  goal: >\n",
    "    Analyze research findings and create a comprehensive, well-structured\n",
    "    report that presents insights in a clear and engaging way\n",
    "  backstory: >\n",
    "    You are a skilled analyst with a background in data interpretation\n",
    "    and technical writing. You have a talent for identifying patterns\n",
    "    and extracting meaningful insights from research data, then\n",
    "    communicating those insights effectively through well-crafted reports.\n",
    "  llm: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e7527c04",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile research_crew/config/tasks.yaml\n",
    "research_task:\n",
    "  description: >\n",
    "    Conduct thorough research on {topic}. Focus on:\n",
    "    1. Key concepts and definitions\n",
    "    2. Historical development and recent trends\n",
    "    3. Major challenges and opportunities\n",
    "    4. Notable applications or case studies\n",
    "    5. Future outlook and potential developments\n",
    "\n",
    "    Make sure to organize your findings in a structured format with clear sections.\n",
    "  expected_output: >\n",
    "    A comprehensive research document with well-organized sections covering\n",
    "    all the requested aspects of {topic}. Include specific facts, figures,\n",
    "    and examples where relevant.\n",
    "  agent: researcher\n",
    "\n",
    "analysis_task:\n",
    "  description: >\n",
    "    Analyze the research findings and create a comprehensive report on {topic}.\n",
    "    Your report should:\n",
    "    1. State the topic and begin with an executive summary\n",
    "    2. Include all key information from the research\n",
    "    3. Provide insightful analysis of trends and patterns\n",
    "    4. Offer recommendations or future considerations\n",
    "    5. Be formatted in a professional, easy-to-read style with clear headings\n",
    "  expected_output: >\n",
    "    A polished, professional report on {topic} that presents the research\n",
    "    findings with added analysis and insights. The report should be well-structured\n",
    "    with an executive summary, main sections, and conclusion.\n",
    "  agent: analyst\n",
    "  context:\n",
    "    - research_task"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3820d4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile research_crew/crew.py\n",
    "from crewai import Agent, Crew, Process, Task\n",
    "from crewai.project import CrewBase, agent, crew, task\n",
    "from crewai.agents.agent_builder.base_agent import BaseAgent\n",
    "from typing import List\n",
    "from langchain_community.tools import DuckDuckGoSearchRun\n",
    "from crewai.tools import BaseTool\n",
    "from crewai_tools import SerperDevTool\n",
    "from pydantic import Field\n",
    "\n",
    "\n",
    "class SearchTool(BaseTool):\n",
    "     name: str = \"Search\"\n",
    "     description: str = \"Useful for searching the web for information.\"\n",
    "     search: DuckDuckGoSearchRun = Field(default_factory=DuckDuckGoSearchRun)\n",
    "\n",
    "     def _run(self, query: str) -> str:\n",
    "         \"\"\"Execute the search query and return results\"\"\"\n",
    "         try:\n",
    "             return self.search.invoke(query)\n",
    "         except Exception as e:\n",
    "             return f\"Error performing search: {str(e)}\"\n",
    "\n",
    "@CrewBase\n",
    "class ResearchCrew():\n",
    "    \"\"\"Research crew for comprehensive topic analysis and reporting\"\"\"\n",
    "\n",
    "    agents: List[BaseAgent]\n",
    "    tasks: List[Task]\n",
    "\n",
    "    @agent\n",
    "    def researcher(self) -> Agent:\n",
    "        return Agent(\n",
    "            config=self.agents_config['researcher'], # type: ignore[index]\n",
    "            verbose=True,\n",
    "            tools=[\n",
    "                #SerperDevTool()\n",
    "                SearchTool()\n",
    "                ]\n",
    "        )\n",
    "\n",
    "    @agent\n",
    "    def analyst(self) -> Agent:\n",
    "        return Agent(\n",
    "            config=self.agents_config['analyst'], # type: ignore[index]\n",
    "            verbose=True\n",
    "        )\n",
    "\n",
    "    @task\n",
    "    def research_task(self) -> Task:\n",
    "        return Task(\n",
    "            config=self.tasks_config['research_task'] # type: ignore[index]\n",
    "        )\n",
    "\n",
    "    @task\n",
    "    def analysis_task(self) -> Task:\n",
    "        return Task(\n",
    "            config=self.tasks_config['analysis_task'], # type: ignore[index]\n",
    "            #output_file='output/report.md'\n",
    "        )\n",
    "\n",
    "    @crew\n",
    "    def crew(self) -> Crew:\n",
    "        \"\"\"Creates the research crew\"\"\"\n",
    "        return Crew(\n",
    "            agents=self.agents,\n",
    "            tasks=self.tasks,\n",
    "            process=Process.sequential,\n",
    "            verbose=True,\n",
    "        )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "065200a2",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile research_crew/main.py\n",
    "import os\n",
    "from research_crew.crew import ResearchCrew\n",
    "\n",
    "# Create output directory if it doesn't exist\n",
    "os.makedirs('output', exist_ok=True)\n",
    "\n",
    "def run():\n",
    "    \"\"\"\n",
    "    Run the research crew.\n",
    "    \"\"\"\n",
    "    inputs = {\n",
    "        'topic': 'Artificial Intelligence in Healthcare'\n",
    "    }\n",
    "\n",
    "    # Create and run the crew\n",
    "    result = ResearchCrew().crew().kickoff(inputs=inputs)\n",
    "\n",
    "    # Print the result\n",
    "    print(\"\\n\\n=== FINAL REPORT ===\\n\\n\")\n",
    "    print(result.raw)\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    run()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "309a30b7-4ed8-4fa0-8347-2d49d05e5089",
   "metadata": {},
   "source": [
    "### Invoking crew locally\n",
    "\n",
    "Finally, we can use the CrewAI CLI to lockally kick off the crew. Alternatively, we could also simply run our local entrypoint main.py. This might take a few minutes. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cd45fbe",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!crewai run"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55a36025-eb13-441a-bb95-ab33edcba1e1",
   "metadata": {},
   "source": [
    "## Deploying multi-agent crew to Amazon Bedrock AgentCore\n",
    "\n",
    "For production-grade agentic applications we will need to run our crew in the cloud. Therefor we will deploy our crew to Amazon Bedrock AgentCore. \n",
    "\n",
    "The architecture here will look as following:\n",
    "\n",
    "<div style=\"text-align:left\">\n",
    "     <img src=\"images/architecture_local.png\" width=\"60%\"/>\n",
    "</div>\n",
    "\n",
    "Deploying the crew to AgentCore takes the following steps: \n",
    "\n",
    "### Remote entrypoint\n",
    "\n",
    "First, we create a remote entrypoint. With AgentCore Runtime, we will decorate the invocation part of our agent with the @app.entrypoint decorator and have it as the entry point for our runtime. This also involves: \n",
    "* Import the Runtime App with `from bedrock_agentcore.runtime import BedrockAgentCoreApp`\n",
    "* Initialize the App in our code with `app = BedrockAgentCoreApp()`\n",
    "* Decorate the invocation function with the `@app.entrypoint` decorator\n",
    "* Let AgentCoreRuntime control the running of the agent with `app.run()`\n",
    "\n",
    "### What happens behind the scenes?\n",
    "\n",
    "When you use `BedrockAgentCoreApp`, it automatically:\n",
    "\n",
    "* Creates an HTTP server that listens on the port 8080\n",
    "* Implements the required `/invocations` endpoint for processing the agent's requirements\n",
    "* Implements the `/ping` endpoint for health checks (very important for asynchronous agents)\n",
    "* Handles proper content types and response formats\n",
    "* Manages error handling according to the AWS standards                                                                                                                                                                        "
   ]
  },
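  {
   "cell_type": "markdown",
   "id": "invocation-contract-sketch",
   "metadata": {},
   "source": [
    "As a minimal sketch of the request/response contract implied by the entrypoint below (an illustration of the JSON shapes, not an official schema): a client POSTs a JSON body with a `prompt` key to `/invocations`, and the entrypoint answers with a JSON object carrying a `result` key on success or an `error` key on failure.\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Request body as read by the entrypoint via payload.get(\"prompt\", ...)\n",
    "request_body = json.dumps({\"prompt\": \"Artificial Intelligence in Healthcare\"})\n",
    "\n",
    "# A successful response mirrors the entrypoint's return value {\"result\": result.raw}\n",
    "sample_response = '{\"result\": \"## Executive Summary ...\"}'\n",
    "parsed = json.loads(sample_response)\n",
    "assert \"result\" in parsed\n",
    "```"
   ]
  },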
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "24eae179",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile research_crew/research_crew.py\n",
    "import os\n",
    "from research_crew.crew import ResearchCrew\n",
    "\n",
    "# ---------- Agentcore imports --------------------\n",
    "from bedrock_agentcore.runtime import BedrockAgentCoreApp\n",
    "\n",
    "app = BedrockAgentCoreApp()\n",
    "#------------------------------------------------\n",
    "\n",
    "\n",
    "@app.entrypoint\n",
    "def agent_invocation(payload, context):\n",
    "    \"\"\"Handler for agent invocation\"\"\"\n",
    "    print(f'Payload: {payload}')\n",
    "    try: \n",
    "        # Extract user message from payload with default\n",
    "        user_message = payload.get(\"prompt\", \"Artificial Intelligence in Healthcare\")\n",
    "        print(f\"Processing topic: {user_message}\")\n",
    "        \n",
    "        # Create crew instance and run synchronously\n",
    "        research_crew_instance = ResearchCrew()\n",
    "        crew = research_crew_instance.crew()\n",
    "        \n",
    "        # Use synchronous kickoff instead of async - this avoids all event loop issues\n",
    "        result = crew.kickoff(inputs={'topic': user_message})\n",
    "\n",
    "        print(\"Context:\\n-------\\n\", context)\n",
    "        print(\"Result Raw:\\n*******\\n\", result.raw)\n",
    "        \n",
    "        # Safely access json_dict if it exists\n",
    "        if hasattr(result, 'json_dict'):\n",
    "            print(\"Result JSON:\\n*******\\n\", result.json_dict)\n",
    "        \n",
    "        return {\"result\": result.raw}\n",
    "        \n",
    "    except Exception as e:\n",
    "        print(f'Exception occurred: {e}')\n",
    "        return {\"error\": f\"An error occurred: {str(e)}\"}\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    app.run()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e521403",
   "metadata": {},
   "source": [
    "### Deploying the agent to AgentCore Runtime\n",
    "\n",
    "The `CreateAgentRuntime` operation supports comprehensive configuration options, letting you specify container images, environment variables and encryption settings. You can also configure protocol settings (HTTP, MCP) and authorization mechanisms to control how your clients communicate with the agent. \n",
    "\n",
    "**Note:** Operations best practice is to package code as container and push to ECR using CI/CD pipelines and IaC\n",
    "\n",
    "In this tutorial can will the Amazon Bedrock AgentCode Python SDK to easily package your artifacts and deploy them to AgentCore runtime.\n",
    "\n",
    "#### Creation of execution role for remote agentic workload\n",
    "\n",
    "Then, we create a IAM execution role equipping our remote agentic workload with the required permissions to run. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5128f7e9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import sys\n",
    "import os\n",
    "\n",
    "# Get the current notebook's directory\n",
    "current_dir = os.path.dirname(os.path.abspath('__file__' if '__file__' in globals() else '.'))\n",
    "\n",
    "utils_dir = os.path.join(current_dir, '..')\n",
    "utils_dir = os.path.join(utils_dir, '..')\n",
    "utils_dir = os.path.abspath(utils_dir)\n",
    "\n",
    "# Add to sys.path\n",
    "sys.path.insert(0, utils_dir)\n",
    "print(\"sys.path[0]:\", sys.path[0])\n",
    "\n",
    "from utils import create_agentcore_role\n",
    "\n",
    "agent_name=\"langgraph_bedrock\"\n",
    "agentcore_iam_role = create_agentcore_role(agent_name=agent_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7bfc9ce8-8965-47cc-822d-f66d59534692",
   "metadata": {},
   "source": [
    "#### Configure AgentCore Runtime deployment\n",
    "\n",
    "Next we will use our starter toolkit to configure the AgentCore Runtime deployment with an entrypoint, the execution role we just created and a requirements file. We will also configure the starter kit to auto create the Amazon ECR repository on launch.\n",
    "\n",
    "AgentCore configure is required to generate a Dockerfile holding a blueprint for the Docker container the workload will be running in and a .bedrock_agentcore.yaml holding the agentic workload's configuration. During the configure step, your docker file will be generated based on your application code.\n",
    "\n",
    "<div style=\"text-align:left\">\n",
    "    <img src=\"images/configure.png\" width=\"60%\"/>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a2cdea2",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bedrock_agentcore_starter_toolkit import Runtime\n",
    "from boto3.session import Session\n",
    "boto_session = Session()\n",
    "region = boto_session.region_name\n",
    "region\n",
    "\n",
    "agentcore_runtime = Runtime()\n",
    "\n",
    "response = agentcore_runtime.configure(\n",
    "    entrypoint=\"research_crew/research_crew.py\",\n",
    "    execution_role=agentcore_iam_role['Role']['Arn'],\n",
    "    auto_create_ecr=True,\n",
    "    requirements_file=\"research_crew/requirements.txt\",\n",
    "    region=region\n",
    ")\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25c932a8-cd83-4288-82e1-ec4134a4058f",
   "metadata": {},
   "source": [
    "#### Launching agent to AgentCore Runtime: deploying the remote agentic workload\n",
    "\n",
    "Now that we've got a docker file, let's launch the agent to the AgentCore Runtime. This will create the Amazon ECR repository and the AgentCore Runtime. AgentCore launch will then deploy the agentic workload to the cloud. This includes creating a Docker image and pushing it to ECR, as well as getting an endpoint ready for usage.\n",
    "\n",
    "\n",
    "<div style=\"text-align:left\">\n",
    "    <img src=\"images/launch.png\" width=\"85%\"/>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "57fa4d17",
   "metadata": {},
   "outputs": [],
   "source": [
    "launch_result = agentcore_runtime.launch()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f065bd38-b562-4cba-8cef-443f19b9fe39",
   "metadata": {},
   "source": [
    "#### Checking for the AgentCore Runtime Status\n",
    "\n",
    "Now that we've deployed the AgentCore Runtime, let's check for it's deployment status"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9db7dbdf-4abe-4746-9374-8b400c01be86",
   "metadata": {},
   "outputs": [],
   "source": [
    "status_response = agentcore_runtime.status()\n",
    "status = status_response.endpoint['status']\n",
    "end_status = ['READY', 'CREATE_FAILED', 'DELETE_FAILED', 'UPDATE_FAILED']\n",
    "while status not in end_status:\n",
    "    time.sleep(10)\n",
    "    status_response = agentcore_runtime.status()\n",
    "    status = status_response.endpoint['status']\n",
    "    print(status)\n",
    "status"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b2d2856-356f-44db-bfc0-4c60abb1ec96",
   "metadata": {},
   "source": [
    "### Invoking AgentCore Runtime with boto3\n",
    "\n",
    "Now that your AgentCore Runtime was created you can invoke it with any AWS SDK. For instance, you can use the boto3 `invoke_agent_runtime` method for it. Since this is a long running agent we are overwriting the default `retries`, `connect_timout`and `read_timeout`.\n",
    "\n",
    "<div style=\"text-align:left\">\n",
    "    <img src=\"images/invoke.png\" width=85%\"/>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff43d1c9-e6cf-49cd-a83d-48764cc8f5ab",
   "metadata": {},
   "outputs": [],
   "source": [
    "from botocore.config import Config\n",
    "\n",
    "# Configure retries and timeout\n",
    "config = Config(\n",
    "    retries={\n",
    "        'max_attempts': 10,  # Increase max retries to 10 (default is 4)\n",
    "        'mode': 'adaptive'   # Options: 'legacy', 'standard', 'adaptive'\n",
    "    },\n",
    "    connect_timeout=600,      # Connection timeout in seconds (default is 60)\n",
    "    read_timeout=3000         # Read timeout in seconds (default is 60)\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80359a80-75c7-4252-997c-a80eae63b796",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent_arn = launch_result.agent_arn\n",
    "agentcore_client = boto3.client(\n",
    "    'bedrock-agentcore',\n",
    "    region_name=region,\n",
    "    config=config\n",
    ")\n",
    "\n",
    "boto3_response = agentcore_client.invoke_agent_runtime(\n",
    "    agentRuntimeArn=agent_arn,\n",
    "    qualifier=\"DEFAULT\",\n",
    "    payload=json.dumps({\"prompt\": \"Artificial Intelligence in Healthcare\"})\n",
    ")\n",
    "\n",
    "response_body = boto3_response['response'].read()\n",
    "response_data = json.loads(response_body)\n",
    "response_data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47119e0e-7997-4434-b7d0-141539c0cd47",
   "metadata": {},
   "source": [
    "## Cleanup (Optional)\n",
    "\n",
    "Let's now clean up the AgentCore Runtime created"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "358439c4-d431-4d99-990a-d62031f53471",
   "metadata": {},
   "outputs": [],
   "source": [
    "launch_result.ecr_uri, launch_result.agent_id, launch_result.ecr_uri.split('/')[1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4b60d6e-58b2-43ae-a7f4-7896fc79a752",
   "metadata": {},
   "outputs": [],
   "source": [
    "agentcore_control_client = boto3.client(\n",
    "    'bedrock-agentcore-control',\n",
    "    region_name=region\n",
    ")\n",
    "ecr_client = boto3.client(\n",
    "    'ecr',\n",
    "    region_name=region\n",
    "    \n",
    ")\n",
    "\n",
    "iam_client = boto3.client('iam')\n",
    "\n",
    "runtime_delete_response = agentcore_control_client.delete_agent_runtime(\n",
    "    agentRuntimeId=launch_result.agent_id\n",
    ")\n",
    "\n",
    "response = ecr_client.delete_repository(\n",
    "    repositoryName=launch_result.ecr_uri.split('/')[1],\n",
    "    force=True\n",
    ")\n",
    "\n",
    "policies = iam_client.list_role_policies(\n",
    "    RoleName=agentcore_iam_role['Role']['RoleName'],\n",
    "    MaxItems=100\n",
    ")\n",
    "\n",
    "for policy_name in policies['PolicyNames']:\n",
    "    iam_client.delete_role_policy(\n",
    "        RoleName=agentcore_role_name,\n",
    "        PolicyName=policy_name\n",
    "    )\n",
    "iam_response = iam_client.delete_role(\n",
    "    RoleName=agentcore_role_name\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eef2e4dc-0e2b-40cf-96bd-efda8a0a90e0",
   "metadata": {},
   "source": [
    "## Congratulations!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
