{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# AWS Bedrock AgentCore Tutorial: From Local Agent to Managed Agent\n",
        "\n",
        "## Overview\n",
        "\n",
        "This tutorial introduces AWS Bedrock AgentCore, a runtime framework designed to transform local AI agents into production-ready systems. We will walk through the process of creating a simple agent and then wrapping it with AgentCore to enable managed execution, request tracking, and standardized communication patterns.\n",
        "\n",
        "By the end of this tutorial, you will understand how AgentCore serves as an infrastructure layer that sits between your agent logic and the production environment, handling the complexities of request management, execution control, and runtime orchestration.\n",
        "\n",
        "### What You Will Learn\n",
        "\n",
        "This tutorial covers three fundamental concepts. First, you will learn how to build a basic agent using standard Python. Second, you will understand how to wrap your agent with the AgentCore runtime using the Python SDK. Finally, you will see how to run your agent locally through a standardized HTTP interface that provides request tracking and structured responses.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Understanding AgentCore Architecture\n",
        "\n",
        "Before diving into implementation, it helps to understand what AgentCore does and why it matters. When you build an agent locally, it typically exists as a Python function or class that takes input and returns output. This works fine for development and testing, but production environments require additional infrastructure.\n",
        "\n",
        "AgentCore provides this infrastructure layer. Think of it as a control tower for your agent. The control tower does not fly the plane, but it manages takeoffs, landings, and communications. Similarly, AgentCore does not change your agent's logic, but it manages how requests reach your agent, how responses are returned, and how the entire execution is tracked and monitored.\n",
        "\n",
        "The framework introduces several key components. The runtime server handles incoming requests and routes them to your agent. The SDK provides a simple decorator pattern to convert your function into an HTTP service. Together, these components create a managed execution environment that transforms a local function into a production service.\n",
        "\n",
        "### Why Use AgentCore\n",
        "\n",
        "The primary value of AgentCore lies in separation of concerns. Your agent code focuses purely on processing messages and generating responses. AgentCore handles everything else: accepting HTTP requests, validating input, tracking execution time, managing errors, and formatting responses. This separation makes your code cleaner, easier to test, and simpler to maintain.\n",
        "\n",
        "Additionally, AgentCore provides a foundation for production capabilities. Once your agent runs through the runtime, you can add tools, enable logging, implement access control, and deploy to any server without modifying your core agent logic. The infrastructure grows around your agent, not inside it."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## System Architecture Diagram\n",
        "\n",
        "The following diagram illustrates how AgentCore transforms a local agent into a managed system. The flow begins with an HTTP request from a client, passes through the AgentCore runtime, reaches your agent for processing, and returns through the same path with added metadata.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "![AgentCore Architecture Diagram](assets/agentcore-diagram.png)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Step 1: Environment Setup\n",
        "\n",
        "We begin by installing the necessary dependencies and configuring our environment. AgentCore requires Python 3.10 or higher and depends on a few core libraries for HTTP handling and configuration management."
      ]
    },
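    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since AgentCore requires Python 3.10 or higher, a quick optional check of the interpreter version can save a confusing install failure later:\n",
        "\n",
        "```python\n",
        "import sys\n",
        "\n",
        "# AgentCore requires Python 3.10 or newer; flag an older interpreter early\n",
        "if sys.version_info >= (3, 10):\n",
        "    print(f\"Python {sys.version_info.major}.{sys.version_info.minor} OK for AgentCore\")\n",
        "else:\n",
        "    print(f\"Python {sys.version_info.major}.{sys.version_info.minor} is too old; AgentCore needs 3.10+\")\n",
        "```"
      ]
    },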
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Installing Dependencies\n",
        "\n",
        "The following command installs the Bedrock AgentCore Runtime SDK along with supporting libraries. The `python-dotenv` package keeps API keys out of your code, `requests` lets us test the agent over HTTP, and `openai` provides the client for the language model that powers the agent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "!pip install bedrock-agentcore python-dotenv requests openai"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Configuring API Credentials\n",
        "\n",
        "For this tutorial, we use OpenAI's API to power our agent's language understanding capabilities. The following code loads your API key from a `.env` file, which is a best practice for keeping sensitive credentials out of your code. You should create a `.env` file in your working directory with a line like `OPENAI_API_KEY=your-key-here`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "# Reads OPENAI_API_KEY from .env into the process environment\n",
        "load_dotenv()\n",
        "\n",
        "if not os.getenv(\"OPENAI_API_KEY\"):\n",
        "    raise RuntimeError(\"OPENAI_API_KEY not found. Create a .env file with OPENAI_API_KEY=your-key-here\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Step 2: Building the Base Agent\n",
        "\n",
        "Now we create our agent. This is a standard Python function that processes messages and returns responses. The agent is intentionally simple to keep the focus on understanding how AgentCore wraps and manages it."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Agent Implementation\n",
        "\n",
        "The `process_message` function demonstrates a basic agent that uses OpenAI's API to answer questions. This is just regular Python code with no special AgentCore-specific logic mixed in. Notice that we define the agent logic as a simple function that takes a message and returns a response."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {},
      "outputs": [],
      "source": [
        "import openai\n",
        "\n",
        "def process_message(message: str) -> str:\n",
        "    \"\"\"Process a message and return a response using OpenAI\"\"\"\n",
        "    client = openai.OpenAI()\n",
        "    response = client.chat.completions.create(\n",
        "        model=\"gpt-4o-mini\",\n",
        "        messages=[{\"role\": \"user\", \"content\": message}],\n",
        "        temperature=0\n",
        "    )\n",
        "    return response.choices[0].message.content"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Testing the Local Agent\n",
        "\n",
        "Before integrating with AgentCore, we verify that our agent works correctly in isolation. This step is important because it confirms that any issues we encounter later are related to the runtime integration, not the agent logic itself."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Agent Response:\n",
            "Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning (the acquisition of information and rules for using it), reasoning (using rules to reach approximate or definite conclusions), and self-correction. AI can be categorized into two main types:\n",
            "\n",
            "1. **Narrow AI (Weak AI)**: This type of AI is designed and trained for a specific task. Examples include virtual assistants like Siri and Alexa, recommendation systems, and image recognition software. Narrow AI operates under a limited set of constraints and is not capable of generalizing its knowledge to other tasks.\n",
            "\n",
            "2. **General AI (Strong AI)**: This is a theoretical form of AI that possesses the ability to understand, learn, and apply intelligence across a wide range of tasks, similar to a human being. General AI would be able to perform any intellectual task that a human can do, but as of now, it remains largely a concept and has not been realized.\n",
            "\n",
            "AI technologies encompass various subfields, including machine learning (where algorithms improve through experience), natural language processing (enabling machines to understand and respond to human language), robotics, and computer vision. AI is increasingly being integrated into various industries, enhancing efficiency, decision-making, and automation.\n"
          ]
        }
      ],
      "source": [
        "result = process_message(\"What is AI?\")\n",
        "print(\"Agent Response:\")\n",
        "print(result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "You should see a clear, informative explanation of artificial intelligence. At this point, we have a functioning agent, but it exists only as a local Python function. There is no server, no request handling, and no way for external systems to interact with it. This is where AgentCore comes in."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Step 3: Integrating with AgentCore Runtime\n",
        "\n",
        "This is the transformation step where we wrap our local agent with AgentCore's runtime infrastructure. The process involves creating a `BedrockAgentCoreApp` instance, decorating our entrypoint function, and understanding how the server works. Each part plays a specific role in creating a managed execution environment."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Creating the AgentCore Application\n",
        "\n",
        "The `BedrockAgentCoreApp` class is the core of the SDK. When you create an instance, it sets up all the infrastructure needed for handling HTTP requests, health checks, and response formatting. This single line of code initializes the entire runtime framework."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Bedrock AgentCore application created\n"
          ]
        }
      ],
      "source": [
        "from bedrock_agentcore.runtime import BedrockAgentCoreApp\n",
        "\n",
        "app = BedrockAgentCoreApp()\n",
        "\n",
        "print(\"Bedrock AgentCore application created\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Agent Registration\n",
        "\n",
        "Registration is how we connect our agent code to the runtime. The `@app.entrypoint` decorator tells AgentCore to call our function whenever a request arrives. The function signature is important: it receives a `payload` dictionary that contains the request data. By convention, AgentCore expects the user's message in a field called `prompt`.\n",
        "\n",
        "Notice that inside the decorated function, we call our existing `process_message` function. This pattern gives you flexibility in how agents are created and managed. You could use dependency injection, implement caching, or add initialization logic here without changing the AgentCore integration."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Agent registered with AgentCore\n"
          ]
        }
      ],
      "source": [
        "@app.entrypoint\n",
        "def invoke(payload: dict) -> dict:\n",
        "    \"\"\"\n",
        "    Process user input and return a response.\n",
        "    AgentCore will call this function for each HTTP request to /invocations\n",
        "    \"\"\"\n",
        "    user_message = payload.get(\"prompt\", \"Hello\")\n",
        "    result = process_message(user_message)\n",
        "    return {\"result\": result}\n",
        "\n",
        "print(\"Agent registered with AgentCore\")"
      ]
    },
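    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Because the entrypoint body is plain Python, its payload handling can be unit-tested without starting a server or calling the API. A minimal sketch, where `invoke_logic` and `fake_process_message` are illustrative names that mirror the decorated function with a stubbed model call:\n",
        "\n",
        "```python\n",
        "def fake_process_message(message: str) -> str:\n",
        "    # Stand-in for the OpenAI-backed agent so no API key is needed\n",
        "    return f\"echo: {message}\"\n",
        "\n",
        "def invoke_logic(payload: dict) -> dict:\n",
        "    # Mirrors the decorated entrypoint's body, minus the decorator itself\n",
        "    user_message = payload.get(\"prompt\", \"Hello\")\n",
        "    return {\"result\": fake_process_message(user_message)}\n",
        "\n",
        "print(invoke_logic({\"prompt\": \"hi\"}))  # {'result': 'echo: hi'}\n",
        "print(invoke_logic({}))  # falls back to the default prompt\n",
        "```"
      ]
    },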
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Starting the Runtime Server\n",
        "\n",
        "Now we demonstrate how to start the actual server that will accept HTTP requests. The `app.run()` method starts an HTTP server on port 8080 by default. In a real deployment, you would save this code to a file (e.g., `agent.py`) and run it with `python agent.py`.\n",
        "\n",
        "Note: Running `app.run()` directly in a notebook will block execution. For production, you would run this in a separate process or use the AgentCore starter toolkit for deployment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Complete agent code structure:\n",
            "\n",
            "import os\n",
            "import openai\n",
            "from bedrock_agentcore.runtime import BedrockAgentCoreApp\n",
            "from dotenv import load_dotenv\n",
            "\n",
            "load_dotenv()\n",
            "\n",
            "app = BedrockAgentCoreApp()\n",
            "\n",
            "def process_message(message: str) -> str:\n",
            "    client = openai.OpenAI()\n",
            "    response = client.chat.completions.create(\n",
            "        model=\"gpt-4o-mini\",\n",
            "        messages=[{\"role\": \"user\", \"content\": message}],\n",
            "        temperature=0\n",
            "    )\n",
            "    return response.choices[0].message.content\n",
            "\n",
            "@app.entrypoint\n",
            "def invoke(payload: dict) -> dict:\n",
            "    user_message = payload.get(\"prompt\", \"Hello\")\n",
            "    result = process_message(user_message)\n",
            "    return {\"result\": result}\n",
            "\n",
            "if __name__ == \"__main__\":\n",
            "    app.run()  # Server starts on http://localhost:8080\n",
            "\n",
            "\n",
            "Key endpoints when running:\n",
            "  POST http://localhost:8080/invocations - Send requests to your agent\n",
            "  GET http://localhost:8080/ping - Health check endpoint\n"
          ]
        }
      ],
      "source": [
        "# In a real scenario, you would run this in a separate file:\n",
        "# if __name__ == \"__main__\":\n",
        "#     app.run()\n",
        "\n",
        "# For notebook demonstration, we show the complete agent code:\n",
        "complete_agent_code = '''\n",
        "import os\n",
        "import openai\n",
        "from bedrock_agentcore.runtime import BedrockAgentCoreApp\n",
        "from dotenv import load_dotenv\n",
        "\n",
        "load_dotenv()\n",
        "\n",
        "app = BedrockAgentCoreApp()\n",
        "\n",
        "def process_message(message: str) -> str:\n",
        "    client = openai.OpenAI()\n",
        "    response = client.chat.completions.create(\n",
        "        model=\"gpt-4o-mini\",\n",
        "        messages=[{\"role\": \"user\", \"content\": message}],\n",
        "        temperature=0\n",
        "    )\n",
        "    return response.choices[0].message.content\n",
        "\n",
        "@app.entrypoint\n",
        "def invoke(payload: dict) -> dict:\n",
        "    user_message = payload.get(\"prompt\", \"Hello\")\n",
        "    result = process_message(user_message)\n",
        "    return {\"result\": result}\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    app.run()  # Server starts on http://localhost:8080\n",
        "'''\n",
        "\n",
        "print(\"Complete agent code structure:\")\n",
        "print(complete_agent_code)\n",
        "print(\"\\nKey endpoints when running:\")\n",
        "print(\"  POST http://localhost:8080/invocations - Send requests to your agent\")\n",
        "print(\"  GET http://localhost:8080/ping - Health check endpoint\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "At this point, your agent is no longer just a Python function. When you run the code as a standalone script, it becomes a service that accepts HTTP requests, processes them through your agent logic, and returns structured responses. The AgentCore SDK has automatically added health check endpoints, request validation, and response formatting."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Step 4: Interacting with the Managed Agent\n",
        "\n",
        "With the runtime server active (when running outside this notebook), we can interact with our agent through HTTP requests. This represents a fundamental shift from direct function calls to network-based communication. The agent logic has not changed, but the way we access it has been transformed."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Making Your First Request\n",
        "\n",
        "We use the `requests` library to send an HTTP POST request to our running AgentCore server. The request goes to the `/invocations` endpoint with a JSON payload containing a `prompt` field. This is the standardized format that AgentCore expects. The runtime receives this request, extracts the payload, calls our decorated entrypoint function, and packages the response."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Example HTTP request to AgentCore:\n",
            "{\n",
            "  \"url\": \"http://localhost:8080/invocations\",\n",
            "  \"method\": \"POST\",\n",
            "  \"headers\": {\n",
            "    \"Content-Type\": \"application/json\"\n",
            "  },\n",
            "  \"body\": {\n",
            "    \"prompt\": \"What is AI?\"\n",
            "  }\n",
            "}\n",
            "\n",
            "Example using curl:\n",
            "curl -X POST http://localhost:8080/invocations \\\n",
            "  -H 'Content-Type: application/json' \\\n",
            "  -d '{\"prompt\": \"What is AI?\"}'\n"
          ]
        }
      ],
      "source": [
        "# This code would work when the server is running in a separate process:\n",
        "import requests\n",
        "import json\n",
        "\n",
        "# Example HTTP request structure:\n",
        "request_example = {\n",
        "    \"url\": \"http://localhost:8080/invocations\",\n",
        "    \"method\": \"POST\",\n",
        "    \"headers\": {\"Content-Type\": \"application/json\"},\n",
        "    \"body\": {\"prompt\": \"What is AI?\"}\n",
        "}\n",
        "\n",
        "print(\"Example HTTP request to AgentCore:\")\n",
        "print(json.dumps(request_example, indent=2))\n",
        "print(\"\\nExample using curl:\")\n",
        "print(\"curl -X POST http://localhost:8080/invocations \\\\\")\n",
        "print(\"  -H 'Content-Type: application/json' \\\\\")\n",
        "print(\"  -d '{\\\"prompt\\\": \\\"What is AI?\\\"}'\")\n",
        "\n",
        "# If the server were running, you would make the actual request:\n",
        "# response = requests.post(\n",
        "#     \"http://localhost:8080/invocations\",\n",
        "#     json={\"prompt\": \"What is AI?\"}\n",
        "# )\n",
        "# result = response.json()\n",
        "# print(json.dumps(result, indent=2))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Understanding the Response Structure\n",
        "\n",
        "The response you receive from AgentCore follows the structure you defined in your entrypoint function. In our case, we return `{\"result\": result}`, so the HTTP response will contain this structure. This makes it easy to build systems that consume your agent's output, as the format is predictable and consistent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Expected response structure:\n",
            "{\n",
            "  \"result\": \"AI (Artificial Intelligence) refers to computer systems that can perform tasks typically requiring human intelligence...\"\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "# Expected response structure based on our entrypoint function:\n",
        "expected_response = {\n",
        "    \"result\": \"AI (Artificial Intelligence) refers to computer systems that can perform tasks typically requiring human intelligence...\"\n",
        "}\n",
        "\n",
        "print(\"Expected response structure:\")\n",
        "print(json.dumps(expected_response, indent=2))"
      ]
    },
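    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "On the client side, callers can rely on that shape. A small sketch of defensive parsing (the sample reply string below is illustrative, not real model output):\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "# A sample reply shaped like our entrypoint's return value\n",
        "raw = '{\"result\": \"Machine learning is a subset of AI...\"}'\n",
        "data = json.loads(raw)\n",
        "\n",
        "# The entrypoint always returns a \"result\" key, so fail loudly if it is absent\n",
        "answer = data.get(\"result\")\n",
        "if answer is None:\n",
        "    raise ValueError(f\"unexpected response shape: {data}\")\n",
        "print(answer)\n",
        "```"
      ]
    },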
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Testing with Different Inputs\n",
        "\n",
        "To better understand how the system works, here is another example request with a different question. Each request is handled independently through the same entrypoint function."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Example request for machine learning explanation:\n",
            "curl -X POST http://localhost:8080/invocations \\\n",
            "  -H 'Content-Type: application/json' \\\n",
            "  -d '{\"prompt\": \"Explain machine learning in simple terms\"}'\n",
            "\n",
            "Expected response:\n",
            "{\"result\": \"Machine learning is...\"}\n"
          ]
        }
      ],
      "source": [
        "# Another example request:\n",
        "print(\"Example request for machine learning explanation:\")\n",
        "print(\"curl -X POST http://localhost:8080/invocations \\\\\")\n",
        "print(\"  -H 'Content-Type: application/json' \\\\\")\n",
        "print(\"  -d '{\\\"prompt\\\": \\\"Explain machine learning in simple terms\\\"}'\")\n",
        "\n",
        "# Expected response:\n",
        "print(\"\\nExpected response:\")\n",
        "print('{\"result\": \"Machine learning is...\"}')"
      ]
    },
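    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Sending several questions just repeats the same request shape. A sketch that builds the payloads up front (the actual `requests.post` call stays commented out because the server runs in a separate process):\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "prompts = [\n",
        "    \"What is AI?\",\n",
        "    \"Explain machine learning in simple terms\",\n",
        "]\n",
        "\n",
        "# Every call to /invocations carries one prompt in the standard payload shape\n",
        "payloads = [{\"prompt\": p} for p in prompts]\n",
        "for body in payloads:\n",
        "    print(json.dumps(body))\n",
        "    # requests.post(\"http://localhost:8080/invocations\", json=body)\n",
        "```"
      ]
    },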
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Comparing Local and Managed Execution\n",
        "\n",
        "It is worth pausing to consider what has changed and what has stayed the same. Your agent's core logic—the `process_message` function that calls the OpenAI API and returns a response—remains completely unchanged. You did not modify the agent code to work with AgentCore.\n",
        "\n",
        "What changed is how you interact with the agent. Before AgentCore, you called a function directly:\n",
        "\n",
        "```python\n",
        "result = process_message(\"What is AI?\")\n",
        "print(result)\n",
        "```\n",
        "\n",
        "After AgentCore, you send an HTTP request:\n",
        "\n",
        "```python\n",
        "response = requests.post(\n",
        "    \"http://localhost:8080/invocations\",\n",
        "    json={\"prompt\": \"What is AI?\"}\n",
        ")\n",
        "result = response.json()\n",
        "```\n",
        "\n",
        "This change brings significant benefits. Your agent is now network-accessible, meaning it can be called from any application that speaks HTTP. You can deploy it to AWS infrastructure using the AgentCore starter toolkit. You can add memory capabilities, implement access control, and scale to handle production traffic. Most importantly, you have built a foundation that supports all these enhancements without modifying your core agent logic.\n",
        "\n",
        "This separation of concerns—agent logic versus infrastructure—is the key insight of AgentCore. Your agent focuses on being good at its task. AgentCore focuses on making that agent accessible, manageable, and production-ready."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Summary\n",
        "\n",
        "In this tutorial, you learned the fundamental pattern of working with AWS Bedrock AgentCore. You created a simple agent, wrapped it with the AgentCore runtime SDK using the `@app.entrypoint` decorator, and understood how to interact with it through a managed HTTP interface. The agent code itself remained clean and focused, while AgentCore handled the infrastructure concerns.\n",
        "\n",
        "This foundation opens up several paths for enhancement. You can add tools that allow your agent to search the web, query databases, or call external APIs. You can enable detailed logging to see exactly what your agent does at each step. You can implement identity and access control to restrict who can use certain capabilities. You can deploy your agent to AWS using the AgentCore starter toolkit where it handles real user requests.\n",
        "\n",
        "All of these enhancements build on the same pattern you learned here: your agent focuses on its core task, while AgentCore manages the infrastructure around it. This separation makes your code easier to test, maintain, and evolve as requirements change."
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.0"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 4
}
