{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "38ae026d",
   "metadata": {},
   "source": [
    "# M2 Agentic AI - Chart Generation\n",
    "\n",
    "We’re excited to have you here in the **Agentic AI** course! In this **ungraded lab**, and those that follow in the rest of the course, you’ll have a chance to try out code examples that implement the concepts and design patterns you’ll see in the lecture videos.\n",
    "\n",
    "Think of these labs as **sandbox**: a safe practice space where you can deepen your understanding of the course concepts, build confidence, and get ready for the graded exercises that come later. In each lab, try running the code cells to see some agentic workflows in action and better understand how they work. \n",
    "\n",
    "In a few places, you’ll be encouraged to try making modifications to the code - such as changing the prompts, testing with different LLMs, or adding additional queries to the workflow. Please try experimenting to see how your changes impact the behavior of the workflow.\n",
    "\n",
    "Most importantly, ungraded labs are an opportunity to learn at your own pace while getting hands-on experience with the core ideas behind **Agentic AI**. And remember—you’re not learning alone! If you have any questions, feel free to ask in the\n",
    "<a href=\"https://community.deeplearning.ai/c/course-q-a/agentic-ai/567\" target=\"_blank\">community</a>\n",
    "\n",
    "\n",
    "## 1. Introduction\n",
    "### 1.1. Lab overview\n",
    "\n",
    "In this ungraded lab, you will implement the **reflection pattern** introduced in the lecture video within an agentic workflow that generates data visualizations. A multi-modal LLM will review the first draft chart, identify potential improvements—such as chart type, labels, or color choices—and then rewrite the chart generation code to produce a more effective visualization.\n",
    "\n",
    "In the video, Andrew presented the following workflow for analyzing coffee sales. You will implement this in code here. The steps that the workflow will carry out are:\n",
    "\n",
    "1. **Generate an initial version (V1):**\n",
    "Use a Large Language Model (LLM) to create the first version of the plotting code.\n",
    "\n",
    "2. **Execute code and create chart:** \n",
    "Run the generated code and display the resulting chart. ** (check everywhere)\n",
    "\n",
    "3. **Reflect on the output:**\n",
    "Evaluate both the code and the chart using an LLM to detect areas for improvement (e.g., clarity, accuracy, design).\n",
    "\n",
    "4. **Generate and execute improved version (V2):**\n",
    "Produce a refined version of the plotting code based on reflection insights and render the enhanced chart.\n",
    "\n",
    "<img src='M2-UGL-2.png'>\n",
    "\n",
    "### 🎯 1.2. Learning outcome\n",
    "\n",
    "By the end of this lab, you will have implemented the reflection pattern in code and used it to improve a data visualization."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffbdf292",
   "metadata": {},
   "source": [
    "## 2. Setup: Initialize environment and client\n",
    "\n",
    "In this step, you import the key libraries that will support the workflow:  \n",
    "\n",
    "- **`re`**: Python’s regular expression module, which you’ll use to extract snippets of code or structured text from the LLM’s output.  \n",
    "- **`json`**: Provides functions to read and write JSON, useful for handling structured responses returned by the LLM.  \n",
    "- **`utils`**: A custom helper module provided for this lab. It includes utility functions to work with the dataset, generate charts, and display results in a clean, readable format.  \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "c19a3b0c",
   "metadata": {
    "height": 80
   },
   "outputs": [
    {
     "ename": "ModuleNotFoundError",
     "evalue": "No module named 'utils'",
     "output_type": "error",
     "traceback": [
      "\u001b[31m---------------------------------------------------------------------------\u001b[39m",
      "\u001b[31mModuleNotFoundError\u001b[39m                       Traceback (most recent call last)",
      "\u001b[36mCell\u001b[39m\u001b[36m \u001b[39m\u001b[32mIn[1]\u001b[39m\u001b[32m, line 6\u001b[39m\n\u001b[32m      3\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mjson\u001b[39;00m\n\u001b[32m      5\u001b[39m \u001b[38;5;66;03m# Local helper module\u001b[39;00m\n\u001b[32m----> \u001b[39m\u001b[32m6\u001b[39m \u001b[38;5;28;01mimport\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[34;01mutils\u001b[39;00m\n",
      "\u001b[31mModuleNotFoundError\u001b[39m: No module named 'utils'"
     ]
    }
   ],
   "source": [
    "# Standard library imports\n",
    "import re\n",
    "import json\n",
    "\n",
    "# Local helper module\n",
    "import utils"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57776368",
   "metadata": {},
   "source": [
    "### 2.1. Loading the dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1203e6fa8c728303",
   "metadata": {},
   "source": [
    "Let’s take a look at the coffee sales data to see what information is contained in the file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "96a2bbd81cbb24d9",
   "metadata": {
    "height": 97
   },
   "outputs": [],
   "source": [
    "# Use this utils.py function to load the data into a dataframe\n",
    "df = utils.load_and_prepare_data('coffee_sales.csv')\n",
    "\n",
    "# Grab a random sample to display\n",
    "utils.print_html(df.sample(n=5), title=\"Random Sample of Coffee Sales Data\")"
   ]
  },
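  {
   "cell_type": "markdown",
   "id": "a1b2c3d4e5f60001",
   "metadata": {},
   "source": [
    "The helper module isn't shown in this notebook, so as a point of reference, here is a minimal sketch of what a loader like `utils.load_and_prepare_data` might do (hypothetical; the real helper ships with the lab and may differ): read the CSV with pandas, then derive the `year`, `month`, and `quarter` columns that the prompts later in this lab rely on.\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def load_and_prepare_data_sketch(path):\n",
    "    # Hypothetical stand-in for utils.load_and_prepare_data\n",
    "    df = pd.read_csv(path)\n",
    "    # Dates in this dataset use the M/D/YY format\n",
    "    dates = pd.to_datetime(df['date'], format='%m/%d/%y')\n",
    "    df['year'] = dates.dt.year\n",
    "    df['month'] = dates.dt.month\n",
    "    df['quarter'] = dates.dt.quarter\n",
    "    return df\n",
    "```\n",
    "\n",
    "Deriving these columns up front keeps the LLM-generated plotting code simple: it can group by `year` and filter on `quarter` without parsing dates itself."
   ]
  },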
  {
   "cell_type": "markdown",
   "id": "838a24112871c226",
   "metadata": {},
   "source": [
    "You’ll build an agentic workflow that generates data visualizations from this dataset, helping you answer questions about coffee sales from the vending machine."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5163dff8",
   "metadata": {},
   "source": [
    "## 3. Building the pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47a92d89",
   "metadata": {},
   "source": [
    "### 3.1 Step 1 — Generate Code to Create a Chart (V1)\n",
    "\n",
    "In this step, you’ll prompt an LLM to write Python code that generates a chart in response to a user query about the coffee dataset. The dataset includes fields such as `date`, `coffee_type`, `quantity`, and `revenue`, and you will pass this schema into the LLM so it knows what data is available.  \n",
    "\n",
    "The question you’ll ask the model is the same one used in the lecture:  \n",
    "**“Create a plot comparing Q1 coffee sales in 2024 and 2025 using the data in coffee_sales.csv.”**\n",
    "\n",
    "The LLM’s output will be Python code using the **matplotlib** library. Instead of displaying the chart directly, the code will be written between `<execute_python>` tags so it can be extracted and run in later steps. You’ll learn more about these tags in Module 3.  \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17222765",
   "metadata": {
    "height": 709
   },
   "outputs": [],
   "source": [
    "def generate_chart_code(instruction: str, model: str, out_path_v1: str) -> str:\n",
    "    \"\"\"Generate Python code to make a plot with matplotlib using tag-based wrapping.\"\"\"\n",
    "\n",
    "    prompt = f\"\"\"\n",
    "    You are a data visualization expert.\n",
    "\n",
    "    Return your answer *strictly* in this format:\n",
    "\n",
    "    <execute_python>\n",
    "    # valid python code here\n",
    "    </execute_python>\n",
    "\n",
    "    Do not add explanations, only the tags and the code.\n",
    "\n",
    "    The code should create a visualization from a DataFrame 'df' with these columns:\n",
    "    - date (M/D/YY)\n",
    "    - time (HH:MM)\n",
    "    - cash_type (card or cash)\n",
    "    - card (string)\n",
    "    - price (number)\n",
    "    - coffee_name (string)\n",
    "    - quarter (1-4)\n",
    "    - month (1-12)\n",
    "    - year (YYYY)\n",
    "\n",
    "    User instruction: {instruction}\n",
    "\n",
    "    Requirements for the code:\n",
    "    1. Assume the DataFrame is already loaded as 'df'.\n",
    "    2. Use matplotlib for plotting.\n",
    "    3. Add clear title, axis labels, and legend if needed.\n",
    "    4. Save the figure as '{out_path_v1}' with dpi=300.\n",
    "    5. Do not call plt.show().\n",
    "    6. Close all plots with plt.close().\n",
    "    7. Add all necessary import python statements\n",
    "\n",
    "    Return ONLY the code wrapped in <execute_python> tags.\n",
    "    \"\"\"\n",
    "\n",
    "    response = utils.get_response(model, prompt)\n",
    "    return response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a28faa498aaf2706",
   "metadata": {},
   "source": [
    "Now, try out the function and analyze the response!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "28e5109f",
   "metadata": {
    "height": 148
   },
   "outputs": [],
   "source": [
    "# Generate initial code\n",
    "code_v1 = generate_chart_code(\n",
    "    instruction=\"Create a plot comparing Q1 coffee sales in 2024 and 2025 using the data in coffee_sales.csv.\", \n",
    "    model=\"gpt-4o-mini\", \n",
    "    out_path_v1=\"chart_v1.png\"\n",
    ")\n",
    "\n",
    "utils.print_html(code_v1, title=\"LLM output with first draft code\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d5a7187",
   "metadata": {},
   "source": [
    "Great! You've generated some python code to create a chart! \n",
    "\n",
    "Notice that the code is wrapped between `<execute_python>` tags. These tags make it easy to automatically extract and run the code in the next step of the workflow.  \n",
    "\n",
    "You don’t need to worry about the details yet — you’ll learn more about how these tags work in **Module 3**.  \n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c15c3c875d025c4f",
   "metadata": {},
   "source": [
    "### 3.2. Step 2 — Execute Code and Create Chart\n",
    "\n",
    "In this step, you’ll use a regular expression to extract the Python code that the LLM generated in the previous step (the part written between `<execute_python>` tags). Once extracted, you’ll run this code to produce the **first draft chart**.  \n",
    "\n",
    "Here's how it works:\n",
    "\n",
    "1. **Extract the code:**  \n",
    "   A regex pattern is used to grab the code that’s wrapped inside the `<execute_python>` tags.\n",
    "\n",
    "2. **Execute the code:**\n",
    "   The extracted code is run in a predefined global context where the DataFrame `df` is already available. This means your code can directly use df without needing to reload the dataset.\n",
    "\n",
    "3. **Generate the chart::**\n",
    "   If the code executes successfully, it will create a chart and save it as `chart_v1.png`.\n",
    "\n",
    "4. **View the chart in the notebook:**\n",
    "   The saved chart is then displayed inline using `utils.print_html`, making it easy for you to review the results.\n",
    "\n",
    "By completing this step, you’ll have your first draft visualization (V1) ready — a big milestone in the reflection workflow!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2e2c15f",
   "metadata": {
    "height": 233
   },
   "outputs": [],
   "source": [
    "# Get the code within the <execute_python> tags\n",
    "match = re.search(r\"<execute_python>([\\s\\S]*?)</execute_python>\", code_v1)\n",
    "if match:\n",
    "    initial_code = match.group(1).strip()\n",
    "    utils.print_html(initial_code, title=\"Extracted Code to Execute\")\n",
    "    exec_globals = {\"df\": df}\n",
    "    exec(initial_code, exec_globals)\n",
    "\n",
    "# If code run successfully, the file chart_v1.png should have been generated\n",
    "utils.print_html(\n",
    "    content=\"chart_v1.png\",\n",
    "    title=\"Generated Chart (V1)\",\n",
    "    is_image=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab1c8daa2c1615aa",
   "metadata": {},
   "source": [
    "### 3.3. Step 3 — Reflect on the output\n",
    "\n",
    "The goal here is to simulate how a human would review a first draft of a chart—looking for strengths, weaknesses, and areas for improvement.\n",
    "\n",
    "Here’s what happens:\n",
    "\n",
    "**1. Provide the chart to the LLM:**\n",
    "The generated chart (chart_v1.png) is shared with the LLM so it can “see” the visualization.\n",
    "\n",
    "**2. Analyze the chart visually:**\n",
    "The LLM reviews elements like clarity, labeling, accuracy, and overall readability.\n",
    "\n",
    "**3. Generate feedback:**\n",
    "The LLM suggests improvements—for example, fixing axis labels, adjusting the chart type, improving color choices, or highlighting missing legends.\n",
    "\n",
    "By doing this, you create an intelligent feedback loop where the chart is not just produced once, but actively critiqued—setting the stage for a stronger second version (V2)."
   ]
  },
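  {
   "cell_type": "markdown",
   "id": "a1b2c3d4e5f60002",
   "metadata": {},
   "source": [
    "The function below relies on `utils.encode_image_b64` to prepare the chart image for the multimodal call. That helper isn't shown here, but multimodal APIs generally accept an image as a base64-encoded string plus a media type, so a minimal sketch (hypothetical; the real helper may differ) could look like this:\n",
    "\n",
    "```python\n",
    "import base64\n",
    "import mimetypes\n",
    "\n",
    "def encode_image_b64_sketch(path):\n",
    "    # Hypothetical stand-in for utils.encode_image_b64:\n",
    "    # guess the media type from the file extension, then base64-encode the bytes\n",
    "    media_type = mimetypes.guess_type(path)[0] or 'image/png'\n",
    "    with open(path, 'rb') as f:\n",
    "        b64 = base64.b64encode(f.read()).decode('ascii')\n",
    "    return media_type, b64\n",
    "```\n",
    "\n",
    "The returned pair is exactly the shape the reflection function below passes on to the OpenAI or Anthropic image call."
   ]
  },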
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "84719997",
   "metadata": {
    "height": 1168
   },
   "outputs": [],
   "source": [
    "def reflect_on_image_and_regenerate(\n",
    "    chart_path: str,\n",
    "    instruction: str,\n",
    "    model_name: str,\n",
    "    out_path_v2: str,\n",
    "    code_v1: str,  \n",
    ") -> tuple[str, str]:\n",
    "    \"\"\"\n",
    "    Critique the chart IMAGE and the original code against the instruction, \n",
    "    then return refined matplotlib code.\n",
    "    Returns (feedback, refined_code_with_tags).\n",
    "    Supports OpenAI and Anthropic (Claude).\n",
    "    \"\"\"\n",
    "    media_type, b64 = utils.encode_image_b64(chart_path)\n",
    "    \n",
    "\n",
    "    prompt = f\"\"\"\n",
    "    You are a data visualization expert.\n",
    "    Your task: critique the attached chart and the original code against the given instruction,\n",
    "    then return improved matplotlib code.\n",
    "\n",
    "    Original code (for context):\n",
    "    {code_v1}\n",
    "\n",
    "    OUTPUT FORMAT (STRICT):\n",
    "    1) First line: a valid JSON object with ONLY the \"feedback\" field.\n",
    "    Example: {{\"feedback\": \"The legend is unclear and the axis labels overlap.\"}}\n",
    "\n",
    "    2) After a newline, output ONLY the refined Python code wrapped in:\n",
    "    <execute_python>\n",
    "    ...\n",
    "    </execute_python>\n",
    "\n",
    "    3) Import all necessary libraries in the code. Don't assume any imports from the original code.\n",
    "\n",
    "    HARD CONSTRAINTS:\n",
    "    - Do NOT include Markdown, backticks, or any extra prose outside the two parts above.\n",
    "    - Use pandas/matplotlib only (no seaborn).\n",
    "    - Assume df already exists; do not read from files.\n",
    "    - Save to '{out_path_v2}' with dpi=300.\n",
    "    - Always call plt.close() at the end (no plt.show()).\n",
    "    - Include all necessary import statements.\n",
    "\n",
    "    Schema (columns available in df):\n",
    "    - date (M/D/YY)\n",
    "    - time (HH:MM)\n",
    "    - cash_type (card or cash)\n",
    "    - card (string)\n",
    "    - price (number)\n",
    "    - coffee_name (string)\n",
    "    - quarter (1-4)\n",
    "    - month (1-12)\n",
    "    - year (YYYY)\n",
    "\n",
    "    Instruction:\n",
    "    {instruction}\n",
    "    \"\"\"\n",
    "\n",
    "\n",
    "    # In case the name is \"Claude\" or \"Anthropic\", use the safe helper\n",
    "    lower = model_name.lower()\n",
    "    if \"claude\" in lower or \"anthropic\" in lower:\n",
    "        # ✅ Use the safe helper that joins all text blocks and adds a system prompt\n",
    "        content = utils.image_anthropic_call(model_name, prompt, media_type, b64)\n",
    "    else:\n",
    "        content = utils.image_openai_call(model_name, prompt, media_type, b64)\n",
    "\n",
    "    # --- Parse ONLY the first JSON line (feedback) ---\n",
    "    lines = content.strip().splitlines()\n",
    "    json_line = lines[0].strip() if lines else \"\"\n",
    "\n",
    "    try:\n",
    "        obj = json.loads(json_line)\n",
    "    except Exception as e:\n",
    "        # Fallback: try to capture the first {...} in all the content\n",
    "        m_json = re.search(r\"\\{.*?\\}\", content, flags=re.DOTALL)\n",
    "        if m_json:\n",
    "            try:\n",
    "                obj = json.loads(m_json.group(0))\n",
    "            except Exception as e2:\n",
    "                obj = {\"feedback\": f\"Failed to parse JSON: {e2}\", \"refined_code\": \"\"}\n",
    "        else:\n",
    "            obj = {\"feedback\": f\"Failed to find JSON: {e}\", \"refined_code\": \"\"}\n",
    "\n",
    "    # --- Extract refined code from <execute_python>...</execute_python> ---\n",
    "    m_code = re.search(r\"<execute_python>([\\s\\S]*?)</execute_python>\", content)\n",
    "    refined_code_body = m_code.group(1).strip() if m_code else \"\"\n",
    "    refined_code = utils.ensure_execute_python_tags(refined_code_body)\n",
    "\n",
    "    feedback = str(obj.get(\"feedback\", \"\")).strip()\n",
    "    return feedback, refined_code\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88bf0b94",
   "metadata": {},
   "source": [
    "Note that, the model is instructed to return its response in **JSON format**.  \n",
    "\n",
    "- JSON is a lightweight, structured format (key–value pairs) that makes it easy to parse the LLM’s output programmatically.  \n",
    "- Here, we require two fields:  \n",
    "  - **`feedback`**: a short critique of the current chart.  \n",
    "  - **`refined_code`**: an improved Python code snippet wrapped in `<execute_python>` tags.  \n",
    "\n",
    "We also include a **“constraints” section** in the prompt. These rules (e.g., use matplotlib only, save the file to a specific path, call `plt.close()` at the end) help the model generate consistent, runnable code that fits the workflow. Without these constraints, the output might vary too much or include unwanted formatting.  \n"
   ]
  },
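  {
   "cell_type": "markdown",
   "id": "a1b2c3d4e5f60003",
   "metadata": {},
   "source": [
    "To make this output format concrete, here is a small standalone demo (using a hand-written sample reply rather than a real LLM response) of how the feedback line and the code block are parsed separately:\n",
    "\n",
    "```python\n",
    "import json\n",
    "import re\n",
    "\n",
    "# A hand-written example of a well-formed two-part reply\n",
    "sample_reply = (\n",
    "    '{\"feedback\": \"The bars lack value labels.\"}\\n'\n",
    "    '<execute_python>\\n'\n",
    "    'print(1 + 1)\\n'\n",
    "    '</execute_python>'\n",
    ")\n",
    "\n",
    "# Part 1: the first line is machine-readable JSON\n",
    "feedback = json.loads(sample_reply.splitlines()[0])['feedback']\n",
    "\n",
    "# Part 2: the code is pulled out with the same regex used in Step 2\n",
    "code = re.search(r'<execute_python>([\\s\\S]*?)</execute_python>', sample_reply).group(1).strip()\n",
    "```\n",
    "\n",
    "Parsing the parts independently means a malformed JSON line doesn't prevent the code from being extracted, which is why the function above also falls back to a regex search for the first `{...}` object in the reply."
   ]
  },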
  {
   "cell_type": "markdown",
   "id": "57fb31ab",
   "metadata": {},
   "source": [
    "### 3.4 Step 4 — Generate and Execute Improved Version (V2)\n",
    "\n",
    "In this final step, it’s time to generate and run the improved version of the chart (V2).  \n",
    "After running the cell, you’ll see **both the reflection written by the LLM** (explaining what needed improvement) **and the new code it generated**. The new code will then be executed to produce the updated chart.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "83240f00",
   "metadata": {
    "height": 182
   },
   "outputs": [],
   "source": [
    "# Generate feedback alongside reflected code\n",
    "feedback, code_v2 = reflect_on_image_and_regenerate(\n",
    "    chart_path=\"chart_v1.png\",            \n",
    "    instruction=\"Create a plot comparing Q1 coffee sales in 2024 and 2025 using the data in coffee_sales.csv.\", \n",
    "    model_name=\"o4-mini\",\n",
    "    out_path_v2=\"chart_v2.png\",\n",
    "    code_v1=code_v1,   # pass in the original code for context        \n",
    ")\n",
    "\n",
    "utils.print_html(feedback, title=\"Feedback on V1 Chart\")\n",
    "utils.print_html(code_v2, title=\"Regenerated Code Output (V2)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a172a45",
   "metadata": {},
   "source": [
    "Now you’ll execute the refined code returned by the reflection step.  The code inside the `<execute_python>` tags is extracted, run against the dataset, and used to generate the updated chart.  \n",
    "\n",
    "If the execution is successful, you’ll see the new image (`chart_v2.png`) displayed below as the **Regenerated Chart (V2)**.  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c1dad5c4",
   "metadata": {
    "height": 233
   },
   "outputs": [],
   "source": [
    "# Get the code within the <execute_python> tags\n",
    "match = re.search(r\"<execute_python>([\\s\\S]*?)</execute_python>\", code_v2)\n",
    "if match:\n",
    "    reflected_code = match.group(1).strip()\n",
    "    exec_globals = {\"df\": df}\n",
    "    exec(reflected_code, exec_globals)\n",
    "\n",
    "# If code run successfully, the file chart_v2.png should have been generated\n",
    "utils.print_html(\n",
    "    content=\"chart_v2.png\",\n",
    "    title=\"Regenerated Chart (V2)\",\n",
    "    is_image=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7061a56d2b26878",
   "metadata": {},
   "source": [
    "### 4. Put it all together — creating the end-to-end workflow\n",
    "\n",
    "Now it’s time to wrap everything into a single automated workflow the agent can run from start to finish.\n",
    "\n",
    "The `run_workflow` function links together the components you implemented earlier:\n",
    "\n",
    "1) **Load and prepare data** — via `utils.load_and_prepare_data(...)`.  \n",
    "2) **Generate V1 code** — with `generate_chart_code(...)`, which returns the first-draft matplotlib code (wrapped in `<execute_python>` tags).  \n",
    "3) **Execute V1 immediately** — the workflow extracts the code between `<execute_python>` tags and runs it to produce the first chart image.  \n",
    "4) **Reflect and refine** — `reflect_on_image_and_regenerate(...)` critiques the V1 image (and the original code) against the instruction, returns concise **feedback** plus **revised code (V2)**.  \n",
    "5) **Execute V2 immediately** — the refined code is extracted and executed to generate the improved chart.\n",
    "\n",
    "### What this workflow accepts\n",
    "- **`dataset_path`**: location of the input CSV.  \n",
    "- **`user_instructions`**: the chart request (e.g., “Create a plot comparing Q1 coffee sales in 2024 and 2025 using the data in coffee_sales.csv.”).  \n",
    "- **`generation_model`**: model used for the initial code generation.  \n",
    "- **`reflection_model`**: model used for the image-based reflection and code refinement.  \n",
    "- **`image_basename`**: base filename for saving chart images (e.g., `chart_v1.png`, `chart_v2.png`).  \n",
    "\n",
    "> Note: The chart execution steps are intentionally **hard-coded** to run right after code generation/refinement. This mirrors the workflow in the lecture and ensures you see each draft’s output before moving on.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f979070",
   "metadata": {
    "height": 998
   },
   "outputs": [],
   "source": [
    "def run_workflow(\n",
    "    dataset_path: str,\n",
    "    user_instructions: str,\n",
    "    generation_model: str,\n",
    "    reflection_model: str,   \n",
    "    image_basename: str = \"chart\",\n",
    "):\n",
    "    \"\"\"\n",
    "    End-to-end pipeline:\n",
    "      1) load dataset\n",
    "      2) generate V1 code\n",
    "      3) execute V1 → produce chart_v1.png\n",
    "      4) reflect on V1 (image + original code) → feedback + refined code\n",
    "      5) execute V2 → produce chart_v2.png\n",
    "\n",
    "    Returns a dict with all artifacts (codes, feedback, image paths).\n",
    "    \"\"\"\n",
    "    # 0) Load dataset; utils handles parsing and feature derivations (e.g., year/quarter)\n",
    "    df = utils.load_and_prepare_data(dataset_path)\n",
    "    utils.print_html(df.sample(n=5), title=\"Random Sample of Dataset\")\n",
    "\n",
    "    # Paths to store charts\n",
    "    out_v1 = f\"{image_basename}_v1.png\"\n",
    "    out_v2 = f\"{image_basename}_v2.png\"\n",
    "\n",
    "    # 1) Generate code (V1)\n",
    "    utils.print_html(\"Step 1: Generating chart code (V1)… 📈\")\n",
    "    code_v1 = generate_chart_code(\n",
    "        instruction=user_instructions,\n",
    "        model=generation_model,\n",
    "        out_path_v1=out_v1,\n",
    "    )\n",
    "    utils.print_html(code_v1, title=\"LLM output with first draft code (V1)\")\n",
    "\n",
    "    # 2) Execute V1 (hard-coded: extract <execute_python> block and run immediately)\n",
    "    utils.print_html(\"Step 2: Executing chart code (V1)… 💻\")\n",
    "    match = re.search(r\"<execute_python>([\\s\\S]*?)</execute_python>\", code_v1)\n",
    "    if match:\n",
    "        initial_code = match.group(1).strip()\n",
    "        exec_globals = {\"df\": df}\n",
    "        exec(initial_code, exec_globals)\n",
    "    utils.print_html(out_v1, is_image=True, title=\"Generated Chart (V1)\")\n",
    "\n",
    "    # 3) Reflect on V1 (image + original code) to get feedback and refined code (V2)\n",
    "    utils.print_html(\"Step 3: Reflecting on V1 (image + code) and generating improvements… 🔁\")\n",
    "    feedback, code_v2 = reflect_on_image_and_regenerate(\n",
    "        chart_path=out_v1,\n",
    "        instruction=user_instructions,\n",
    "        model_name=reflection_model,\n",
    "        out_path_v2=out_v2,\n",
    "        code_v1=code_v1,  # pass original code for context\n",
    "    )\n",
    "    utils.print_html(feedback, title=\"Reflection feedback on V1\")\n",
    "    utils.print_html(code_v2, title=\"LLM output with revised code (V2)\")\n",
    "\n",
    "    # 4) Execute V2 (hard-coded: extract <execute_python> block and run immediately)\n",
    "    utils.print_html(\"Step 4: Executing refined chart code (V2)… 🖼️\")\n",
    "    match = re.search(r\"<execute_python>([\\s\\S]*?)</execute_python>\", code_v2)\n",
    "    if match:\n",
    "        reflected_code = match.group(1).strip()\n",
    "        exec_globals = {\"df\": df}\n",
    "        exec(reflected_code, exec_globals)\n",
    "    utils.print_html(out_v2, is_image=True, title=\"Regenerated Chart (V2)\")\n",
    "\n",
    "    return {\n",
    "        \"code_v1\": code_v1,\n",
    "        \"chart_v1\": out_v1,\n",
    "        \"feedback\": feedback,\n",
    "        \"code_v2\": code_v2,\n",
    "        \"chart_v2\": out_v2,\n",
    "    }\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca638afa",
   "metadata": {},
   "source": [
    "### 4.2. Try the workflow\n",
    "\n",
    "Now it’s your turn to put the full workflow into action with the updated example from the lecture.\n",
    "\n",
    "- **Instruction to use:**  \n",
    "  “Create a plot comparing Q1 coffee sales in 2024 and 2025 using the data in coffee_sales.csv.”\n",
    "\n",
    "When you run the workflow with this instruction, it will:\n",
    "1. Generate first-draft code to create the chart.  \n",
    "2. Execute that code immediately to produce the first version of the chart (V1).  \n",
    "3. Reflect on the chart and the original code, producing feedback and revised code (V2).  \n",
    "4. Execute the refined code to generate the improved chart (V2).  \n",
    "\n",
    "### Customize and experiment\n",
    "\n",
    "After trying the example above, feel free to update the `user_instructions` parameter with your own chart prompts.  \n",
    "Remember to also adjust the `image_basename` so each run saves its results under a new filename — this keeps your charts organized and avoids overwriting previous outputs.\n",
    "\n",
    "### Choosing models\n",
    "\n",
    "You can mix and match different models for generation and reflection. For example:\n",
    "- Use a fast model for initial code generation (`gpt-4.1-mini` or `gpt-3.5-turbo`).  \n",
    "- Use a stronger reasoning model for reflection (`gpt-4.1` or `claude-3-7-sonnet`).  \n",
    "\n",
    "This flexibility lets you explore trade-offs between speed and quality.\n",
    "\n",
    "👉 **Call to action:** Run the workflow now with the example instruction from the lecture. Then experiment with your own prompts to see how the agent adapts!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5963464a",
   "metadata": {
    "height": 250
   },
   "outputs": [],
   "source": [
    "# Here, insert your updates\n",
    "user_instructions=\"Create a plot comparing Q1 coffee sales in 2024 and 2025 using the data in coffee_sales.csv.\" # write your instruction here\n",
    "generation_model=\"gpt-4o-mini\"\n",
    "reflection_model=\"o4-mini\"\n",
    "image_basename=\"drink_sales\"\n",
    "\n",
    "# Run the complete agentic workflow\n",
    "_ = run_workflow(\n",
    "    dataset_path=\"coffee_sales.csv\",\n",
    "    user_instructions=user_instructions,\n",
    "    generation_model=generation_model,\n",
    "    reflection_model=reflection_model,\n",
    "    image_basename=image_basename\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c1c28f3",
   "metadata": {},
   "source": [
    "## 5. Final Takeaways\n",
    "\n",
    "In this lab, **you** practiced using reflection to improve chart outputs.\n",
    "You learned to:\n",
    "\n",
    "* Generate an initial chart (V1).\n",
    "* Critique and refine it into a better version (V2).\n",
    "* Automate the full workflow with different models.\n",
    "\n",
    "The key idea: reflection helps **you** create clearer, more accurate, and more effective visualizations.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1135a30f",
   "metadata": {},
   "source": [
    "<div style=\"border:1px solid #22c55e; border-left:6px solid #16a34a; background:#dcfce7; border-radius:6px; padding:14px 16px; color:#064e3b; font-family:system-ui,-apple-system,Segoe UI,Roboto,Ubuntu,Cantarell,Noto Sans,sans-serif;\">\n",
    "\n",
    "🎉 <strong>Congratulations!</strong>  \n",
    "\n",
    "You’ve completed the lab on building an **agentic chart generation workflow**.  \n",
    "Along the way, **you** practiced generating charts, reflecting on their quality, and refining them into clearer and more effective visualizations.  \n",
    "\n",
    "With these skills, **you** are ready to design agentic pipelines that create data visualizations automatically while keeping them accurate, explainable, and polished. 🌟  \n",
    "\n",
    "</div>\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.14.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
