{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "OqKumYqaNrPC",
   "metadata": {
    "id": "OqKumYqaNrPC"
   },
   "source": [
    "# LangGraph Code Assistant\n",
    "\n",
    "- Author: [Junseong Kim](https://www.linkedin.com/in/%EC%A4%80%EC%84%B1-%EA%B9%80-591b351b2/)\n",
    "- Design: [Junseong Kim](https://www.linkedin.com/in/%EC%A4%80%EC%84%B1-%EA%B9%80-591b351b2/)\n",
    "- Peer Review:\n",
    "- Proofread: [fastjw](https://github.com/fastjw)\n",
    "- This is a part of [LangChain Open Tutorial](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial)\n",
    "\n",
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/17-LangGraph/03-Use-Cases/11-LangGraph-Code-Assistant.ipynb)[![Open in GitHub](https://img.shields.io/badge/Open%20in%20GitHub-181717?style=flat-square&logo=github&logoColor=white)](https://github.com/LangChain-OpenTutorial/LangChain-OpenTutorial/blob/main/17-LangGraph/03-Use-Cases/11-LangGraph-Code-Assistant.ipynb)\n",
    "\n",
    "## Overview\n",
    "\n",
    "In this tutorial, we will build a simple code assistant workflow using **langgraph** and LangChain. We will demonstrate how to:\n",
    "\n",
    "- Load and parse documentation from a URL using ```RecursiveUrlLoader```.\n",
    "- Create a Pydantic model (```code```) to structure code-generation responses.\n",
    "- Use an LLM (Anthropic’s Claude) with a specialized prompt to generate code solutions.\n",
    "- Integrate a custom parsing function to handle raw and structured outputs.\n",
    "- Construct a state machine with ```langgraph``` to:\n",
    "  - Generate code (```generate``` node)\n",
    "  - Check imports and execution (```check_code``` node)\n",
    "  - Reflect and retry if needed (```reflect``` node)\n",
    "- Visualize the workflow graph and finally display the generated code in a clean Markdown format.\n",
    "\n",
    "By the end of this tutorial, you’ll be able to set up a multi-step code assistant pipeline that iteratively generates, validates, and reflects on code.\n",
    "\n",
    "![](./assets/11-langgraph-code-assistant.png)\n",
    "\n",
    "\n",
    "### Table of Contents\n",
    "\n",
    "- [Overview](#overview)\n",
    "- [Environment Setup](#environment-setup)\n",
    "- [Building the Code Assistant Workflow](#building-the-code-assistant-workflow)\n",
    "- [Loading and Preparing Documentation](#loading-and-preparing-documentation)\n",
    "- [Defining a Pydantic Model for Code](#defining-a-pydantic-model-for-code)\n",
    "- [Setting Up the Prompt and LLM Chain](#setting-up-the-prompt-and-llm-chain)\n",
    "- [Parsing the LLM Output](#parsing-the-llm-output)\n",
    "- [Constructing the State Machine](#constructing-the-state-machine)\n",
    "- [Final Invocation and Display](#final-invocation-and-display)\n",
    "\n",
    "### References\n",
    "\n",
    "- [LangGraph](https://www.langchain.com/langgraph)\n",
    "- [LangChain Anthropic Integration](https://python.langchain.com/docs/integrations/llms/anthropic)\n",
    "----"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "rhUZI_8lN_7x",
   "metadata": {
    "id": "rhUZI_8lN_7x"
   },
   "source": [
    "## Environment Setup\n",
    "\n",
    "Setting up your environment is the first step. See the [Environment Setup](https://wikidocs.net/257836) guide for more details.\n",
    "\n",
    "\n",
    "**[Note]**\n",
    "\n",
    "```langchain-opentutorial``` is a package that provides easy-to-use environment setup guidance, along with useful functions and utilities for tutorials.\n",
    "Check out the [```langchain-opentutorial```](https://github.com/LangChain-OpenTutorial/langchain-opentutorial-pypi) repository for more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "Ir6WsaC1OB8y",
   "metadata": {
    "id": "Ir6WsaC1OB8y"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "[notice] A new release of pip is available: 24.3.1 -> 25.0\n",
      "[notice] To update, run: python.exe -m pip install --upgrade pip\n"
     ]
    }
   ],
   "source": [
    "%%capture --no-stderr\n",
    "%pip install langchain-opentutorial"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "WhSXCH7kODHR",
   "metadata": {
    "id": "WhSXCH7kODHR"
   },
   "outputs": [],
   "source": [
    "# Install required packages\n",
    "from langchain_opentutorial import package\n",
    "\n",
    "package.install(\n",
    "    [\n",
    "        \"langsmith\",\n",
    "        \"langchain\",\n",
    "        \"langchain_core\",\n",
    "        \"langchain_anthropic\",\n",
    "        \"langchain_community\",\n",
    "        \"langgraph\",\n",
    "    ],\n",
    "    verbose=False,\n",
    "    upgrade=False,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "vzgjAj2mOKEJ",
   "metadata": {
    "id": "vzgjAj2mOKEJ"
   },
   "source": [
    "You can set API keys in a ```.env``` file or set them manually.\n",
    "\n",
    "[Note] If you’re not using the ```.env``` file, no worries! Just enter the keys directly in the cell below, and you’re good to go."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "RmZFhLKBOMrQ",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "RmZFhLKBOMrQ",
    "outputId": "32cb3dd1-d3ad-44a9-e97e-45d3d1c84473"
   },
   "outputs": [],
   "source": [
    "from dotenv import load_dotenv\n",
    "from langchain_opentutorial import set_env\n",
    "\n",
    "# Attempt to load environment variables from a .env file; if unsuccessful, set them manually.\n",
    "if not load_dotenv():\n",
    "    set_env(\n",
    "        {\n",
    "            \"ANTHROPIC_API_KEY\": \"\",\n",
    "            \"LANGCHAIN_API_KEY\": \"\",\n",
    "            \"LANGCHAIN_TRACING_V2\": \"true\",\n",
    "            \"LANGCHAIN_ENDPOINT\": \"https://api.smith.langchain.com\",\n",
    "            \"LANGCHAIN_PROJECT\": \"11-LangGraph-Code-Assistant\",\n",
    "        }\n",
    "    )"
   ]
  },
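  {
   "cell_type": "markdown",
   "id": "env-check-md",
   "metadata": {},
   "source": [
    "As a quick sanity check (an optional extra step, not part of the original pipeline), you can confirm the Anthropic key is visible to the process before moving on. This only prints whether the variable is set, never the key itself:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "env-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# True if the key is set; the value itself is never printed\n",
    "print(\"ANTHROPIC_API_KEY set:\", bool(os.environ.get(\"ANTHROPIC_API_KEY\")))"
   ]
  },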
  {
   "cell_type": "markdown",
   "id": "6c09f86d",
   "metadata": {},
   "source": [
    "## Building the Code Assistant Workflow\n",
    "\n",
    "Below, we will build a code assistant workflow using **langgraph**. This workflow can:\n",
    "\n",
    "1. **Load** documentation with a custom loader.\n",
    "2. **Use** a large language model (Anthropic’s Claude) with a structured output format.\n",
    "3. **Check** generated code for errors and prompt for a retry if needed.\n",
    "4. **Reflect** on errors and regenerate code.\n",
    "\n",
    "We will demonstrate each step with a combination of Markdown explanations and code cells.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "52201e9b",
   "metadata": {},
   "source": [
    "## Loading and Preparing Documentation\n",
    "\n",
    "We use the ```RecursiveUrlLoader``` (from ```langchain_community```) to fetch and parse documentation. We’ll store the documents in ```docs```, sort them, and then concatenate the page content into a single string for use in our prompt.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "21f06357",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup as Soup\n",
    "from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader\n",
    "\n",
    "# LangChain RAG tutorial docs, used as the LCEL reference context\n",
    "url = \"https://python.langchain.com/v0.2/docs/tutorials/rag/\"\n",
    "loader = RecursiveUrlLoader(\n",
    "    url=url, max_depth=20, extractor=lambda x: Soup(x, \"html.parser\").text\n",
    ")\n",
    "docs = loader.load()\n",
    "\n",
    "# Sort the list based on the URLs and get the text\n",
    "d_sorted = sorted(docs, key=lambda x: x.metadata[\"source\"])\n",
    "d_reversed = list(reversed(d_sorted))\n",
    "concatenated_content = \"\\n\\n\\n --- \\n\\n\\n\".join(\n",
    "    [doc.page_content for doc in d_reversed]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3fb4034b",
   "metadata": {},
   "source": [
    "## Defining a Pydantic Model for Code\n",
    "\n",
    "We define a Pydantic model named ```code``` to capture structured outputs for code generation. This model enforces a specific schema with fields for ```prefix```, ```imports```, and ```code```.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "a9ad87b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import BaseModel, Field\n",
    "\n",
    "\n",
    "# Data model\n",
    "class code(BaseModel):\n",
    "    \"\"\"Schema for code solutions to questions about LCEL.\"\"\"\n",
    "\n",
    "    prefix: str = Field(description=\"Description of the problem and approach\")\n",
    "    imports: str = Field(description=\"Code block import statements\")\n",
    "    code: str = Field(description=\"Code block not including import statements\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9d3b9b2",
   "metadata": {},
   "source": [
    "## Setting Up the Prompt and LLM Chain\n",
    "\n",
    "We construct a prompt that instructs the LLM (Anthropic’s Claude) to produce answers in our specified ```code``` format.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "65edba80",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_anthropic import ChatAnthropic\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "# Prompt to enforce tool use\n",
    "code_gen_prompt_claude = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            \"\"\"<instructions> You are a coding assistant with expertise in LCEL, LangChain expression language. \\n \n",
    "    Here is the LCEL documentation:  \\n ------- \\n  {context} \\n ------- \\n Answer the user question based on the \\n \n",
    "    above provided documentation. Ensure any code you provide can be executed with all required imports and variables \\n\n",
    "    defined. Structure your answer: 1) a prefix describing the code solution, 2) the imports, 3) the functioning code block. \\n\n",
    "    Invoke the code tool to structure the output correctly. </instructions> \\n Here is the user question:\"\"\",\n",
    "        ),\n",
    "        (\"placeholder\", \"{messages}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "# LLM\n",
    "expt_llm = \"claude-3-opus-20240229\"\n",
    "llm = ChatAnthropic(\n",
    "    model=expt_llm,\n",
    "    default_headers={\"anthropic-beta\": \"tools-2024-04-04\"},\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2c09007",
   "metadata": {},
   "source": [
    "Next, we wrap this language model with ```with_structured_output``` to ensure it returns data that can be parsed into our ```code``` Pydantic model:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "78548b64",
   "metadata": {},
   "outputs": [],
   "source": [
    "structured_llm_claude = llm.with_structured_output(code, include_raw=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8696bd5d",
   "metadata": {},
   "source": [
    "Finally, we define a function ```parse_output``` to extract the parsed solution from the LLM’s raw response, and then we combine everything into a single chain:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "17b0af3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "def parse_output(solution):\n",
    "    \"\"\"When we add 'include_raw=True' to structured output,\n",
    "    it will return a dict with 'raw', 'parsed', and 'parsing_error' keys.\"\"\"\n",
    "\n",
    "    return solution[\"parsed\"]\n",
    "\n",
    "\n",
    "code_gen_chain = code_gen_prompt_claude | structured_llm_claude | parse_output"
   ]
  },
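  {
   "cell_type": "markdown",
   "id": "parse-output-sketch-md",
   "metadata": {},
   "source": [
    "For reference, with ```include_raw=True``` the structured-output step yields a dict with ```raw```, ```parsed```, and ```parsing_error``` keys, and ```parse_output``` simply pulls out the parsed Pydantic object. A minimal sketch with dummy data (the ```raw``` value here is just a placeholder string, not a real ```AIMessage```):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "parse-output-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical example of the dict shape returned when include_raw=True\n",
    "dummy_solution = {\n",
    "    \"raw\": \"<placeholder for the raw AIMessage>\",\n",
    "    \"parsed\": code(prefix=\"Example\", imports=\"import math\", code=\"print(math.pi)\"),\n",
    "    \"parsing_error\": None,\n",
    "}\n",
    "print(parse_output(dummy_solution).prefix)  # prints \"Example\""
   ]
  },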
  {
   "cell_type": "markdown",
   "id": "830371b5",
   "metadata": {},
   "source": [
    "## Parsing the LLM Output\n",
    "\n",
    "We run a quick test by asking a sample question. The chain returns a structured ```code``` object with ```prefix```, ```imports```, and ```code```.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "427e097b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "code(prefix='Here is how you can build a RAG (Retrieval Augmented Generation) chain in LCEL (LangChain Expression Language):', imports='from langchain_chroma import Chroma\\nfrom langchain_openai import OpenAIEmbeddings\\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\\nfrom langchain import hub\\nfrom langchain_core.output_parsers import StrOutputParser\\nfrom langchain_core.runnables import RunnablePassthrough', code='text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\\nsplits = text_splitter.split_documents(docs)\\n\\nvectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\\n\\nretriever = vectorstore.as_retriever()\\nprompt = hub.pull(\"rlm/rag-prompt\")\\n\\ndef format_docs(docs):\\n    return \"\\\\\\\\n\\\\\\\\n\".join(doc.page_content for doc in docs)\\n\\nrag_chain = (\\n    {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\\n    | prompt\\n    | llm \\n    | StrOutputParser()\\n)')"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Test\n",
    "question = \"How do I build a RAG chain in LCEL?\"\n",
    "solution = code_gen_chain.invoke(\n",
    "    {\"context\": concatenated_content, \"messages\": [(\"user\", question)]}\n",
    ")\n",
    "solution"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1efdca37",
   "metadata": {},
   "source": [
    "## Constructing the State Machine\n",
    "\n",
    "To handle multiple steps—generating code, checking it, and reflecting on errors—we can use a **langgraph** state machine.\n",
    "\n",
    "First, we create a ```TypedDict``` called ```GraphState``` to store our pipeline’s state:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "0800d70d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import List\n",
    "from typing_extensions import TypedDict\n",
    "\n",
    "\n",
    "class GraphState(TypedDict):\n",
    "    \"\"\"\n",
    "    Represents the state of our graph.\n",
    "\n",
    "    Attributes:\n",
    "        error : Binary flag for control flow, indicating whether a test error was tripped\n",
    "        messages : List of messages (user question, error messages, reasoning)\n",
    "        generation : Code solution\n",
    "        iterations : Number of tries\n",
    "    \"\"\"\n",
    "\n",
    "    error: str\n",
    "    messages: List\n",
    "    generation: str\n",
    "    iterations: int"
   ]
  },
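  {
   "cell_type": "markdown",
   "id": "graphstate-sketch-md",
   "metadata": {},
   "source": [
    "Since ```TypedDict``` classes are plain dictionaries at runtime, we can sanity-check the schema by building a ```GraphState``` by hand. This is a throwaway example, not used by the pipeline:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "graphstate-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example initial state, shaped like what we will later pass to the compiled graph\n",
    "example_state: GraphState = {\n",
    "    \"error\": \"no\",\n",
    "    \"messages\": [(\"user\", \"How do I build a RAG chain in LCEL?\")],\n",
    "    \"generation\": \"\",\n",
    "    \"iterations\": 0,\n",
    "}\n",
    "print(example_state[\"iterations\"])  # prints 0"
   ]
  },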
  {
   "cell_type": "markdown",
   "id": "24b1c71a",
   "metadata": {},
   "source": [
    "**1. Define the Nodes**\n",
    "\n",
    "We have three main nodes in our state machine:\n",
    "\n",
    "1. **generate**: Calls the LLM chain to produce code.\n",
    "2. **code_check**: Attempts to ```exec``` the imports and the code. If either fails, we set an error flag.\n",
    "3. **reflect**: Optionally reflect on errors and refine the solution.\n",
    "\n",
    "We’ll define each one in its own cell for clarity.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6bf07d3b",
   "metadata": {},
   "source": [
    "**1.1 generate Node**\n",
    "\n",
    "This node invokes the ```code_gen_chain``` to generate a new code solution. If there was a previous error, it adds a user message instructing the LLM to try again.\n",
    "\n",
    "The cell below also sets two global parameters: ```max_iterations``` (the retry limit) and ```flag``` (whether failures are routed through the ```reflect``` node).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "0c52c7a1",
   "metadata": {},
   "outputs": [],
   "source": [
    "### Parameter\n",
    "\n",
    "# Max tries\n",
    "max_iterations = 5\n",
    "# Reflect\n",
    "flag = \"reflect\"\n",
    "# flag = \"do not reflect\"\n",
    "\n",
    "### Nodes\n",
    "\n",
    "\n",
    "def generate(state: GraphState):\n",
    "    \"\"\"\n",
    "    Generate a code solution\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, generation\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---GENERATING CODE SOLUTION---\")\n",
    "\n",
    "    # State\n",
    "    messages = state[\"messages\"]\n",
    "    iterations = state[\"iterations\"]\n",
    "    error = state[\"error\"]\n",
    "\n",
    "    # We have been routed back to generation with an error\n",
    "    if error == \"yes\":\n",
    "        messages += [\n",
    "            (\n",
    "                \"user\",\n",
    "                \"Now, try again. Invoke the code tool to structure the output with a prefix, imports, and code block:\",\n",
    "            )\n",
    "        ]\n",
    "\n",
    "    # Solution\n",
    "    code_solution = code_gen_chain.invoke(\n",
    "        {\"context\": concatenated_content, \"messages\": messages}\n",
    "    )\n",
    "    messages += [\n",
    "        (\n",
    "            \"assistant\",\n",
    "            f\"{code_solution.prefix} \\n Imports: {code_solution.imports} \\n Code: {code_solution.code}\",\n",
    "        )\n",
    "    ]\n",
    "\n",
    "    # Increment\n",
    "    iterations = iterations + 1\n",
    "    return {\"generation\": code_solution, \"messages\": messages, \"iterations\": iterations}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c2b471ec",
   "metadata": {},
   "source": [
    "**1.2 code_check Node**\n",
    "\n",
    "This node checks the generated code by attempting to ```exec``` the imports and the main code block. If anything fails, it appends an error message and flags the state.\n",
    "\n",
    "Note that ```exec``` runs the generated code in the current process, so only use this pattern in a trusted or sandboxed environment.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "1408c5de",
   "metadata": {},
   "outputs": [],
   "source": [
    "def code_check(state: GraphState):\n",
    "    \"\"\"\n",
    "    Check code\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, error\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---CHECKING CODE---\")\n",
    "\n",
    "    # State\n",
    "    messages = state[\"messages\"]\n",
    "    code_solution = state[\"generation\"]\n",
    "    iterations = state[\"iterations\"]\n",
    "\n",
    "    # Get solution components\n",
    "    imports = code_solution.imports\n",
    "    code = code_solution.code\n",
    "\n",
    "    # Check imports\n",
    "    try:\n",
    "        exec(imports)\n",
    "    except Exception as e:\n",
    "        print(\"---CODE IMPORT CHECK: FAILED---\")\n",
    "        print(e)\n",
    "        error_message = [(\"user\", f\"Your solution failed the import test: {e}\")]\n",
    "        messages += error_message\n",
    "        return {\n",
    "            \"generation\": code_solution,\n",
    "            \"messages\": messages,\n",
    "            \"iterations\": iterations,\n",
    "            \"error\": \"yes\",\n",
    "        }\n",
    "\n",
    "    # Check execution\n",
    "    try:\n",
    "        exec(imports + \"\\n\" + code)\n",
    "    except Exception as e:\n",
    "        print(\"---CODE BLOCK CHECK: FAILED---\")\n",
    "        error_message = [(\"user\", f\"Your solution failed the code execution test: {e}\")]\n",
    "        messages += error_message\n",
    "        return {\n",
    "            \"generation\": code_solution,\n",
    "            \"messages\": messages,\n",
    "            \"iterations\": iterations,\n",
    "            \"error\": \"yes\",\n",
    "        }\n",
    "\n",
    "    # No errors\n",
    "    print(\"---NO CODE TEST FAILURES---\")\n",
    "    return {\n",
    "        \"generation\": code_solution,\n",
    "        \"messages\": messages,\n",
    "        \"iterations\": iterations,\n",
    "        \"error\": \"no\",\n",
    "    }"
   ]
  },
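  {
   "cell_type": "markdown",
   "id": "exec-check-sketch-md",
   "metadata": {},
   "source": [
    "To see what the ```exec```-based check catches, here is a standalone sketch with an intentionally broken import (the module name is made up). It triggers the same failure path as the import check in ```code_check```:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "exec-check-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Deliberately invalid import to exercise the failure branch\n",
    "bad_imports = \"import nonexistent_module_xyz\"\n",
    "try:\n",
    "    exec(bad_imports)\n",
    "except Exception as e:\n",
    "    print(f\"---CODE IMPORT CHECK: FAILED--- {e}\")"
   ]
  },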
  {
   "cell_type": "markdown",
   "id": "7a8e5ea5",
   "metadata": {},
   "source": [
    "**1.3 reflect Node**\n",
    "\n",
    "If code checking fails, we can optionally reflect on errors and refine the solution. This node prompts the LLM for additional insights and attaches them to the current message list.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "fffbb62b",
   "metadata": {},
   "outputs": [],
   "source": [
    "def reflect(state: GraphState):\n",
    "    \"\"\"\n",
    "    Reflect on errors\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        state (dict): New key added to state, generation\n",
    "    \"\"\"\n",
    "\n",
    "    print(\"---GENERATING CODE SOLUTION (reflect)---\")\n",
    "\n",
    "    # State\n",
    "    messages = state[\"messages\"]\n",
    "    iterations = state[\"iterations\"]\n",
    "    code_solution = state[\"generation\"]\n",
    "\n",
    "    # Prompt reflection\n",
    "\n",
    "    # Add reflection\n",
    "    reflections = code_gen_chain.invoke(\n",
    "        {\"context\": concatenated_content, \"messages\": messages}\n",
    "    )\n",
    "    messages += [(\"assistant\", f\"Here are reflections on the error: {reflections}\")]\n",
    "    return {\"generation\": code_solution, \"messages\": messages, \"iterations\": iterations}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "56a569b5",
   "metadata": {},
   "source": [
    "**2. Defining the Workflow**\n",
    "\n",
    "Next, we tie these nodes together with **langgraph** to create our state machine. We also define a helper function ```decide_to_finish``` that ends the run when the code passes its checks (or the iteration limit is reached) and otherwise routes back through ```reflect``` or ```generate```.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "0926f8d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langgraph.graph import END, StateGraph, START\n",
    "\n",
    "\n",
    "def decide_to_finish(state: GraphState):\n",
    "    \"\"\"\n",
    "    Determines whether to finish.\n",
    "\n",
    "    Args:\n",
    "        state (dict): The current graph state\n",
    "\n",
    "    Returns:\n",
    "        str: Next node to call\n",
    "    \"\"\"\n",
    "    error = state[\"error\"]\n",
    "    iterations = state[\"iterations\"]\n",
    "\n",
    "    if error == \"no\" or iterations == max_iterations:\n",
    "        print(\"---DECISION: FINISH---\")\n",
    "        return \"end\"\n",
    "    else:\n",
    "        print(\"---DECISION: RE-TRY SOLUTION---\")\n",
    "        if flag == \"reflect\":\n",
    "            return \"reflect\"\n",
    "        else:\n",
    "            return \"generate\"\n",
    "\n",
    "\n",
    "# Build the state machine\n",
    "workflow = StateGraph(GraphState)\n",
    "\n",
    "workflow.add_node(\"generate\", generate)\n",
    "workflow.add_node(\"check_code\", code_check)\n",
    "workflow.add_node(\"reflect\", reflect)\n",
    "\n",
    "# Sequence:\n",
    "# 1. START -> generate\n",
    "# 2. generate -> check_code\n",
    "# 3. check_code -> either \"end\" or \"reflect\" or \"generate\"\n",
    "workflow.add_edge(START, \"generate\")\n",
    "workflow.add_edge(\"generate\", \"check_code\")\n",
    "workflow.add_conditional_edges(\n",
    "    \"check_code\",\n",
    "    decide_to_finish,\n",
    "    {\n",
    "        \"end\": END,\n",
    "        \"reflect\": \"reflect\",\n",
    "        \"generate\": \"generate\",\n",
    "    },\n",
    ")\n",
    "workflow.add_edge(\"reflect\", \"generate\")\n",
    "\n",
    "app = workflow.compile()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c2fdee4",
   "metadata": {},
   "source": [
    "**3. Visualizing the Workflow**\n",
    "\n",
    "We can visualize the state machine using the ```visualize_graph``` function from ```langchain_opentutorial.graphs```.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "0ae29806",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOYAAAF0CAIAAAC40dIQAAAAAXNSR0IArs4c6QAAIABJREFUeJztnXdgU1X7x5/sNKNJ914pnbRlb1kKVvZQKVAQRF5QARVeRF8RZaoMRQUEQaZgAQUEUaZAoWxoC5S2tHTvmdns8fsj/ZXV3STn3vZ8/kpvbs75Nv32uWc+h2IymQCDIQ9U1AIwmJaBLYshGdiyGJKBLYshGdiyGJKBLYshGbTly5ej1kA+7kjKb4nLtCbD6bJ8qU7jz7HPUcpOlOQQ+bVMr/Pj8HOV8kJ1jQOTRaNQUH+LrQRH2eaSLK3ckJl0V1Ih02uTJBUVGpVCr9cZDUqDoUqrlum0BH+tMhiqtepCteKvkpxfch7qTMbMGqnOZET9vbYYCp5KaJJHcrEfhx9XmOnPtY+yd0Itx2Kkyav/LMl5xy88jO+AWksLwJZtDI3R8E3G3aEu3p35jqi1WItidU0gV/BQVt3LwRW1lmaBLdsgcr0uWymjAniyuai1WJ09eenh9o4j3HxRC2kabNn6uSetFDBZfBoDtRDbcUtc9qqrL/G7Zbj7VQ/Xqkr+Ks3tUH4FgN4Obqly8S1xGWohTYCjbD3I9Tqt0YBaBRpOlubaM5gTPESohTQItuzz7M5LG+0R0JGfPnK91pPNY1IJ+h0QVBYqduQ9dGaxO/iXYkdjlGhqUKtoEBxln6AxGjIVUg82B7UQ9BwszAjlO77i4o1aSD108IDyDAwK1cZ+TUu9V15WjLaEeol288uukVq8WIuALfuEiTf/seX05eEDv8yMiWYwWQhLaAgHBmuyd5DFi7UI2LK13BKXR9g72fLrSHmQ6OPj7+DQ4hlgvV7fxhKaw2OFLE0utkbJbQS3ZWvRm0xinZoKVhlIP/XXH/t2bioqzPP2DZg1d+Gw6LFvvTn8UfoD87tcnv2/1x5RKBSTyXTk4J4/Du4uLMhhsuxCwiI+WPxlWHiX7KxHU8YPXrL0m7u3r169fH7Cm9M/WrKy3hIsqDmrRnahovDT4O4WLNMi0FELIBBW8uu1KxeWfzZ/7MSpb72z4NK/p+w4HACYv2jZgjmTpkyfM2TYKLadndlta1d/8tfR396aNT+yS6/7ybd27/ihvLQ4LLxLTtYjAPh1z5Z35i6aPG0Oj89vqAQL4svhcWk0y5ZpEbBla5l/79J8URdrdL9uXL0IAIs+XW1nxxkx5g3zRQaTCQCDXh7RtXsf85X4C6eOHd73+cqNYyZMAQBFjRwAQsOiACAnKxMAFn2yatDQ1+qKfbEEy8KgUN/x72yNktsIbsvWojLoeXSrzNAGhYQDwJefzq8oL6m7mJ56DwBCwiLqruze8YOPn2j0+Mm1Nzy85+Dg5ObhBQC52Y/cPLye9mu9JVicixWFcoPOeuW3DmzZWrZ3e5lvHcuOHj/5v5+uvnMr4c3RA44fOWC+mJ5639c/kMvlm3+srqpIS0mOHjmh7vmenv4gJDzS/Do7K7NzxPNtyudKsAbXqkvlOq31ym8d2LK1lGtURrBKT5RCoUyKnX34xBVv34D1az5VqZQAkP7wfkjokwBZmJ8LAJ5etWv/VCplSvKdkLBIADAYDAW5WaJOwc8V+1wJ1qC3g5sjk23VKloBtmwtO3NTM+RWGTzXajUA4Ozi1n/gy3q93mg0anXavNzHLq4edfcwGAwAqBthPX5kv0ajdnPzAoDC/BytTusvCnmmzBdKsAbRbr5sKuF6YLj7VUsIX1ilVQEILVvs3ZsJX6/8eGLMDAA4cmjf0GGjuVyeyWSy43D/PXsiMChUKpPEvvWunyjIXiA8cnB3p6DQ1JTkn77/CgBUqhoAyMl+BACBz0ZZBp3xXAmWlQ0AVVr1qbK8aT4hzbjXpuAoW8sU7+AhLl4WL1aj1XK5/G0/fnPw1+1jJ0z+fNVGc1Phg/9+WVNTs3bVJ5fO/w0AHA531dptEnH1rKkjD+7f8e4Hnzo5
u2Y8emhuyNJoNB+/wKeLfbEEi/NQVm2yTkupjeCphFqMAI8VEifiNd1QkSqvDuM7OhPvC8GWfcLGx8mRAqduApeGbtixdUPcvu0vXg/rHJn28EG9H9l54GSA6Pmek8VRyKVjX+1V71uRXbo/uJf44nWh0OHoqZuNlMml0dk0IrYbsWWfUKBSnCrLe90zsKEbZFKJQi578bp5rrXej7i4eZi7VlbFaDSWFhfW+xaFSjEZ69FGo9HMg771crmq2JlpN9DJut271oEt+wwGk0mi06BWgRi10bAq7da2bkNRC6kfbNlnqDHo9uc/esOrE2ohKDGCyZHBJuxWWzxi8AxcGqMTT/hbYQZqIcio0qrFWg1h/YqjbP1ojAaxVsMg6n4965GjlJ0vL/g0uAdqIY3R4f4qzYFFpQmYrDPl+aiF2BS10WBHpRHcr9iyDWJHpVEolBvVpaiF2IiTpXn2dGZnMiTJw5ZtkCleQZ3tnbh0xm1xOWot1uXPkmwunWalhWwWB7dlm+a3gox/KwvWdR5gAuvsW0CBEUzXqkvL1cpZfuEyvdaBYfk9j1YCW7ZZVOu0QgZDotUsTb0RyBW87RemMRpSa8R0EyXS3klp0D+QVnHodIK/lut1t8RlOqNhlLt/mlycIqsa7R5AurwNuGHQLBwZTCpQHJnspaE9ezq4OjBYbBo9SyZ9KKu2ZzBpFMotcZmlXm8+fkQnk1u2TPNrAFAZ9YFcgZDB6ufo/h//zqTzK46yRGTkyJG7d+92c3NDLYSg4CiLIRnYshiSgS1LOIKCCJpZiCBgyxKOzMxM1BIIDbYs4RAIBBZP/dKewJYlHFKpFA/jNAK2LOHAw1uNgy1LOMrKiH4mDFqwZQlHSAjhUgcQCmxZwvHo0SPUEggNtiyGZGDLEg6h0MJJltoZ2LKEQyKRoJZAaLBlCYezszNqCYQGW5ZwVFZWopZAaLBlMSQDW5ZwBAQE4DUGjYAtSzhycnLwGoNGwJbFkAxsWcIREhKCGwaNgC1LOB49eoQbBo2ALYshGdiyhCM0NBS1BEKDLUs40tPTUUsgNNiyGJKBLUs48KbwxsGWJRx4U3jjYMtiSAa2LOHAeQwaB1uWcOA8Bo2DLUs4RCIRagmEBluWcGRnZ6OWQGiwZTEkA1uWcLi6uqKWQGiwZQlHeXk7P7OpjWDLEg68XrZxsGUJB14v2zjYsoQjNDQUR9lGwJYlHOnp6TjKNgK2LOHw8vLCUbYR8FF1RGHEiBEMBgMAqqqqBAIBnU43mUwCgWD//v2opRELOmoBmFooFEpxcbH5tXmci8VizZkzB7UuwoEbBkShT58+zz3xfHx8xowZg04RQcGWJQqxsbHu7u51P7JYrNjYWKSKCAq2LFHo1KlTjx496gKtSCTCIbZesGUJxMyZM82BlsPhTJkyBbUcgoItSyBEIpE50Pr7+48cORK1HIKCRwyeoUStzFPKdUYDKgFRMRNuSCt6jh59pbIYlQYqheLB5vpy+HRCDg/jcdlaHsqqd+enFasUne2dxFo1ajkosWcysxUyDp0+2i1gpLsfajnPg6MsAEBmjXRDZuJbvmF2NBpqLcTABUwAx4qz9SbDWA9ibezBbVkoVtd8kXpjbkAE9uvTUAAmeoquVJWcK89HreUZsGVhb376WI8A1CoIymh3/z9LcoyoZTwNtiwkSyocmWzUKggKi0qr1KgqNSrUQp7Q0S2rBxOTRhXQmaiFEBc/Lr9EXYNaxRM6umWpABVqAoUQAqLQ61FLeIaOblkM6cCWxZAMbFkMycCWxZAMbFkMycCWxZAMbFkMycCWxZAMbFkMycCWxZAMbFkMycCWbVeUFeSlJd5CrcK6YMu2H26c/+e/k6LvxJ9HLcS6YMu2lcrSInEFIfJuq2oUqCXYArz3q8Vo1apju7deO3tSWl3lHdBJq1YZDPrlOw7xhQ7S6qpDW79LSvhXXaP0EgWNnv6fvq+8BgC5Gamfz5j42uQZJfk5mfeTmWx2
z8GvTH7/YzaHYy4z59HDw1u/y7ifSKFQg6O6vfnuwoCQzgCw8ZN5dy//O/z12NS7N8qK8kO79vr0x12Htn539dQJqbiSay/o0nfQ1AVL+EKHhFPHd37zBQCcObzvzOF9rl4+3/1xDgAakkRecJRtMTvXfvHXvu06tSo4smtRdmZxXnZot958oYNCKlkxZ/Llk0c4PPuA8Mii7MzNn3904fihug+ePri3rDC/zyuvsdjs80fiDvz4jfl6ZkryyrmxD25e9fQPdPfxv38jYdW7sXmZaXUfPHfkgIOLW/eBr7wycTIA1EglfKFDcFR3MBqv/HNs+5rPAMDF0ysgLAIA3H39+w4b0W3AUABoUhIZwVG2Zeh1uuvn/qEzWd/89re9g+Od+LPff/pBaX4OABzb/VN5UcHLE2Le/ng5hUIpyMr4fObEw1s3Dh79hvmzbj5+a/YcZdlxZJLqD8cOufLPsZkff0mj0fasW6HTqOet/Lbf8FEAcOHPQ7vWfnn0l80L124xf7DvsBHzV22s0/D2JyvMCWjVSuXHMSOSr15S1ihCuvR8edyknWkpXfoOmr7wM/OdDUkaOnYSeVPYYsu2HJOJYjJRqVQAoNGZAKDTagEg8coFs43iNq0z32jH5SmkkvLC2h2q9g5OLDsOANgLHZ09vUrycsQVpQCQl5lGo9Nz0lJy0lIAQKtVA0BW6v26CvsOG/F0/TlpD4/v3Zab/lAmFZuMBpPJVFVazAkMflFpQ5JqZFKeQGjN78iKYMu2DDqDMWTMGxeOH1761gS/kLC0u7cAoOfg4QAgrqwAgGtn/nruI0w2S/PCdj8GkwUABp1eLpMAgEGv/ydu9zOfemoHJZvDq3udcT9xzby3TCZTZJ8BTm4eiVcuSCorNA3sBWpIEo1O4u3v2LIt5vU5H96OPyuuKpfdqLJ3cIqe/Naoae8AAIfHk1Vr1sX94+n/fK4KuVTSUGl2XB4ACJ1dNv91pTm1X/jzoEGvf2vR0lffnA4ApQX5ksqKp1P+mIxPtnA3Iom84O5Xizm4eb1cIlmweuPuy/d/OH7xjf98QKPRACCsWy9z81Gn05pbvVmpD5oszcM3QODkLKmsOPvHAfMVaXVVaX5uQ/erapQA4OzhDQCqmprCx+kAYDToAcCOyweAkvwcADAajXq9vnWSCA6Osi2mqqIUAB7cSHh0766kslzo7Dp49Os+gcETZs1LvhZ//ezJ1Ls3XD19ygpyKTTaxiPnmazGkiRQqdSY9xZtX/3Zvm9Xnf39Vzsurzg3K6JX/7q+13OEdu159/L5HV8tDe3SMzs9RSYRA0BJXk5Il56i8Agqjfbg1tVPp41VKeSfbdrTOkkEB0fZFjMq9h0nV48Lxw+fPrj3xvlTpw/uXTlnqkxc7S0KWrbtQNf+g7UqdXbaAzaHNyB67NOP6YYYNGriB1/9EBAWUVVSXJCV6e7tH9VnYEM3D39z2ogpM6lU6r0bl/2Dwxet+4lrL3iUfBcAXD19Zv9vlZObR0letsloYrBZrZZEZDp65kMjmEZePbE8rE/zP6JVq+7fvNrtpaE0Gk2rVq2ZNyMr9f5nm/aE9+xrTaXI2FfwaK5/5y4CZ9RCasENgxazdcUnty+dZdnZObi4ycRipVzKd3D0CwlDraujgC3bYoZNnGI0GjLu360sKRI4ufR5JXrs9DlcvgC1ro4CtmyL6dyrX+de/VCr6Ljg7heGZGDLYkgGtiyGZGDLYkgGtiyGZGDLYkgGtiyGZGDLYkgGtiyGZGDLYkhGR7csBSgirqBDL2ZrCiGdySLSuZPYsmAwmQh1rhXRSJZWBHIJtOino1sWAAa7eBVhyzZAnlLxkpMng0IgnxBICiqmegdnK6T3ZZWohRAOlUF/tOTxf4O6oRbyDB19V0Id8+/Fi3j2AjrLk801QYf+TqhArdCq5HrtubL8PT2GCxjEOi0VW/YJJ0pzEsXlRpMpRyk3X9HpdBKJxMXFxZYyKiorHB0caZbu
8ZSWljIYDBaLxWSxmAxGI3e6szlUoEQJnGJ9QiyrwSJgyzbIjz/+eOPGjW+++cbX19dmlZ44cWLNmjWjRo364osvLFisTqebPHlyXl4eAPB4PKFQGBgY2Lt375iYGAvWYhtwW7Ye7t+/P3r0aIFA8Ntvv9nSrwAQFxdnMBiuX7+en59vwWIZDMagQYPMebgUCkVhYWF8fPz69euHDRs2b948C1ZkA7Bln2f9+vUbN2785ZdfZsyYYeOq//77b3MgLC8v37dvn2ULHzNmjIeHx3MX2Wz2li31J0wgLNiyT7h169YHH3zg4+Oze/dud3d32wv49ddftVotAFAolDt37lg20IpEIpFIZHwqiQGfzz958qQFq7AN2LK1rF69evfu3StWrJg8eTISAX///ffTHi0sLDxw4IBlqxg/fry9vb35NYVCmTJlimXLtw3YsnDjxo2xY8dGRERs3brVwcEBlYy6EPu0sIKCAgtWMWTIEDc3N5PJRKFQbt++bTKZfvjhBwuWbxs6umVXr169f//+gwcPjh8/Hq2S3Nxc47Pk5+fv2bPHsrUMGzaMSqXevn0bAObOnRseHj579mzLVmF1TB2V5OTk6dOnHz16FLWQ56moqNDr9dYrf/LkyU//mJSUNH36dOtVZ3E66Ljspk2bkpKSNm7cKBAQaMEHKkpKSmJiYuLj40mRjb7DNQwqKio++eQTPp+/a9cuYvo1Ojpapao/K7eV8PDwOHXqVExMjJEUSRFRh3mb8tdff0VHR2dnZ6MW0iB6vb5Xr16oau/Zs6fRaERVezPpQA2DZcuW0Wi05cuXoxbSGCaTqaamhsfjNeNeqzB37txt27YRuYXQISxbVVW1YMGCadOmjRw5ErUWolNdXR0TE3Pu3DnUQhoGdZi3OvHx8cOHDy8pKUEtpFkcO3Zsw4YNaDWkpaVNnToVrYZGaOfdr/379x87duzs2bNIJmBbwb1794KCgtBqCA0NnTt37po1a9DKaIj23DBYunRpVFQUudbXqdVqJpNpPgcPLd9//72zs/O0adNQC3ke9F+NlZg5c+bAgQPJ5Vej0ahSqYjgVwD46KOPLl68mJycjFrIC6BumViFKVOm3L9/H7WKFrNz587NmzejVvEEg8Ewbdo01CqehxD/0JblpZde2rJlS2RkJGohLSYpKWncuHGoVTyBSqXGxsYuXboUtZBnaG9t2X79+l28eJHNJvFRbERj8eLFo0aNGjp0KGohtbSfKGte8BEfH09Sv965c6eiogK1inpYt27drl27UKt4Qvux7JgxYzZs2MBkEmsHczNJSkratm2bjbfyNhMqlRodHb1x40bUQmppJw2D+fPnx8bG9utH1rONrl+/HhYWJhQKUQtpkOjo6AMHDjg7oz9jsT1Y9ueff3Z3dydUx6X98ffff9+8eXPlypWohZC/YZCYmHjnzh3y+lWpVKLabdYiRo0apdPpsrKyUAshv2W/+OKLVatWoVbRejZv3rx69WrUKppFnz594uLiUKsgecPg119/ZbFYkyZNQi2ko9CvX7/4+Hi0fVxyR9ktW7ZMmDABtYpWcuPGDYtv+7Y2kyZNOnz4MFoNJLbsoUOHJk6cyGg0IxphSUtLu3jxYmxsLGohLWPy5MlnzpxBq4HEJ4XfvHnzo48+Qq2iNUil0rCwsLCwMNRCWoyHhweNRktJSYmIiEClgaxRtqSkJCMjw8Y53ixCaWnp77//jlpF6xk4cODly5cRCiCrZa9fv06cWe8WsWzZMvJlu3iKQYMGYcu2hpSUlE6dOqFW0TIKCwsBYMeOHaiFtImgoCCFQlFaWopKAFktm5+f7+fnh1pFC7h3794ff/yBWoVlGDt27K1bt1DVTlbLcjgcb29v1CpawLlz50jaWXwRHx8fbNkWk5KSQpbhrX/++ce86hS1EIvRuXPnhw8foqqdrJbt1KkTh8NBraJp1q9fT6eTeCSxXnx9fcVisVwuR1I7WS2bm5srk8lQq2ia/v37
v/rqq6hVWJ6IiAhUgZaslo2MjKypIe6RiJmZmeY10QMGDECtxSr06dMnOzsbSdVktazBYLDsUQKWZdmyZR9++CFqFVbE2dk5NTUVSdVktWxUVBQxd0pdvXoVAA4ePEiQdARWwsfHxzzMbHvI+rV6enp+//33I0eO7Nu376BBg1DLAfN+ydjYWCcnJ9RCbIG3t7dlz3FoPiTrzMbExOTk5NRl7jU3Z4mQb0utVhcXFy9btiw0NBS1FlsgFAoNBoNcLufz+TaummRRdv78+a6urk9fMZlMnp6e6BQBAGzbtk0ikYhEog7iVzPe3t5I2gYks+zAgQMnTpzI5XLrrtDp9P79+yOUdOPGDRqNRoRIb2NCQ0PLyspsXy/JGgYAMGvWrPT09IsXL5q3ADk7O0dFRSFRkpiYKBAIgoOD+/bti0QAWuh0emVlpe3rJVmUNbN27VqRSGR+TafTkVg2NTV169atgYGBjo6Otq+dCDg4OFRXV9u+XlJalkKhrFy50t/fHwC6dOmCRIPRaCT7MsI24ujoiMSyzW0YmACK1TXEOfOB7+c9ftaMXbt2hQ3oW6y23TRYcXHxwoULDx065NgpoHX18uhMezo5FvQ0joODg1gstn29TW8Kvy+rOlDwKElSEcwTVmnVthJGUGpqlFxum5bjsKg0lUE/2iNguk+I5XQhICkpafv27Vu3brVxvU1E2duS8m3ZDyZ6Bk7wENlKUvtHqtcmSiq+enT3s5AeqLW0Hj6fjyTKNtaWvSMp356T8h//zk5MUqa/JCwCOnOosxeTRv3q0R3UWloPl8tVKBS2r7cxy8YVZEz1JvfDi8i85OihNRoTJQjGiSwCh8NRKpW2r7dBy1ZoVAUqhR2NZls9HQs6hfJIgaDTbRG4XC6xLFugVoTwiJvutH3gweZVacjao6XT6RQKRavV2rjeBi1rNJqkOlur6WjoTXqpnsRfMpJAS8qpBAxBcHJyUqtt/ZTAlsW0Hq1WS6CGAQbTJEwmE1sWQyYYDIZOp7NxpdiymNaDoyyGZDCZTBxlMWTC39+/bh+ezcCWxbSe4uJiHGUxZIJKpdr+RCNsWUzroVAo7bNhIJNUJ5w6nnjlgqUKTL17c8+GFTnpKZYqsPnkZab9vf8XmZisa1ksC5VKbZ+WvXr6xLaVn6Qn3bZUgWcO7zt/JK5GjiDz4c+r/he3ZYNKgSZPJdGgUBAcdYgbBpjWgy2LIRlcLpdCsfUeVgun3ih4/Ojozi3pybc0arWXf+CYt+b0Hhptfis3M/3LdyYVZGc4uLi9PD5m5JS3637bG/+e/mvvz8W5WWwer9uAoZPf/6+9Q21ygDvx50/F7c7LTKPSGJ06R056b5F/cPjTNV49fWLriiUOzm4rd/3u4OL6gqJnkEmqj/2yJTHhgqy6ytHdY+DICaOnzabT6UW5WQe3bEhLvGU0GgLDo96c82Fwl9pdWSaT6e/fdl04dlBcXubm7Vtd+Uy6xUaUdwRUKpXBYLBxpZaMshkPkr6YHXP70lkOz96vU2hRblZu+pNEz6l3rleVl3oFBJYV5MVtWnfhz9qjUE8f2rv584+K83NE4ZF2dtzLJ4+sei9WVVNjfuv7T+dn3E909wlwcfe8fyNBLnlmf1xOesov33zBZLMXrdvSpF/lEvHy2THnjhzQajUB4ZFKufTetXg6nV5RXLhiztSkhItu3n5+QWFpibe+WjAzK/WB+VO/bvzq4Ob1laXFngGdVMoapVxaV2AjyjsItg+xFo6ye9av0GnU495+7805HwJAVXmJHfdJWrzIPgMWrd/KYDDjTx7ZsWbp5ZNHXpkQI62qPLTlWzaHu2rXHx5+ASaTaeuKJdfO/HXpr9/7DRt1aMu3FArlkx92RvTqDwBFuVle/oF1BcrE1TvWLNVp1PNXfx8Q1vT5lH/u3lpeVBDZZ8DCbzYz2XZatUpaXQUAR3duUcqlL0+ImbVkBQAc37vt923fH9nxw5KNvxTmPD77+68MFvuL
bfsDQiMMBsMnU0eV5ucCQCPKR0yeacFvFfMcFrOstKoyPzPdjsOb8Pb75itOrh5P3+AjCmYwmADQe+irO9YsrSguAIB7NxN0Oq3QxfXi8dqgq6pRAEBW6gOuvVCn00b1fcnsVwB42q8AsH/jGplEHNqtV99XXmuOwsSECwDw+n8+YLLtAIDJtnPx9AaAlFvXAODVN6aZbxs86vXft32fnnwHAO5dvQQA/YaNDAiNAAAajcZk1W42bkR5275IkmH77pfFLCuXSQDAyc2d3tTZRjQ6AwB0Oj0ASCsrAKCiuPCfuN1P38Nksc1vuXo1eEqtTCIGgPSk22lJt8O69WpSobi2QJ/nriukYgAQOrmYf+Q7OAKAVq3WaTXiqvo/0rjyJpW0G5CMGFjMsmw7OwCQVFeaTKbmN3E4PD4A9B02cv6q7557699jhwBAXFHe0Gd7DBoWHNk1bsuGvetXrN57rMl/FS6fL63SSCrK7YXP9JB4AgdxZZlUXMUTCAFAUlkGAGwOh8FkCR2dAUBcWU9KykaUdxzMOxZtXKnFul+Orh5CZ1eFVFIXdaRVlWVFTeQmD+3eCwDuXrlQ9zzNefRQo1ICQGi3ngCQfO1SxoOkure0T+1HHf7G1OjJM70DOhXmPD59cE+TCsO69Ta3aHVaDQDodNqctBQACO/ZGwDqnu9nDu8HgPAefQHALyQcAK6dPlmQlWF+CJo/27jyjoPBYLD97JfFoiyVSo15778/r/okbtO6f4/E8YUOBdkZ3Qe+Mn/lt418yss/cOCI8VdO/bniPzG+QWF6va445/GUBUtGTJ7p5R84aPTrl08eWf1urJcoiEKhFGZlzPz4y5fHxzxRT6fPWPzFmnlvHdv9U79XRzm5NZbOe8I785KvXbp18Ux60i03b7+ywjwGk/3tkXPjZrx3J/786YN705PuUCiQk/6QzmRNnD0fACJ7DwiK6p55P3HpjAleAZ2UcllVWUmTyi31lWLqxZKDXANHjvvom02B4VHVleUH3PIeAAAZO0lEQVRFuY89fAKi+jR96tXspWvefPcjF0/v/MfpVSXFod17+3WqTd/+zqcrY977r4uXT3FuVlVZSWj3Pt6ioOc+Hta9d79XR2tUqn3ffdV4RV7+gV/+HNftpaE6rS73USqbwxvw2hijQe/pL/r8p18jevUvyc8uys0K79Hn85/21Y3+Lvxm80sjxrI5vMriIm9Rp6f7lI0ox1iPBpvPd8Tl+/LTp/oE21xSB+KetKJSq1ka0hO1kFby2WefDR48ODo62paVki/xfEOolcofPlvQ0LuvTJjcc/Bw2yrCWIX2Y1mDQffg5tWG3o3qO9C2cjDWov1YlssX7L+ejlpFxwLJCZJ4JRem9RgMBhKPy2IwtgFbFtN6WjTTaSmwZTEkA1sW0yZwlMWQCdsv48KWxZAPbFlM6+FwOHS6rYf2sWUxrUehULTP1BuY9gqxBrloVIojPlTRytCpNEcGC7WKNkEgy/px7FNkVbYV0+EoUiqcWXaoVbQeYo0YODJYnXgCmcHW2UM7FAYwhduTOFUHsRoGADDLL3x/Pl4bZS3OlOe7sOw680lsWaPRaPvFXI3VF8wTLg/t80PWvTyVQqHH4dYyGMFUqFL8U57va8efL4pCLadNuLm5MZra2GxxmhhUE3HtN0YN/DX/0R/VmQ5MdokKwWHmL2I0mUwmE836/986vZ5h6XFHAZMlZLDGegS86tpgigayUFxcTMQdtu4szsdB3QCgxqAnyJDYJ598Mm7cuP59+lu7op07d3bt2rVHjx4WLJNNoyNIZGUdkDQMWhBCuDRCbGEwmUy9u3Z7ZeAgG9Q1f87c1NRUik7PZuPxvnogXFuWmFAolGnTptmsuvDwcDqdvmjRIpvVSCKwZZvFkSNHkpOTbVkjnU4fN25cQkKCLSslBQaDgUaj2bhS8ll206ZNgYGBzbjRkgwePDg0NLS0tNTG9RIcHGWbRi6Xb968mc/nN+NeC+Ps
7Ozs7Pzhhx/avmrC4uPjY/tBLpJZlsvldu7cGVXtdDp92bJl6enpSCYqCUh2djaxZr8IyMSJE4uKihAKcHZ2Dg4O3rdvn0QiQSiDIOj1erxetjEKCgp8fX29vb3RyqBSqTNmzIiJiWnGve0cvV6Pu1+N4ePj8+OPP6JWUcuZM2cAoLKyErUQlOAo2wRqtdr204ONk5iYePPmTdQqkIEHuRpDJpONGDECSRKoRnj11Vf37t2r0WhQC0EDjrKNcf/+/TfffBO1inr46aefVCpVWloaaiEICA4Otr1lEZwo0i45f/68TqcbMWIEaiE2ZejQocePH7e3t7dlpaSJsgkJCXq9HrWKBhk2bFhOTg5qFbZGp9PhqYT6SU1N/fnnn23/DGoR77//vjncohZiO/R6PbZs/VRUVMyYMQO1imZBpVL37t2LWoUtMJlMSLpfhI5bdQwePBi1hOby8ssvnzhxArUKW4CkVUCOKGsyma5ebfAQBAIyduxYADh+/DhqIdYFW7ZBbt68GRcXh1pFi/Hy8mrfLQS9Xt+1a1fb10uChoFWq7XlNgRL0bNnTyIPcbQdlUqVnZ1t+3pJEGUHDRrUt29f1Cpag1n2119/jVqIVVCr1SwWguxMRLesXq/fvn07ahVtYsiQIf/88w9qFZZHrVYj2cVJ9IZBYmJiUlISahVtol+/fkgeoNZGo9EgibJWt2wbl4zw+fwFCxY0pxAajUbYuQaRSJScnHzt2jXzdEP7oH1GWb1eL5VK21KCk5MTADSnEA6Hw+Px2lKXVenatSuDwdi7dy9Z5kSapH1atu0olUoOh4NahWXo3Lkzwo1rFker1To6IsiBR+jul8FgaH9LUa9du7Z69WrUKiyAXC63/V5FolsWAIj8rG8d/fv3nzx58smTJ1ELaStKpZLL5dq+XkI3DGg0mu33adiATp06derUCbWKtqJQKJBYltBRVqlU1rvZa926dXPmzEGhyJKcOHFizZo1qFW0HlTdDEJbVqVSEW2zlwUZO3bs66+/Tt7djrhh8Dwmk8nGOzRsT2hoqFarRTUm30a4XK5QKLR9vQgsW1paumPHjqSkJBaLFRgY+NZbbwUHBwPAypUrvb29aTTa6dOn9Xp9r1695s2bV7e8LT4+/rfffisvL/f19SXa1vC2wGQyt2zZYmdnN2vWLNRaWkZmZmb//lbPSv0itn7sVldXL168WC6Xz5079+2339br9UuWLMnNzTW/e/To0bKysuXLl8+dOzchIeHAgQPm6xcvXly7dq2jo+PcuXO7d+/eznZZzZs3r3v37llZWaiFtAyZTCYQCGxfr62jbFxcnFAo/Oqrr8yTqy+//PLs2bPPnDkzd+5c8xrTjz/+mEKhhISExMfHm1cXaDSa7du3R0RErF692jyAUFJS0s5m7bt27SqRSFQqlZ0daY4Bk0qlSFputrbsnTt3KioqXn/99borOp2uoqLC/JrFYtWNTru7u2dmZpr3Kkql0gULFtQNeLXLPplQKFywYMG0adP69OmDWkuzkMlkHcKyYrG4d+/eb7/99tMX6+14slgsg8EAAOXl5ebzemwoEw2bNm26dOmSRCJB0q1pEQaDQaVSIZnosbVleTyeTCbz8fFp/DaTyaTT1Z40Zm4wtXF5DVkYMmRIZWUlklxXLUImk6Fad2/rJ2zXrl1TU1PNT3wzKpXqxdv0en3dsIBIJKJSqRcvXrShTJQ4Ozv37duX4KMiVVVVZWVlSKq2dZSNjY29ffv2559/PmHCBKFQePfuXYPB8MUXXzx3G5VKrRvecnV1HT58+JkzZ7RabY8ePaqrq2/fvu3g4GBj5bbkypUrp06dGjVqFGohDSIWi1H9CWxtWQ8Pjw0bNuzcufPw4cPm2fYxY8a8eBuNRnu6j/Xuu+8ymcxLly4lJiZ27txZJBKJxWLbCrcpbDY7Ojq6uroayeq+5oBQm3XTyOn1+urq6lZ8UKPRtHSXAcGXeLeCf//998yZM+vWrUMt
pB7i4uKKiooWL15s+6oJOmGr0WjwgYavvPKKSCRKT08PDQ1FreV5JBKJs7MzkqoJOsDJZDKRZCIhGgEBAc7OzjKZDLWQ5ykvL0fVMCCoZdlsNpIV7wTE2dl5yZIlt2/fRi3kGSoqKlxdXZFUTVDLKpVK1BIIxLZt24qLiwn1nZSVlWHLPsFkMqnVatQqiMW4ceMItW0TYZS1bveLRqO1Yhq6pqamqKjI39+/RZ8ibBIDS5Genr5q1aq61W0IUalUer0e1fgMPiuBTKSnp+fl5UVHR6OVkZeX9+WXX+7ZswdJ7URsGOTn558+fRq1CiISGhqK3K/mxZ8Ih8CJaNmUlBRy5UC2MR9//PGNGzcQCigtLXV3d0dVOxEtGxgYaE6EjamX9evXp6enIxxAQGtZInZZQkJCUEsgOjNnzkRYu1KpRJipiYhR9vz58/Hx8ahVEJ3k5GRUyZbT0tJcXFyQVE1Qy967d6+oqAi1CqLTtWvXiIiI33//3fZVFxUVeXl52b5eM0Qc5MrNzeVyuQj/jzGNoNPpBg4ciLD/R8Qo6+/vj/3afNatWyeRSGxWXVFRUWBgoM2qexEiWvaPP/5IT09HrYI0zJkz59NPP7VZdQUFBWgDChEtm5CQUFlZiVoFaRAKhdu2bbNZdfn5+b6+vjar7kWIaNnx48cHBQWhVkEyEhISbt26ZYOKCgoK0FqWiOOyQ4YMQS2BfLz00kuzZ89mMBjdunWzakV6vT4gIMCqVTQOEaPsrl27CgoKUKsgH7/88osNWpnXrl3z9va2di2NQETLXr58uYMk2rA4fD7fqsNPSqVSLpejzdxDRMsuXrwY7aOHvAgEgqKioh07dlip/Pz8/N69e1up8GZCxLZsREQEagkk5vXXX1coFHq93rzmfezYsRwO5+DBgxYpPDs7G/nmCCJG2cOHD+NBrrbA4/ESEhK0Wu2wYcOKi4sVCoWlkhHl5ua2dLeIxSGiZY8ePWrL6Zx2SUBAQL9+/cxfo0KhyMvLs0ixNTU15pTrCCGiZSdNmoQqrUP7YNSoURMmTKjbVS+Xyy2V9/zmzZtNZq20NkRsy06cOBG1BBIzePBghULxXNbotLS0tpdsMplww6B+vv766+LiYtQqyEp8fHx4ePjTnSQKhZKRkdH2krOzs0UiUdvLaSNEtGxycjKh0kyQjl9//XXFihWRkZECgcC8uFQmk7V9qDsnJ4cIlqUtX74ctYbniYyM9PX1bfd5CaxKQEDA+PHjAwMDKysrtVqtTqfr0aOHp6dnW8o8f/68UCjs2bOn5WS2BgIt8e7Ro8dzV0wmU0BAwJEjRxApIiJHi7OvVBaZALIUzR1UMRiNer2exWS2sWq9wUClUqlWy5XmyGL7c+zf9AqKsG8sQR2BIpm3t/dz+2d4PF47OKvWgnyeekPAYPZ0cPNic6ntLstejUFfqlb+8Dg51jdkiHODG3UIZNmRI0c+N9Po4+NDhEwTBGFxSoKPHb+PQ7s9mYdJpTkwWGF8h4OFmRKdZrxH/e1mAnW/YmJinl4ixOVyY2NjkSoiEGfL812YnHbs16eZ7B10ubKoUlt/KkECWVYoFI4cObLuR19f3xEjRiBVRCBuicscGG1tjJIIClBSZfUfWUAgyz4daDkcztSpU1HLIRBag8HTDsFJ8qjw4/BLNDX1vkUsywoEgtdee808RoND7NPkqxRGogzt2AK1wSDX6+p9q63dL5lBy6bQmVTquYoChU6nNxnHeASwqbS/SnLURkMrXsfExBwryAx65WW10dCKck6U5GiMhnEeIiaVeqIkh0mlDXP1oVMoxeoaT3YHilLtmFaOy2qNRr3JuOjBlVK10pXFURp0VVqN3mQAoAAABcAEJiK8pgBFyGAJGMwSdQ0VKFu7DXVncWgkPIVh5t3zEz0DXVmkOUe8jVyuLHZksWf7hb/4VoujrFSv/fHxPalee19au6Q1V1l7XgoFnliBOK8lOo1EpzFfefvueS87HgAs6tQ1
0t6ppb87hgi0zLJpcvHajLvF6vrbxaSgSKUAgOVpN98XRQ5w8mRTCX26MeZFWmDZzx5efyCr0hgN1tRjI+R63dqMRF+7zEVB3cL57fk43PZHc0cMZidduCMpbx9+rSNfJV+ZdlNt0KMWgmkBzbLsydLcfKXc+mIQUK3T/O/hdT2xT5LHPE3Tlt2Qmbg5+75NxKDhobx6ZuL5hqYHMUSjCctuzUmJryw2EmaBopUo16iWpl5HrQLTLJqwbKm6pp21XxuiSKWowoGWDDRm2fjKouvVpTYUgxKt0fh1xh2dCTdqiU6Dli1UKX7KfmBbMYhJkVUfLMxErYIoaDXq04f27vtuDWohz9OgZVPl1VK91rZiWoA8K+/cS+OrEy35T2U0maQ64v7KNiD+5JHFMa/NHNxlxdypcql4//dfP7iV0JYCdTrtzQuntRpLtrganErg0hlE7nXJM7IAgBdg4TQQ0v+f2u2A3Lt2eceapXZcflSfAT4iy6SkXjp9fHFe9s9nbzJZbIsU2KBlVQb9iRLL5BexEvKMHIZQwHQQWrbY69Wl16tL+zkiOzoQIQlnTgDAB199H9l7AABUlZe0vUxVjeXn9uu3bKpcnFMjs3hldUjTH2fvOii5n2YyGh26hIctfo/t5qyTyW+//5nvm6NrCopLz8YbVGrnfj0iln1EZTAAQCeTZ/0SVx5/Xaeo8YgeUpNTwBNZPv251mi4Wl3SDiybm5H6+YyJXqIg/+DQ5GuXtSrVku9/CeveW1pddWjrd0kJ/6prlF6ioNHT/9P3ldcA4O1BUTqdFgDWfviOvaPTT3/Xc4ZwzqOHh7d+l3E/kUKhBkd1e/PdhQEhtWcsyiTVx37ZkphwQVZd5ejuMXDkhNHTZi+ZPEJcWQYAc1/tAwDLtu0P6WKBDeX1t2UZFIoRrNUqqLh25/Z7n2rF0qD33gpZMEualvlo004AoHM5NflFGZt368TSkAWznPp2L7twteziNQDQyWtuv/+/kjOXvMZFhy1+V5z8UPIgzeKtAjMijr01ikVCUXbmgxsJPQYNi+o3KLRbL4VUsmLO5Msnj3B49gHhkUXZmZs//+jC8UMAENVvENdeAADBXXpE9u7/YlGZKckr58Y+uHnV0z/Q3cf//o2EVe/G5mWmAYBcIl4+O+bckQNarSYgPFIpl967Fk+n07sNGMpgsQGg5+DhfYeN4Asa2+rdfOqPslECZ5XeKjPvOpkiZdVG+yBRzy1rzOGz7NJ1TXkVABjUGjAafSaODHrvLQAQdu1cfvGaqqQMAB5v26csKO798zr70E4AwPH2vP3epzyRnzUU+tjxrVEsEqhU6mdb9nn/f8P02O6fyosKXp4Q8/bHyykUSkFWxuczJx7eunHw6DcWrt28+v3p6Um3p33wP1F4Pfl996xbodOo5638tt/wUQBw4c9Du9Z+efSXzQvXbvlz99byooLIPgMWfrOZybbTqlXS6ioAmL7ws1sXzog16v8sXc3lCyz1S9Vv2TuSciuNUJaci9fLa1wH99UrlDq5ouzi1eo7yZ3mTAMARW4BADj2jDLfaVCpAYBhz9crVcWnLroPG2T2KwDoFTUAwAuwyrkoh4oyejm4WqNk2+MlCvJ+qiOVeOUCAKiVyrhN68xX7Lg8hVRSXpjv4ddY2vTK0qK8zDQanZ6TlpKTlgIAWq0aALJS7wNAYsIFAHj9Px8w2XYAwGTbuXha8TCF+i1bplFx6fQaKwRaWdpjCo2atftQ5tZ9AEDn80SzpvhNGQ8ANTn5AMD1r33cKwuKAYDr6yV/lGXUah17RNUVUpNbAABc61iWwMMkLYbNeWbvkLiyAgCunfnruduYbFbj5UiqKgHAoNf/E7f7mQ8y2XXFunrZKIln/ZbtJnB2YLBr9AqL12fS65lODv33b67JLaDZ2XG83KlMhvktRU4Bncdluzj9/4/5Zl+Kk1MAgOn0ZFWrOPkhy8WJwbfKXq63/MKsUSwR4PB4smrNurh/PP1b
lg3OjssDAKGzy+a/rrz4LpfPl1ZpJBXl9sL6W6smi+60rL/75cnm+tjxLFhNHWw3F22V2KBUCcKDeQE+dX41R1mu/5MHiiI7n2HPZzkKmQJ7AFAV1Y65yLPyKm8kWmO4wEwYr92u+A7r1svcojUPDuh1uqzUJuZidFotAHj4BgicnCWVFWf/OGC+Lq2uKs3P/f9iewPAn7u36rQa8/SBufEAAHZcLgAU5+eYDwyzyG/R4FSCzDpTX+6vDs797djdhct9JowAKkX6IC1i2ULzW4qcAue+3evurHOwIDyY6SjM3n2IymQCQNauOJPBYKVWAZ/OyKqRhrXTfQoTZs1LvhZ//ezJ1Ls3XD19ygpyKTTaxiPn6x3nZ9txAKCypKgwO9NbFBTz3qLtqz/b9+2qs7//asflFedmRfTqv3DtFgCY8M685GuXbl08k550y83br6wwj8Fkf3vkHJ1OD4rqXpyXvWHRXDcf36Fj33x5fEzbf4sGJ2ytFGX5gX5Rq5dQqJSMzbtz9v3Ocq59lOgUNZqKqrqGrMlorMkrMv9Is2N3/eYztrtr+nfbc387FjD9Dev1vagUSnv1KwB4i4KWbTvQtf9grUqdnfaAzeENiB5ramB5O5cv6DXkVZ5AaO5jDRo18YOvfggIi6gqKS7IynT39o/qM9B8p5d/4Jc/x3V7aahOq8t9lMrm8Aa8NsZo0APApHcXdu0/2GDQleRlm1sXbafBTeFVWvXHKVcLVZZvzhKZ7yJfiiDkzlu8KbyOBhsGTkz2O/7hK9IaPMlXJ1MkTJpb71t2Xu6qonpWLbq81Dvi8w+bLbsJKq7dSVm5sUUC/KaME82Y1FCBg529iOlXzNM0tsOWT2c6M9kN7TCh8zh9d9fvGHPeixehNTWY0iIcu0e2VACd1+AgA6V9DW+1YxqzbJS9UwBX0JBlKVSqnQfKIXcam2VBAa4szpLg7s24EYOYJjbSrAnvO80nxFZikOHH4f/cbQiTSqykeph6afqPNNZDFNp+O9EAYEelj/UI4NAYzbgXg56mLStkMFeH9RW203y8LCp9ln/YGHd8MDlpaNaj0J7B3Bg1aKZve5vJDLd3/C5ywLgGcvJjiElzc3J5sblTfYLLNcqr1SXtYIMUh0a3pzNXhfXl03F7gGS0LPPhR526hpQ5qo363woyyLtNaqSbX5i943BXX9zbIiMtzi87ws0XAIa7+ix5cLVMo6RRKBKdtjYNMUDdC4K8rrvCpFADeQKlQT/OQzTGHfHBwZi20MrE8zwa46euQ6R6rYDOLNOoduWlGk2m0e7+5RrlseJsNxZnvKeICK8rNOrbktIgnsMbnoE1Bh0XDwuQHwIdCIppBLzGoA7cnCMHrmyO9Q6PJSAsGq2hBOvYsuSABtChstyVqZVOzPqzdWDLkoMogYuEtEM0rSOAW/+mXGxZcjDFOyihsrih09vaGQlVJc4sdiiv/lRAuPtFGmoMurfvnp/g2SmA034yLTyH1mi8XFlEo1L/F9yjoXuwZcmEzmTc+Dj53/KC7kJXsVaFWo6FURoNKoN+rLtoum9jiwexZcmHEeCxQqJtd9nVHZl2HmxOk8Mi2LIYkoG7XxiSgS2LIRnYshiSgS2LIRnYshiSgS2LIRn/BxgyBNSXe13ZAAAAAElFTkSuQmCC",
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from langchain_opentutorial.graphs import visualize_graph\n",
    "\n",
    "visualize_graph(app)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf819f18",
   "metadata": {},
   "source": [
    "## Final Invocation and Display\n",
    "\n",
    "Finally, we pass a user question into our compiled state machine. The pipeline will:\n",
    "1. Generate code.\n",
    "2. Check it.\n",
    "3. Reflect if there’s an error.\n",
    "4. Repeat until success or until we reach our maximum number of attempts.\n"
   ]
  },
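  {
   "cell_type": "markdown",
   "id": "retry-routing-sketch",
   "metadata": {},
   "source": [
    "The retry behavior above is driven by a routing function attached as a conditional edge after the code check. A minimal, self-contained sketch of that routing logic (the function name, state keys, and iteration cap here are illustrative assumptions and may differ from the graph built earlier in this tutorial):\n",
    "\n",
    "```python\n",
    "MAX_ITERATIONS = 3  # assumed retry budget\n",
    "\n",
    "def decide_to_finish(state: dict) -> str:\n",
    "    # Finish on a clean check, or once the retry budget is spent;\n",
    "    # otherwise route back through the reflect node for another attempt.\n",
    "    if state[\"error\"] == \"no\" or state[\"iterations\"] >= MAX_ITERATIONS:\n",
    "        return \"end\"\n",
    "    return \"reflect\"\n",
    "```\n"
   ]
  },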
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "d1198f9f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "---GENERATING CODE SOLUTION---\n",
      "---CHECKING CODE---\n",
      "---NO CODE TEST FAILURES---\n",
      "---DECISION: FINISH---\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "code(prefix='To pass a string directly through to a prompt, you can use the RunnablePassthrough class. This will take the input string and pass it through unchanged to be used in constructing the prompt input.', imports='from langchain_core.prompts import PromptTemplate\\nfrom langchain_core.runnables import RunnablePassthrough\\nfrom langchain_openai import ChatOpenAI\\nfrom langchain_core.output_parsers import StrOutputParser', code='template = \"\"\"Answer the following question.\\n\\nQuestion: {input}\\n\\nAnswer:\"\"\"\\n\\nprompt = PromptTemplate.from_template(template)\\n\\nllm = ChatOpenAI()\\n\\nchain = (\\n    RunnablePassthrough()\\n    | prompt \\n    | llm\\n    | StrOutputParser()\\n)\\n\\nchain.invoke(\"What is the capital of France?\")')"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "question = \"How can I directly pass a string to a runnable and use it to construct the input needed for my prompt?\"\n",
    "solution = app.invoke({\"messages\": [(\"user\", question)], \"iterations\": 0, \"error\": \"\"})\n",
    "solution[\"generation\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "87756a8a",
   "metadata": {},
   "source": [
    "We define a helper function to display the resulting code in a Markdown cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "35972782",
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import Markdown, display\n",
    "\n",
    "\n",
    "def display_markdown_code(solution):\n",
    "    \"\"\"\n",
    "    Render the final solution (Structured Output) as a Markdown code block.\n",
    "    \"\"\"\n",
    "\n",
    "    md_text = f\"\"\"\n",
    "\n",
    "**{solution.prefix}**\n",
    "\n",
    "\n",
    "```python\n",
    "\n",
    "# Imports\n",
    "\n",
    "{solution.imports}\n",
    "\n",
    "\n",
    "# Code\n",
    "\n",
    "{solution.code}\n",
    "\"\"\"\n",
    "\n",
    "    display(Markdown(md_text))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aaa7ac42",
   "metadata": {},
   "source": [
    "The final output is displayed in Markdown, allowing you to immediately see the generated code solution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "bc4ecad4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "\n",
       "\n",
       "**To pass a string directly through to a prompt, you can use the RunnablePassthrough class. This will take the input string and pass it through unchanged to be used in constructing the prompt input.**\n",
       "\n",
       "\n",
       "```python\n",
       "\n",
       "# Imports\n",
       "\n",
       "from langchain_core.prompts import PromptTemplate\n",
       "from langchain_core.runnables import RunnablePassthrough\n",
       "from langchain_openai import ChatOpenAI\n",
       "from langchain_core.output_parsers import StrOutputParser\n",
       "\n",
       "\n",
       "# Code\n",
       "\n",
       "template = \"\"\"Answer the following question.\n",
       "\n",
       "Question: {input}\n",
       "\n",
       "Answer:\"\"\"\n",
       "\n",
       "prompt = PromptTemplate.from_template(template)\n",
       "\n",
       "llm = ChatOpenAI()\n",
       "\n",
       "chain = (\n",
       "    RunnablePassthrough()\n",
       "    | prompt \n",
       "    | llm\n",
       "    | StrOutputParser()\n",
       ")\n",
       "\n",
       "chain.invoke(\"What is the capital of France?\")\n"
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "display_markdown_code(solution[\"generation\"])"
   ]
  }
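,
  {
   "cell_type": "markdown",
   "id": "exec-structured-output-sketch",
   "metadata": {},
   "source": [
    "Because the structured output separates ```imports``` from ```code```, a solution can be executed the same way the ```check_code``` node does: run the imports first, then the body. A self-contained sketch with a toy stand-in object (the ```SimpleNamespace``` object below is an assumption for illustration, not actual graph output):\n",
    "\n",
    "```python\n",
    "from types import SimpleNamespace\n",
    "\n",
    "# Hypothetical stand-in for the structured `code` output\n",
    "toy = SimpleNamespace(\n",
    "    imports=\"import math\",\n",
    "    code=\"result = math.sqrt(16)\",\n",
    ")\n",
    "\n",
    "namespace = {}\n",
    "exec(toy.imports, namespace)  # check the imports first, as check_code does\n",
    "exec(toy.code, namespace)     # then execute the body\n",
    "print(namespace[\"result\"])  # 4.0\n",
    "```\n"
   ]
  }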
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [
    "0856a092",
    "9d230b5c",
    "05f6efaa",
    "b8c5d026",
    "ef28ddf5"
   ],
   "provenance": []
  },
  "kernelspec": {
   "display_name": "langchain-opentutorial-QDzDRI-1-py3.11",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
