{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generating Bug-Free LeetCode Solutions\n",
    "\n",
    ":::note\n",
    "To download this tutorial as a Jupyter notebook, click [here](https://github.com/guardrails-ai/guardrails/blob/main/docs/src/examples/bug_free_python_code.ipynb).\n",
    ":::\n",
    "\n",
    "In this example, we want to solve string-manipulation LeetCode problems with bug-free code.\n",
    "\n",
    "We make two assumptions:\n",
    "\n",
    "1. We don't need any external libraries that are not already installed in the environment.\n",
    "2. We are able to execute the code in the environment.\n",
    "\n",
    "## Objective\n",
    "\n",
    "We want to generate bug-free code for solving LeetCode problems. In this example, we don't account for semantic bugs, only syntactic ones.\n",
    "\n",
    "In short, we want to make sure that the code can be executed without any errors.\n",
    "\n",
    "## Step 1: Install validators from the hub\n",
    "\n",
    "First, we install the validators and packages we need to make sure the generated Python is valid."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Installing hub:\u001b[35m/\u001b[0m\u001b[35m/reflex/\u001b[0m\u001b[95mvalid_python...\u001b[0m\n",
      "✅Successfully installed reflex/valid_python!\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "!guardrails hub install hub://reflex/valid_python --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: Create a `Guard` object\n",
    "\n",
    "The `Guard` object holds the validators we want to check the generated code against, and takes corrective action when validation fails. As configured here (`on_fail=\"reask\"`), it will re-ask the LLM to correct the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from guardrails.hub import ValidPython\n",
    "from guardrails import Guard\n",
    "\n",
    "guard = Guard().use(ValidPython(on_fail=\"reask\"))"
   ]
  },
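  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for what a syntactic check involves, here is a minimal sketch using only the standard library's `compile()` builtin. This is an illustrative approximation, not the `ValidPython` validator's actual implementation:\n",
    "\n",
    "```python\n",
    "# Illustrative approximation: a snippet is syntactically valid\n",
    "# if it compiles without raising SyntaxError.\n",
    "def is_valid_python(code: str) -> bool:\n",
    "    try:\n",
    "        compile(code, \"<generated>\", \"exec\")\n",
    "        return True\n",
    "    except SyntaxError:\n",
    "        return False\n",
    "\n",
    "print(is_valid_python(\"def f(x): return x + 1\"))  # True\n",
    "print(is_valid_python(\"def f(x) return x\"))       # False\n",
    "```"
   ]
  },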
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 3: Use the `Guard` to make and validate the LLM API call"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ValidationOutcome(\n",
      "    call_id='14508958944',\n",
      "    raw_llm_output='def longest_palindrome(s: str) -> str:\\n    if len(s) == 0:\\n        return \"\"\\n    \\n    start, end = 0, 0\\n    \\n    def expand_around_center(left: int, right: int) -> int:\\n        while left >= 0 and right < len(s) and s[left] == s[right]:\\n            left -= 1\\n            right += 1\\n        return right - left - 1\\n    \\n    for i in range(len(s)):\\n        len1 = expand_around_center(i, i)\\n        len2 = expand_around_center(i, i + 1)\\n        max_len = max(len1, len2)\\n        \\n        if max_len > end - start:\\n            start = i - (max_len - 1) // 2\\n            end = i + max_len // 2\\n    \\n    return s[start:end + 1]',\n",
      "    validated_output='def longest_palindrome(s: str) -> str:\\n    if len(s) == 0:\\n        return \"\"\\n    \\n    start, end = 0, 0\\n    \\n    def expand_around_center(left: int, right: int) -> int:\\n        while left >= 0 and right < len(s) and s[left] == s[right]:\\n            left -= 1\\n            right += 1\\n        return right - left - 1\\n    \\n    for i in range(len(s)):\\n        len1 = expand_around_center(i, i)\\n        len2 = expand_around_center(i, i + 1)\\n        max_len = max(len1, len2)\\n        \\n        if max_len > end - start:\\n            start = i - (max_len - 1) // 2\\n            end = i + max_len // 2\\n    \\n    return s[start:end + 1]',\n",
      "    reask=None,\n",
      "    validation_passed=True,\n",
      "    error=None\n",
      ")\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/dtam/dev/guardrails/guardrails/validator_service/__init__.py:85: UserWarning: Could not obtain an event loop. Falling back to synchronous validation.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "# Add your OPENAI_API_KEY as an environment variable if it's not already set\n",
    "# import os\n",
    "# os.environ[\"OPENAI_API_KEY\"] = \"YOUR_API_KEY\"\n",
    "\n",
    "prompt = \"\"\"\n",
    "Given the following high-level LeetCode problem description, write a short Python code snippet that solves the problem.\n",
    "Do not include any markdown in the response.\n",
    "\n",
    "Problem Description:\n",
    "${leetcode_problem}\n",
    "\"\"\"\n",
    "\n",
    "leetcode_problem = \"\"\"\n",
    "Given a string s, find the longest palindromic substring in s. You may assume that the maximum length of s is 1000.\n",
    "\"\"\"\n",
    "\n",
    "response = guard(\n",
    "    model=\"gpt-4o\",\n",
    "    messages=[{\"role\": \"user\", \"content\": prompt}],\n",
    "    prompt_params={\"leetcode_problem\": leetcode_problem},\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The response above is a summary of what the `Guard` did. We can access its `validated_output` attribute to show just the final code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def longest_palindrome(s: str) -> str:\n",
      "    if len(s) == 0:\n",
      "        return \"\"\n",
      "    \n",
      "    start, end = 0, 0\n",
      "    \n",
      "    def expand_around_center(left: int, right: int) -> int:\n",
      "        while left >= 0 and right < len(s) and s[left] == s[right]:\n",
      "            left -= 1\n",
      "            right += 1\n",
      "        return right - left - 1\n",
      "    \n",
      "    for i in range(len(s)):\n",
      "        len1 = expand_around_center(i, i)\n",
      "        len2 = expand_around_center(i, i + 1)\n",
      "        max_len = max(len1, len2)\n",
      "        \n",
      "        if max_len > end - start:\n",
      "            start = i - (max_len - 1) // 2\n",
      "            end = i + max_len // 2\n",
      "    \n",
      "    return s[start:end + 1]\n"
     ]
    }
   ],
   "source": [
    "print(response.validated_output)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can confirm that the code is bug-free by executing it in the environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Success!\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    exec(response.validated_output)\n",
    "    print(\"Success!\")\n",
    "except Exception:\n",
    "    print(\"Failed!\")"
   ]
  }
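,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tidier pattern (a sketch, not part of the original tutorial flow) is to `exec` generated code into an explicit namespace dictionary and call the resulting function, rather than executing into the notebook's globals. The snippet string below is a hypothetical stand-in for `response.validated_output`:\n",
    "\n",
    "```python\n",
    "# Hypothetical generated snippet (stand-in for LLM output)\n",
    "generated = \"def reverse_words(s):\\n    return ' '.join(reversed(s.split()))\"\n",
    "\n",
    "namespace = {}\n",
    "exec(generated, namespace)  # defines reverse_words inside `namespace` only\n",
    "print(namespace[\"reverse_words\"](\"hello world\"))  # world hello\n",
    "```"
   ]
  }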
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "litellm",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
