{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/mshumer/gpt-prompt-engineer/blob/main/Claude_3_5_Sonnet_to_gpt_4o_mini_Conversion.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WljjH8K3s7kG"
      },
      "source": [
        "# Claude 3.5 Sonnet to gpt-4o-mini - part of the `gpt-prompt-engineer` repo\n",
        "\n",
        "This notebook lets you convert a task from Claude 3.5 Sonnet to GPT-4o-mini -- massively reducing costs while keeping quality high.\n",
        "\n",
        "By Matt Shumer (https://twitter.com/mattshumer_)\n",
        "\n",
        "Github repo: https://github.com/mshumer/gpt-prompt-engineer"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dQmMZdkG_RA5"
      },
      "outputs": [],
      "source": [
        "!pip install openai requests\n",
        "\n",
        "OPENAI_API_KEY = \"YOUR API KEY HERE\" # enter your OpenAI API key here\n",
        "ANTHROPIC_API_KEY = \"YOUR API KEY HERE\" # enter your Anthropic API key here"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wXeqMQpzzosx"
      },
      "outputs": [],
      "source": [
        "import re\n",
        "import json\n",
        "import requests\n",
        "from openai import OpenAI\n",
        "\n",
        "client = OpenAI(api_key=OPENAI_API_KEY)\n",
        "\n",
        "def generate_candidate_prompts(task, prompt_example, response_example):\n",
        "    headers = {\n",
        "        \"x-api-key\": ANTHROPIC_API_KEY,\n",
        "        \"anthropic-version\": \"2023-06-01\",\n",
        "        \"content-type\": \"application/json\"\n",
        "    }\n",
        "\n",
        "    data = {\n",
        "        \"model\": 'claude-3-5-sonnet-20240620',\n",
        "        \"max_tokens\": 4000,\n",
        "        \"temperature\": .5,\n",
        "        \"system\": \"\"\"<task>Given an example training sample, create seven additional samples for the same task that are even better. Each example should contain a <prompt> and a <response>.</task>\n",
        "\n",
        "<rules>\n",
        "1. Ensure the new examples are diverse and unique from one another.\n",
        "2. They should all be perfect. If you make a mistake, this system won't work.\n",
        "</rules>\n",
        "\n",
        "Respond in this format:\n",
        "<response_format>\n",
        "<example_one>\n",
        "<prompt>\n",
        "PUT_PROMPT_HERE\n",
        "</prompt>\n",
        "<response>\n",
        "PUT_RESPONSE_HERE\n",
        "</response>\n",
        "</example_one>\n",
        "\n",
        "<example_two>\n",
        "<prompt>\n",
        "PUT_PROMPT_HERE\n",
        "</prompt>\n",
        "<response>\n",
        "PUT_RESPONSE_HERE\n",
        "</response>\n",
        "</example_two>\n",
        "\n",
        "...\n",
        "</response_format>\"\"\",\n",
        "        \"messages\": [\n",
        "            {\"role\": \"user\", \"content\": f\"\"\"<training_task>{task}</training_task>\n",
        "\n",
        "<prompt_example>\n",
        "{prompt_example}\n",
        "</prompt_example>\n",
        "\n",
        "<response_example>\n",
        "{response_example}\n",
        "</response_example>\"\"\"},\n",
        "        ]\n",
        "    }\n",
        "\n",
        "\n",
        "    response = requests.post(\"https://api.anthropic.com/v1/messages\", headers=headers, json=data)\n",
        "    response.raise_for_status()  # fail loudly on API errors instead of hitting a confusing KeyError below\n",
        "\n",
        "    response_text = response.json()['content'][0]['text']\n",
        "\n",
        "    # Parse out the prompts and responses\n",
        "    prompts_and_responses = []\n",
        "    examples = re.findall(r'<example_\\w+>(.*?)</example_\\w+>', response_text, re.DOTALL)\n",
        "    for example in examples:\n",
        "        prompt = re.findall(r'<prompt>(.*?)</prompt>', example, re.DOTALL)[0].strip()\n",
        "        response = re.findall(r'<response>(.*?)</response>', example, re.DOTALL)[0].strip()\n",
        "        prompts_and_responses.append({'prompt': prompt, 'response': response})\n",
        "\n",
        "    return prompts_and_responses\n",
        "\n",
        "def generate_system_prompt(task, prompt_examples):\n",
        "    headers = {\n",
        "        \"x-api-key\": ANTHROPIC_API_KEY,\n",
        "        \"anthropic-version\": \"2023-06-01\",\n",
        "        \"content-type\": \"application/json\"\n",
        "    }\n",
        "\n",
        "    data = {\n",
        "        \"model\": 'claude-3-5-sonnet-20240620',\n",
        "        \"max_tokens\": 1000,\n",
        "        \"temperature\": .5,\n",
        "        \"system\": \"\"\"<your_role>Given a user's description of their <task> and a set of prompt / response pairs (in JSON for easy reading) showing the types of outputs we want to generate for given inputs, write a fantastic system prompt that describes the task to be done perfectly.</your_role>\n",
        "\n",
        "<rules>\n",
        "1. Do this perfectly.\n",
        "2. Respond only with the system prompt, and nothing else. No other text will be allowed.\n",
        "</rules>\n",
        "\n",
        "Respond in this format:\n",
        "<system_prompt>\n",
        "WRITE_SYSTEM_PROMPT_HERE\n",
        "</system_prompt>\"\"\",\n",
        "        \"messages\": [\n",
        "            {\"role\": \"user\", \"content\": f\"\"\"<task>{task}</task>\n",
        "\n",
        "<prompt_response_examples>\n",
        "{str(prompt_examples)}\n",
        "</prompt_response_examples>\"\"\"},\n",
        "        ]\n",
        "    }\n",
        "\n",
        "\n",
        "    response = requests.post(\"https://api.anthropic.com/v1/messages\", headers=headers, json=data)\n",
        "    response.raise_for_status()  # fail loudly on API errors instead of hitting a confusing KeyError below\n",
        "\n",
        "    response_text = response.json()['content'][0]['text']\n",
        "\n",
        "    # Parse out the prompt\n",
        "    system_prompt = response_text.split('<system_prompt>')[1].split('</system_prompt>')[0].strip()\n",
        "\n",
        "    return system_prompt\n",
        "\n",
        "def test_mini(generated_examples, prompt_example, system_prompt):\n",
        "    messages = [{\"role\": \"system\", \"content\": system_prompt}]\n",
        "\n",
        "    for example in generated_examples:\n",
        "        messages.append({\"role\": \"user\", \"content\": example['prompt']})\n",
        "        messages.append({\"role\": \"assistant\", \"content\": example['response']})\n",
        "\n",
        "    messages.append({\"role\": \"user\", \"content\": prompt_example.strip()})\n",
        "\n",
        "    response = client.chat.completions.create(\n",
        "        model=\"gpt-4o-mini\",\n",
        "        messages=messages,\n",
        "        max_tokens=2000,\n",
        "        temperature=0.5\n",
        "    )\n",
        "\n",
        "    response_text = response.choices[0].message.content\n",
        "\n",
        "    return response_text\n",
        "\n",
        "def run_mini_conversion_process(task, prompt_example, response_example):\n",
        "    print('Generating the prompts / responses...')\n",
        "    # Generate candidate prompts\n",
        "    generated_examples = generate_candidate_prompts(task, prompt_example, response_example)\n",
        "\n",
        "    print('Prompts / responses generated. Now generating system prompt...')\n",
        "\n",
        "    # Generate the system prompt\n",
        "    system_prompt = generate_system_prompt(task, generated_examples)\n",
        "\n",
        "    print('System prompt generated:', system_prompt)\n",
        "\n",
        "    print('\\n\\nTesting the new prompt on GPT-4o-mini, using your input example...')\n",
        "    # Test the generated examples and system prompt with the GPT-4o-mini model\n",
        "    mini_response = test_mini(generated_examples, prompt_example, system_prompt)\n",
        "\n",
        "    print('GPT-4o-mini responded with:')\n",
        "    print(mini_response)\n",
        "\n",
        "    print('\\n\\n!! CHECK THE FILE DIRECTORY, THE PROMPT IS NOW SAVED THERE !!')\n",
        "\n",
        "    # Create a dictionary with all the relevant information\n",
        "    result = {\n",
        "        \"task\": task,\n",
        "        \"initial_prompt_example\": prompt_example,\n",
        "        \"initial_response_example\": response_example,\n",
        "        \"generated_examples\": generated_examples,\n",
        "        \"system_prompt\": system_prompt,\n",
        "        \"mini_response\": mini_response\n",
        "    }\n",
        "\n",
        "    # Save the GPT-4o-mini prompt to a Python file\n",
        "    with open(\"gpt4o_mini_prompt.py\", \"w\") as file:\n",
        "        file.write('system_prompt = \"\"\"' + system_prompt + '\"\"\"\\n\\n')\n",
        "\n",
        "        file.write('messages = [\\n')\n",
        "        for example in generated_examples:\n",
        "            file.write('    {\"role\": \"user\", \"content\": \"\"\"' + example['prompt'] + '\"\"\"},\\n')\n",
        "            file.write('    {\"role\": \"assistant\", \"content\": \"\"\"' + example['response'] + '\"\"\"},\\n')\n",
        "\n",
        "        file.write('    {\"role\": \"user\", \"content\": \"\"\"' + prompt_example.strip() + '\"\"\"}\\n')\n",
        "        file.write(']\\n')\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Fill in your `task`, `prompt_example`, and `response_example` below. Keep the quality as high as possible -- this is the most important step!"
      ],
      "metadata": {
        "id": "ZujTAzhuBMea"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "task = \"refactoring complex code\"\n",
        "\n",
        "prompt_example = \"\"\"def calculate_total(prices, tax, discount, shipping_fee, gift_wrap_fee, membership_discount):\n",
        "    total = 0\n",
        "    for i in range(len(prices)):\n",
        "        total += prices[i]\n",
        "    if membership_discount != 0:\n",
        "        total = total - (total * (membership_discount / 100))\n",
        "    if discount != 0:\n",
        "        total = total - (total * (discount / 100))\n",
        "    total = total + (total * (tax / 100))\n",
        "    if total < 50:\n",
        "        total += shipping_fee\n",
        "    else:\n",
        "        total += shipping_fee / 2\n",
        "    if gift_wrap_fee != 0:\n",
        "        total += gift_wrap_fee * len(prices)\n",
        "    if total > 1000:\n",
        "        total -= 50\n",
        "    elif total > 500:\n",
        "        total -= 25\n",
        "    total = round(total, 2)\n",
        "    if total < 0:\n",
        "        total = 0\n",
        "    return total\"\"\"\n",
        "\n",
        "response_example = \"\"\"def calculate_total(prices, tax_rate, discount_rate, shipping_fee, gift_wrap_fee, membership_discount_rate):\n",
        "    def apply_percentage_discount(amount, percentage):\n",
        "        return amount * (1 - percentage / 100)\n",
        "\n",
        "    def calculate_shipping_fee(total):\n",
        "        return shipping_fee if total < 50 else shipping_fee / 2\n",
        "\n",
        "    def apply_tier_discount(total):\n",
        "        if total > 1000:\n",
        "            return total - 50\n",
        "        elif total > 500:\n",
        "            return total - 25\n",
        "        return total\n",
        "\n",
        "    subtotal = sum(prices)\n",
        "    subtotal = apply_percentage_discount(subtotal, membership_discount_rate)\n",
        "    subtotal = apply_percentage_discount(subtotal, discount_rate)\n",
        "\n",
        "    total = subtotal * (1 + tax_rate / 100)\n",
        "    total += calculate_shipping_fee(total)\n",
        "    total += gift_wrap_fee * len(prices)\n",
        "\n",
        "    total = apply_tier_discount(total)\n",
        "    total = max(0, round(total, 2))\n",
        "\n",
        "    return total\"\"\""
      ],
      "metadata": {
        "id": "XSZqqOoQ-5_E"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### Now, let's run the conversion and get our new prompt! When it finishes, a new file (`gpt4o_mini_prompt.py`) will appear in the directory with everything you need to reduce your costs while keeping quality high w/ gpt-4o-mini!"
      ],
      "metadata": {
        "id": "cMO3cJzWA-O0"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "result = run_mini_conversion_process(task, prompt_example, response_example)"
      ],
      "metadata": {
        "id": "O-Bn0rupAJqb"
      },
      "execution_count": null,
      "outputs": []
    }
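,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Reusing the saved prompt\n",
        "\n",
        "A minimal sketch of how the generated `gpt4o_mini_prompt.py` might be reused in a later session (assuming it sits in the working directory and `client` is the authenticated `OpenAI` client from above). The saved `messages` list ends with your original example prompt, so swap that final entry for a new input."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Load the system prompt and few-shot messages written by run_mini_conversion_process\n",
        "from gpt4o_mini_prompt import system_prompt, messages\n",
        "\n",
        "# Replace the final user message (the original example) with a new input\n",
        "new_input = \"PUT_YOUR_NEW_PROMPT_HERE\"\n",
        "few_shot = messages[:-1] + [{\"role\": \"user\", \"content\": new_input}]\n",
        "\n",
        "response = client.chat.completions.create(\n",
        "    model=\"gpt-4o-mini\",\n",
        "    messages=[{\"role\": \"system\", \"content\": system_prompt}] + few_shot,\n",
        "    max_tokens=2000,\n",
        "    temperature=0.5\n",
        ")\n",
        "print(response.choices[0].message.content)"
      ]
    }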
  ],
  "metadata": {
    "colab": {
      "provenance": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.8"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}